@interf/compiler 0.2.0 → 0.2.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,168 +1,120 @@
  # Interf
 
- The open-source eval-first knowledge compiler.
+ Interf Knowledge Compiler uses local agents such as Claude Code and Codex to run a data-processing workflow over your files.
 
- Interf compiles a workspace beside your files for agents: a knowledge representation they can navigate, cross-check against raw source, and prove on your evals.
+ It creates a workspace with notes and navigation so the agent can see what is in the folder and what to retrieve.
 
- Your files stay the truth. Interf adds a compiled workspace and benchmark proof.
+ Then you test that workspace on your evals.
 
- - point it at a folder you already have
- - define or confirm what must be true for the task in `interf.config.json`
- - compile a shared knowledge base plus task-specific interfaces
- - benchmark raw files vs compiled workspaces and keep the best result
+ - your files stay on your machine
+ - you choose the local agent
+ - you decide what must be true
 
- Most "AI knowledge base" tools optimize for a demo. Interf optimizes for proof. It keeps the raw files on disk, compiles a visible workspace your agent can use, and makes workflows compete on your evals instead of on marketing claims.
+ Agents start missing things when a task spans PDFs, charts, and several files in one folder. That usually shows up when the job depends on:
 
- Interf is not a chat shell, a hosted notes app, or a generic agent OS. It is the compile + benchmark loop for turning real folders into better agent workspaces and proving they help on a real task.
+ - reading reports and filings
+ - extracting a number from a chart
+ - understanding what is inside a folder before doing work
+ - pulling context together across several files
+ - checking the raw source when the answer has to be exact
 
- ## What Happens
+ The workspace exists so the agent does not have to rediscover the folder from scratch on every run.
 
- ```text
- raw folder
- -> compiled workspace beside the raw files
- -> benchmark proof on your evals
- ```
-
- The compiled workspace is for agents. It gives them:
-
- - a clearer map of the data
- - task-specific outputs when broad summarization is not enough
- - better evidence paths back to the raw source
- - proof of whether the compiled workspace actually helped
-
- Interf does not replace your agent. It gives your agent a better workspace to use.
-
- ## Trust Boundary
-
- Interf keeps one trust boundary:
-
- - raw files in the source folder are the content truth
- - `interf.config.json` is the user-approved task truth
- - the compiled workspace is the generated working surface
- - the benchmark result is the proof of whether that generated surface is good enough
-
- That means:
+ The point is proof on your data, not generic AI claims.
 
- - agents may draft evals
- - users approve accepted task truth
- - raw files remain the final source of evidence
- - compiled workspaces earn trust only if they pass the benchmark
+ The simplest way to use Interf is to compare the same task before and after compilation:
 
- ## Who It’s For
+ - run the task on the raw folder
+ - compile the folder with Interf
+ - run the same task again from the workspace
+ - if you want a recorded pass/fail result, add evals and run `interf benchmark`
 
- Interf is for people already trying to get real work done with agents on real folders:
-
- - Claude Code and Codex users
- - OpenClaw and Hermes-style local-agent users
- - technical founders, researchers, and operators with messy source folders
- - teams who want to test whether compiled workspaces beat raw files on their own tasks
-
- If you want a generic chat UI, this is not that product.
-
- ## Mental Model
-
- - **Source folder**: your real files stay where they are
- - **Compiled workspace**: the generated workspace beside those files for agents
- - **Knowledge base**: the shared compiled workspace over the folder
- - **Interface**: the task-specific compiled workspace for one job
- - **Workflow**: the reusable compile method
- - **Eval**: what must be true for the task
- - **Benchmark**: the proof loop that compares raw and compiled results
-
- One source folder can host multiple knowledge bases under `interf/` if you want to compare workflows on the same data.
-
- ## Install
+ ## Quick Start
 
  Requirements:
 
  - Node.js 20+
- - one local coding agent executor: Claude Code or Codex
+ - a local coding agent: Claude Code or Codex
 
- Install the published package:
+ Install and check setup:
 
  ```bash
  npm install -g @interf/compiler
- ```
-
- Sanity check the local setup:
-
- ```bash
  interf doctor
  ```
 
- If you already use Claude Code or Codex locally, that is the intended path. Interf uses your local agent as the executor for compile and benchmark runs.
-
- ## Quick Start
-
- Initialize Interf in any folder:
+ Then run Interf in any folder:
 
  ```bash
  cd ~/my-folder
  interf init
+ interf compile
+ interf benchmark
  ```
 
- That flow can:
+ That is the whole first loop:
 
- 1. choose an executor like Claude Code or Codex
- 2. optionally install helper skills
- 3. attach the current folder as a knowledge base
- 4. optionally compile the knowledge base immediately
+ - point Interf at a folder you already have
+ - let `interf init` write the first evals in `interf.config.json`
+ - compile the workspace
+ - ask the agent to use it
+ - run `interf benchmark` to compare raw vs compiled
 
- Then:
+ `interf init` chooses your local agent, can draft `interf.config.json` if it is missing, and can attach the current folder right away. It does not move or replace your files.
+
+ Fastest sample loop:
 
  ```bash
- interf create interface
+ cp -r examples/benchmark-demo /tmp/interf-demo
+ cd /tmp/interf-demo
+ interf init
  interf compile
  interf benchmark
  ```
 
- Fastest way to see the full loop:
+ If you want a task-specific workspace for one job, add an interface:
 
  ```bash
- cp -r examples/benchmark-demo /tmp/benchmark-demo
- cd /tmp/benchmark-demo
- interf init
+ interf create interface
  interf compile
  interf benchmark
  ```
 
- What success looks like on disk:
+ ## Start With One Small Eval
 
- - `interf/<kb>/` = shared compiled workspace over the folder
- - `interf/<kb>/interfaces/<name>/` = task-specific compiled workspace
- - `interf/benchmarks/runs/...` = saved benchmark evidence for that folder
+ `interf.config.json` is where you write what must be true.
 
- ## 5-Minute Example
+ If the file is missing, `interf init` can draft it with you before the first compile.
 
- Try the full loop on the shipped sample folder:
+ Use it for:
 
- ```bash
- cp -r examples/benchmark-demo /tmp/interf-demo
- cd /tmp/interf-demo
- interf init
- interf compile
- interf benchmark
- ```
-
- This sample already includes an `interf.config.json`, so you can see the compile and benchmark loop without writing your own evals first.
-
- ## Simple Eval Example
+ - top-level `evals` for the main folder
+ - `interfaces[].evals` for task-specific checks
 
- The default public eval file is `interf.config.json` at the source-folder root.
+ Example shape:
 
- Minimal example:
+ Top-level `evals` are for the main folder. Each entry in `interfaces` adds evals for one dedicated job.
 
  ```json
  {
+ "evals": [
+ {
+ "question": "What was Bristol annual take-up in 2018, in millions of square feet?",
+ "answer": "About 0.5 million square feet. Accept answers between 0.3 and 0.6 if they clearly refer to Bristol annual take-up in 2018."
+ }
+ ],
  "interfaces": [
  {
- "name": "weekly-briefing",
- "about": "Summarize what changed, why it matters, and what to do next.",
+ "name": "market-briefing",
+ "about": "Prepare a short briefing from the office market report.",
  "evals": [
  {
- "question": "From the compiled interface only, what changed and what should the operator do next?",
- "answer": "A good answer says what changed, why it matters, and the next action.",
- "strictness": "approximate"
+ "question": "What was Bristol availability in 2018, in millions of square feet?",
+ "answer": "About 0.6 million square feet. Accept answers between 0.5 and 0.7 if they clearly refer to Bristol availability in 2018."
+ },
+ {
+ "question": "Did Bristol annual take-up rise or fall between 2016 and 2018?",
+ "answer": "It fell. The chart shows roughly 0.7 to 0.8 million square feet in 2016 and about 0.5 million square feet in 2018."
  }
  ]
  }
@@ -170,221 +122,158 @@ Minimal example:
  }
  ```
 
- That is enough to start. You do not need a large benchmark harness to use Interf:
+ Good first evals are small and practical:
 
- 1. write one or two questions that matter
- 2. say what a good answer must preserve
- 3. compile the workspace
- 4. run `interf benchmark`
+ - one exact number from a chart, table, or filing
+ - one short statement that should be true or false
+ - one simple comparison across years, files, or sections
 
- If the compiled workspace does not beat raw files on those evals, do not trust it yet.
+ Then run:
 
- ## Use It With Your Agent
+ ```bash
+ interf compile
+ interf benchmark
+ ```
 
- For many users, the agent is the operator.
+ If the benchmark does not show an improvement over raw files, keep iterating or move to the experiment loop described below.
 
- A practical agent-native loop looks like this:
+ ## Compare Three Things
 
- 1. the agent gets a real task against a real folder
- 2. it inspects raw files or prior benchmark evidence
- 3. it drafts or updates evals in `interf.config.json`
- 4. it asks the user to confirm the task truth when needed
- 5. it runs compile + benchmark
- 6. it only promotes the compiled workspace for real use once the benchmark says it helped
+ Compare:
 
- Paste something like this into Claude Code, Codex, OpenClaw, or Hermes:
+ 1. the raw folder
+ 2. the workspace
+ 3. an interface for one specific job
 
- ```text
- Install @interf/compiler, run `interf init` in this folder, choose the local agent executor, and compile the workspace.
+ `interf benchmark` is how you compare those on the same evals.
 
- If `interf.config.json` is missing or incomplete, draft evals for what must be true for this task and ask me to confirm them before benchmarking.
+ That gives you one clear question:
 
- Then run `interf benchmark` and tell me whether raw files or the compiled workspace performed better.
- ```
+ - is the raw folder enough?
+ - does the workspace retrieve better?
+ - does a dedicated interface do better than both?
 
- ## What The Agent Sees
+ ## What `interf compile` Actually Does
 
- The compiled folder is the agent-facing product surface.
+ `interf compile` runs a workflow over your folder.
 
- Important files in a knowledge base or interface:
+ That workflow is the compilation pipeline:
 
- - `interf.json` = what this workspace is
- - `AGENTS.md` = canonical bootstrap and navigation
- - `CLAUDE.md` = generated compatibility mirror of `AGENTS.md`
- - `workflow/` = the editable local method package
- - `home.md` = entry document
- - `summaries/`, `knowledge/`, and `briefs/` = compiled outputs
+ - read the files
+ - write processed notes and navigation files
+ - build the workspace your agent can use
+ - optionally build an interface for one specific job
 
- Interf supports two agent modes:
+ The default workflow is built in. If you want a different method, you can define your own workflow package and benchmark it on the same folder.
 
- - **executor mode**: the CLI launches a local agent to satisfy one stage contract during create, compile, or benchmark flows
- - **use mode**: a human opens the compiled knowledge base or interface and asks an agent to navigate the finished workspace
+ ## Experiment Loop
 
- Manual use looks like this:
+ Interf Knowledge Compiler also supports an experiment loop above compile + benchmark.
 
- 1. open the knowledge base or interface folder
- 2. read `AGENTS.md`
- 3. follow `workflow/use/query/SKILL.md`
- 4. for interfaces, use local interface artifacts first, then the parent knowledge-base loop, then raw files if needed
+ It runs controlled experiments against the same folder and the same evals. Each attempt reruns the compilation workflow, reruns the benchmark, and records what changed. It stops when:
 
- Interf does not require globally installed slash skills for workspace behavior. Local `workflow/.../SKILL.md` files are workspace instruction docs routed by `AGENTS.md` and stage contracts.
+ - the evals pass
+ - or the experiment budget is exhausted
 
- ## Benchmark Proof
+ In practice, that means:
 
- Interf is benchmark-first.
+ - `retry_policy.max_attempts_per_profile` controls how many experiment attempts each compile profile gets
+ - stronger diagnostic profiles can be used only after the default ones fail
+ - the loop is still judged on the same eval truth from your folder
+ - failure summaries can be captured between attempts for diagnosis
 
- The default eval file lives at the source-folder root:
+ Today that advanced path is configured through eval packs and explained in the deeper docs. The workflow is the part you change. The experiment loop is the controller that runs those experiments against the same evals with a fixed attempt budget.
 
- ```text
- source-folder/
- interf.config.json
- ```
+ Use the simple loop first. Use the experiment loop when you want to test workflow or profile changes against the same evals until one passes or the attempt budget runs out.
 
- Saved benchmark runs live under:
+ ## Use It With Your Agent
 
- ```text
- source-folder/
- interf/
- benchmarks/
- runs/
- ```
+ If you already work through Claude Code, Codex, OpenClaw, or Hermes, the agent can run this loop for you.
 
- Use benchmarks to answer questions like:
+ Paste something like this into Claude Code, Codex, OpenClaw, or Hermes:
 
- - does the compiled workspace beat raw files on this task?
- - which workflow wins on this folder?
- - which interface is best for this job?
- - which model performs best on the same compiled target?
+ ```text
+ Install @interf/compiler, run `interf init` in this folder, and use the local agent executor.
 
- `interf benchmark` uses your evals, opens the compiled target like a real user session, asks the questions, and grades the answers. The point is not a hidden score. The point is a benchmark artifact you can inspect, diff, and rerun locally.
+ If `interf.config.json` is missing, draft evals for what must be true for this task and ask me to confirm them.
 
- ## Power Mode
+ Then run `interf compile` and `interf benchmark`.
 
- Most users do not need to think about improvement loops.
+ Tell me whether the processed workspace beat raw files, and only recommend it if it did.
+ ```
 
- The basic story is:
+ That is the basic loop:
 
- 1. compile
- 2. benchmark
- 3. trust the result only if it passes
+ - the user or agent defines what must be true
+ - Interf prepares processed data for retrieval
+ - the benchmark shows whether that helped
 
- Power users and agent-native setups can go further:
+ ## What Gets Created
 
- - compare workflows on the same folder
- - compare models on the same compiled target
- - draft custom local workflows
- - rerun compile + benchmark until a task-specific interface passes
+ After compile, Interf writes into `./interf/` beside your source files.
 
- That improvement loop is a real capability, but it is not the main thing users need to understand first.
+ - `interf/<name>/` is the shared workspace over the folder
+ - `interf/<name>/interfaces/<name>/` is a task-specific workspace for one job
+ - `interf/benchmarks/runs/...` stores saved benchmark runs
 
- ## Layout On Disk
+ Inside those workspaces you will see things like:
 
- ```text
- source-folder/
- ...your files...
- interf.config.json
- interf/
- workflows/
- benchmarks/
- {knowledge-base-name}/
- interf.json
- AGENTS.md
- CLAUDE.md
- home.md
- workflow/
- summaries/
- knowledge/
- interfaces/
- {interface-name}/
- interf.json
- compile-plan.md
- AGENTS.md
- CLAUDE.md
- home.md
- workflow/
- knowledge/
- briefs/
- summaries/
- ```
+ - summaries of source files
+ - navigation notes and entrypoints for agents
+ - task-specific outputs for one interface
+ - benchmark artifacts you can inspect later
 
- ## Core Commands
+ In the CLI, the main Interf workspace is called a **knowledge base**. A task-specific workspace inside it is called an **interface**.
 
- - `interf init` = global setup first; if run inside a normal folder, it can also attach and compile a knowledge base there
- - `interf create interface` = create an interface for the current folder's knowledge base
- - `interf compile` = compile the current knowledge base or interface
- - `interf benchmark` = compare compiled knowledge bases or interfaces with evals from `interf.config.json` or an explicit spec file
+ ## When To Create An Interface
 
- Advanced commands still exist for workflow authoring and diagnostics:
+ Start with one workspace.
 
- - `interf create workflow`
- - `interf doctor`
- - `interf status`
- - `interf verify <check>`
- - `interf reset <scope>`
+ Create an interface when your agent needs outputs shaped for one specific job, for example:
 
- Useful run flags:
+ - weekly briefing
+ - diligence on a deal room
+ - extracting chart values from research PDFs
+ - a focused research assistant for one question set
 
- - `--model <name>` = pin the agent model for this run
- - `--profile <name>` = pass an agent-specific profile when supported
- - `--effort <level>` = override model reasoning effort
- - `--timeout-ms <ms>` = interrupt the local executor after this much inactivity
+ If that workspace is enough for the job, you do not need an interface yet.
 
- ## Workflows
+ ## Custom Workflows
 
- A workflow is a package, not just a prompt.
+ Interf ships with a default workflow.
 
- It has two layers:
+ If you want to change how compilation happens on your data, this is the part you customize:
 
- - machine layer: `workflow.json`
- - human/agent layer: `workflow/` docs
-
- Typical reusable workflow package:
-
- ```text
- interf/workflows/knowledge-base/<workflow-id>/
- workflow.json
- README.md
- create/
- SKILL.md
- compile/
- stages/
- <stage-id>/
- SKILL.md
- use/
- query/
- SKILL.md
+ ```bash
+ interf create workflow
+ interf verify workflow --path <path>
  ```
 
- Interf keeps the public command surface stable while letting workflows vary the internal stage pipeline. The engine still owns contract kinds, required artifacts, and state flow.
+ Then benchmark that workflow on the same folder and the same evals.
 
- Current shipped policy:
+ Workflow package docs live in [docs/workflow-spec.md](./docs/workflow-spec.md).
 
- - built-in knowledge-base workflows: `interf`, `karpathy`
- - built-in interface workflow: `interf`
- - if you need a custom interface method, create a local workflow package and benchmark it before treating it as better than the default
-
- ## Builder Docs
-
- If you want to create your own workflows, start here:
-
- 1. [`docs/workflow-spec.md`](./docs/workflow-spec.md)
- 2. [`docs/runtime-contract.md`](./docs/runtime-contract.md)
- 3. [`docs/architecture.md`](./docs/architecture.md)
- 4. [`docs/eval-loop.md`](./docs/eval-loop.md)
+ ## Core Commands
 
- Contributor and release-testing commands live in [`CONTRIBUTING.md`](./CONTRIBUTING.md).
+ - `interf init` = choose your local executor and optionally attach the current folder
+ - `interf create knowledge-base` = create the shared processed workspace for this folder
+ - `interf create interface` = create a task-specific workspace on top
+ - `interf create workflow` = create a reusable local workflow package
+ - `interf compile` = build the current workspace
+ - `interf benchmark` = compare raw files vs processed workspaces on your evals
+ - `interf doctor` = check local executor setup
+ - `interf verify <check>` = run deterministic checks on major workflow steps
+ - `interf reset <scope>` = remove generated state while keeping source files
 
- ## Design Choices
+ ## More Docs
 
- - filesystem-first, not service-first
- - raw files remain the truth
- - compiled workspaces remain visible on disk
- - workflow packages over hidden orchestration
- - contract-checked stages instead of prompt-only trust
- - benchmarkability as a core product feature
+ - [docs/workflow-spec.md](./docs/workflow-spec.md) for custom workflow packages
+ - [docs/runtime-contract.md](./docs/runtime-contract.md) for the exact on-disk contract
+ - [docs/architecture.md](./docs/architecture.md) for the deeper system model
+ - [docs/eval-loop.md](./docs/eval-loop.md) for the advanced benchmark and experiment loop
 
- Interf is not trying to win by hiding complexity. It is trying to make the method visible, enforceable, and comparable.
+ Maintainers should use [CONTRIBUTING.md](./CONTRIBUTING.md) for test and release gates.
 
  ## License
 
- Code is licensed under Apache 2.0. The `Interf` name and branding are reserved; see [`TRADEMARKS.md`](./TRADEMARKS.md).
+ Code is licensed under Apache 2.0. The `Interf` name and branding are reserved; see [TRADEMARKS.md](./TRADEMARKS.md).
@@ -1,5 +1,6 @@
  import type { CommandModule } from "yargs";
  import type { WorkflowExecutionProfile } from "../lib/executors.js";
+ import type { SourceInterfaceConfig } from "../lib/schema.js";
  export { type WorkflowWizardPrompts, buildKnowledgeBaseWorkflowOptions, buildInterfaceWorkflowOptions, buildStandaloneInterfaceWorkflowOptions, selectWorkflowTargetType, formatWorkflowLabel, CREATE_NEW_WORKFLOW_VALUE, chooseKnowledgeBaseWorkflow, chooseInterfaceWorkflow, createWorkflowWizard, createKnowledgeBaseWorkflowWizard, createInterfaceWorkflowWizard, } from "./create-workflow-wizard.js";
  export declare const createCommand: CommandModule;
  export declare function createKnowledgeBaseWizard(options?: {
@@ -12,5 +13,6 @@ export declare function createInterfaceWizard(options?: {
  knowledgeBasePathOverride?: string;
  knowledgeBaseNameOverride?: string;
  executionProfile?: WorkflowExecutionProfile;
+ suggestedInterface?: Pick<SourceInterfaceConfig, "name" | "about">;
  }): Promise<void>;
  //# sourceMappingURL=create.d.ts.map
@@ -1 +1 @@
- {"version":3,"file":"create.d.ts","sourceRoot":"","sources":["../../src/commands/create.ts"],"names":[],"mappings":"AAiCA,OAAO,KAAK,EAAE,aAAa,EAAE,MAAM,OAAO,CAAC;AAO3C,OAAO,KAAK,EAAE,wBAAwB,EAAE,MAAM,qBAAqB,CAAC;AAOpE,OAAO,EACL,KAAK,qBAAqB,EAC1B,iCAAiC,EACjC,6BAA6B,EAC7B,uCAAuC,EACvC,wBAAwB,EACxB,mBAAmB,EACnB,yBAAyB,EACzB,2BAA2B,EAC3B,uBAAuB,EACvB,oBAAoB,EACpB,iCAAiC,EACjC,6BAA6B,GAC9B,MAAM,6BAA6B,CAAC;AAwErC,eAAO,MAAM,aAAa,EAAE,aA8D3B,CAAC;AAEF,wBAAsB,yBAAyB,CAC7C,OAAO,GAAE;IACP,KAAK,CAAC,EAAE,OAAO,CAAC;IAChB,gBAAgB,CAAC,EAAE,OAAO,CAAC;IAC3B,gBAAgB,CAAC,EAAE,wBAAwB,CAAC;CACxC,iBAwJP;AAED,wBAAsB,qBAAqB,CACzC,OAAO,GAAE;IACP,KAAK,CAAC,EAAE,OAAO,CAAC;IAChB,yBAAyB,CAAC,EAAE,MAAM,CAAC;IACnC,yBAAyB,CAAC,EAAE,MAAM,CAAC;IACnC,gBAAgB,CAAC,EAAE,wBAAwB,CAAC;CACxC,iBAqQP"}
+ {"version":3,"file":"create.d.ts","sourceRoot":"","sources":["../../src/commands/create.ts"],"names":[],"mappings":"AAiCA,OAAO,KAAK,EAAE,aAAa,EAAE,MAAM,OAAO,CAAC;AAO3C,OAAO,KAAK,EAAE,wBAAwB,EAAE,MAAM,qBAAqB,CAAC;AAEpE,OAAO,KAAK,EAAE,qBAAqB,EAAE,MAAM,kBAAkB,CAAC;AAO9D,OAAO,EACL,KAAK,qBAAqB,EAC1B,iCAAiC,EACjC,6BAA6B,EAC7B,uCAAuC,EACvC,wBAAwB,EACxB,mBAAmB,EACnB,yBAAyB,EACzB,2BAA2B,EAC3B,uBAAuB,EACvB,oBAAoB,EACpB,iCAAiC,EACjC,6BAA6B,GAC9B,MAAM,6BAA6B,CAAC;AAwErC,eAAO,MAAM,aAAa,EAAE,aA8D3B,CAAC;AAEF,wBAAsB,yBAAyB,CAC7C,OAAO,GAAE;IACP,KAAK,CAAC,EAAE,OAAO,CAAC;IAChB,gBAAgB,CAAC,EAAE,OAAO,CAAC;IAC3B,gBAAgB,CAAC,EAAE,wBAAwB,CAAC;CACxC,iBA8JP;AAED,wBAAsB,qBAAqB,CACzC,OAAO,GAAE;IACP,KAAK,CAAC,EAAE,OAAO,CAAC;IAChB,yBAAyB,CAAC,EAAE,MAAM,CAAC;IACnC,yBAAyB,CAAC,EAAE,MAAM,CAAC;IACnC,gBAAgB,CAAC,EAAE,wBAAwB,CAAC;IAC5C,kBAAkB,CAAC,EAAE,IAAI,CAAC,qBAAqB,EAAE,MAAM,GAAG,OAAO,CAAC,CAAC;CAC/D,iBAuQP"}