@mindstudio-ai/remy 0.1.146 → 0.1.147
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/dist/automatedActions/buildFromInitialSpec.md +2 -10
- package/dist/automatedActions/buildFromRoadmap.md +2 -1
- package/dist/automatedActions/postBuildPolish.md +18 -0
- package/dist/automatedActions/postRoadmapBuild.md +13 -0
- package/dist/automatedActions/publish.md +2 -0
- package/dist/headless.js +262 -116
- package/dist/index.js +265 -121
- package/dist/prompt/static/authoring.md +1 -1
- package/dist/prompt/static/instructions.md +2 -2
- package/dist/prompt/static/team.md +1 -1
- package/package.json +1 -1
|
package/dist/automatedActions/buildFromInitialSpec.md
CHANGED
|
@@ -1,10 +1,11 @@
|
|
|
1
1
|
---
|
|
2
2
|
trigger: buildFromInitialSpec
|
|
3
|
+
next: postBuildPolish
|
|
3
4
|
---
|
|
4
5
|
|
|
5
6
|
This is an automated action triggered by the user pressing "Build" in the editor after reviewing the spec.
|
|
6
7
|
|
|
7
|
-
The user has reviewed the spec and is ready to build. There are
|
|
8
|
+
The user has reviewed the spec and is ready to build. There are three phases: planning, coding, and verifying. Execute each phase in order in a single turn.
|
|
8
9
|
|
|
9
10
|
## Planning
|
|
10
11
|
Think about your approach and then get a quick sanity check from `codeSanityCheck` to make sure you aren't missing anything.
|
|
@@ -21,12 +22,3 @@ Then, build everything in one turn: tables, methods, interfaces, manifest update
|
|
|
21
22
|
- If the app has a web frontend, check the browser logs to make sure there are no errors rendering it.
|
|
22
23
|
- Use `runAutomatedBrowserTest` to smoke-test the main UI flow. The dev database is a disposable snapshot, so don't worry about being destructive. Fix any errors before finishing.
|
|
23
24
|
- If there is a scenario that seeds the app with mock data, use it to present the app to the user with initial data seeded, so they can see and play with the real app. Let the user know they can reset the app using a scenario to empty it if they wish. Showing the user something they can play with immediately is important when it comes to landing a strong first impression.
|
|
24
|
-
|
|
25
|
-
## Polishing
|
|
26
|
-
When verification is complete, take a step back and do an explicit polish pass. Re-read the spec files and the design expert's guidance, then walk through each frontend file looking for design details that got skipped in the initial build: animations, transitions, hover states, micro-interactions, spring physics, entrance reveals, gesture handling, layout issues, and anything else.
|
|
27
|
-
|
|
28
|
-
The initial build prioritizes getting everything connected and functional, but this pass closes the gap between "it works" and "it feels great." In many ways this is *the* most important part of the initial build, as the user's first experience of the deliverable will set their expectations for every iteration that follows. Don't mess this up.
|
|
29
|
-
|
|
30
|
-
Then, ask the `visualDesignExpert` to take a screenshot and verify that the visual design looks correct. Fix any issues it flags - we want the user's first time seeing the finished product to truly wow them.
|
|
31
|
-
|
|
32
|
-
When everything is working, use `productVision` to mark the MVP roadmap item as done, then call `setProjectOnboardingState({ state: "onboardingFinished" })`. Finally, call `compactConversation` to summarize the build session and free up context for the next phase of work.
|
|
package/dist/automatedActions/buildFromRoadmap.md
CHANGED
|
@@ -1,5 +1,6 @@
|
|
|
1
1
|
---
|
|
2
2
|
trigger: buildFromRoadmap
|
|
3
|
+
next: postRoadmapBuild
|
|
3
4
|
---
|
|
4
5
|
|
|
5
6
|
This is an automated action triggered by the user pressing "Build Now" on the roadmap item {{path}}
|
|
@@ -12,4 +13,4 @@ Then, put together a plan to build out the feature. Write the plan with `writePl
|
|
|
12
13
|
|
|
13
14
|
When they've approved the plan, be sure to update the spec first - remember, the spec is the source of truth about the product. Then, build everything in one turn, using the spec as the master plan.
|
|
14
15
|
|
|
15
|
-
When you're finished, verify your work
|
|
16
|
+
When you're finished building, verify your work and give the user a summary of what was done.
|
|
package/dist/automatedActions/postBuildPolish.md
ADDED
|
@@ -0,0 +1,18 @@
|
|
|
1
|
+
---
|
|
2
|
+
trigger: postBuildPolish
|
|
3
|
+
---
|
|
4
|
+
|
|
5
|
+
This is an automated follow-up after the initial build. The code is written and verified. Now it's time to polish and finalize so we can deliver something beautiful and magical as the user's first experience with our work.
|
|
6
|
+
|
|
7
|
+
## Polishing
|
|
8
|
+
Take a step back and do an explicit polish pass. Re-read the spec files and the design expert's guidance, then walk through each frontend file looking for design details that got skipped in the initial build: layout animations, transitions, hover states, micro-interactions, spring physics, entrance reveals, gesture handling, layout issues, responsiveness, and anything else. We need this to feel truly amazing and wow the user - it's worth it to take the time to get it right.
|
|
9
|
+
|
|
10
|
+
The initial build prioritizes getting everything connected and functional, but this pass closes the gap between "it works" and "it feels great." In many ways this is *the* most important part of the initial build, as the user's first experience of the deliverable will set their expectations for every iteration that follows. Don't mess this up.
|
|
11
|
+
|
|
12
|
+
When you have finished, ask the `visualDesignExpert` to take a screenshot and verify that the visual design looks correct. Fix any issues it flags. We want the user's first time seeing the finished product to truly wow them.
|
|
13
|
+
|
|
14
|
+
## Finalizing
|
|
15
|
+
When everything is working and polished:
|
|
16
|
+
1. Use `productVision` to mark the MVP roadmap item as done.
|
|
17
|
+
2. Call `setProjectOnboardingState({ state: "onboardingFinished" })`.
|
|
18
|
+
3. Call `compactConversation` to summarize the build session and free up context for the next phase of work.
|
|
package/dist/automatedActions/postRoadmapBuild.md
ADDED
|
@@ -0,0 +1,13 @@
|
|
|
1
|
+
---
|
|
2
|
+
trigger: postRoadmapBuild
|
|
3
|
+
---
|
|
4
|
+
|
|
5
|
+
This is an automated follow-up after building a roadmap feature. The code is written and verified. Now it's time to polish and finalize.
|
|
6
|
+
|
|
7
|
+
## Polishing
|
|
8
|
+
Take a step back and do an explicit polish pass. Re-read the spec files and the design expert's guidance, then walk through each frontend file you changed looking for design details that got skipped: animations, transitions, hover states, micro-interactions, and anything else that closes the gap between "it works" and "it feels great."
|
|
9
|
+
|
|
10
|
+
## Finalizing
|
|
11
|
+
When everything is working:
|
|
12
|
+
1. Tell `productVision` what was done so it can update the roadmap to reflect the progress.
|
|
13
|
+
2. Call `compactConversation` to summarize the build session and free up context.
|
|
package/dist/automatedActions/publish.md
CHANGED
|
@@ -14,4 +14,6 @@ If approved:
|
|
|
14
14
|
- Use `mindstudio-prod releases status --wait` to poll the build until it completes. Let the user know it's deploying, then report back when it's live.
|
|
15
15
|
- Once deployed, offer to help with next steps. This includes technical steps like setting up a custom domain (`mindstudio-prod domains`), checking for errors (`mindstudio-prod requests stats`), seeding production data (`mindstudio-prod db`), managing env vars/secrets, or anything else they need for launch. It also includes going above and beyond and helping holistically. If it's the initial deploy, offer to help create collateral to announce the launch (e.g., an image for sharing on social media, text copy for a post, etc.); if it's a meaningful incremental update, an announcement post or something similar - go above and beyond here to help the user see that you care about the product from end-to-end, not just writing code! They will be appreciative, grateful, and pleased with your creativity here. Refer to the design guidance in the spec for how to talk about the product, and consider consulting the design expert to generate images or other marketing collateral.
|
|
16
16
|
|
|
17
|
+
After everything is done, call `compactConversation` to summarize the current session and free up context for the next phase of work.
|
|
18
|
+
|
|
17
19
|
If dismissed, acknowledge and do nothing.
|
package/dist/headless.js
CHANGED
|
@@ -6,7 +6,15 @@ var __export = (target, all) => {
|
|
|
6
6
|
|
|
7
7
|
// src/headless.ts
|
|
8
8
|
import { createInterface } from "readline";
|
|
9
|
-
import {
|
|
9
|
+
import {
|
|
10
|
+
writeFileSync,
|
|
11
|
+
readFileSync,
|
|
12
|
+
unlinkSync,
|
|
13
|
+
mkdirSync,
|
|
14
|
+
existsSync
|
|
15
|
+
} from "fs";
|
|
16
|
+
import { writeFile } from "fs/promises";
|
|
17
|
+
import { basename, join, extname } from "path";
|
|
10
18
|
|
|
11
19
|
// src/logger.ts
|
|
12
20
|
import fs from "fs";
|
|
@@ -139,87 +147,9 @@ function readJsonAsset(fallback, ...segments) {
|
|
|
139
147
|
}
|
|
140
148
|
}
|
|
141
149
|
|
|
142
|
-
// src/tools/_helpers/sidecar.ts
|
|
143
|
-
var log2 = createLogger("sidecar");
|
|
144
|
-
var baseUrl = null;
|
|
145
|
-
function setSidecarBaseUrl(url) {
|
|
146
|
-
baseUrl = url;
|
|
147
|
-
log2.info("Configured", { url });
|
|
148
|
-
}
|
|
149
|
-
function isSidecarConfigured() {
|
|
150
|
-
return baseUrl !== null;
|
|
151
|
-
}
|
|
152
|
-
async function sidecarRequest(endpoint, body = {}, options) {
|
|
153
|
-
if (!baseUrl) {
|
|
154
|
-
throw new Error("Sidecar not available");
|
|
155
|
-
}
|
|
156
|
-
const url = `${baseUrl}${endpoint}`;
|
|
157
|
-
try {
|
|
158
|
-
const res = await fetch(url, {
|
|
159
|
-
method: "POST",
|
|
160
|
-
headers: { "Content-Type": "application/json" },
|
|
161
|
-
body: JSON.stringify(body),
|
|
162
|
-
signal: options?.timeout ? AbortSignal.timeout(options.timeout) : void 0
|
|
163
|
-
});
|
|
164
|
-
if (!res.ok) {
|
|
165
|
-
log2.error("Sidecar error", { endpoint, status: res.status });
|
|
166
|
-
throw new Error(`Sidecar error: ${res.status}`);
|
|
167
|
-
}
|
|
168
|
-
const data = await res.json();
|
|
169
|
-
if (data?.success === false) {
|
|
170
|
-
const code = data.errorCode ? ` [${data.errorCode}]` : "";
|
|
171
|
-
throw new Error(`${data.error || "Unknown error"}${code}`);
|
|
172
|
-
}
|
|
173
|
-
return data;
|
|
174
|
-
} catch (err) {
|
|
175
|
-
if (err.message.startsWith("Sidecar error")) {
|
|
176
|
-
throw err;
|
|
177
|
-
}
|
|
178
|
-
log2.error("Sidecar connection error", { endpoint, error: err.message });
|
|
179
|
-
throw new Error(`Sidecar connection error: ${err.message}`);
|
|
180
|
-
}
|
|
181
|
-
}
|
|
182
|
-
|
|
183
|
-
// src/tools/_helpers/lsp.ts
|
|
184
|
-
var setLspBaseUrl = setSidecarBaseUrl;
|
|
185
|
-
var isLspConfigured = isSidecarConfigured;
|
|
186
|
-
async function lspRequest(endpoint, body) {
|
|
187
|
-
return sidecarRequest(endpoint, body);
|
|
188
|
-
}
|
|
189
|
-
|
|
190
150
|
// src/prompt/static/projectContext.ts
|
|
191
151
|
import fs4 from "fs";
|
|
192
152
|
import path3 from "path";
|
|
193
|
-
var AGENT_INSTRUCTION_FILES = [
|
|
194
|
-
"CLAUDE.md",
|
|
195
|
-
"claude.md",
|
|
196
|
-
".claude/instructions.md",
|
|
197
|
-
"AGENTS.md",
|
|
198
|
-
"agents.md",
|
|
199
|
-
".agents.md",
|
|
200
|
-
"COPILOT.md",
|
|
201
|
-
"copilot.md",
|
|
202
|
-
".copilot-instructions.md",
|
|
203
|
-
".github/copilot-instructions.md",
|
|
204
|
-
"REMY.md",
|
|
205
|
-
"remy.md",
|
|
206
|
-
".cursorrules",
|
|
207
|
-
".cursorules"
|
|
208
|
-
];
|
|
209
|
-
function loadProjectInstructions() {
|
|
210
|
-
for (const file of AGENT_INSTRUCTION_FILES) {
|
|
211
|
-
try {
|
|
212
|
-
const content = fs4.readFileSync(file, "utf-8").trim();
|
|
213
|
-
if (content) {
|
|
214
|
-
return `
|
|
215
|
-
## Project Instructions (${file})
|
|
216
|
-
${content}`;
|
|
217
|
-
}
|
|
218
|
-
} catch {
|
|
219
|
-
}
|
|
220
|
-
}
|
|
221
|
-
return "";
|
|
222
|
-
}
|
|
223
153
|
function loadProjectManifest() {
|
|
224
154
|
try {
|
|
225
155
|
const manifest = fs4.readFileSync("mindstudio.json", "utf-8");
|
|
@@ -346,7 +276,6 @@ function resolveIncludes(template) {
|
|
|
346
276
|
}
|
|
347
277
|
function buildSystemPrompt(onboardingState, viewContext) {
|
|
348
278
|
const projectContext = [
|
|
349
|
-
loadProjectInstructions(),
|
|
350
279
|
loadProjectManifest(),
|
|
351
280
|
loadSpecFileMetadata(),
|
|
352
281
|
loadProjectFileListing()
|
|
@@ -421,29 +350,26 @@ Current date: ${now}
|
|
|
421
350
|
{{compiled/msfm.md}}
|
|
422
351
|
</mindstudio_flavored_markdown_spec_docs>
|
|
423
352
|
|
|
424
|
-
<project_context>
|
|
425
|
-
${projectContext}
|
|
426
|
-
</project_context>
|
|
427
|
-
|
|
428
353
|
<intake_mode_instructions>
|
|
429
|
-
{{static/intake.md}}
|
|
354
|
+
{{static/intake.md}}
|
|
430
355
|
</intake_mode_instructions>
|
|
431
356
|
|
|
432
357
|
<spec_authoring_instructions>
|
|
433
|
-
{{static/authoring.md}}
|
|
358
|
+
{{static/authoring.md}}
|
|
434
359
|
</spec_authoring_instructions>
|
|
435
360
|
|
|
436
|
-
|
|
361
|
+
<team>
|
|
362
|
+
{{static/team.md}}
|
|
363
|
+
</team>
|
|
437
364
|
|
|
438
365
|
<code_authoring_instructions>
|
|
439
366
|
{{static/coding.md}}
|
|
440
|
-
|
|
367
|
+
|
|
368
|
+
<typescript_lsp>
|
|
441
369
|
{{static/lsp.md}}
|
|
442
|
-
</typescript_lsp
|
|
370
|
+
</typescript_lsp>
|
|
443
371
|
</code_authoring_instructions>
|
|
444
372
|
|
|
445
|
-
{{static/instructions.md}}
|
|
446
|
-
${loadPlanStatus()}
|
|
447
373
|
<conversation_summaries>
|
|
448
374
|
Your conversation history may include <prior_conversation_summary> blocks in the user's messages. These are automated summaries of earlier messages that have been compacted to save context space. The user does not see this summary; they see the full conversation history in their UI. Treat the summary as ground truth for what happened before, but do not reference it directly to the user ("as mentioned in the summary..."). Just continue naturally as if you remember the prior work.
|
|
449
375
|
|
|
@@ -457,30 +383,38 @@ New projects progress through four onboarding states. The user might skip this e
|
|
|
457
383
|
- **initialSpecAuthoring**: Writing and refining the first spec. The user can see it in the editor as it streams in and can give feedback to iterate on it. This phase covers both the initial draft and any back-and-forth refinement before code generation.
|
|
458
384
|
- **initialCodegen**: First code generation from the spec. The agent is generating methods, tables, interfaces, manifest updates, and scenarios. This can take a while and involves heavy tool use. The user sees a full-screen build progress view.
|
|
459
385
|
- **onboardingFinished**: The project is built and ready. Full development mode with all tools available. From here on, keep spec and code in sync as changes are made.
|
|
386
|
+
</project_onboarding>
|
|
387
|
+
|
|
388
|
+
{{static/instructions.md}}
|
|
460
389
|
|
|
461
390
|
<!-- cache_breakpoint -->
|
|
462
391
|
|
|
463
|
-
|
|
392
|
+
<current_project_onboarding_state>
|
|
464
393
|
${onboardingState ?? "onboardingFinished"}
|
|
465
|
-
|
|
466
|
-
|
|
394
|
+
</current_project_onboarding_state>
|
|
395
|
+
|
|
396
|
+
<project_context>
|
|
397
|
+
${projectContext}
|
|
398
|
+
</project_context>
|
|
467
399
|
|
|
468
400
|
<view_context>
|
|
469
401
|
The user is currently in ${viewContext?.mode ?? "code"} mode.
|
|
470
402
|
${viewContext?.activeFile ? `Active file: ${viewContext.activeFile}` : ""}
|
|
471
403
|
</view_context>
|
|
404
|
+
|
|
405
|
+
${loadPlanStatus()}
|
|
472
406
|
`;
|
|
473
407
|
return resolveIncludes(template);
|
|
474
408
|
}
|
|
475
409
|
|
|
476
410
|
// src/api.ts
|
|
477
|
-
var
|
|
411
|
+
var log2 = createLogger("api");
|
|
478
412
|
async function* streamChat(params) {
|
|
479
413
|
const { baseUrl: baseUrl2, apiKey, signal, requestId, ...body } = params;
|
|
480
414
|
const url = `${baseUrl2}/_internal/v2/agent/remy/chat`;
|
|
481
415
|
const startTime = Date.now();
|
|
482
416
|
const subAgentId = body.subAgentId;
|
|
483
|
-
|
|
417
|
+
log2.info("API request", {
|
|
484
418
|
requestId,
|
|
485
419
|
...subAgentId && { subAgentId },
|
|
486
420
|
model: body.model,
|
|
@@ -500,13 +434,13 @@ async function* streamChat(params) {
|
|
|
500
434
|
});
|
|
501
435
|
} catch (err) {
|
|
502
436
|
if (signal?.aborted) {
|
|
503
|
-
|
|
437
|
+
log2.warn("Request aborted", {
|
|
504
438
|
requestId,
|
|
505
439
|
...subAgentId && { subAgentId }
|
|
506
440
|
});
|
|
507
441
|
throw err;
|
|
508
442
|
}
|
|
509
|
-
|
|
443
|
+
log2.error("Network error", {
|
|
510
444
|
requestId,
|
|
511
445
|
...subAgentId && { subAgentId },
|
|
512
446
|
error: err.message
|
|
@@ -515,7 +449,7 @@ async function* streamChat(params) {
|
|
|
515
449
|
return;
|
|
516
450
|
}
|
|
517
451
|
const ttfb = Date.now() - startTime;
|
|
518
|
-
|
|
452
|
+
log2.info("API response", {
|
|
519
453
|
requestId,
|
|
520
454
|
...subAgentId && { subAgentId },
|
|
521
455
|
status: res.status,
|
|
@@ -533,7 +467,7 @@ async function* streamChat(params) {
|
|
|
533
467
|
}
|
|
534
468
|
} catch {
|
|
535
469
|
}
|
|
536
|
-
|
|
470
|
+
log2.error("API error", {
|
|
537
471
|
requestId,
|
|
538
472
|
...subAgentId && { subAgentId },
|
|
539
473
|
status: res.status,
|
|
@@ -546,6 +480,7 @@ async function* streamChat(params) {
|
|
|
546
480
|
const reader = res.body.getReader();
|
|
547
481
|
const decoder = new TextDecoder();
|
|
548
482
|
let buffer = "";
|
|
483
|
+
let receivedDone = false;
|
|
549
484
|
while (true) {
|
|
550
485
|
let stallTimer;
|
|
551
486
|
let readResult;
|
|
@@ -563,7 +498,7 @@ async function* streamChat(params) {
|
|
|
563
498
|
} catch {
|
|
564
499
|
clearTimeout(stallTimer);
|
|
565
500
|
await reader.cancel();
|
|
566
|
-
|
|
501
|
+
log2.error("Stream stalled", {
|
|
567
502
|
requestId,
|
|
568
503
|
...subAgentId && { subAgentId },
|
|
569
504
|
durationMs: Date.now() - startTime
|
|
@@ -589,7 +524,8 @@ async function* streamChat(params) {
|
|
|
589
524
|
const event = JSON.parse(line.slice(6));
|
|
590
525
|
if (event.type === "done") {
|
|
591
526
|
const elapsed = Date.now() - startTime;
|
|
592
|
-
|
|
527
|
+
receivedDone = true;
|
|
528
|
+
log2.info("Stream complete", {
|
|
593
529
|
requestId,
|
|
594
530
|
...subAgentId && { subAgentId },
|
|
595
531
|
durationMs: elapsed,
|
|
@@ -597,12 +533,27 @@ async function* streamChat(params) {
|
|
|
597
533
|
inputTokens: event.usage.inputTokens,
|
|
598
534
|
outputTokens: event.usage.outputTokens
|
|
599
535
|
});
|
|
536
|
+
} else if (event.type === "error") {
|
|
537
|
+
log2.error("SSE error event", {
|
|
538
|
+
requestId,
|
|
539
|
+
...subAgentId && { subAgentId },
|
|
540
|
+
error: event.error,
|
|
541
|
+
durationMs: Date.now() - startTime
|
|
542
|
+
});
|
|
600
543
|
}
|
|
601
544
|
yield event;
|
|
602
545
|
} catch {
|
|
603
546
|
}
|
|
604
547
|
}
|
|
605
548
|
}
|
|
549
|
+
if (!receivedDone) {
|
|
550
|
+
log2.warn("Stream ended without done event", {
|
|
551
|
+
requestId,
|
|
552
|
+
...subAgentId && { subAgentId },
|
|
553
|
+
durationMs: Date.now() - startTime,
|
|
554
|
+
remainingBuffer: buffer.slice(0, 200)
|
|
555
|
+
});
|
|
556
|
+
}
|
|
606
557
|
if (buffer.startsWith("data: ")) {
|
|
607
558
|
try {
|
|
608
559
|
yield JSON.parse(buffer.slice(6));
|
|
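The hunk above threads a `receivedDone` flag through the SSE read loop so a stream that closes without a terminal event can be logged. A minimal standalone sketch of the same buffered line-splitting pattern (function and event shapes here are illustrative, not the package's API):

```javascript
// Parse SSE-style chunks into events, tracking whether a "done" event arrived.
// Mirrors the buffered line-splitting in streamChat; names are illustrative.
function parseSseChunks(chunks) {
  const events = [];
  let buffer = "";
  let receivedDone = false;
  for (const chunk of chunks) {
    buffer += chunk;
    const lines = buffer.split("\n");
    buffer = lines.pop(); // keep the trailing partial line for the next chunk
    for (const line of lines) {
      if (!line.startsWith("data: ")) continue;
      try {
        const event = JSON.parse(line.slice(6));
        if (event.type === "done") receivedDone = true;
        events.push(event);
      } catch {
        // ignore malformed frames, as the original loop does
      }
    }
  }
  // Flush a final complete frame left in the buffer (stream ended without newline).
  if (buffer.startsWith("data: ")) {
    try { events.push(JSON.parse(buffer.slice(6))); } catch {}
  }
  return { events, receivedDone };
}
```

A frame split across chunk boundaries is reassembled by the carried-over buffer, which is why the `remainingBuffer` field in the new warning log is useful for diagnosing truncated streams.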
@@ -639,7 +590,7 @@ async function* streamChatWithRetry(params, options) {
|
|
|
639
590
|
return;
|
|
640
591
|
}
|
|
641
592
|
const backoff = INITIAL_BACKOFF_MS * 2 ** attempt;
|
|
642
|
-
|
|
593
|
+
log2.warn("Retrying", {
|
|
643
594
|
requestId: params.requestId,
|
|
644
595
|
attempt: attempt + 1,
|
|
645
596
|
maxRetries: MAX_RETRIES,
|
|
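The retry hunk above computes `INITIAL_BACKOFF_MS * 2 ** attempt` before logging the retry. As a standalone sketch of that schedule (the constant values here are assumptions, not the package's actual configuration):

```javascript
// Exponential backoff schedule matching the expression in streamChatWithRetry.
// INITIAL_BACKOFF_MS and MAX_RETRIES values are illustrative, not the package's.
const INITIAL_BACKOFF_MS = 1000;
const MAX_RETRIES = 3;

function backoffSchedule() {
  const delays = [];
  for (let attempt = 0; attempt < MAX_RETRIES; attempt++) {
    delays.push(INITIAL_BACKOFF_MS * 2 ** attempt); // doubles each attempt
  }
  return delays;
}
```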
@@ -681,7 +632,7 @@ async function generateBackgroundAck(params) {
|
|
|
681
632
|
}
|
|
682
633
|
|
|
683
634
|
// src/compaction/index.ts
|
|
684
|
-
var
|
|
635
|
+
var log3 = createLogger("compaction");
|
|
685
636
|
var CONVERSATION_SUMMARY_PROMPT = readAsset("compaction", "conversation.md");
|
|
686
637
|
var SUBAGENT_SUMMARY_PROMPT = readAsset("compaction", "subagent.md");
|
|
687
638
|
var SUMMARIZABLE_SUBAGENTS = ["visualDesignExpert", "productVision"];
|
|
@@ -745,7 +696,7 @@ async function compactConversation(messages, apiConfig, system, tools2) {
|
|
|
745
696
|
}
|
|
746
697
|
]
|
|
747
698
|
}));
|
|
748
|
-
|
|
699
|
+
log3.info("Compaction complete", { summaries: summaries.length });
|
|
749
700
|
return checkpointMessages;
|
|
750
701
|
}
|
|
751
702
|
function findSafeInsertionPoint(messages) {
|
|
@@ -849,7 +800,7 @@ async function generateSummary(apiConfig, name, compactionPrompt, messagesToSumm
|
|
|
849
800
|
if (!serialized.trim()) {
|
|
850
801
|
return null;
|
|
851
802
|
}
|
|
852
|
-
|
|
803
|
+
log3.info("Generating summary", {
|
|
853
804
|
name,
|
|
854
805
|
messageCount: messagesToSummarize.length,
|
|
855
806
|
cacheReuse: !!mainSystem
|
|
@@ -875,15 +826,15 @@ ${serialized}` : serialized;
|
|
|
875
826
|
if (event.type === "text") {
|
|
876
827
|
summaryText += event.text;
|
|
877
828
|
} else if (event.type === "error") {
|
|
878
|
-
|
|
829
|
+
log3.error("Summary generation failed", { name, error: event.error });
|
|
879
830
|
return null;
|
|
880
831
|
}
|
|
881
832
|
}
|
|
882
833
|
if (!summaryText.trim()) {
|
|
883
|
-
|
|
834
|
+
log3.warn("Empty summary generated", { name });
|
|
884
835
|
return null;
|
|
885
836
|
}
|
|
886
|
-
|
|
837
|
+
log3.info("Summary generated", { name, summaryLength: summaryText.length });
|
|
887
838
|
return summaryText.trim();
|
|
888
839
|
}
|
|
889
840
|
|
|
@@ -2439,6 +2390,50 @@ var editsFinishedTool = {
|
|
|
2439
2390
|
}
|
|
2440
2391
|
};
|
|
2441
2392
|
|
|
2393
|
+
// src/tools/_helpers/sidecar.ts
|
|
2394
|
+
var log4 = createLogger("sidecar");
|
|
2395
|
+
var baseUrl = null;
|
|
2396
|
+
function setSidecarBaseUrl(url) {
|
|
2397
|
+
baseUrl = url;
|
|
2398
|
+
log4.info("Configured", { url });
|
|
2399
|
+
}
|
|
2400
|
+
async function sidecarRequest(endpoint, body = {}, options) {
|
|
2401
|
+
if (!baseUrl) {
|
|
2402
|
+
throw new Error("Sidecar not available");
|
|
2403
|
+
}
|
|
2404
|
+
const url = `${baseUrl}${endpoint}`;
|
|
2405
|
+
try {
|
|
2406
|
+
const res = await fetch(url, {
|
|
2407
|
+
method: "POST",
|
|
2408
|
+
headers: { "Content-Type": "application/json" },
|
|
2409
|
+
body: JSON.stringify(body),
|
|
2410
|
+
signal: options?.timeout ? AbortSignal.timeout(options.timeout) : void 0
|
|
2411
|
+
});
|
|
2412
|
+
if (!res.ok) {
|
|
2413
|
+
log4.error("Sidecar error", { endpoint, status: res.status });
|
|
2414
|
+
throw new Error(`Sidecar error: ${res.status}`);
|
|
2415
|
+
}
|
|
2416
|
+
const data = await res.json();
|
|
2417
|
+
if (data?.success === false) {
|
|
2418
|
+
const code = data.errorCode ? ` [${data.errorCode}]` : "";
|
|
2419
|
+
throw new Error(`${data.error || "Unknown error"}${code}`);
|
|
2420
|
+
}
|
|
2421
|
+
return data;
|
|
2422
|
+
} catch (err) {
|
|
2423
|
+
if (err.message.startsWith("Sidecar error")) {
|
|
2424
|
+
throw err;
|
|
2425
|
+
}
|
|
2426
|
+
log4.error("Sidecar connection error", { endpoint, error: err.message });
|
|
2427
|
+
throw new Error(`Sidecar connection error: ${err.message}`);
|
|
2428
|
+
}
|
|
2429
|
+
}
|
|
2430
|
+
|
|
2431
|
+
// src/tools/_helpers/lsp.ts
|
|
2432
|
+
var setLspBaseUrl = setSidecarBaseUrl;
|
|
2433
|
+
async function lspRequest(endpoint, body) {
|
|
2434
|
+
return sidecarRequest(endpoint, body);
|
|
2435
|
+
}
|
|
2436
|
+
|
|
2442
2437
|
// src/tools/code/lspDiagnostics.ts
|
|
2443
2438
|
var lspDiagnosticsTool = {
|
|
2444
2439
|
clearable: true,
|
|
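The re-added sidecar helper above treats a `success: false` payload as an error, appending an optional error code to the message. That envelope handling in isolation (the function name is illustrative; the field names match the diff):

```javascript
// Unwrap a sidecar-style response envelope: { success, error?, errorCode? }.
// Throws the same shaped message sidecarRequest builds above.
function unwrapEnvelope(data) {
  if (data?.success === false) {
    const code = data.errorCode ? ` [${data.errorCode}]` : "";
    throw new Error(`${data.error || "Unknown error"}${code}`);
  }
  return data;
}
```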
@@ -6030,13 +6025,24 @@ function resolveAction(text) {
|
|
|
6030
6025
|
}
|
|
6031
6026
|
}
|
|
6032
6027
|
let body = readAsset("automatedActions", `${triggerName}.md`);
|
|
6028
|
+
let next;
|
|
6029
|
+
const fmMatch = body.match(/^---\s*\n([\s\S]*?)\n---/);
|
|
6030
|
+
if (fmMatch) {
|
|
6031
|
+
const nextMatch = fmMatch[1].match(/^\s*next:\s*(\w+)\s*$/m);
|
|
6032
|
+
if (nextMatch) {
|
|
6033
|
+
next = nextMatch[1];
|
|
6034
|
+
}
|
|
6035
|
+
}
|
|
6033
6036
|
body = body.replace(/^---[\s\S]*?---\s*/, "");
|
|
6034
6037
|
for (const [key, value] of Object.entries(params)) {
|
|
6035
6038
|
const str = typeof value === "string" ? value : JSON.stringify(value);
|
|
6036
6039
|
body = body.replaceAll(`{{${key}}}`, str);
|
|
6037
6040
|
}
|
|
6038
|
-
return
|
|
6039
|
-
|
|
6041
|
+
return {
|
|
6042
|
+
message: `@@automated::${triggerName}@@
|
|
6043
|
+
${body}`,
|
|
6044
|
+
next
|
|
6045
|
+
};
|
|
6040
6046
|
}
|
|
6041
6047
|
|
|
6042
6048
|
// src/headless.ts
|
|
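The `resolveAction` hunk above extracts an optional `next:` key from the action file's frontmatter before stripping the frontmatter from the body. The same two regexes in a self-contained form (the wrapper function name is illustrative):

```javascript
// Extract an optional `next:` trigger from frontmatter, then strip the
// frontmatter from the body, using the same regexes resolveAction applies above.
function splitActionFile(raw) {
  let next;
  const fmMatch = raw.match(/^---\s*\n([\s\S]*?)\n---/);
  if (fmMatch) {
    const nextMatch = fmMatch[1].match(/^\s*next:\s*(\w+)\s*$/m);
    if (nextMatch) next = nextMatch[1];
  }
  const body = raw.replace(/^---[\s\S]*?---\s*/, "");
  return { body, next };
}
```

This is what lets `buildFromInitialSpec.md` declare `next: postBuildPolish` and `buildFromRoadmap.md` declare `next: postRoadmapBuild` in the markdown changes earlier in this diff.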
@@ -6098,6 +6104,7 @@ async function startHeadless(opts = {}) {
|
|
|
6098
6104
|
let currentRequestId;
|
|
6099
6105
|
let completedEmitted = false;
|
|
6100
6106
|
let turnStart = 0;
|
|
6107
|
+
let pendingNextAction;
|
|
6101
6108
|
const EXTERNAL_TOOL_TIMEOUT_MS = 3e5;
|
|
6102
6109
|
const pendingTools = /* @__PURE__ */ new Map();
|
|
6103
6110
|
const earlyResults = /* @__PURE__ */ new Map();
|
|
@@ -6248,10 +6255,19 @@ ${xmlParts}
|
|
|
6248
6255
|
applyPendingSummaries();
|
|
6249
6256
|
applyPendingBlockUpdates();
|
|
6250
6257
|
flushBackgroundQueue();
|
|
6258
|
+
if (pendingNextAction) {
|
|
6259
|
+
const next = pendingNextAction;
|
|
6260
|
+
pendingNextAction = void 0;
|
|
6261
|
+
handleMessage(
|
|
6262
|
+
{ action: "message", text: `@@automated::${next}@@` },
|
|
6263
|
+
`chain-${Date.now()}`
|
|
6264
|
+
);
|
|
6265
|
+
}
|
|
6251
6266
|
}, 0);
|
|
6252
6267
|
return;
|
|
6253
6268
|
case "turn_cancelled":
|
|
6254
6269
|
completedEmitted = true;
|
|
6270
|
+
pendingNextAction = void 0;
|
|
6255
6271
|
emit("completed", { success: false, error: "cancelled" }, rid);
|
|
6256
6272
|
return;
|
|
6257
6273
|
// Streaming events — forward with requestId
|
|
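The `turn_completed` hunk above replays a stored `next` trigger as a fresh automated message, and `turn_cancelled` clears it. A minimal sketch of that one-shot hand-off (the `@@automated::…@@` marker format is taken from the diff; the wrapper itself is illustrative):

```javascript
// One-shot action chaining: a completed turn consumes the pending trigger
// exactly once, and a cancelled turn drops it, mirroring pendingNextAction above.
function makeActionChain(send) {
  let pendingNextAction;
  return {
    schedule(next) { pendingNextAction = next; },
    onTurnCompleted() {
      if (!pendingNextAction) return;
      const next = pendingNextAction;
      pendingNextAction = undefined; // clear before sending to avoid re-entry loops
      send({ action: "message", text: `@@automated::${next}@@` });
    },
    onTurnCancelled() { pendingNextAction = undefined; },
  };
}
```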
@@ -6366,6 +6382,120 @@ ${xmlParts}
|
|
|
6366
6382
|
}
|
|
6367
6383
|
}
|
|
6368
6384
|
toolRegistry.onEvent = onEvent;
|
|
6385
|
+
const UPLOADS_DIR = "src/.user-uploads";
|
|
6386
|
+
function filenameFromUrl(url) {
|
|
6387
|
+
try {
|
|
6388
|
+
const pathname = new URL(url).pathname;
|
|
6389
|
+
const name = basename(pathname);
|
|
6390
|
+
return name && name !== "/" ? decodeURIComponent(name) : `upload-${Date.now()}`;
|
|
6391
|
+
} catch {
|
|
6392
|
+
return `upload-${Date.now()}`;
|
|
6393
|
+
}
|
|
6394
|
+
}
|
|
6395
|
+
function resolveUniqueFilename(name) {
|
|
6396
|
+
if (!existsSync(join(UPLOADS_DIR, name))) {
|
|
6397
|
+
return name;
|
|
6398
|
+
}
|
|
6399
|
+
const ext = extname(name);
|
|
6400
|
+
const base = name.slice(0, name.length - ext.length);
|
|
6401
|
+
let counter = 1;
|
|
6402
|
+
while (existsSync(join(UPLOADS_DIR, `${base}-${counter}${ext}`))) {
|
|
6403
|
+
counter++;
|
|
6404
|
+
}
|
|
6405
|
+
return `${base}-${counter}${ext}`;
|
|
6406
|
+
}
|
|
6407
|
+
const IMAGE_EXTENSIONS = /* @__PURE__ */ new Set([
|
|
6408
|
+
".png",
|
|
6409
|
+
".jpg",
|
|
6410
|
+
".jpeg",
|
|
6411
|
+
".gif",
|
|
6412
|
+
".webp",
|
|
6413
|
+
".svg",
|
|
6414
|
+
".bmp",
|
|
6415
|
+
".ico",
|
|
6416
|
+
".tiff",
|
|
6417
|
+
".tif",
|
|
6418
|
+
".avif",
|
|
6419
|
+
".heic",
|
|
6420
|
+
".heif"
|
|
6421
|
+
]);
|
|
6422
|
+
function isImageAttachment(att) {
|
|
6423
|
+
const name = att.filename || filenameFromUrl(att.url);
|
|
6424
|
+
return IMAGE_EXTENSIONS.has(extname(name).toLowerCase());
|
|
6425
|
+
}
|
|
6426
|
+
async function persistAttachments(attachments) {
|
|
6427
|
+
const nonVoice = attachments.filter((a) => !a.isVoice);
|
|
6428
|
+
if (nonVoice.length === 0) {
|
|
6429
|
+
return { documents: [], images: [] };
|
|
6430
|
+
}
|
|
6431
|
+
mkdirSync(UPLOADS_DIR, { recursive: true });
|
|
6432
|
+
const results = await Promise.allSettled(
|
|
6433
|
+
nonVoice.map(async (att) => {
|
|
6434
|
+
const name = resolveUniqueFilename(
|
|
6435
|
+
att.filename || filenameFromUrl(att.url)
|
|
6436
|
+
);
|
|
6437
|
+
const localPath = join(UPLOADS_DIR, name);
|
|
6438
|
+
const res = await fetch(att.url, {
|
|
6439
|
+
signal: AbortSignal.timeout(3e4)
|
|
6440
|
+
});
|
|
6441
|
+
if (!res.ok) {
|
|
6442
|
+
throw new Error(`HTTP ${res.status} downloading ${att.url}`);
|
|
6443
|
+
}
|
|
6444
|
+
const buffer = Buffer.from(await res.arrayBuffer());
|
|
6445
|
+
await writeFile(localPath, buffer);
|
|
6446
|
+
log11.info("Attachment saved", {
|
|
6447
|
+
filename: name,
|
|
6448
|
+
path: localPath,
|
|
6449
|
+
bytes: buffer.length
|
|
6450
|
+
});
|
|
6451
|
+
let extractedTextPath;
|
|
6452
|
+
if (att.extractedTextUrl) {
|
|
6453
|
+
try {
|
|
6454
|
+
const textRes = await fetch(att.extractedTextUrl, {
|
|
6455
|
+
signal: AbortSignal.timeout(3e4)
|
|
6456
|
+
});
|
|
6457
|
+
if (textRes.ok) {
|
|
6458
|
+
extractedTextPath = `${localPath}.txt`;
|
|
6459
|
+
await writeFile(extractedTextPath, await textRes.text(), "utf-8");
|
|
6460
|
+
log11.info("Extracted text saved", { path: extractedTextPath });
|
|
6461
|
+
}
|
|
6462
|
+
} catch {
|
|
6463
|
+
}
|
|
6464
|
+
}
|
|
6465
|
+
return { filename: name, localPath, extractedTextPath };
|
|
6466
|
+
})
|
|
6467
|
+
);
|
|
6468
|
+
const settled = results.map((r, i) => ({
|
|
6469
|
+
result: r.status === "fulfilled" ? r.value : null,
|
|
6470
|
+
isImage: isImageAttachment(nonVoice[i])
|
|
6471
|
+
}));
|
|
6472
|
+
return {
|
|
6473
|
+
documents: settled.filter((s) => !s.isImage).map((s) => s.result),
|
|
6474
|
+
images: settled.filter((s) => s.isImage).map((s) => s.result)
|
|
6475
|
+
};
|
|
6476
|
+
}
|
|
6477
|
+
function buildUploadHeader(results) {
|
|
6478
|
+
const succeeded = results.filter(Boolean);
|
|
6479
|
+
if (succeeded.length === 0) {
|
|
6480
|
+
return "";
|
|
6481
|
+
}
|
|
6482
|
+
if (succeeded.length === 1) {
|
|
6483
|
+
const r = succeeded[0];
|
|
6484
|
+
const parts = [`[Uploaded file: ${r.localPath}`];
|
|
6485
|
+
if (r.extractedTextPath) {
|
|
6486
|
+
parts.push(`extracted text: ${r.extractedTextPath}`);
|
|
6487
|
+
}
|
|
6488
|
+
return parts.join(" \u2014 ") + "]";
|
|
6489
|
+
}
|
|
6490
|
+
const lines = succeeded.map((r) => {
|
|
6491
|
+
if (r.extractedTextPath) {
|
|
6492
|
+
return `- ${r.localPath} (extracted text: ${r.extractedTextPath})`;
|
|
6493
|
+
}
|
|
6494
|
+
return `- ${r.localPath}`;
|
|
6495
|
+
});
|
|
6496
|
+
return `[Uploaded files]
|
|
6497
|
+
${lines.join("\n")}`;
|
|
6498
|
+
}
|
|
6369
6499
|
async function handleMessage(parsed, requestId) {
|
|
6370
6500
|
if (running) {
|
|
6371
6501
|
emit(
|
|
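The attachment-persistence hunk above de-duplicates upload names by appending a counter before the extension. The naming logic on its own, with the `existsSync` directory check abstracted to a plain set so the sketch is self-contained (illustrative, not the package's export):

```javascript
// Pick a unique filename by suffixing -1, -2, ... before the extension,
// as resolveUniqueFilename does above; `taken` stands in for existsSync checks.
function uniqueName(name, taken) {
  if (!taken.has(name)) return name;
  const dot = name.lastIndexOf(".");
  const ext = dot > 0 ? name.slice(dot) : ""; // dot > 0 keeps ".gitignore" whole
  const base = ext ? name.slice(0, dot) : name;
  let counter = 1;
  while (taken.has(`${base}-${counter}${ext}`)) counter++;
  return `${base}-${counter}${ext}`;
}
```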
@@ -6387,12 +6517,26 @@ ${xmlParts}
|
|
|
6387
6517
|
turnStart = Date.now();
|
|
6388
6518
|
const attachments = parsed.attachments;
|
|
6389
6519
|
if (attachments?.length) {
|
|
6390
|
-
|
|
6391
|
-
|
|
6392
|
-
attachments.map((a) => a.url)
|
|
6393
|
-
);
|
|
6520
|
+
log11.info("Message has attachments", {
|
|
6521
|
+
count: attachments.length,
|
|
6522
|
+
urls: attachments.map((a) => a.url)
|
|
6523
|
+
});
|
|
6394
6524
|
}
|
|
6395
6525
|
let userMessage = parsed.text ?? "";
|
|
6526
|
+
if (attachments?.some((a) => !a.isVoice)) {
|
|
6527
|
+
try {
|
|
6528
|
+
const { documents, images } = await persistAttachments(attachments);
|
|
6529
|
+
const all = [...documents, ...images];
|
|
6530
|
+
const header = buildUploadHeader(all);
|
|
6531
|
+
if (header) {
|
|
6532
|
+
userMessage = userMessage ? `${header}
|
|
6533
|
+
|
|
6534
|
+
${userMessage}` : header;
|
|
6535
|
+
}
|
|
6536
|
+
} catch (err) {
|
|
6537
|
+
log11.warn("Attachment persistence failed", { error: err.message });
|
|
6538
|
+
}
|
|
6539
|
+
}
|
|
6396
6540
|
let resolved = null;
|
|
6397
6541
|
try {
|
|
6398
6542
|
resolved = resolveAction(userMessage);
|
|
@@ -6404,8 +6548,10 @@ ${xmlParts}
|
|
|
6404
6548
|
);
|
|
6405
6549
|
return;
|
|
6406
6550
|
}
|
|
6551
|
+
pendingNextAction = void 0;
|
|
6407
6552
|
if (resolved !== null) {
|
|
6408
|
-
userMessage = resolved;
|
|
6553
|
+
userMessage = resolved.message;
|
|
6554
|
+
pendingNextAction = resolved.next;
|
|
6409
6555
|
}
|
|
6410
6556
|
const isHidden = resolved !== null || !!parsed.hidden;
|
|
6411
6557
|
const rawText = parsed.text ?? "";
|
package/dist/index.js
CHANGED
@@ -156,6 +156,7 @@ async function* streamChat(params) {
   const reader = res.body.getReader();
   const decoder = new TextDecoder();
   let buffer = "";
+  let receivedDone = false;
   while (true) {
     let stallTimer;
     let readResult;
@@ -199,6 +200,7 @@ async function* streamChat(params) {
           const event = JSON.parse(line.slice(6));
           if (event.type === "done") {
             const elapsed = Date.now() - startTime;
+            receivedDone = true;
             log.info("Stream complete", {
               requestId,
               ...subAgentId && { subAgentId },
@@ -207,12 +209,27 @@ async function* streamChat(params) {
               inputTokens: event.usage.inputTokens,
               outputTokens: event.usage.outputTokens
             });
+          } else if (event.type === "error") {
+            log.error("SSE error event", {
+              requestId,
+              ...subAgentId && { subAgentId },
+              error: event.error,
+              durationMs: Date.now() - startTime
+            });
           }
           yield event;
         } catch {
         }
       }
     }
   }
+  if (!receivedDone) {
+    log.warn("Stream ended without done event", {
+      requestId,
+      ...subAgentId && { subAgentId },
+      durationMs: Date.now() - startTime,
+      remainingBuffer: buffer.slice(0, 200)
+    });
+  }
   if (buffer.startsWith("data: ")) {
     try {
       yield JSON.parse(buffer.slice(6));
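The `receivedDone` change above reduces to a small pattern: remember whether the terminal `done` event arrived and flag truncation once the stream ends. A minimal sketch of that pattern, with a hypothetical `consumeEvents` helper that is not part of the package:

```javascript
// Consume SSE-style "data: " lines, remember whether a terminal "done"
// event arrived, and report truncation after the loop ends.
function consumeEvents(lines) {
  let receivedDone = false;
  const events = [];
  for (const line of lines) {
    if (!line.startsWith("data: ")) continue;
    try {
      const event = JSON.parse(line.slice(6));
      if (event.type === "done") receivedDone = true;
      events.push(event);
    } catch {
      // Malformed frames are ignored, as in the original loop.
    }
  }
  return { events, truncated: !receivedDone };
}
```

A stream that ends without a `done` frame is reported as truncated, which is what the added `log.warn` branch surfaces.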
@@ -1541,85 +1558,9 @@ var init_compaction = __esm({
   }
 });
 
-// src/tools/_helpers/sidecar.ts
-function setSidecarBaseUrl(url) {
-  baseUrl = url;
-  log3.info("Configured", { url });
-}
-function isSidecarConfigured() {
-  return baseUrl !== null;
-}
-async function sidecarRequest(endpoint, body = {}, options) {
-  if (!baseUrl) {
-    throw new Error("Sidecar not available");
-  }
-  const url = `${baseUrl}${endpoint}`;
-  try {
-    const res = await fetch(url, {
-      method: "POST",
-      headers: { "Content-Type": "application/json" },
-      body: JSON.stringify(body),
-      signal: options?.timeout ? AbortSignal.timeout(options.timeout) : void 0
-    });
-    if (!res.ok) {
-      log3.error("Sidecar error", { endpoint, status: res.status });
-      throw new Error(`Sidecar error: ${res.status}`);
-    }
-    const data = await res.json();
-    if (data?.success === false) {
-      const code = data.errorCode ? ` [${data.errorCode}]` : "";
-      throw new Error(`${data.error || "Unknown error"}${code}`);
-    }
-    return data;
-  } catch (err) {
-    if (err.message.startsWith("Sidecar error")) {
-      throw err;
-    }
-    log3.error("Sidecar connection error", { endpoint, error: err.message });
-    throw new Error(`Sidecar connection error: ${err.message}`);
-  }
-}
-var log3, baseUrl;
-var init_sidecar = __esm({
-  "src/tools/_helpers/sidecar.ts"() {
-    "use strict";
-    init_logger();
-    log3 = createLogger("sidecar");
-    baseUrl = null;
-  }
-});
-
-// src/tools/_helpers/lsp.ts
-async function lspRequest(endpoint, body) {
-  return sidecarRequest(endpoint, body);
-}
-var setLspBaseUrl, isLspConfigured;
-var init_lsp = __esm({
-  "src/tools/_helpers/lsp.ts"() {
-    "use strict";
-    init_sidecar();
-    setLspBaseUrl = setSidecarBaseUrl;
-    isLspConfigured = isSidecarConfigured;
-  }
-});
-
 // src/prompt/static/projectContext.ts
 import fs9 from "fs";
 import path4 from "path";
-function loadProjectInstructions() {
-  for (const file of AGENT_INSTRUCTION_FILES) {
-    try {
-      const content = fs9.readFileSync(file, "utf-8").trim();
-      if (content) {
-        return `
-## Project Instructions (${file})
-${content}`;
-      }
-    } catch {
-    }
-  }
-  return "";
-}
 function loadProjectManifest() {
   try {
     const manifest = fs9.readFileSync("mindstudio.json", "utf-8");
@@ -1735,26 +1676,9 @@ ${listing}
     return "";
   }
 }
-var AGENT_INSTRUCTION_FILES;
 var init_projectContext = __esm({
   "src/prompt/static/projectContext.ts"() {
     "use strict";
-    AGENT_INSTRUCTION_FILES = [
-      "CLAUDE.md",
-      "claude.md",
-      ".claude/instructions.md",
-      "AGENTS.md",
-      "agents.md",
-      ".agents.md",
-      "COPILOT.md",
-      "copilot.md",
-      ".copilot-instructions.md",
-      ".github/copilot-instructions.md",
-      "REMY.md",
-      "remy.md",
-      ".cursorrules",
-      ".cursorules"
-    ];
   }
 });
 
@@ -1768,7 +1692,6 @@ function resolveIncludes(template) {
 }
 function buildSystemPrompt(onboardingState, viewContext) {
   const projectContext = [
-    loadProjectInstructions(),
     loadProjectManifest(),
     loadSpecFileMetadata(),
     loadProjectFileListing()
@@ -1843,29 +1766,26 @@ Current date: ${now}
 {{compiled/msfm.md}}
 </mindstudio_flavored_markdown_spec_docs>
 
-<project_context>
-${projectContext}
-</project_context>
-
 <intake_mode_instructions>
-{{static/intake.md}}
+{{static/intake.md}}
 </intake_mode_instructions>
 
 <spec_authoring_instructions>
-{{static/authoring.md}}
+{{static/authoring.md}}
 </spec_authoring_instructions>
 
-
+<team>
+{{static/team.md}}
+</team>
 
 <code_authoring_instructions>
 {{static/coding.md}}
-
+
+<typescript_lsp>
 {{static/lsp.md}}
-</typescript_lsp
+</typescript_lsp>
 </code_authoring_instructions>
 
-{{static/instructions.md}}
-${loadPlanStatus()}
 <conversation_summaries>
 Your conversation history may include <prior_conversation_summary> blocks in the user's messages. These are automated summaries of earlier messages that have been compacted to save context space. The user does not see this summary, they see the full conversation history in their UI. Treat the summary as ground truth for what happened before, but do not reference it directly to the user ("as mentioned in the summary..."). Just continue naturally as if you remember the prior work.
 
@@ -1879,18 +1799,26 @@ New projects progress through four onboarding states. The user might skip this e
 - **initialSpecAuthoring**: Writing and refining the first spec. The user can see it in the editor as it streams in and can give feedback to iterate on it. This phase covers both the initial draft and any back-and-forth refinement before code generation.
 - **initialCodegen**: First code generation from the spec. The agent is generating methods, tables, interfaces, manifest updates, and scenarios. This can take a while and involves heavy tool use. The user sees a full-screen build progress view.
 - **onboardingFinished**: The project is built and ready. Full development mode with all tools available. From here on, keep spec and code in sync as changes are made.
+</project_onboarding>
+
+{{static/instructions.md}}
 
 <!-- cache_breakpoint -->
 
-
+<current_project_onboarding_state>
 ${onboardingState ?? "onboardingFinished"}
-
-
+</current_project_onboarding_state>
+
+<project_context>
+${projectContext}
+</project_context>
 
 <view_context>
 The user is currently in ${viewContext?.mode ?? "code"} mode.
 ${viewContext?.activeFile ? `Active file: ${viewContext.activeFile}` : ""}
 </view_context>
+
+${loadPlanStatus()}
 `;
   return resolveIncludes(template);
 }
@@ -1898,7 +1826,6 @@ var init_prompt = __esm({
   "src/prompt/index.ts"() {
     "use strict";
     init_assets();
-    init_lsp();
     init_projectContext();
   }
 });
@@ -1914,15 +1841,15 @@ function triggerCompaction(state, apiConfig, callbacks) {
   compactConversation(state.messages, apiConfig, system, tools2).then((summaries) => {
     pendingSummaries.push(...summaries);
     callbacks?.onSummariesReady?.();
-
+    log3.info("Compaction complete");
   }).catch((err) => {
     callbacks?.onError?.(err.message || "Compaction failed");
-
+    log3.error("Compaction failed", { error: err.message });
   }).finally(() => {
     callbacks?.onFinally?.();
   });
 }
-var
+var log3, pendingSummaries;
 var init_trigger = __esm({
   "src/compaction/trigger.ts"() {
     "use strict";
@@ -1930,7 +1857,7 @@ var init_trigger = __esm({
     init_prompt();
     init_tools6();
     init_logger();
-
+    log3 = createLogger("compaction:trigger");
     pendingSummaries = [];
   }
 });
@@ -2672,6 +2599,64 @@ var init_editsFinished = __esm({
   }
 });
 
+// src/tools/_helpers/sidecar.ts
+function setSidecarBaseUrl(url) {
+  baseUrl = url;
+  log4.info("Configured", { url });
+}
+async function sidecarRequest(endpoint, body = {}, options) {
+  if (!baseUrl) {
+    throw new Error("Sidecar not available");
+  }
+  const url = `${baseUrl}${endpoint}`;
+  try {
+    const res = await fetch(url, {
+      method: "POST",
+      headers: { "Content-Type": "application/json" },
+      body: JSON.stringify(body),
+      signal: options?.timeout ? AbortSignal.timeout(options.timeout) : void 0
+    });
+    if (!res.ok) {
+      log4.error("Sidecar error", { endpoint, status: res.status });
+      throw new Error(`Sidecar error: ${res.status}`);
+    }
+    const data = await res.json();
+    if (data?.success === false) {
+      const code = data.errorCode ? ` [${data.errorCode}]` : "";
+      throw new Error(`${data.error || "Unknown error"}${code}`);
+    }
+    return data;
+  } catch (err) {
+    if (err.message.startsWith("Sidecar error")) {
+      throw err;
+    }
+    log4.error("Sidecar connection error", { endpoint, error: err.message });
+    throw new Error(`Sidecar connection error: ${err.message}`);
+  }
+}
+var log4, baseUrl;
+var init_sidecar = __esm({
+  "src/tools/_helpers/sidecar.ts"() {
+    "use strict";
+    init_logger();
+    log4 = createLogger("sidecar");
+    baseUrl = null;
+  }
+});
+
+// src/tools/_helpers/lsp.ts
+async function lspRequest(endpoint, body) {
+  return sidecarRequest(endpoint, body);
+}
+var setLspBaseUrl;
+var init_lsp = __esm({
+  "src/tools/_helpers/lsp.ts"() {
+    "use strict";
+    init_sidecar();
+    setLspBaseUrl = setSidecarBaseUrl;
+  }
+});
+
 // src/tools/code/lspDiagnostics.ts
 var lspDiagnosticsTool;
 var init_lspDiagnostics = __esm({
@@ -6701,13 +6686,24 @@ function resolveAction(text) {
     }
   }
   let body = readAsset("automatedActions", `${triggerName}.md`);
+  let next;
+  const fmMatch = body.match(/^---\s*\n([\s\S]*?)\n---/);
+  if (fmMatch) {
+    const nextMatch = fmMatch[1].match(/^\s*next:\s*(\w+)\s*$/m);
+    if (nextMatch) {
+      next = nextMatch[1];
+    }
+  }
   body = body.replace(/^---[\s\S]*?---\s*/, "");
   for (const [key, value] of Object.entries(params)) {
     const str = typeof value === "string" ? value : JSON.stringify(value);
     body = body.replaceAll(`{{${key}}}`, str);
   }
-  return
-
+  return {
+    message: `@@automated::${triggerName}@@
+${body}`,
+    next
+  };
 }
 var NON_ACTION_SENTINELS;
 var init_resolve = __esm({
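The frontmatter handling added in this hunk is easy to read in isolation. This sketch reuses the same two regexes in a hypothetical standalone helper (the helper name is not part of the package):

```javascript
// Extract the `next:` trigger name from a markdown file's YAML-style
// frontmatter block, using the same regexes as the hunk above.
function parseNextTrigger(markdown) {
  const fmMatch = markdown.match(/^---\s*\n([\s\S]*?)\n---/);
  if (!fmMatch) return undefined;
  const nextMatch = fmMatch[1].match(/^\s*next:\s*(\w+)\s*$/m);
  return nextMatch ? nextMatch[1] : undefined;
}
```

This is what lets `buildFromInitialSpec.md` (whose frontmatter now carries `next: postBuildPolish`) chain into the new `postBuildPolish` action.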
@@ -6724,7 +6720,15 @@ __export(headless_exports, {
   startHeadless: () => startHeadless
 });
 import { createInterface } from "readline";
-import {
+import {
+  writeFileSync,
+  readFileSync,
+  unlinkSync,
+  mkdirSync,
+  existsSync
+} from "fs";
+import { writeFile } from "fs/promises";
+import { basename, join, extname } from "path";
 function emit(event, data, requestId) {
   const payload = { event, ...data };
   if (requestId) {
@@ -6782,6 +6786,7 @@ async function startHeadless(opts = {}) {
   let currentRequestId;
   let completedEmitted = false;
   let turnStart = 0;
+  let pendingNextAction;
   const EXTERNAL_TOOL_TIMEOUT_MS = 3e5;
   const pendingTools = /* @__PURE__ */ new Map();
   const earlyResults = /* @__PURE__ */ new Map();
@@ -6932,10 +6937,19 @@
         applyPendingSummaries();
         applyPendingBlockUpdates();
         flushBackgroundQueue();
+        if (pendingNextAction) {
+          const next = pendingNextAction;
+          pendingNextAction = void 0;
+          handleMessage(
+            { action: "message", text: `@@automated::${next}@@` },
+            `chain-${Date.now()}`
+          );
+        }
       }, 0);
       return;
     case "turn_cancelled":
       completedEmitted = true;
+      pendingNextAction = void 0;
      emit("completed", { success: false, error: "cancelled" }, rid);
      return;
    // Streaming events — forward with requestId
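The chaining added across these two cases follows a small state machine: record a follow-up trigger, dispatch it as a synthetic automated message when the turn completes, and drop it when the turn is cancelled. A sketch with a hypothetical harness (the names and shape are assumptions, not the package's API):

```javascript
// Record a pending follow-up trigger; fire it on completion, drop it
// on cancellation, and clear it before dispatch so chains cannot loop.
function makeChainRunner(handleMessage) {
  let pendingNextAction;
  return {
    setNext(trigger) {
      pendingNextAction = trigger;
    },
    onTurnCompleted() {
      if (!pendingNextAction) return;
      const next = pendingNextAction;
      pendingNextAction = undefined; // cleared before dispatch
      handleMessage({ action: "message", text: `@@automated::${next}@@` });
    },
    onTurnCancelled() {
      pendingNextAction = undefined; // a cancelled turn drops its follow-up
    }
  };
}
```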
@@ -7050,6 +7064,120 @@
   }
 }
 toolRegistry.onEvent = onEvent;
+const UPLOADS_DIR = "src/.user-uploads";
+function filenameFromUrl(url) {
+  try {
+    const pathname = new URL(url).pathname;
+    const name = basename(pathname);
+    return name && name !== "/" ? decodeURIComponent(name) : `upload-${Date.now()}`;
+  } catch {
+    return `upload-${Date.now()}`;
+  }
+}
+function resolveUniqueFilename(name) {
+  if (!existsSync(join(UPLOADS_DIR, name))) {
+    return name;
+  }
+  const ext = extname(name);
+  const base = name.slice(0, name.length - ext.length);
+  let counter = 1;
+  while (existsSync(join(UPLOADS_DIR, `${base}-${counter}${ext}`))) {
+    counter++;
+  }
+  return `${base}-${counter}${ext}`;
+}
+const IMAGE_EXTENSIONS = /* @__PURE__ */ new Set([
+  ".png",
+  ".jpg",
+  ".jpeg",
+  ".gif",
+  ".webp",
+  ".svg",
+  ".bmp",
+  ".ico",
+  ".tiff",
+  ".tif",
+  ".avif",
+  ".heic",
+  ".heif"
+]);
+function isImageAttachment(att) {
+  const name = att.filename || filenameFromUrl(att.url);
+  return IMAGE_EXTENSIONS.has(extname(name).toLowerCase());
+}
+async function persistAttachments(attachments) {
+  const nonVoice = attachments.filter((a) => !a.isVoice);
+  if (nonVoice.length === 0) {
+    return { documents: [], images: [] };
+  }
+  mkdirSync(UPLOADS_DIR, { recursive: true });
+  const results = await Promise.allSettled(
+    nonVoice.map(async (att) => {
+      const name = resolveUniqueFilename(
+        att.filename || filenameFromUrl(att.url)
+      );
+      const localPath = join(UPLOADS_DIR, name);
+      const res = await fetch(att.url, {
+        signal: AbortSignal.timeout(3e4)
+      });
+      if (!res.ok) {
+        throw new Error(`HTTP ${res.status} downloading ${att.url}`);
+      }
+      const buffer = Buffer.from(await res.arrayBuffer());
+      await writeFile(localPath, buffer);
+      log11.info("Attachment saved", {
+        filename: name,
+        path: localPath,
+        bytes: buffer.length
+      });
+      let extractedTextPath;
+      if (att.extractedTextUrl) {
+        try {
+          const textRes = await fetch(att.extractedTextUrl, {
+            signal: AbortSignal.timeout(3e4)
+          });
+          if (textRes.ok) {
+            extractedTextPath = `${localPath}.txt`;
+            await writeFile(extractedTextPath, await textRes.text(), "utf-8");
+            log11.info("Extracted text saved", { path: extractedTextPath });
+          }
+        } catch {
+        }
+      }
+      return { filename: name, localPath, extractedTextPath };
+    })
+  );
+  const settled = results.map((r, i) => ({
+    result: r.status === "fulfilled" ? r.value : null,
+    isImage: isImageAttachment(nonVoice[i])
+  }));
+  return {
+    documents: settled.filter((s) => !s.isImage).map((s) => s.result),
+    images: settled.filter((s) => s.isImage).map((s) => s.result)
+  };
+}
+function buildUploadHeader(results) {
+  const succeeded = results.filter(Boolean);
+  if (succeeded.length === 0) {
+    return "";
+  }
+  if (succeeded.length === 1) {
+    const r = succeeded[0];
+    const parts = [`[Uploaded file: ${r.localPath}`];
+    if (r.extractedTextPath) {
+      parts.push(`extracted text: ${r.extractedTextPath}`);
+    }
+    return parts.join(" \u2014 ") + "]";
+  }
+  const lines = succeeded.map((r) => {
+    if (r.extractedTextPath) {
+      return `- ${r.localPath} (extracted text: ${r.extractedTextPath})`;
+    }
+    return `- ${r.localPath}`;
+  });
+  return `[Uploaded files]
+${lines.join("\n")}`;
+}
 async function handleMessage(parsed, requestId) {
   if (running) {
     emit(
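The collision handling in `resolveUniqueFilename` above is easiest to exercise with the disk check swapped for an in-memory set, an assumption made here purely for testability (the package checks `existsSync` against `src/.user-uploads/`):

```javascript
// In-memory variant of the suffixing logic from the hunk above:
// "report.pdf" -> "report-1.pdf" -> "report-2.pdf" until a free name is found.
function resolveUniqueFilename(name, taken) {
  if (!taken.has(name)) return name;
  const dot = name.lastIndexOf(".");
  const ext = dot > 0 ? name.slice(dot) : ""; // rough stand-in for path.extname
  const base = name.slice(0, name.length - ext.length);
  let counter = 1;
  while (taken.has(`${base}-${counter}${ext}`)) counter++;
  return `${base}-${counter}${ext}`;
}
```

The counter goes before the extension, so repeated uploads of the same document keep a recognizable, openable filename.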
@@ -7071,12 +7199,26 @@ ${xmlParts}
   turnStart = Date.now();
   const attachments = parsed.attachments;
   if (attachments?.length) {
-    log11.info(
-      "Message has attachments",
-      attachments.map((a) => a.url)
-    );
+    log11.info("Message has attachments", {
+      count: attachments.length,
+      urls: attachments.map((a) => a.url)
+    });
   }
   let userMessage = parsed.text ?? "";
+  if (attachments?.some((a) => !a.isVoice)) {
+    try {
+      const { documents, images } = await persistAttachments(attachments);
+      const all = [...documents, ...images];
+      const header = buildUploadHeader(all);
+      if (header) {
+        userMessage = userMessage ? `${header}
+
+${userMessage}` : header;
+      }
+    } catch (err) {
+      log11.warn("Attachment persistence failed", { error: err.message });
+    }
+  }
   let resolved = null;
   try {
     resolved = resolveAction(userMessage);
@@ -7088,8 +7230,10 @@ ${xmlParts}
     );
     return;
   }
+  pendingNextAction = void 0;
   if (resolved !== null) {
-    userMessage = resolved;
+    userMessage = resolved.message;
+    pendingNextAction = resolved.next;
   }
   const isHidden = resolved !== null || !!parsed.hidden;
   const rawText = parsed.text ?? "";
@@ -35,7 +35,7 @@ box-shadow: 0 8px 32px rgba(0,0,0,0.3) for floating depth
 ~~~
 ```
 
-When you have image URLs (from the design expert), embed them directly in the spec using markdown image syntax. Write descriptive alt text that captures what the image actually depicts (this helps accessibility and helps the coding agent understand the image without loading it). Use the surrounding prose to explain the design intent — what the image is for, how it should be used in the layout, and why it was chosen.
+When you have image URLs (from the design expert), embed them directly in the spec using markdown image syntax. Write descriptive alt text that captures what the image actually depicts (this helps accessibility and helps the coding agent understand the image without loading it). Use the surrounding prose to explain the design intent — what the image is for, how it should be used in the layout, and why it was chosen. User-uploaded files (images, documents, reference materials) are saved to `src/.user-uploads/` and can be referenced from specs using their disk path.
 
 When the design expert provides wireframes, include them directly in the spec for future reference.
 
@@ -28,7 +28,7 @@ The user can already see your tool calls, so most of your work is visible withou
 Skip the rest: narrating what you're about to do, restating what the user asked, explaining tool calls they can already see.
 
 ### User attachments
-
+When a user uploads a file (PDF, Word doc, image, etc.), it is automatically saved to `src/.user-uploads/` in the project directory. The message includes the file path and, for documents with extractable text, a `.txt` sidecar with the extracted content. Use `readFile` on the sidecar to access document contents. The raw binary is also on disk at the indicated path. Uploaded images can be referenced in specs and code by their disk path (e.g., ``). These files persist across the conversation — they survive compaction and session restarts. Do not ask the user to re-upload a document that has already been saved. Voice messages are not saved to disk — their transcripts appear inline in the message.
 
 ### Automated messages
 You will occasionally receive automated messages prefixed with `@@automated_message@@` - these are triggered by things like background agents returning their work, or by the user clicking a button in the UI (e.g., the user might click a "Build Feature" button in the product roadmap UI, and you will receive a message detailing what they want to build). You will be able to see these messages in your chat history but the user will not see them, so acknowledge them appropriately and then perform the requested work.
@@ -39,4 +39,4 @@ You will occasionally receive automated messages prefixed with `@@automated_mess
 - Keep language accessible. Describe what the app *does*, not how it's implemented, unless the user demonstrates technical fluency.
 - Always use full paths relative to the project root when mentioning files (`dist/interfaces/web/src/App.tsx`, not `App.tsx`). Paths will be rendered as clickable links for the user.
 - Use inline `code` formatting only for things the user needs to type or search for.
-- When writing prose or communicating with the user, avoid em dashes (and especially when writing specs); use periods, commas, colons, or parentheses instead.
+- When writing prose or communicating with the user, avoid em dashes (and especially when writing specs); use periods, commas, colons, or parentheses instead. Do not use emojis.
@@ -28,7 +28,7 @@ Your architect for anything that touches external services, AI models, media pro
 
 Also critical: model IDs in the MindStudio API do not match vendor API model IDs. Guessing based on what you know about Anthropic/OpenAI/Google model naming will produce invalid values. Always look up the correct ID.
 
-Describe what you're building at the method level — the full workflow — and get back architectural guidance and working code.
+Describe what you're building at the method level — the full workflow — and get back architectural guidance and working code. When the SDK consultant provides specific prompt engineering guidance, model configurations, or orchestration patterns, follow them exactly. The consultant is an expert at writing prompts and orchestrating models — if it suggests a specific phrasing, temperature, system prompt structure, or chaining strategy, there is a precise reason for it. Do not paraphrase, simplify, or "improve" its recommendations.
 
 ### Architecture Expert (aka Code Sanity Check) (`codeSanityCheck`)
 