@mindstudio-ai/remy 0.1.19 → 0.1.21

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (32)
  1. package/dist/actions/buildFromInitialSpec.md +11 -3
  2. package/dist/compiled/design.md +2 -1
  3. package/dist/compiled/msfm.md +1 -0
  4. package/dist/compiled/sdk-actions.md +1 -3
  5. package/dist/headless.js +838 -306
  6. package/dist/index.js +952 -358
  7. package/dist/prompt/.notes.md +54 -0
  8. package/dist/prompt/actions/buildFromInitialSpec.md +11 -3
  9. package/dist/prompt/compiled/design.md +2 -1
  10. package/dist/prompt/compiled/msfm.md +1 -0
  11. package/dist/prompt/compiled/sdk-actions.md +1 -3
  12. package/dist/prompt/sources/frontend-design-notes.md +1 -0
  13. package/dist/prompt/static/authoring.md +4 -4
  14. package/dist/prompt/static/coding.md +5 -5
  15. package/dist/prompt/static/team.md +39 -0
  16. package/dist/static/authoring.md +4 -4
  17. package/dist/static/coding.md +5 -5
  18. package/dist/static/team.md +39 -0
  19. package/dist/subagents/browserAutomation/prompt.md +2 -0
  20. package/dist/subagents/codeSanityCheck/.notes.md +44 -0
  21. package/dist/subagents/codeSanityCheck/prompt.md +43 -0
  22. package/dist/subagents/designExpert/.notes.md +16 -4
  23. package/dist/subagents/designExpert/data/compile-inspiration.sh +2 -2
  24. package/dist/subagents/designExpert/prompts/frontend-design-notes.md +1 -0
  25. package/dist/subagents/designExpert/prompts/icons.md +18 -7
  26. package/dist/subagents/designExpert/prompts/identity.md +4 -4
  27. package/dist/subagents/designExpert/prompts/images.md +3 -2
  28. package/dist/subagents/designExpert/prompts/instructions.md +2 -2
  29. package/dist/subagents/designExpert/prompts/layout.md +4 -2
  30. package/dist/subagents/productVision/.notes.md +79 -0
  31. package/dist/subagents/productVision/prompt.md +29 -22
  32. package/package.json +1 -1
@@ -1,7 +1,15 @@
  This is an automated action triggered by the user pressing "Build" in the editor after reviewing the spec.
 
- The user has reviewed the spec and is ready to build. Build everything in one turn: methods, tables, interfaces, manifest updates, and scenarios, using the spec as the master plan.
+ The user has reviewed the spec and is ready to build.
 
- When code generation is complete, verify your work: use `runScenario` to seed test data, then use `runMethod` to confirm a method works, then use `runAutomatedBrowserTest` to smoke-test the main UI flow. The dev database is a disposable snapshot, so don't worry about being destructive. Fix any errors before finishing.
+ Think about your approach and then get a quick sanity check from `codeSanityCheck` to make sure you aren't missing anything.
 
- When everything is working, update `src/roadmap/mvp.md` to `status: done`, then call `setProjectOnboardingState({ state: "onboardingFinished" })`.
+ Then, build everything in one turn: methods, tables, interfaces, manifest updates, and scenarios, using the spec as the master plan.
+
+ When code generation is complete, verify your work:
+ - First, use `runScenario` to seed test data, then use `runMethod` to confirm a method works.
+ - If the app has a web frontend, check the browser logs to make sure there are no errors rendering it.
+ - Ask the `visualDesignExpert` to take a screenshot and verify that the visual design looks correct. Fix any issues it flags - we want the user's first time seeing the finished product to truly wow them.
+ - Finally, use `runAutomatedBrowserTest` to smoke-test the main UI flow. The dev database is a disposable snapshot, so don't worry about being destructive. Fix any errors before finishing.
+
+ When everything is working, use `productVision` to mark the MVP roadmap item as done, then call `setProjectOnboardingState({ state: "onboardingFinished" })`.
@@ -43,7 +43,7 @@ Derive additional implementation colors (borders, focus states, hover states, di
 
  ### Typography block format
 
- A `` ```typography `` fenced block in a `type: design/typography` spec file declares fonts (with source URLs) and one or two anchor styles (typically Display and Body). Derive additional styles (labels, buttons, captions, overlines) from these anchors:
+ A `` ```typography `` fenced block in a `type: design/typography` spec file declares fonts (with source URLs) and one or two anchor styles (typically Display and Body). Styles can include an optional `case` field (`uppercase`, `lowercase`, `capitalize`) for text-transform. Derive additional styles (labels, buttons, captions, overlines) from these anchors:
 
  ```typography
  fonts:
@@ -59,6 +59,7 @@ styles:
      weight: 600
      letterSpacing: -0.03em
      lineHeight: 1.1
+     case: uppercase
      description: Page titles and hero text
    Body:
      font: Satoshi
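The new optional `case` field maps a style onto CSS `text-transform`, per the description above. As a hedged sketch of how a consumer might apply it (the `TypographyStyle` type and `toCss` helper below are hypothetical illustrations, not part of the package's API):

```typescript
// Hypothetical sketch: converting a typography style (field names as in the
// ```typography block above) into inline CSS properties. The `case` field is
// optional, so text-transform is emitted only when it is present.
type TypographyStyle = {
  font: string;
  weight?: number;
  letterSpacing?: string;
  lineHeight?: number;
  case?: "uppercase" | "lowercase" | "capitalize";
};

function toCss(style: TypographyStyle): Record<string, string> {
  const css: Record<string, string> = { fontFamily: style.font };
  if (style.weight !== undefined) css.fontWeight = String(style.weight);
  if (style.letterSpacing !== undefined) css.letterSpacing = style.letterSpacing;
  if (style.lineHeight !== undefined) css.lineHeight = String(style.lineHeight);
  if (style.case !== undefined) css.textTransform = style.case;
  return css;
}
```

A Display style with `case: uppercase` would thus gain `textTransform: "uppercase"`, while a Body style without the field gets no text-transform at all.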
@@ -181,6 +181,7 @@ styles:
      weight: 600
      letterSpacing: -0.03em
      lineHeight: 1.1
+     case: uppercase
      description: Page titles and hero text
    Body:
      font: Satoshi
@@ -2,9 +2,7 @@
 
  `@mindstudio-ai/agent` provides access to 200+ AI models and 1,000+ actions through a single API key. No separate provider keys needed. MindStudio routes to the correct provider (OpenAI, Anthropic, Google, etc.) server-side.
 
- There is a huge amount of capability here: hundreds of text generation models (OpenAI, Anthropic, Google, Meta, Mistral, and more), dozens of image generation models (FLUX, DALL-E, Stable Diffusion, Ideogram, and more), video generation, text-to-speech, music generation, vision analysis, web scraping, 850+ OAuth connectors, and much more. The tables below are a summary.
-
- **Always use `askMindStudioSdk` before writing code that uses the SDK.** Treat it as an expert consultant, not a docs search. Describe what you're trying to build at the method level — the full workflow, not just "how do I call generateText." The assistant knows every action, model, connector, configuration option, and the user's configured OAuth connections. It can advise on AI orchestration patterns (structured output, chaining calls, batch processing), help you avoid common mistakes (like manually parsing JSON when the SDK has structured output options), and provide complete working code for your use case.
+ There is a huge amount of capability here: hundreds of text generation models (OpenAI, Anthropic, Google, Meta, Mistral, and more), dozens of image generation models (FLUX, DALL-E, Stable Diffusion, Ideogram, and more), video generation, text-to-speech, music generation, vision analysis, web scraping, 850+ OAuth connectors, and much more. The tables below are a summary. Always use `askMindStudioSdk` before writing SDK code.
 
  ## Usage in Methods