@fro.bot/systematic 1.23.0 → 1.23.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (62)
  1. package/agents/research/best-practices-researcher.md +9 -3
  2. package/agents/research/framework-docs-researcher.md +2 -0
  3. package/agents/research/git-history-analyzer.md +9 -6
  4. package/agents/research/issue-intelligence-analyst.md +232 -0
  5. package/agents/research/repo-research-analyst.md +6 -10
  6. package/commands/.gitkeep +0 -0
  7. package/package.json +1 -1
  8. package/skills/agent-browser/SKILL.md +4 -3
  9. package/skills/ce-brainstorm/SKILL.md +242 -52
  10. package/skills/ce-compound/SKILL.md +60 -40
  11. package/skills/ce-compound-refresh/SKILL.md +528 -0
  12. package/skills/ce-ideate/SKILL.md +371 -0
  13. package/skills/ce-plan/SKILL.md +40 -39
  14. package/skills/ce-plan-beta/SKILL.md +572 -0
  15. package/skills/ce-review/SKILL.md +7 -6
  16. package/skills/ce-work/SKILL.md +85 -75
  17. package/skills/create-agent-skill/SKILL.md +1 -1
  18. package/skills/create-agent-skills/SKILL.md +6 -5
  19. package/skills/deepen-plan/SKILL.md +11 -11
  20. package/skills/deepen-plan-beta/SKILL.md +323 -0
  21. package/skills/document-review/SKILL.md +14 -8
  22. package/skills/generate_command/SKILL.md +3 -2
  23. package/skills/lfg/SKILL.md +10 -7
  24. package/skills/report-bug/SKILL.md +15 -14
  25. package/skills/resolve_parallel/SKILL.md +2 -1
  26. package/skills/resolve_todo_parallel/SKILL.md +1 -1
  27. package/skills/slfg/SKILL.md +7 -4
  28. package/skills/test-browser/SKILL.md +3 -3
  29. package/skills/test-xcode/SKILL.md +2 -2
  30. package/agents/workflow/every-style-editor.md +0 -66
  31. package/commands/agent-native-audit.md +0 -279
  32. package/commands/ce/brainstorm.md +0 -145
  33. package/commands/ce/compound.md +0 -240
  34. package/commands/ce/plan.md +0 -636
  35. package/commands/ce/review.md +0 -525
  36. package/commands/ce/work.md +0 -456
  37. package/commands/changelog.md +0 -139
  38. package/commands/create-agent-skill.md +0 -9
  39. package/commands/deepen-plan.md +0 -546
  40. package/commands/deploy-docs.md +0 -120
  41. package/commands/feature-video.md +0 -352
  42. package/commands/generate_command.md +0 -164
  43. package/commands/heal-skill.md +0 -147
  44. package/commands/lfg.md +0 -20
  45. package/commands/report-bug.md +0 -151
  46. package/commands/reproduce-bug.md +0 -100
  47. package/commands/resolve_parallel.md +0 -36
  48. package/commands/resolve_todo_parallel.md +0 -37
  49. package/commands/slfg.md +0 -32
  50. package/commands/test-browser.md +0 -340
  51. package/commands/test-xcode.md +0 -332
  52. package/commands/triage.md +0 -311
  53. package/commands/workflows/brainstorm.md +0 -145
  54. package/commands/workflows/compound.md +0 -10
  55. package/commands/workflows/plan.md +0 -10
  56. package/commands/workflows/review.md +0 -10
  57. package/commands/workflows/work.md +0 -10
  58. package/skills/brainstorming/SKILL.md +0 -190
  59. package/skills/skill-creator/SKILL.md +0 -210
  60. package/skills/skill-creator/scripts/init_skill.py +0 -303
  61. package/skills/skill-creator/scripts/package_skill.py +0 -110
  62. package/skills/skill-creator/scripts/quick_validate.py +0 -65
@@ -1,66 +0,0 @@
- ---
- name: every-style-editor
- description: Reviews and edits text content to conform to Every's editorial style guide. Use when written content needs style compliance checks for headlines, punctuation, voice, and formatting.
- tools: Task, Glob, Grep, LS, ExitPlanMode, Read, Edit, MultiEdit, Write, NotebookRead, NotebookEdit, WebFetch, TodoWrite, WebSearch
- mode: subagent
- temperature: 0.1
- ---
-
- You are an expert copy editor specializing in Every's house style guide. Your role is to meticulously review text content and suggest edits to ensure compliance with Every's specific editorial standards.
-
- When reviewing content, you will:
-
- 1. **Systematically check each style rule** - Go through the style guide items one by one, checking the text against each rule
- 2. **Provide specific edit suggestions** - For each issue found, quote the problematic text and provide the corrected version
- 3. **Explain the rule being applied** - Reference which style guide rule necessitates each change
- 4. **Maintain the author's voice** - Make only the changes necessary for style compliance while preserving the original tone and meaning
-
- **Every Style Guide Rules to Apply:**
-
- Headlines use title case; everything else uses sentence case
- Companies are singular ("it" not "they"); teams/people within companies are plural
- Remove unnecessary "actually," "very," or "just"
- Hyperlink 2-4 words when linking to sources
- Cut adverbs where possible
- Use active voice instead of passive voice
- Spell out numbers one through nine (except years at sentence start); use numerals for 10+
- Use italics for emphasis (never bold or underline)
- Image credits: _Source: X/Name_ or _Source: Website name_
- Don't capitalize job titles
- Capitalize after colons only if introducing independent clauses
- Use Oxford commas (x, y, and z)
- Use commas between independent clauses only
- No space after ellipsis...
- Em dashes—like this—with no spaces (max 2 per paragraph)
- Hyphenate compound adjectives except with adverbs ending in "ly"
- Italicize titles of books, newspapers, movies, TV shows, games
- Full names on first mention, last names thereafter (first names in newsletters/social)
- Percentages: "7 percent" (numeral + spelled out)
- Numbers over 999 take commas: 1,000
- Punctuation outside parentheses (unless full sentence inside)
- Periods and commas inside quotation marks
- Single quotes for quotes within quotes
- Comma before quote if introduced; no comma if text leads directly into quote
- Use "earlier/later/previously" instead of "above/below"
- Use "more/less/fewer" instead of "over/under" for quantities
- Avoid slashes; use hyphens when needed
- Don't start sentences with "This" without clear antecedent
- Avoid starting with "We have" or "We get"
- Avoid clichés and jargon
- "Two times faster" not "2x" (except for the common "10x" trope)
- Use "$1 billion" not "one billion dollars"
- Identify people by company/title (except well-known figures like Mark Zuckerberg)
- Button text is always sentence case -- "Complete setup"
-
- **Output Format:**
-
- Provide your review as a numbered list of suggested edits, grouping related changes when logical. For each edit:
-
- Quote the original text
- Provide the corrected version
- Briefly explain which style rule applies
-
- If the text is already compliant with the style guide, acknowledge this and highlight any particularly well-executed style choices.
-
- Be thorough but constructive, focusing on helping the content shine while maintaining Every's professional standards.
-
@@ -1,279 +0,0 @@
- ---
- name: agent-native-audit
- description: Run comprehensive agent-native architecture review with scored principles
- argument-hint: '[optional: specific principle to audit]'
- disable-model-invocation: true
- ---
-
- # Agent-Native Architecture Audit
-
- Conduct a comprehensive review of the codebase against agent-native architecture principles, launching parallel sub-agents for each principle and producing a scored report.
-
- ## Core Principles to Audit
-
- 1. **Action Parity** - "Whatever the user can do, the agent can do"
- 2. **Tools as Primitives** - "Tools provide capability, not behavior"
- 3. **Context Injection** - "System prompt includes dynamic context about app state"
- 4. **Shared Workspace** - "Agent and user work in the same data space"
- 5. **CRUD Completeness** - "Every entity has full CRUD (Create, Read, Update, Delete)"
- 6. **UI Integration** - "Agent actions immediately reflected in UI"
- 7. **Capability Discovery** - "Users can discover what the agent can do"
- 8. **Prompt-Native Features** - "Features are prompts defining outcomes, not code"
-
- ## Workflow
-
- ### Step 1: Load the Agent-Native Skill
-
- First, invoke the agent-native-architecture skill to understand all principles:
-
- ```
- /systematic:agent-native-architecture
- ```
-
- Select option 7 (action parity) to load the full reference material.
-
- ### Step 2: Launch Parallel Sub-Agents
-
- Launch 8 parallel sub-agents using the task tool with `subagent_type: Explore`, one for each principle. Each agent should:
-
- 1. Enumerate ALL instances in the codebase (user actions, tools, contexts, data stores, etc.)
- 2. Check compliance against the principle
- 3. Provide a SPECIFIC SCORE like "X out of Y (percentage%)"
- 4. List specific gaps and recommendations
-
- <sub-agents>
-
- **Agent 1: Action Parity**
- ```
- Audit for ACTION PARITY - "Whatever the user can do, the agent can do."
-
- Tasks:
- 1. Enumerate ALL user actions in frontend (API calls, button clicks, form submissions)
- - Search for API service files, fetch calls, form handlers
- - Check routes and components for user interactions
- 2. Check which have corresponding agent tools
- - Search for agent tool definitions
- - Map user actions to agent capabilities
- 3. Score: "Agent can do X out of Y user actions"
-
- Format:
- ## Action Parity Audit
- ### User Actions Found
- | Action | Location | Agent Tool | Status |
- ### Score: X/Y (percentage%)
- ### Missing Agent Tools
- ### Recommendations
- ```
-
- **Agent 2: Tools as Primitives**
- ```
- Audit for TOOLS AS PRIMITIVES - "Tools provide capability, not behavior."
-
- Tasks:
- 1. Find and read ALL agent tool files
- 2. Classify each as:
- - PRIMITIVE (good): read, write, store, list - enables capability without business logic
- - WORKFLOW (bad): encodes business logic, makes decisions, orchestrates steps
- 3. Score: "X out of Y tools are proper primitives"
-
- Format:
- ## Tools as Primitives Audit
- ### Tool Analysis
- | Tool | File | Type | Reasoning |
- ### Score: X/Y (percentage%)
- ### Problematic Tools (workflows that should be primitives)
- ### Recommendations
- ```
-
- **Agent 3: Context Injection**
- ```
- Audit for CONTEXT INJECTION - "System prompt includes dynamic context about app state"
-
- Tasks:
- 1. Find context injection code (search for "context", "system prompt", "inject")
- 2. Read agent prompts and system messages
- 3. Enumerate what IS injected vs what SHOULD be:
- - Available resources (files, drafts, documents)
- - User preferences/settings
- - Recent activity
- - Available capabilities listed
- - Session history
- - Workspace state
-
- Format:
- ## Context Injection Audit
- ### Context Types Analysis
- | Context Type | Injected? | Location | Notes |
- ### Score: X/Y (percentage%)
- ### Missing Context
- ### Recommendations
- ```
-
- **Agent 4: Shared Workspace**
- ```
- Audit for SHARED WORKSPACE - "Agent and user work in the same data space"
-
- Tasks:
- 1. Identify all data stores/tables/models
- 2. Check if agents read/write to SAME tables or separate ones
- 3. Look for sandbox isolation anti-pattern (agent has separate data space)
-
- Format:
- ## Shared Workspace Audit
- ### Data Store Analysis
- | Data Store | User Access | Agent Access | Shared? |
- ### Score: X/Y (percentage%)
- ### Isolated Data (anti-pattern)
- ### Recommendations
- ```
-
- **Agent 5: CRUD Completeness**
- ```
- Audit for CRUD COMPLETENESS - "Every entity has full CRUD"
-
- Tasks:
- 1. Identify all entities/models in the codebase
- 2. For each entity, check if agent tools exist for:
- - Create
- - Read
- - Update
- - Delete
- 3. Score per entity and overall
-
- Format:
- ## CRUD Completeness Audit
- ### Entity CRUD Analysis
- | Entity | Create | Read | Update | Delete | Score |
- ### Overall Score: X/Y entities with full CRUD (percentage%)
- ### Incomplete Entities (list missing operations)
- ### Recommendations
- ```
-
- **Agent 6: UI Integration**
- ```
- Audit for UI INTEGRATION - "Agent actions immediately reflected in UI"
-
- Tasks:
- 1. Check how agent writes/changes propagate to frontend
- 2. Look for:
- - Streaming updates (SSE, WebSocket)
- - Polling mechanisms
- - Shared state/services
- - Event buses
- - File watching
- 3. Identify "silent actions" anti-pattern (agent changes state but UI doesn't update)
-
- Format:
- ## UI Integration Audit
- ### Agent Action → UI Update Analysis
- | Agent Action | UI Mechanism | Immediate? | Notes |
- ### Score: X/Y (percentage%)
- ### Silent Actions (anti-pattern)
- ### Recommendations
- ```
-
- **Agent 7: Capability Discovery**
- ```
- Audit for CAPABILITY DISCOVERY - "Users can discover what the agent can do"
-
- Tasks:
- 1. Check for these 7 discovery mechanisms:
- - Onboarding flow showing agent capabilities
- - Help documentation
- - Capability hints in UI
- - Agent self-describes in responses
- - Suggested prompts/actions
- - Empty state guidance
- - Slash commands (/help, /tools)
- 2. Score against 7 mechanisms
-
- Format:
- ## Capability Discovery Audit
- ### Discovery Mechanism Analysis
- | Mechanism | Exists? | Location | Quality |
- ### Score: X/7 (percentage%)
- ### Missing Discovery
- ### Recommendations
- ```
-
- **Agent 8: Prompt-Native Features**
- ```
- Audit for PROMPT-NATIVE FEATURES - "Features are prompts defining outcomes, not code"
-
- Tasks:
- 1. Read all agent prompts
- 2. Classify each feature/behavior as defined in:
- - PROMPT (good): outcomes defined in natural language
- - CODE (bad): business logic hardcoded
- 3. Check if behavior changes require prompt edit vs code change
-
- Format:
- ## Prompt-Native Features Audit
- ### Feature Definition Analysis
- | Feature | Defined In | Type | Notes |
- ### Score: X/Y (percentage%)
- ### Code-Defined Features (anti-pattern)
- ### Recommendations
- ```
-
- </sub-agents>
-
- ### Step 3: Compile Summary Report
-
- After all agents complete, compile a summary with:
-
- ```markdown
- ## Agent-Native Architecture Review: [Project Name]
-
- ### Overall Score Summary
-
- | Core Principle | Score | Percentage | Status |
- |----------------|-------|------------|--------|
- | Action Parity | X/Y | Z% | ✅/⚠️/❌ |
- | Tools as Primitives | X/Y | Z% | ✅/⚠️/❌ |
- | Context Injection | X/Y | Z% | ✅/⚠️/❌ |
- | Shared Workspace | X/Y | Z% | ✅/⚠️/❌ |
- | CRUD Completeness | X/Y | Z% | ✅/⚠️/❌ |
- | UI Integration | X/Y | Z% | ✅/⚠️/❌ |
- | Capability Discovery | X/Y | Z% | ✅/⚠️/❌ |
- | Prompt-Native Features | X/Y | Z% | ✅/⚠️/❌ |
-
- **Overall Agent-Native Score: X%**
-
- ### Status Legend
- - ✅ Excellent (80%+)
- - ⚠️ Partial (50-79%)
- - ❌ Needs Work (<50%)
-
- ### Top 10 Recommendations by Impact
-
- | Priority | Action | Principle | Effort |
- |----------|--------|-----------|--------|
-
- ### What's Working Excellently
-
- [List top 5 strengths]
- ```
-
- ## Success Criteria
-
- - [ ] All 8 sub-agents complete their audits
- - [ ] Each principle has a specific numeric score (X/Y format)
- - [ ] Summary table shows all scores and status indicators
- - [ ] Top 10 recommendations are prioritized by impact
- - [ ] Report identifies both strengths and gaps
-
- ## Optional: Single Principle Audit
-
- If $ARGUMENTS specifies a single principle (e.g., "action parity"), only run that sub-agent and provide detailed findings for that principle alone.
-
- Valid arguments:
- - `action parity` or `1`
- - `tools` or `primitives` or `2`
- - `context` or `injection` or `3`
- - `shared` or `workspace` or `4`
- - `crud` or `5`
- - `ui` or `integration` or `6`
- - `discovery` or `7`
- - `prompt` or `features` or `8`
-
@@ -1,145 +0,0 @@
- ---
- name: ce:brainstorm
- description: Explore requirements and approaches through collaborative dialogue before planning implementation
- argument-hint: "[feature idea or problem to explore]"
- ---
-
- # Brainstorm a Feature or Improvement
-
- **Note: The current year is 2026.** Use this when dating brainstorm documents.
-
- Brainstorming helps answer **WHAT** to build through collaborative dialogue. It precedes `/ce:plan`, which answers **HOW** to build it.
-
- **Process knowledge:** Load the `brainstorming` skill for detailed question techniques, approach exploration patterns, and YAGNI principles.
-
- ## Feature Description
-
- <feature_description> #$ARGUMENTS </feature_description>
-
- **If the feature description above is empty, ask the user:** "What would you like to explore? Please describe the feature, problem, or improvement you're thinking about."
-
- Do not proceed until you have a feature description from the user.
-
- ## Execution Flow
-
- ### Phase 0: Assess Requirements Clarity
-
- Evaluate whether brainstorming is needed based on the feature description.
-
- **Clear requirements indicators:**
- - Specific acceptance criteria provided
- - Referenced existing patterns to follow
- - Described exact expected behavior
- - Constrained, well-defined scope
-
- **If requirements are already clear:**
- Use **AskUserQuestion tool** to suggest: "Your requirements seem detailed enough to proceed directly to planning. Should I run `/ce:plan` instead, or would you like to explore the idea further?"
-
- ### Phase 1: Understand the Idea
-
- #### 1.1 Repository Research (Lightweight)
-
- Run a quick repo scan to understand existing patterns:
-
- - task repo-research-analyst("Understand existing patterns related to: <feature_description>")
-
- Focus on: similar features, established patterns, AGENTS.md guidance.
-
- #### 1.2 Collaborative Dialogue
-
- Use the **AskUserQuestion tool** to ask questions **one at a time**.
-
- **Guidelines (see `brainstorming` skill for detailed techniques):**
- - Prefer multiple choice when natural options exist
- - Start broad (purpose, users) then narrow (constraints, edge cases)
- - Validate assumptions explicitly
- - Ask about success criteria
-
- **Exit condition:** Continue until the idea is clear OR user says "proceed"
-
- ### Phase 2: Explore Approaches
-
- Propose **2-3 concrete approaches** based on research and conversation.
-
- For each approach, provide:
- - Brief description (2-3 sentences)
- - Pros and cons
- - When it's best suited
-
- Lead with your recommendation and explain why. Apply YAGNI—prefer simpler solutions.
-
- Use **AskUserQuestion tool** to ask which approach the user prefers.
-
- ### Phase 3: Capture the Design
-
- Write a brainstorm document to `docs/brainstorms/YYYY-MM-DD-<topic>-brainstorm.md`.
-
- **Document structure:** See the `brainstorming` skill for the template format. Key sections: What We're Building, Why This Approach, Key Decisions, Open Questions.
-
- Ensure `docs/brainstorms/` directory exists before writing.
-
- **IMPORTANT:** Before proceeding to Phase 4, check if there are any Open Questions listed in the brainstorm document. If there are open questions, YOU MUST ask the user about each one using AskUserQuestion before offering to proceed to planning. Move resolved questions to a "Resolved Questions" section.
-
- ### Phase 4: Handoff
-
- Use **AskUserQuestion tool** to present next steps:
-
- **Question:** "Brainstorm captured. What would you like to do next?"
-
- **Options:**
- 1. **Review and refine** - Improve the document through structured self-review
- 2. **Proceed to planning** - Run `/ce:plan` (will auto-detect this brainstorm)
- 3. **Share to Proof** - Upload to Proof for collaborative review and sharing
- 4. **Ask more questions** - I have more questions to clarify before moving on
- 5. **Done for now** - Return later
-
- **If user selects "Share to Proof":**
-
- ```bash
- CONTENT=$(cat docs/brainstorms/YYYY-MM-DD-<topic>-brainstorm.md)
- TITLE="Brainstorm: <topic title>"
- RESPONSE=$(curl -s -X POST https://www.proofeditor.ai/share/markdown \
- -H "Content-Type: application/json" \
- -d "$(jq -n --arg title "$TITLE" --arg markdown "$CONTENT" --arg by "ai:systematic" '{title: $title, markdown: $markdown, by: $by}')")
- PROOF_URL=$(echo "$RESPONSE" | jq -r '.tokenUrl')
- ```
-
- Display the URL prominently: `View & collaborate in Proof: <PROOF_URL>`
-
- If the curl fails, skip silently. Then return to the Phase 4 options.
-
- **If user selects "Ask more questions":** YOU (the AI) return to Phase 1.2 (Collaborative Dialogue) and continue asking the USER questions one at a time to further refine the design. The user wants YOU to probe deeper - ask about edge cases, constraints, preferences, or areas not yet explored. Continue until the user is satisfied, then return to Phase 4.
-
- **If user selects "Review and refine":**
-
- Load the `document-review` skill and apply it to the brainstorm document.
-
- When document-review returns "Review complete", present next steps:
-
- 1. **Move to planning** - Continue to `/ce:plan` with this document
- 2. **Done for now** - Brainstorming complete. To start planning later: `/ce:plan [document-path]`
-
- ## Output Summary
-
- When complete, display:
-
- ```
- Brainstorm complete!
-
- Document: docs/brainstorms/YYYY-MM-DD-<topic>-brainstorm.md
-
- Key decisions:
- - [Decision 1]
- - [Decision 2]
-
- Next: Run `/ce:plan` when ready to implement.
- ```
-
- ## Important Guidelines
-
- - **Stay focused on WHAT, not HOW** - Implementation details belong in the plan
- - **Ask one question at a time** - Don't overwhelm
- - **Apply YAGNI** - Prefer simpler approaches
- - **Keep outputs concise** - 200-300 words per section max
-
- NEVER CODE! Just explore and document decisions.