@fro.bot/systematic 1.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +158 -0
- package/agents/research/framework-docs-researcher.md +19 -0
- package/agents/review/architecture-strategist.md +23 -0
- package/agents/review/code-simplicity-reviewer.md +30 -0
- package/agents/review/pattern-recognition-specialist.md +24 -0
- package/agents/review/performance-oracle.md +25 -0
- package/agents/review/security-sentinel.md +25 -0
- package/commands/agent-native-audit.md +277 -0
- package/commands/create-agent-skill.md +8 -0
- package/commands/deepen-plan.md +546 -0
- package/commands/lfg.md +19 -0
- package/commands/workflows/brainstorm.md +115 -0
- package/commands/workflows/compound.md +202 -0
- package/commands/workflows/plan.md +551 -0
- package/commands/workflows/review.md +514 -0
- package/commands/workflows/work.md +363 -0
- package/dist/cli.js +360 -0
- package/dist/index-v8dhd5s2.js +194 -0
- package/dist/index.js +297 -0
- package/package.json +69 -0
- package/skills/agent-browser/SKILL.md +223 -0
- package/skills/agent-native-architecture/SKILL.md +435 -0
- package/skills/agent-native-architecture/references/action-parity-discipline.md +409 -0
- package/skills/agent-native-architecture/references/agent-execution-patterns.md +467 -0
- package/skills/agent-native-architecture/references/agent-native-testing.md +582 -0
- package/skills/agent-native-architecture/references/architecture-patterns.md +478 -0
- package/skills/agent-native-architecture/references/dynamic-context-injection.md +338 -0
- package/skills/agent-native-architecture/references/files-universal-interface.md +301 -0
- package/skills/agent-native-architecture/references/from-primitives-to-domain-tools.md +359 -0
- package/skills/agent-native-architecture/references/mcp-tool-design.md +506 -0
- package/skills/agent-native-architecture/references/mobile-patterns.md +871 -0
- package/skills/agent-native-architecture/references/product-implications.md +443 -0
- package/skills/agent-native-architecture/references/refactoring-to-prompt-native.md +317 -0
- package/skills/agent-native-architecture/references/self-modification.md +269 -0
- package/skills/agent-native-architecture/references/shared-workspace-architecture.md +680 -0
- package/skills/agent-native-architecture/references/system-prompt-design.md +250 -0
- package/skills/brainstorming/SKILL.md +190 -0
- package/skills/compound-docs/SKILL.md +510 -0
- package/skills/compound-docs/assets/critical-pattern-template.md +34 -0
- package/skills/compound-docs/assets/resolution-template.md +93 -0
- package/skills/compound-docs/references/yaml-schema.md +65 -0
- package/skills/compound-docs/schema.yaml +176 -0
- package/skills/create-agent-skills/SKILL.md +299 -0
- package/skills/create-agent-skills/references/api-security.md +226 -0
- package/skills/create-agent-skills/references/be-clear-and-direct.md +531 -0
- package/skills/create-agent-skills/references/best-practices.md +404 -0
- package/skills/create-agent-skills/references/common-patterns.md +595 -0
- package/skills/create-agent-skills/references/core-principles.md +437 -0
- package/skills/create-agent-skills/references/executable-code.md +175 -0
- package/skills/create-agent-skills/references/iteration-and-testing.md +474 -0
- package/skills/create-agent-skills/references/official-spec.md +185 -0
- package/skills/create-agent-skills/references/recommended-structure.md +168 -0
- package/skills/create-agent-skills/references/skill-structure.md +372 -0
- package/skills/create-agent-skills/references/using-scripts.md +113 -0
- package/skills/create-agent-skills/references/using-templates.md +112 -0
- package/skills/create-agent-skills/references/workflows-and-validation.md +510 -0
- package/skills/create-agent-skills/templates/router-skill.md +73 -0
- package/skills/create-agent-skills/templates/simple-skill.md +33 -0
- package/skills/create-agent-skills/workflows/add-reference.md +96 -0
- package/skills/create-agent-skills/workflows/add-script.md +93 -0
- package/skills/create-agent-skills/workflows/add-template.md +74 -0
- package/skills/create-agent-skills/workflows/add-workflow.md +120 -0
- package/skills/create-agent-skills/workflows/audit-skill.md +138 -0
- package/skills/create-agent-skills/workflows/create-domain-expertise-skill.md +605 -0
- package/skills/create-agent-skills/workflows/create-new-skill.md +191 -0
- package/skills/create-agent-skills/workflows/get-guidance.md +121 -0
- package/skills/create-agent-skills/workflows/upgrade-to-router.md +161 -0
- package/skills/create-agent-skills/workflows/verify-skill.md +204 -0
- package/skills/file-todos/SKILL.md +251 -0
- package/skills/file-todos/assets/todo-template.md +155 -0
- package/skills/git-worktree/SKILL.md +302 -0
- package/skills/git-worktree/scripts/worktree-manager.sh +345 -0
- package/skills/using-systematic/SKILL.md +94 -0
package/README.md
ADDED
@@ -0,0 +1,158 @@

# Systematic

An OpenCode plugin providing the systematic engineering workflows of the [Compound Engineering Plugin (CEP)](https://github.com/EveryInc/compound-engineering-plugin), a Claude Code plugin, adapted for OpenCode.

## Installation

```bash
npm install @fro.bot/systematic
```

Add to your OpenCode config (`~/.config/opencode/opencode.json`):

```json
{
  "plugin": ["@fro.bot/systematic"]
}
```

## Features

### Skills

Systematic includes battle-tested engineering workflows:

| Skill | Description |
|-------|-------------|
| `using-systematic` | Bootstrap skill for discovering and using other skills |
| `brainstorming` | Collaborative design workflow |
| `agent-browser` | Browser automation with Playwright |
| `agent-native-architecture` | Design systems for AI agents |
| `compound-docs` | Create and maintain compound documentation |
| `create-agent-skills` | Write new skills for AI agents |
| `file-todos` | Manage TODO items in files |
| `git-worktree` | Use git worktrees for isolated development |

### Commands

Quick shortcuts to invoke workflows:

**Workflows:**

- `/workflows:brainstorm` - Start collaborative brainstorming
- `/workflows:compound` - Build compound documentation
- `/workflows:plan` - Create implementation plans
- `/workflows:review` - Run code review with agents
- `/workflows:work` - Execute planned work

**Utilities:**

- `/agent-native-audit` - Audit code for agent-native patterns
- `/create-agent-skill` - Create a new skill
- `/deepen-plan` - Add detail to existing plans
- `/lfg` - Let's go - start working immediately

### Agents

Specialized review and research agents organized by category:

**Review:**

- `architecture-strategist` - Architectural review
- `security-sentinel` - Security review
- `code-simplicity-reviewer` - Complexity review
- `pattern-recognition-specialist` - Pattern analysis
- `performance-oracle` - Performance review

**Research:**

- `framework-docs-researcher` - Documentation research

## Config Hook

Systematic uses OpenCode's `config` hook to automatically register bundled agents, commands, and skills directly into OpenCode's configuration. This means:

- **Zero configuration required** - All bundled content is available immediately after installing the plugin
- **No file copying** - Skills, agents, and commands ship with the npm package
- **Existing config preserved** - Your OpenCode configuration settings take precedence over bundled content

## Tools

The plugin provides these tools to OpenCode:

| Tool | Description |
|------|-------------|
| `systematic_find_skills` | List available skills |
| `systematic_find_agents` | List available agents |
| `systematic_find_commands` | List available commands |
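
For example, an agent can call `systematic_find_skills` to see which skills are bundled, then load one (such as `using-systematic`, the bootstrap skill) before starting a workflow; `systematic_find_agents` and `systematic_find_commands` work the same way for agents and commands.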

## Configuration

Create `~/.config/opencode/systematic.json` or `.opencode/systematic.json` to disable specific bundled content:

```json
{
  "disabled_skills": [],
  "disabled_agents": [],
  "disabled_commands": []
}
```
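
For example, to keep everything except the bundled `git-worktree` skill and the `performance-oracle` review agent (entries here are illustrative; use the skill and agent names from the tables above):

```json
{
  "disabled_skills": ["git-worktree"],
  "disabled_agents": ["performance-oracle"],
  "disabled_commands": []
}
```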

## Converting CEP Content

The CLI includes a converter for adapting Claude Code agents, skills, and commands from the Compound Engineering Plugin (CEP) to OpenCode.

### Convert a Skill

Skills are directories containing `SKILL.md` and supporting files:

```bash
npx @fro.bot/systematic convert skill /path/to/cep/skills/my-skill -o ./skills/my-skill
```

### Convert an Agent

Agents are markdown files that get OpenCode-compatible YAML frontmatter:

```bash
npx @fro.bot/systematic convert agent /path/to/cep/agents/review/my-agent.md -o ./agents/review/my-agent.md
```

### Convert a Command

Commands are markdown templates:

```bash
npx @fro.bot/systematic convert command /path/to/cep/commands/my-command.md -o ./commands/my-command.md
```

### Dry Run

Preview conversion without writing files:

```bash
npx @fro.bot/systematic convert skill /path/to/skill --dry-run
```

## Development

```bash
# Install dependencies
bun install

# Build
bun run build

# Typecheck
bun run typecheck

# Lint
bun run lint

# Run tests
bun test
```

## License

MIT

package/agents/research/framework-docs-researcher.md
ADDED
@@ -0,0 +1,19 @@

---
name: framework-docs-researcher
description: Research framework documentation and best practices
---

You are a Framework Documentation Researcher.

**Purpose:**
Find and synthesize official documentation, best practices, and examples for the frameworks and libraries being used.

**Approach:**
1. Identify the frameworks/libraries in question
2. Search official documentation
3. Find recommended patterns
4. Look for gotchas and common mistakes
5. Find real-world examples

**Output:**
Provide specific documentation references. Include code examples from official sources. Note version-specific information.

package/agents/review/architecture-strategist.md
ADDED
@@ -0,0 +1,23 @@

---
name: architecture-strategist
description: Review architectural decisions and system design
---

You are an Architecture Strategist reviewing code for architectural soundness.

**Focus Areas:**
- System boundaries and interfaces
- Dependency direction and coupling
- Scalability and extensibility
- SOLID principles adherence
- Domain boundaries (if DDD)

**Review Approach:**
1. Identify architectural patterns in use
2. Evaluate boundary decisions
3. Check for inappropriate coupling
4. Assess scalability implications
5. Recommend improvements

**Output:**
Provide specific, actionable feedback on architectural concerns. Reference specific files and patterns.

package/agents/review/code-simplicity-reviewer.md
ADDED
@@ -0,0 +1,30 @@

---
name: code-simplicity-reviewer
description: Review code for unnecessary complexity and simplification opportunities
---

You are a Code Simplicity Reviewer channeling Casey Muratori and Jonathan Blow.

**Core Philosophy:**
- Every abstraction must pay rent
- Complexity is the enemy
- The best code is no code
- Indirection has costs

**Focus Areas:**
- Unnecessary abstractions
- Over-engineering
- Premature optimization
- Dead code
- Redundant patterns
- "Clever" code that obscures intent

**Review Approach:**
1. Question every abstraction layer
2. Look for simpler alternatives
3. Identify code that could be deleted
4. Check for YAGNI violations
5. Evaluate cognitive load

**Output:**
Provide specific simplification recommendations. Show before/after when helpful. Be direct about what should be removed.

package/agents/review/pattern-recognition-specialist.md
ADDED
@@ -0,0 +1,24 @@

---
name: pattern-recognition-specialist
description: Identify patterns, anti-patterns, and consistency issues in code
---

You are a Pattern Recognition Specialist.

**Focus Areas:**
- Design pattern usage (appropriate and inappropriate)
- Anti-pattern detection
- Consistency across codebase
- Naming conventions
- Code organization patterns
- Error handling patterns

**Review Approach:**
1. Identify patterns in use
2. Evaluate pattern appropriateness
3. Check for anti-patterns
4. Assess consistency
5. Compare with established codebase conventions

**Output:**
Provide specific pattern findings with examples. Reference both local conventions and industry standards. Suggest improvements.

package/agents/review/performance-oracle.md
ADDED
@@ -0,0 +1,25 @@

---
name: performance-oracle
description: Review code for performance issues and optimization opportunities
---

You are a Performance Oracle.

**Focus Areas:**
- Algorithm complexity (time and space)
- Memory allocations and leaks
- Database query efficiency
- Network request optimization
- Caching opportunities
- Bundle size (frontend)
- Hot path optimization

**Review Approach:**
1. Identify performance-critical paths
2. Analyze algorithmic complexity
3. Look for unnecessary allocations
4. Check for N+1 queries
5. Evaluate caching strategy

**Output:**
Provide specific performance findings with impact estimates. Include before/after complexity analysis. Prioritize by impact.

package/agents/review/security-sentinel.md
ADDED
@@ -0,0 +1,25 @@

---
name: security-sentinel
description: Review code for security vulnerabilities and best practices
---

You are a Security Sentinel reviewing code for security issues.

**Focus Areas:**
- Input validation and sanitization
- Authentication and authorization
- Secrets and credential handling
- Injection vulnerabilities (SQL, XSS, command)
- Cryptographic usage
- Security headers and CORS
- Dependency vulnerabilities

**Review Approach:**
1. Identify attack surfaces
2. Check for common vulnerability patterns
3. Verify secure defaults
4. Assess trust boundaries
5. Review error handling (no information leakage)

**Output:**
Provide specific, actionable security findings. Rate severity (Critical, High, Medium, Low). Include remediation guidance.

package/commands/agent-native-audit.md
ADDED
@@ -0,0 +1,277 @@

---
name: agent-native-audit
description: Run comprehensive agent-native architecture review with scored principles
argument-hint: "[optional: specific principle to audit]"
---

# Agent-Native Architecture Audit

Conduct a comprehensive review of the codebase against agent-native architecture principles, launching parallel sub-agents for each principle and producing a scored report.

## Core Principles to Audit

1. **Action Parity** - "Whatever the user can do, the agent can do"
2. **Tools as Primitives** - "Tools provide capability, not behavior"
3. **Context Injection** - "System prompt includes dynamic context about app state"
4. **Shared Workspace** - "Agent and user work in the same data space"
5. **CRUD Completeness** - "Every entity has full CRUD (Create, Read, Update, Delete)"
6. **UI Integration** - "Agent actions immediately reflected in UI"
7. **Capability Discovery** - "Users can discover what the agent can do"
8. **Prompt-Native Features** - "Features are prompts defining outcomes, not code"

## Workflow

### Step 1: Load the Agent-Native Skill

First, invoke the agent-native-architecture skill to understand all principles:

```
/compound-engineering:agent-native-architecture
```

Select option 7 (action parity) to load the full reference material.

### Step 2: Launch Parallel Sub-Agents

Launch 8 parallel sub-agents using the Task tool with `subagent_type: Explore`, one for each principle. Each agent should:

1. Enumerate ALL instances in the codebase (user actions, tools, contexts, data stores, etc.)
2. Check compliance against the principle
3. Provide a SPECIFIC SCORE like "X out of Y (percentage%)"
4. List specific gaps and recommendations

<sub-agents>

**Agent 1: Action Parity**
```
Audit for ACTION PARITY - "Whatever the user can do, the agent can do."

Tasks:
1. Enumerate ALL user actions in frontend (API calls, button clicks, form submissions)
   - Search for API service files, fetch calls, form handlers
   - Check routes and components for user interactions
2. Check which have corresponding agent tools
   - Search for agent tool definitions
   - Map user actions to agent capabilities
3. Score: "Agent can do X out of Y user actions"

Format:
## Action Parity Audit
### User Actions Found
| Action | Location | Agent Tool | Status |
### Score: X/Y (percentage%)
### Missing Agent Tools
### Recommendations
```

**Agent 2: Tools as Primitives**
```
Audit for TOOLS AS PRIMITIVES - "Tools provide capability, not behavior."

Tasks:
1. Find and read ALL agent tool files
2. Classify each as:
   - PRIMITIVE (good): read, write, store, list - enables capability without business logic
   - WORKFLOW (bad): encodes business logic, makes decisions, orchestrates steps
3. Score: "X out of Y tools are proper primitives"

Format:
## Tools as Primitives Audit
### Tool Analysis
| Tool | File | Type | Reasoning |
### Score: X/Y (percentage%)
### Problematic Tools (workflows that should be primitives)
### Recommendations
```

**Agent 3: Context Injection**
```
Audit for CONTEXT INJECTION - "System prompt includes dynamic context about app state"

Tasks:
1. Find context injection code (search for "context", "system prompt", "inject")
2. Read agent prompts and system messages
3. Enumerate what IS injected vs what SHOULD be:
   - Available resources (files, drafts, documents)
   - User preferences/settings
   - Recent activity
   - Available capabilities listed
   - Session history
   - Workspace state

Format:
## Context Injection Audit
### Context Types Analysis
| Context Type | Injected? | Location | Notes |
### Score: X/Y (percentage%)
### Missing Context
### Recommendations
```

**Agent 4: Shared Workspace**
```
Audit for SHARED WORKSPACE - "Agent and user work in the same data space"

Tasks:
1. Identify all data stores/tables/models
2. Check if agents read/write to SAME tables or separate ones
3. Look for sandbox isolation anti-pattern (agent has separate data space)

Format:
## Shared Workspace Audit
### Data Store Analysis
| Data Store | User Access | Agent Access | Shared? |
### Score: X/Y (percentage%)
### Isolated Data (anti-pattern)
### Recommendations
```

**Agent 5: CRUD Completeness**
```
Audit for CRUD COMPLETENESS - "Every entity has full CRUD"

Tasks:
1. Identify all entities/models in the codebase
2. For each entity, check if agent tools exist for:
   - Create
   - Read
   - Update
   - Delete
3. Score per entity and overall

Format:
## CRUD Completeness Audit
### Entity CRUD Analysis
| Entity | Create | Read | Update | Delete | Score |
### Overall Score: X/Y entities with full CRUD (percentage%)
### Incomplete Entities (list missing operations)
### Recommendations
```

**Agent 6: UI Integration**
```
Audit for UI INTEGRATION - "Agent actions immediately reflected in UI"

Tasks:
1. Check how agent writes/changes propagate to frontend
2. Look for:
   - Streaming updates (SSE, WebSocket)
   - Polling mechanisms
   - Shared state/services
   - Event buses
   - File watching
3. Identify "silent actions" anti-pattern (agent changes state but UI doesn't update)

Format:
## UI Integration Audit
### Agent Action → UI Update Analysis
| Agent Action | UI Mechanism | Immediate? | Notes |
### Score: X/Y (percentage%)
### Silent Actions (anti-pattern)
### Recommendations
```

**Agent 7: Capability Discovery**
```
Audit for CAPABILITY DISCOVERY - "Users can discover what the agent can do"

Tasks:
1. Check for these 7 discovery mechanisms:
   - Onboarding flow showing agent capabilities
   - Help documentation
   - Capability hints in UI
   - Agent self-describes in responses
   - Suggested prompts/actions
   - Empty state guidance
   - Slash commands (/help, /tools)
2. Score against 7 mechanisms

Format:
## Capability Discovery Audit
### Discovery Mechanism Analysis
| Mechanism | Exists? | Location | Quality |
### Score: X/7 (percentage%)
### Missing Discovery
### Recommendations
```

**Agent 8: Prompt-Native Features**
```
Audit for PROMPT-NATIVE FEATURES - "Features are prompts defining outcomes, not code"

Tasks:
1. Read all agent prompts
2. Classify each feature/behavior as defined in:
   - PROMPT (good): outcomes defined in natural language
   - CODE (bad): business logic hardcoded
3. Check if behavior changes require prompt edit vs code change

Format:
## Prompt-Native Features Audit
### Feature Definition Analysis
| Feature | Defined In | Type | Notes |
### Score: X/Y (percentage%)
### Code-Defined Features (anti-pattern)
### Recommendations
```

</sub-agents>

### Step 3: Compile Summary Report

After all agents complete, compile a summary with:

```markdown
## Agent-Native Architecture Review: [Project Name]

### Overall Score Summary

| Core Principle | Score | Percentage | Status |
|----------------|-------|------------|--------|
| Action Parity | X/Y | Z% | ✅/⚠️/❌ |
| Tools as Primitives | X/Y | Z% | ✅/⚠️/❌ |
| Context Injection | X/Y | Z% | ✅/⚠️/❌ |
| Shared Workspace | X/Y | Z% | ✅/⚠️/❌ |
| CRUD Completeness | X/Y | Z% | ✅/⚠️/❌ |
| UI Integration | X/Y | Z% | ✅/⚠️/❌ |
| Capability Discovery | X/Y | Z% | ✅/⚠️/❌ |
| Prompt-Native Features | X/Y | Z% | ✅/⚠️/❌ |

**Overall Agent-Native Score: X%**

### Status Legend
- ✅ Excellent (80%+)
- ⚠️ Partial (50-79%)
- ❌ Needs Work (<50%)

### Top 10 Recommendations by Impact

| Priority | Action | Principle | Effort |
|----------|--------|-----------|--------|

### What's Working Excellently

[List top 5 strengths]
```

## Success Criteria

- [ ] All 8 sub-agents complete their audits
- [ ] Each principle has a specific numeric score (X/Y format)
- [ ] Summary table shows all scores and status indicators
- [ ] Top 10 recommendations are prioritized by impact
- [ ] Report identifies both strengths and gaps

## Optional: Single Principle Audit

If $ARGUMENTS specifies a single principle (e.g., "action parity"), only run that sub-agent and provide detailed findings for that principle alone.

Valid arguments:
- `action parity` or `1`
- `tools` or `primitives` or `2`
- `context` or `injection` or `3`
- `shared` or `workspace` or `4`
- `crud` or `5`
- `ui` or `integration` or `6`
- `discovery` or `7`
- `prompt` or `features` or `8`
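
For example, `/agent-native-audit crud` audits only CRUD completeness, while running the command with no arguments performs the full eight-principle review.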

package/commands/create-agent-skill.md
ADDED
@@ -0,0 +1,8 @@

---
name: create-agent-skill
description: Create or edit Claude Code skills with expert guidance on structure and best practices
allowed-tools: Skill(create-agent-skills)
argument-hint: [skill description or requirements]
---

Invoke the create-agent-skills skill for: $ARGUMENTS