@zenuml/core 3.32.6 → 3.33.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (38)
  1. package/.claude/commands/README.md +162 -0
  2. package/.claude/commands/code-review.md +322 -0
  3. package/.claude/commands/create-docs.md +309 -0
  4. package/.claude/commands/full-context.md +121 -0
  5. package/.claude/commands/gemini-consult.md +164 -0
  6. package/.claude/commands/handoff.md +146 -0
  7. package/.claude/commands/refactor.md +188 -0
  8. package/.claude/commands/update-docs.md +314 -0
  9. package/.claude/hooks/README.md +270 -0
  10. package/.claude/hooks/config/sensitive-patterns.json +86 -0
  11. package/.claude/hooks/gemini-context-injector.sh +129 -0
  12. package/.claude/hooks/mcp-security-scan.sh +147 -0
  13. package/.claude/hooks/notify.sh +103 -0
  14. package/.claude/hooks/setup/hook-setup.md +96 -0
  15. package/.claude/hooks/setup/settings.json.template +63 -0
  16. package/.claude/hooks/sounds/complete.wav +0 -0
  17. package/.claude/hooks/sounds/input-needed.wav +0 -0
  18. package/.claude/hooks/subagent-context-injector.sh +65 -0
  19. package/.storybook/main.ts +25 -0
  20. package/.storybook/preview.ts +29 -0
  21. package/MCP-ASSISTANT-RULES.md +85 -0
  22. package/README.md +1 -1
  23. package/TUTORIAL.md +116 -0
  24. package/dist/zenuml.esm.mjs +4649 -4598
  25. package/dist/zenuml.js +52 -52
  26. package/docs/CONTEXT-tier2-component.md +96 -0
  27. package/docs/CONTEXT-tier3-feature.md +162 -0
  28. package/docs/README.md +207 -0
  29. package/docs/ai-context/deployment-infrastructure.md +21 -0
  30. package/docs/ai-context/docs-overview.md +89 -0
  31. package/docs/ai-context/handoff.md +174 -0
  32. package/docs/ai-context/project-structure.md +160 -0
  33. package/docs/ai-context/system-integration.md +21 -0
  34. package/docs/open-issues/example-api-performance-issue.md +79 -0
  35. package/eslint.config.mjs +26 -26
  36. package/package.json +9 -2
  37. package/tailwind.config.js +0 -4
  38. package/docs/asciidoc/integration-guide.adoc +0 -121
package/.claude/commands/README.md:
@@ -0,0 +1,162 @@
# 🔧 Command Templates

Orchestration templates that enable Claude Code to coordinate multi-agent workflows for different development tasks.

## Overview

After reading the [main kit documentation](../README.md), you'll understand how these commands fit into the integrated system. Each command:

- **Auto-loads** the appropriate documentation tier for its task
- **Spawns specialized agents** based on complexity
- **Integrates MCP servers** when external expertise helps
- **Maintains documentation** to keep AI context current

### 🚀 Automatic Context Injection

All commands benefit from automatic context injection via the `subagent-context-injector.sh` hook:

- **Core documentation auto-loaded**: Every command and sub-agent automatically receives `@/docs/CLAUDE.md`, `@/docs/ai-context/project-structure.md`, and `@/docs/ai-context/docs-overview.md`
- **No manual context loading**: Sub-agents spawned by commands automatically have access to essential project documentation
- **Consistent knowledge**: All agents start with the same foundational understanding
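
Conceptually, the injection amounts to prepending those three doc references to every sub-agent prompt before dispatch. A minimal sketch of that step, assuming the hook exchanges plain prompt text (the real hook's I/O contract may differ):

```sh
#!/bin/sh
# Hypothetical sketch of the step performed by subagent-context-injector.sh:
# prepend the core documentation references to a sub-agent prompt.
inject_context() {
  prompt="$1"
  printf '%s\n%s\n%s\n\n%s\n' \
    '@/docs/CLAUDE.md' \
    '@/docs/ai-context/project-structure.md' \
    '@/docs/ai-context/docs-overview.md' \
    "$prompt"
}

inject_context "Analyze the rendering pipeline"
```

Because the same three references lead every prompt, agents spawned by different commands share an identical baseline of project knowledge.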

## Available Commands

### 📊 `/full-context`
**Purpose**: Comprehensive context gathering and analysis when you need deep understanding or plan to execute code changes.

**When to use**:
- Starting work on a new feature or bug
- Need to understand how systems interconnect
- Planning architectural changes
- Any task requiring thorough analysis before implementation

**How it works**: Adaptively scales from direct analysis to multi-agent orchestration based on request complexity. Agents read documentation, analyze code, map dependencies, and consult MCP servers as needed.

### 🔍 `/code-review`
**Purpose**: Get multiple expert perspectives on code quality, focusing on high-impact findings rather than nitpicks.

**When to use**:
- After implementing new features
- Before merging important changes
- When you want security, performance, and architecture insights
- Need confidence in code quality

**How it works**: Spawns specialized agents (security, performance, architecture) that analyze in parallel. Each agent focuses on critical issues that matter for production code.

### 🧠 `/gemini-consult` *(Requires Gemini MCP Server)*
**Purpose**: Engage in deep, iterative conversations with Gemini for complex problem-solving and architectural guidance.

**When to use**:
- Tackling complex architectural decisions
- Need expert guidance on implementation approaches
- Debugging intricate issues across multiple files
- Exploring optimization strategies
- When you need a thinking partner for difficult problems

**How it works**: Creates persistent conversation sessions with Gemini, automatically attaching project context and MCP-ASSISTANT-RULES.md. Supports iterative refinement through follow-up questions and implementation feedback.

**Key features**:
- Context-aware problem detection when no arguments are provided
- Persistent sessions maintained throughout the problem lifecycle
- Automatic attachment of foundational project documentation
- Support for follow-up questions with session continuity

### 📝 `/update-docs`
**Purpose**: Keep documentation synchronized with code changes, ensuring AI context remains current.

**When to use**:
- After modifying code
- After adding new features
- When project structure changes
- Following any significant implementation

**How it works**: Analyzes what changed and updates the appropriate CLAUDE.md files across all tiers. Maintains the context that future AI sessions will rely on.

### 📄 `/create-docs`
**Purpose**: Generate initial documentation structure for existing projects that lack AI-optimized documentation.

**When to use**:
- Adopting the framework in an existing project
- Starting documentation from scratch
- Need to document legacy code
- Setting up the 3-tier structure

**How it works**: Analyzes your project structure and creates appropriate CLAUDE.md files at each tier, establishing the foundation for AI-assisted development.

### ♻️ `/refactor`
**Purpose**: Intelligently restructure code while maintaining functionality and updating all dependencies.

**When to use**:
- Breaking up large files
- Improving code organization
- Extracting reusable components
- Cleaning up technical debt

**How it works**: Analyzes file structure, maps dependencies, identifies logical split points, and handles all import/export updates across the codebase.

### 🤝 `/handoff`
**Purpose**: Preserve context when ending a session or when the conversation becomes too long.

**When to use**:
- Ending a work session
- Context limit approaching
- Switching between major tasks
- Supplementing `/compact` with permanent storage

**How it works**: Updates the handoff documentation with session achievements, current state, and next steps. Ensures smooth continuation in future sessions.

## Integration Patterns

### Typical Workflow
```bash
/full-context "implement user notifications"   # Understand
# ... implement the feature ...
/code-review "review notification system"      # Validate
/update-docs "document notification feature"   # Synchronize
/handoff "completed notification system"       # Preserve
```

### Quick Analysis
```bash
/full-context "why is the API slow?"        # Investigate
# ... apply fixes ...
/update-docs "document performance fixes"   # Update context
```

### Major Refactoring
```bash
/full-context "analyze authentication module"   # Understand current state
/refactor "@auth/large-auth-file.ts"            # Restructure
/code-review "review refactored auth"           # Verify quality
/update-docs "document new auth structure"      # Keep docs current
```

### Complex Problem Solving
```bash
/gemini-consult "optimize real-time data pipeline"   # Start consultation
# ... implement suggested approach ...
/gemini-consult                                      # Follow up with results
/update-docs "document optimization approach"        # Capture insights
```

## Customization

Each command template can be adapted:

- **Adjust agent strategies** - Modify how many agents spawn and their specializations
- **Change context loading** - Customize which documentation tiers load
- **Tune MCP integration** - Adjust when to consult external services
- **Modify output formats** - Tailor results to your preferences

Commands are stored in `.claude/commands/` and can be edited directly.

## Key Principles

1. **Commands work together** - Each command builds on others' outputs
2. **Documentation stays current** - Commands maintain their own context
3. **Complexity scales naturally** - Simple tasks stay simple, complex tasks get sophisticated analysis
4. **Context is continuous** - Information flows between sessions through documentation

---

*For detailed implementation of each command, see the individual command files in this directory.*
package/.claude/commands/code-review.md:
@@ -0,0 +1,322 @@
# /code-review

*Performs focused multi-agent code review that surfaces only critical, high-impact findings for solo developers using AI tools.*

## Core Philosophy

This command prioritizes **needle-moving discoveries** over exhaustive lists. Every finding must demonstrate significant impact on:
- System reliability & stability
- Security vulnerabilities with real exploitation risk
- Performance bottlenecks affecting user experience
- Architectural decisions blocking future scalability
- Critical technical debt threatening maintainability

### 🚨 Critical Findings Only
Issues that could cause production failures, security breaches, or severe user impact within 48 hours.

### 🔥 High-Value Improvements
Changes that unlock new capabilities, remove significant constraints, or improve metrics by >25%.

### ❌ Excluded from Reports
Minor style issues, micro-optimizations (<10%), theoretical best practices, edge cases affecting <1% of users.


## Auto-Loaded Project Context:
@/CLAUDE.md
@/docs/ai-context/project-structure.md
@/docs/ai-context/docs-overview.md


## Command Execution

User provided context: "$ARGUMENTS"

### Step 1: Understand User Intent & Gather Context

#### Parse the Request
Analyze the natural language input to determine:
1. **What to review**: Parse file paths, component names, feature descriptions, or commit references
2. **Review focus**: Identify any specific concerns mentioned (security, performance, etc.)
3. **Scope inference**: Intelligently determine the breadth of review needed

Examples of intent parsing:
- "the authentication flow" → Find all files related to auth across the codebase
- "voice pipeline implementation" → Locate voice processing components
- "recent changes" → Parse git history for relevant commits
- "the API routes" → Identify all API endpoint files
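
In its simplest form, scope resolution is a keyword match over the project's file listing. `match_scope` below is a hypothetical helper for illustration only; real scope inference would also consult git history, the documentation tiers, and search tools:

```sh
#!/bin/sh
# Hypothetical sketch: map a described target onto candidate files by
# keyword-matching a file listing supplied on stdin.
match_scope() {
  keyword="$1"
  grep -i -e "$keyword"
}

printf 'src/auth/login.ts\nsrc/api/users.ts\nsrc/auth/session.ts\n' \
  | match_scope auth
```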

#### Read Relevant Documentation
Before allocating agents, **read the documentation** to understand:
1. Use `/docs/ai-context/docs-overview.md` to identify relevant docs
2. Read documentation related to the code being reviewed:
   - Architecture docs for subsystem understanding
   - API documentation for integration points
   - Security guidelines for sensitive areas
   - Performance considerations for critical paths
3. Build a mental model of risks, constraints, and priorities

This context ensures intelligent agent allocation based on actual project knowledge.

### Step 2: Define Mandatory Coverage Areas

Every code review MUST analyze these core areas, with depth determined by scope:

#### 🎯 Mandatory Coverage Areas:

1. **Critical Path Analysis**
   - User-facing functionality that could break
   - Data integrity and state management
   - Error handling and recovery mechanisms

2. **Security Surface**
   - Input validation and sanitization
   - Authentication/authorization flows
   - Data exposure and API security

3. **Performance Impact**
   - Real-time processing bottlenecks
   - Resource consumption (memory, CPU)
   - Scalability constraints

4. **Integration Points**
   - API contracts and boundaries
   - Service dependencies
   - External system interactions

#### 📊 Dynamic Agent Allocation:

Based on review scope, allocate agents proportionally:

**Small to Medium Scope (a small set of files or a small feature)**
- 2-3 agents covering mandatory areas
- Each agent handles 1-2 coverage areas
- Focus on highest-risk aspects

**Large Scope (many files, a major feature, or a subsystem)**
- 4-6 agents with specialized focus
- Each mandatory area gets dedicated coverage
- Additional agents for cross-cutting concerns
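
The proportional rule can be sketched as a simple threshold function. The file counts and thresholds below are illustrative stand-ins for the scope judgment described above, not fixed values:

```sh
#!/bin/sh
# Hypothetical sketch of proportional allocation. File count stands in
# for scope; the real decision also weighs criticality and risk.
allocate_agents() {
  files=$1
  if [ "$files" -le 3 ]; then
    echo 2   # small scope: agents combine coverage areas
  elif [ "$files" -le 10 ]; then
    echo 3   # medium scope: each agent takes 1-2 areas
  else
    echo 6   # large scope: dedicated coverage plus cross-cutting agents
  fi
}

allocate_agents 2    # small fix
allocate_agents 25   # major subsystem
```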

### Step 3: Dynamic Agent Generation

Based on scope analysis and mandatory coverage areas, dynamically create specialized agents:

#### Agent Generation Strategy:

**With your documentation knowledge from Step 1, think deeply** about optimal agent allocation:
- Leverage your understanding of the project architecture and risks
- Consider the specific documentation you read about this subsystem
- Apply insights about critical paths and security considerations
- Use documented boundaries and integration points to partition work
- Factor in any performance or scalability concerns from the docs

Use your understanding of the project to intuitively determine:
1. **How many agents are needed** - Let the code's complexity and criticality guide you
2. **How to partition the work** - Follow natural architectural boundaries
3. **Which specializations matter most** - Focus agents where risk is highest

**Generate Specialized Agents**

For each allocated agent, create a focused role:

**Example for 6-agent allocation:**
- Agent 1: Critical_Path_Validator (user flows + error handling)
- Agent 2: Security_Scanner (input validation + auth)
- Agent 3: API_Security_Auditor (data exposure + boundaries)
- Agent 4: Performance_Profiler (bottlenecks + resource usage)
- Agent 5: Scalability_Analyst (constraints + growth paths)
- Agent 6: Integration_Verifier (dependencies + contracts)

**Example for 3-agent allocation:**
- Agent 1: Security_Performance_Analyst (security + performance areas)
- Agent 2: Critical_Path_Guardian (functionality + integrations)
- Agent 3: Risk_Quality_Assessor (technical debt + code quality)

#### Dynamic Focus Areas:

Each agent receives specialized instructions based on:
- **File characteristics**: API endpoints → security focus
- **Code patterns**: Loops/algorithms → performance focus
- **Dependencies**: External services → integration focus
- **User touchpoints**: UI/voice → critical path focus
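
One way to picture this mapping is a pattern match on the file path. The glob patterns below are hypothetical; a real review would also weigh code patterns, dependencies, and user touchpoints as listed above:

```sh
#!/bin/sh
# Hypothetical sketch: choose a primary focus area from a file path.
# The glob patterns are illustrative only.
focus_for() {
  case "$1" in
    */api/*|*route*|*endpoint*)      echo "security" ;;      # API surface
    *worker*|*pipeline*|*algorithm*) echo "performance" ;;   # hot paths
    *client*|*service*|*sdk*)        echo "integration" ;;   # external deps
    *ui/*|*voice*|*view*)            echo "critical-path" ;; # user touchpoints
    *)                               echo "general" ;;
  esac
}

focus_for "src/api/users.ts"     # security
focus_for "src/voice/input.ts"   # critical-path
```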

### Step 4: Execute Dynamic Multi-Agent Review

**Before launching agents, pause and think deeply:**
- What are the real risks in this code?
- Which areas could cause the most damage if they fail?
- Where would a solo developer need the most help?

Generate and launch agents based on your thoughtful analysis:

```
For each dynamically generated agent:
Task: "As [Agent_Role], analyze [assigned_coverage_areas] in [target_scope].

MANDATORY COVERAGE CHECKLIST:
☐ Critical Path: [assigned aspects]
☐ Security: [assigned aspects]
☐ Performance: [assigned aspects]
☐ Integration: [assigned aspects]

HIGH-IMPACT REVIEW MANDATE:
Focus ONLY on findings that significantly move the needle for a solo developer.

Review workflow:
1. Review auto-loaded project context (CLAUDE.md, project-structure.md, docs-overview.md)
2. Analyze your assigned coverage areas with deep focus
3. For complex issues, use:
   - mcp__gemini__consult_gemini for architectural analysis
   - mcp__context7__get-library-docs for framework best practices
4. Cross-reference with other coverage areas for systemic issues
5. Document ONLY high-impact findings:

## [Coverage_Area] Analysis by [Agent_Role]

### 🚨 Critical Issues (Production Risk)
- Issue: [description]
- Location: [file:line_number]
- Impact: [quantified - downtime hours, users affected, data at risk]
- Fix: [specific code snippet]
- Consequence if ignored: [what happens in 48 hours]

### 🎯 Strategic Improvements (Capability Unlocks)
- Limitation: [what's currently blocked]
- Solution: [architectural change or implementation]
- Unlocks: [new capability or scale]
- ROI: [effort hours vs benefit quantified]

### ⚡ Quick Wins (Optional)
- Only include if <2 hours for >20% improvement
- Must show measurable impact

REMEMBER: Every finding must pass the 'so what?' test for a solo developer."
```

#### Parallel Execution Strategy:

**Launch all agents simultaneously** for maximum efficiency.


### Step 5: Synthesize Findings with Maximum Analysis Power

After all sub-agents complete their analysis:

**ultrathink**

Activate maximum cognitive capabilities to:

1. **Filter for Impact**
   - Discard all low-priority findings
   - Quantify the real-world impact of each issue
   - Focus on production risks and capability unlocks

2. **Deep Pattern Analysis**
   - Identify systemic issues vs. isolated problems
   - Find root causes across agent reports
   - Detect subtle security vulnerabilities

3. **Strategic Prioritization**
   - Calculate ROI for each improvement
   - Consider solo developer constraints
   - Create an actionable fix sequence

Summarize the synthesis in this format:

```markdown
# Code Review Summary

**Reviewed**: [scope description]
**Date**: [current date]
**Overall Quality Score**: [A-F grade with justification]

## Key Metrics
- Security Risk Level: [Critical/High/Medium/Low]
- Performance Impact: [description]
- Technical Debt: [assessment]
- Test Coverage: [if applicable]
```

### Step 6: Present Comprehensive Review

Structure the final output as:

```markdown
# 🔍 Code Review Report

## Executive Summary
[High-level findings and overall assessment]

## 🚨 Production Risks (Fix Within 48 Hours)
[Only issues that could cause downtime, data loss, or security breaches]

## 🎯 Strategic Improvements (High ROI)
[Only changes that unlock capabilities or improve metrics >25%]

## ⚡ Quick Wins (Optional)
[Only if <2 hours effort for significant improvement]

## Detailed Analysis

### Security Assessment
[Detailed security findings from Security_Auditor]

### Performance Analysis
[Detailed performance findings from Performance_Analyzer]

### Architecture Review
[Detailed architecture findings from Architecture_Validator]

### Code Quality Evaluation
[Detailed quality findings from Quality_Inspector]

[Additional sections based on sub-agents used]

## Action Plan
1. Critical fixes preventing production failures
2. High-ROI improvements unlocking capabilities

## Impact Matrix
| Issue | User Impact | Effort | ROI |
|-------|-------------|--------|-----|
| [Only high-impact issues with quantified metrics] |
```

### Step 7: Interactive Follow-up

After presenting the review, offer interactive follow-ups. For example:
- "Would you like me to fix any of the critical issues?"
- "Should I create a detailed refactoring plan for any component?"
- "Do you want me to generate tests for uncovered code?"
- "Should I create GitHub issues for tracking these improvements?"

## Implementation Notes

1. **Use parallel Task execution** for all sub-agents to minimize review time
2. **Include file:line_number references** for easy navigation
3. **Balance criticism with recognition** of good practices
4. **Provide actionable fixes**, not just problem identification
5. **Consider project phase** and priorities when recommending changes
6. **Use MCP servers** for specialized analysis when beneficial
7. **Keep security findings sensitive** - don't expose vulnerabilities publicly

## Coverage Verification

Before presenting results, verify complete coverage:

```
☑ Critical Path Analysis: [Covered by agents X, Y]
☑ Security Surface: [Covered by agents Y, Z]
☑ Performance Impact: [Covered by agents X, Z]
☑ Integration Points: [Covered by agents W, X]
```

If any area lacks coverage, deploy additional focused agents.
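
The gap check itself is mechanical. A hypothetical sketch, where each argument is an `area:agents` pair and an empty agent list marks a gap:

```sh
#!/bin/sh
# Hypothetical sketch: report mandatory areas with no assigned agents.
# Each argument is "area:agent1,agent2"; a trailing colon with nothing
# after it means no agent is covering that area.
check_coverage() {
  for pair in "$@"; do
    case "$pair" in
      *:) echo "GAP: ${pair%:}" ;;
    esac
  done
}

check_coverage "critical-path:A1,A2" "security:" "performance:A3"
# reports GAP: security
```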

## Error Handling

If issues occur during review:
- **Ambiguous input**: Use search tools to find relevant files before asking for clarification
- **File not found**: Search for similar names or components across the codebase
- **Large scope detected**: Dynamically scale agents based on calculated complexity
- **No files found**: Provide helpful suggestions based on project structure
- **Coverage gaps**: Deploy supplementary agents for missed areas