@zenuml/core 3.32.5 → 3.32.7

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (37)
  1. package/.claude/commands/README.md +162 -0
  2. package/.claude/commands/code-review.md +322 -0
  3. package/.claude/commands/create-docs.md +309 -0
  4. package/.claude/commands/full-context.md +121 -0
  5. package/.claude/commands/gemini-consult.md +164 -0
  6. package/.claude/commands/handoff.md +146 -0
  7. package/.claude/commands/refactor.md +188 -0
  8. package/.claude/commands/update-docs.md +314 -0
  9. package/.claude/hooks/README.md +270 -0
  10. package/.claude/hooks/config/sensitive-patterns.json +86 -0
  11. package/.claude/hooks/gemini-context-injector.sh +129 -0
  12. package/.claude/hooks/mcp-security-scan.sh +147 -0
  13. package/.claude/hooks/notify.sh +103 -0
  14. package/.claude/hooks/setup/hook-setup.md +96 -0
  15. package/.claude/hooks/setup/settings.json.template +63 -0
  16. package/.claude/hooks/sounds/complete.wav +0 -0
  17. package/.claude/hooks/sounds/input-needed.wav +0 -0
  18. package/.claude/hooks/subagent-context-injector.sh +65 -0
  19. package/.devcontainer/devcontainer.json +21 -0
  20. package/.storybook/main.ts +25 -0
  21. package/.storybook/preview.ts +29 -0
  22. package/MCP-ASSISTANT-RULES.md +85 -0
  23. package/dist/zenuml.esm.mjs +6461 -6404
  24. package/dist/zenuml.js +73 -73
  25. package/docs/CONTEXT-tier2-component.md +96 -0
  26. package/docs/CONTEXT-tier3-feature.md +162 -0
  27. package/docs/README.md +207 -0
  28. package/docs/ai-context/deployment-infrastructure.md +21 -0
  29. package/docs/ai-context/docs-overview.md +89 -0
  30. package/docs/ai-context/handoff.md +174 -0
  31. package/docs/ai-context/project-structure.md +160 -0
  32. package/docs/ai-context/system-integration.md +21 -0
  33. package/docs/open-issues/example-api-performance-issue.md +79 -0
  34. package/eslint.config.mjs +26 -26
  35. package/package.json +10 -3
  36. package/playwright.config.ts +1 -1
  37. package/tailwind.config.js +0 -4
@@ -0,0 +1,309 @@
+ You are working on the VR Language Learning App project. The user has requested to create or regenerate documentation with the arguments: "$ARGUMENTS"
+
+ ## Auto-Loaded Project Context:
+ @/CLAUDE.md
+ @/docs/ai-context/project-structure.md
+ @/docs/ai-context/docs-overview.md
+
+ ## CRITICAL: AI-Optimized Documentation Principles
+ All documentation must be optimized for AI consumption and future-proofing:
+ - **Structured & Concise**: Use clear sections, lists, and hierarchies. Provide essential information only.
+ - **Contextually Complete**: Include necessary context, decision rationale ("why"), and cross-references.
+ - **Pattern-Oriented**: Make architectural patterns, conventions, and data flow explicit.
+ - **Modular & Scalable**: Structure for partial updates and project growth.
+ - **Cross-Referenced**: Link related concepts with file paths, function names, and stable identifiers.
+
+ ---
+
+ ## Step 1: Analyze & Strategize
+
+ Using the auto-loaded project context, analyze the user's request and determine the optimal documentation strategy.
+
+ ### 1.1. Parse Target & Assess Complexity
+ **Action**: Analyze `$ARGUMENTS` to identify the `target_path` and its documentation tier.
+
+ **Target Classification:**
+ - **Tier 3 (Feature-Specific)**: Paths containing `/src/` and ending in `/CONTEXT.md`
+ - **Tier 2 (Component-Level)**: Paths ending in a component root `/CONTEXT.md`
+
+ **Complexity Assessment Criteria:**
+ - **Codebase Size**: File count and lines of code in the target directory
+ - **Technology Mix**: Diversity of languages and frameworks (Python, TypeScript, etc.)
+ - **Architectural Complexity**: Dependency graph and cross-component imports
+ - **Existing Documentation**: Presence and state of any CLAUDE.md files in the area
+
+ ### 1.2. Select Strategy
+ Think deeply about this documentation generation task in light of the auto-loaded project context, then, based on the assessment, select and announce a strategy.
+
+ **Strategy Logic:**
+ - **Direct Creation**: Simple targets (< 15 files, single tech, standard patterns)
+ - **Focused Analysis**: Moderate complexity (15-75 files, 2-3 techs, some novel patterns)
+ - **Comprehensive Analysis**: High complexity (> 75 files, 3+ techs, significant architectural depth)
+
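The classification and thresholds above can be sketched as a small helper. This is an illustrative sketch only - `classify_tier` and `select_strategy` are hypothetical names, and the real decision is made in-prompt with judgment about novel patterns and architectural depth, not just counts:

```python
from pathlib import Path

def classify_tier(target_path: str) -> int:
    """Tier 3 if the CONTEXT.md lives under /src/, otherwise Tier 2."""
    p = Path(target_path)
    if p.name != "CONTEXT.md":
        raise ValueError(f"not a CONTEXT.md target: {target_path}")
    return 3 if "src" in p.parts else 2

def select_strategy(file_count: int, tech_count: int) -> str:
    """Map the complexity assessment onto the three strategies."""
    if file_count < 15 and tech_count == 1:
        return "Direct Creation"
    if file_count <= 75 and tech_count <= 3:
        return "Focused Analysis"
    return "Comprehensive Analysis"
```

Borderline cases (e.g. few files but many technologies) fall through to the broader strategy, mirroring the command's bias toward analyzing more when uncertain.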
+ ---
+
+ ## Step 2: Information Gathering (Analysis Phase)
+
+ Based on the chosen strategy, gather the necessary information.
+
+ ### Strategy A: Direct Creation
+ Proceed directly to **Step 3.1**. Perform lightweight analysis during content generation.
+
+ ### Strategy B: Focused or Comprehensive Analysis (Sub-Agent Orchestration)
+
+ #### 2.1. Sub-Agent Roles
+ Select from these specialized roles based on the complexity assessment:
+ - **`Code_Analyzer`**: File structure, implementation patterns, logic flow, coding conventions
+ - **`Tech_Stack_Identifier`**: Frameworks, libraries, dependencies, technology-specific patterns
+ - **`Architecture_Mapper`**: Cross-component dependencies, integration points, data flow
+ - **`Doc_Validator`**: Existing documentation accuracy, gaps, valuable insights, content overlap analysis
+
+ #### 2.2. Launch Sub-Agents
+ **Execution Plan:**
+ - **Focused Analysis (2-3 agents)**: `Code_Analyzer` + `Tech_Stack_Identifier` + `Doc_Validator` (if existing docs)
+ - **Comprehensive Analysis (3-4 agents)**: All agents as needed
+
+ **CRITICAL: Launch agents in parallel using a single message with multiple Task tool invocations for optimal performance.**
+
+ **Task Template:**
+ ```
+ Task: "As the [Agent_Role], analyze the codebase at `[target_path]` to support documentation generation.
+
+ Your focus: [role-specific goal, e.g., 'identifying all architectural patterns and dependencies']
+
+ Standard workflow:
+ 1. Review auto-loaded project context (CLAUDE.md, project-structure.md, docs-overview.md)
+ 2. Analyze the target path for your specialized area
+ 3. Return structured findings for documentation generation
+
+ Return a comprehensive summary of your findings for this role."
+ ```
+
+ ---
+
+ ## Step 3: Documentation Generation
+
+ Using the gathered information, think deeply about synthesizing the findings, then generate the documentation content.
+
+ ### 3.1. Content Synthesis & Generation
+
+ #### For Direct Creation (No Sub-Agents)
+ **Code-First Analysis Methodology:**
+ 1. **Directory Structure Analysis**: Map file organization and purposes using Glob/LS
+ 2. **Import Dependency Analysis**: Use Grep to identify integration patterns and dependencies
+ 3. **Pattern Extraction**: Read key files to identify architectural patterns and coding conventions
+ 4. **Technology Usage Analysis**: Detect frameworks, libraries, and technology-specific patterns
+ 5. **Existing Documentation Assessment**: Read any current CLAUDE.md files for valuable insights
+
+ #### For Sub-Agent Strategies
+ **Synthesis Integration Process:**
+ 1. **Compile Core Findings**: Merge agent findings for immediate documentation generation
+ 2. **Extract Cross-Tier Patterns**: Identify system-wide patterns that may impact foundational documentation
+ 3. **Resolve Information Conflicts**: When code contradicts existing docs, use code as the source of truth
+ 4. **Identify Content Gaps**: Find areas needing new documentation based on the analysis
+ 5. **Apply Project Conventions**: Use coding standards and naming conventions from the auto-loaded /CLAUDE.md
+ 6. **Content Overlap Identification**: From Doc_Validator findings, identify existing documentation that overlaps with target content for later migration analysis
+
+ #### Content Generation Process
+ **For Both Approaches:**
+ 1. **Select Template**: Choose Tier 2 or Tier 3 based on the target classification
+ 2. **Apply Content Treatment Strategy**:
+    - **Preserve**: Validated architectural insights from existing documentation
+    - **Enhance**: Extend existing patterns with newly discovered implementation details
+    - **Replace**: Outdated content that conflicts with current code reality
+    - **Create**: New documentation for undocumented patterns and decisions
+ 3. **Populate Sections**: Fill template sections with synthesized findings
+ 4. **Ensure Completeness**: Include architectural decisions, patterns, dependencies, and integration points
+ 5. **Follow AI-Optimized Principles**: Structure for AI consumption with clear cross-references
+
+ ### 3.2. Template Guidelines
+
+ **Tier 2 (Component-Level):**
+ ```markdown
+ # [Component Name] - Component Context
+
+ ## Purpose
+ [Component purpose and key responsibilities]
+
+ ## Current Status: [Status]
+ [Status with evolution context and rationale]
+
+ ## Component-Specific Development Guidelines
+ [Technology-specific patterns and conventions]
+
+ ## Major Subsystem Organization
+ [High-level structure based on actual code organization]
+
+ ## Architectural Patterns
+ [Core patterns and design decisions]
+
+ ## Integration Points
+ [Dependencies and connections with other components]
+ ```
+
+ **Tier 3 (Feature-Specific):**
+ ```markdown
+ # [Feature Area] Documentation
+
+ ## [Area] Architecture
+ [Key architectural elements and integration patterns]
+
+ ## Implementation Patterns
+ [Core patterns and error handling strategies]
+
+ ## Key Files and Structure
+ [File organization with purposes]
+
+ ## Integration Points
+ [How this integrates with other parts of the system]
+
+ ## Development Patterns
+ [Testing approaches and debugging strategies]
+ ```
+
+ ---
+
+ ## Step 4: Finalization & Housekeeping
+
+ ### 4.1. Write Documentation File
+ **Action**: Write the generated content to the target path.
+
+ ### 4.2. Update Documentation Registry
+
+ #### Update docs-overview.md
+ **For new documentation files:**
+ - Add to the appropriate tier section (Feature-Specific or Component-Level)
+ - Follow the established entry format with path and description
+ - Maintain alphabetical ordering within sections
+
+ **For updated existing files:**
+ - Verify the entry exists and its description is current
+ - Update any changed purposes or scopes
+
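Keeping a tier section alphabetized can be sketched as a simple list manipulation (illustrative only - `insert_entry` and the sample entries are hypothetical; the real docs-overview.md is edited in place):

```python
def insert_entry(section_lines: list[str], new_entry: str) -> list[str]:
    """Insert a markdown list entry into a tier section, keeping alphabetical order."""
    lines = [l for l in section_lines if l != new_entry]  # avoid exact duplicates
    out, placed = [], False
    for line in lines:
        # Place the new entry before the first list item that sorts after it.
        if not placed and line.startswith("- ") and new_entry.lower() < line.lower():
            out.append(new_entry)
            placed = True
        out.append(line)
    if not placed:
        out.append(new_entry)
    return out

section = [
    "- `docs/ai-context/docs-overview.md` - Documentation map",
    "- `docs/ai-context/project-structure.md` - Tech stack and file tree",
]
updated = insert_entry(section, "- `docs/ai-context/handoff.md` - Session handoff notes")
```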
+ #### Update Project Structure (if needed)
+ **If new directories were created:**
+ - Update the file tree in `/docs/ai-context/project-structure.md`
+ - Add directory comments explaining purpose
+ - Maintain tree structure formatting and organization
+
+ ### 4.3. Quality Validation
+ **Action**: Verify tier appropriateness, code accuracy, cross-reference validity, and consistency with existing documentation patterns.
+
+ ### 4.4. Tier 1 Validation & Recommendations
+
+ **Action**: Compare discovered code patterns against foundational documentation to identify inconsistencies and improvement opportunities.
+
+ #### Process
+ 1. **Discover Tier 1 Files**: Read `/docs/ai-context/docs-overview.md` to identify all foundational documentation files
+ 2. **Read Foundational Docs**: Load the discovered Tier 1 files to understand the documented architecture
+ 3. **Cross-Tier Analysis**: Using analysis findings from previous steps, compare:
+    - **Technology Stack**: Discovered frameworks/tools vs. documented stack
+    - **Architecture Patterns**: Implementation reality vs. documented decisions
+    - **Integration Points**: Actual dependencies vs. documented integrations
+ 4. **Generate Recommendations**: Output evidence-based suggestions for foundational documentation updates
+
+ ### 4.5. Content Migration & Redundancy Management
+
+ **Action**: Intelligently manage the content hierarchy and eliminate redundancy across documentation tiers.
+
+ #### Cross-Reference Analysis
+ 1. **Identify Related Documentation**: Using Doc_Validator findings from the Step 3.1 synthesis and the target tier classification, identify existing documentation that may contain overlapping content
+ 2. **Content Overlap Detection**: Compare new documentation content with existing files to identify:
+    - **Duplicate Information**: Identical content that should exist in only one location
+    - **Hierarchical Overlaps**: Content that sits at the wrong tier level (implementation details in architectural docs)
+    - **Cross-Reference Opportunities**: Content that should be linked rather than duplicated
+
+ #### Smart Content Migration Strategy
+ **Content Classification Framework:**
+ - **Tier-Appropriate Duplication**: High-level architectural context can exist at both Tier 2 and Tier 3 with different detail levels
+ - **Migration Candidates**: Detailed implementation patterns, specific code examples, feature-specific technical details
+ - **Reference Targets**: Stable architectural decisions, design rationale, cross-cutting concerns
+
+ **Migration Decision Logic:**
+ 1. **For Tier 3 Creation (Feature-Specific)**:
+    - **Extract from Tier 2**: Move feature-specific implementation details to the new Tier 3 file
+    - **Preserve in Tier 2**: Keep the high-level architectural overview and design decisions
+    - **Add Cross-References**: Link the Tier 2 overview to the detailed Tier 3 implementation
+
+ 2. **For Tier 2 Creation (Component-Level)**:
+    - **Consolidate from Multiple Tier 3**: Aggregate architectural insights from existing feature docs
+    - **Preserve Tier 3 Details**: Keep implementation specifics in the feature documentation
+    - **Create Navigation Structure**: Add references to relevant Tier 3 documentation
+
+ #### Content Migration Execution
+ **Migration Process:**
+ 1. **Identify Source Content**: Extract content that should migrate from existing files
+ 2. **Content Transformation**: Adapt content to the appropriate tier level (architectural vs. implementation focus)
+ 3. **Update Source Files**: Remove migrated content and add cross-references to the new location
+ 4. **Preserve Context**: Ensure source files remain coherent after content removal
+ 5. **Validate Migrations**: Confirm no broken references or lost information
+
+ **Safety Framework:**
+ - **Conservative Defaults**: When uncertain, preserve content in its original location and add references
+ - **Content Preservation**: Never delete content without creating it elsewhere first
+ - **Migration Reversibility**: Document all migrations to enable rollback if needed
+
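The reversibility requirement above can be sketched as a migration log. The `MigrationRecord` type and the sample paths are hypothetical - the point is that each move carries a verbatim copy of the moved text plus its rationale, so any migration can be rolled back:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MigrationRecord:
    """One reversible content move between documentation tiers."""
    source_file: str
    target_file: str
    content_type: str   # e.g. "implementation patterns"
    rationale: str
    moved_text: str     # verbatim copy, so the move can be rolled back
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

migration_log: list[MigrationRecord] = []
migration_log.append(MigrationRecord(
    source_file="docs/CONTEXT-tier2-component.md",
    target_file="src/audio/CONTEXT.md",
    content_type="implementation patterns",
    rationale="feature-specific detail belongs at Tier 3",
    moved_text="## Waveform buffering\n...",
))
```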
+ ---
+
+ ## Step 5: Generate Summary
+
+ Provide a comprehensive summary including:
+
+ ### Documentation Creation Results
+ - **Documentation type and location** (Tier 2 or Tier 3)
+ - **Strategy used** (Direct Creation, Focused Analysis, or Comprehensive Analysis)
+ - **Key patterns documented** (architectural decisions, implementation patterns)
+ - **Registry updates made** (docs-overview.md, project-structure.md entries)
+
+ ### Tier 1 Architectural Intelligence
+ **Based on the Step 4.4 analysis, provide structured recommendations:**
+
+ #### Critical Updates Needed
+ - **File**: [specific foundational doc path]
+ - **Issue**: [specific inconsistency with evidence]
+ - **Recommendation**: [specific update needed]
+ - **Evidence**: [code references supporting the recommendation]
+
+ #### Architecture Enhancement Opportunities
+ - **Gap Identified**: [missing foundational documentation area]
+ - **Scope**: [what should be documented]
+ - **Rationale**: [why this deserves foundational documentation]
+ - **Implementation Evidence**: [code patterns discovered]
+
+ #### Documentation Health Assessment
+ - **Alignment Score**: [overall consistency between code and docs]
+ - **Most Accurate Areas**: [foundational docs that match the implementation well]
+ - **Areas Needing Attention**: [foundational docs with significant gaps/inconsistencies]
+ - **Systematic Improvement Priority**: [recommended order for addressing issues]
+
+ #### Content Migration Results
+ **Document all content hierarchy changes and redundancy eliminations:**
+
+ - **Content Migrated From**: [source file path] → [target file path]
+   - **Content Type**: [e.g., "implementation patterns", "technical details", "architectural decisions"]
+   - **Rationale**: [why this content belongs at the target tier]
+   - **Cross-References Added**: [navigation links created between tiers]
+
+ - **Content Preserved At**: [broader tier file]
+   - **Content Type**: [e.g., "architectural overview", "design decisions", "integration patterns"]
+   - **Rationale**: [why this content remains at the broader tier]
+
+ - **Redundancies Eliminated**:
+   - **Duplicate Content Removed**: [specific duplications eliminated]
+   - **Hierarchical Corrections**: [content moved to the appropriate tier level]
+   - **Reference Consolidations**: [areas where links replaced duplication]
+
+ - **Migration Safety**:
+   - **Content Preserved**: [confirmation that no information was lost]
+   - **Rollback Information**: [documentation of changes for potential reversal]
+   - **Validation Results**: [confirmation of no broken references]
+
+ #### Next Documentation Steps (Optional Recommendations)
+ - **Feature-Specific Documentation Candidates**: [additional Tier 3 docs that would be valuable]
+ - **Cross-Component Documentation Needs**: [other components needing similar analysis]
+ - **Documentation Debt Eliminated**: [summary of redundancies and inconsistencies resolved]
+
+ ---
+
+ Now proceed to create/regenerate documentation based on the request: $ARGUMENTS
@@ -0,0 +1,121 @@
+ You are working on the VR Language Learning App project. Before proceeding with the user's request "$ARGUMENTS", you need to intelligently gather relevant project context using an adaptive sub-agent strategy.
+
+ ## Auto-Loaded Project Context:
+ @/CLAUDE.md
+ @/docs/ai-context/project-structure.md
+ @/docs/ai-context/docs-overview.md
+
+ ## Step 1: Intelligent Analysis Strategy Decision
+ Think deeply about the project context that has been auto-loaded above. Based on the user's request "$ARGUMENTS" and the project structure/documentation overview, decide on the optimal approach:
+
+ ### Strategy Options:
+ **Direct Approach** (0-1 sub-agents):
+ - When the request can be handled efficiently with targeted documentation reading and direct analysis
+ - Simple questions about existing code or straightforward tasks
+
+ **Focused Investigation** (2-3 sub-agents):
+ - When deep analysis of a specific area would benefit the response
+ - For complex single-domain questions or tasks requiring thorough exploration
+ - When dependencies and impacts need careful assessment
+
+ **Multi-Perspective Analysis** (3+ sub-agents):
+ - When the request involves multiple areas, components, or technical domains
+ - When comprehensive understanding requires different analytical perspectives
+ - For tasks requiring careful dependency mapping and impact assessment
+ - Scale the number of agents to actual complexity, not predetermined patterns
+
+ ## Step 2: Autonomous Sub-Agent Design
+
+ ### For the Sub-Agent Approach:
+ You have complete freedom to design sub-agent tasks based on:
+ - **Project structure** discovered from the auto-loaded `/docs/ai-context/project-structure.md` file tree
+ - **Documentation architecture** from the auto-loaded `/docs/ai-context/docs-overview.md`
+ - **Specific user request requirements**
+ - **Your assessment** of which investigation approach would be most effective
+
+ **CRITICAL: When using sub-agents, always launch them in parallel using a single message with multiple Task tool invocations. Never launch them sequentially.**
+
+ ### Sub-Agent Autonomy Principles:
+ - **Custom Specialization**: Define agent focus areas based on the specific request and project structure
+ - **Flexible Scope**: Agents can analyze any combination of documentation, code files, and architectural patterns
+ - **Adaptive Coverage**: Ensure all relevant aspects of the user's request are covered without overlap
+ - **Documentation + Code**: Each agent should read relevant documentation files AND examine actual implementation code
+ - **Dependency Mapping**: For tasks involving code changes, analyze import/export relationships and identify all files that would be affected
+ - **Impact Assessment**: Consider ripple effects across the codebase, including tests, configurations, and related components
+ - **Pattern Compliance**: Ensure solutions follow existing project conventions for naming, structure, and architecture
+ - **Cleanup Planning**: For structural changes, identify obsolete code, unused imports, and deprecated files that should be removed to prevent code accumulation
+ - **Web Research**: Optionally deploy sub-agents for web searches when current best practices, security advisories, or external compatibility research would enhance the response
+
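The dependency-mapping principle above (finding every file affected by a change) can be sketched with a simple import scan. This is a hedged sketch: `files_importing` is a hypothetical helper, and the regexes cover only plain `import`/`from` forms, not re-exports or dynamic imports:

```python
import re
from pathlib import Path

def files_importing(module_name: str, root: str = ".") -> list[str]:
    """Find TypeScript/Python files under root that import the given module."""
    mod = re.escape(module_name)
    pattern = re.compile(
        rf'^\s*import .*["\']{mod}["\']'   # TS/JS: import x from "mod"
        rf'|^\s*from {mod} import'         # Python: from mod import x
        rf'|^\s*import {mod}\b',           # Python: import mod
        re.MULTILINE,
    )
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in {".ts", ".tsx", ".py"}:
            if pattern.search(path.read_text(errors="ignore")):
                hits.append(str(path))
    return sorted(hits)
```

A sub-agent doing this by hand with Grep follows the same shape: search for the module name in import positions, then widen to the tests and configs of each hit.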
+ ### Sub-Agent Task Design Template:
+ ```
+ Task: "Analyze [SPECIFIC_COMPONENT(S)] for [TASK_OBJECTIVE] related to user request '$ARGUMENTS'"
+
+ Standard Investigation Workflow:
+ 1. Review auto-loaded project context (CLAUDE.md, project-structure.md, docs-overview.md)
+ 2. (Optionally) Read additional relevant documentation files for architectural context
+ 3. Analyze actual code files in [COMPONENT(S)] for implementation reality
+ 4. For code-related tasks: Map import/export dependencies and identify affected files
+ 5. Assess impact on tests, configurations, and related components
+ 6. Verify alignment with project patterns and conventions
+ 7. For structural changes: Identify obsolete code, unused imports, and files that should be removed
+
+ Return comprehensive findings that address the user's request from this component perspective, including architectural insights, implementation details, dependency mapping, and practical considerations for safe execution."
+ ```
+
+ Example Usage:
+ ```
+ Analysis Task: "Analyze web-dashboard audio processing components to understand current visualization capabilities and identify integration points for user request about adding waveform display"
+
+ Implementation Task: "Analyze agents/tutor-server voice pipeline components for latency optimization related to user request about improving response times, including dependency mapping and impact assessment"
+
+ Cross-Component Task: "Analyze Socket.IO integration patterns across web-dashboard and tutor-server to plan streaming enhancement for user request about adding live transcription, focusing on import/export changes, affected test files, and cleanup of deprecated socket handlers"
+ ```
+
+ ## Step 3: Execution and Synthesis
+
+ ### For the Sub-Agent Approach:
+ Think deeply about integrating findings from all investigation perspectives.
+ 1. **Design and launch custom sub-agents** based on your strategic analysis
+ 2. **Collect findings** from all successfully completed agents
+ 3. **Synthesize comprehensive understanding** by combining all perspectives
+ 4. **Handle partial failures** by working with the available agent findings
+ 5. **Create an implementation plan** (for code changes): Include dependency updates, affected files, cleanup tasks, and verification steps
+ 6. **Execute the user request** using the integrated knowledge from all agents
+
+ ### For the Direct Approach:
+ 1. **Load relevant documentation and code** based on request analysis
+ 2. **Proceed directly** with the user request using targeted context
+
+ ## Step 4: Consider MCP Server Usage (Optional)
+
+ After gathering context, you may leverage MCP servers for complex technical questions, as specified in the auto-loaded `/CLAUDE.md` Section 4:
+ - **Gemini Consultation**: Deep analysis of complex coding problems
+ - **Context7**: Up-to-date documentation for external libraries
+
+ ## Step 5: Context Summary and Implementation Plan
+
+ After gathering context using your chosen approach:
+ 1. **Provide a concise status update** summarizing findings and approach:
+    - Brief description of what was discovered through your analysis
+    - Your planned implementation strategy based on the findings
+    - Keep it informative but concise (2-4 sentences max)
+
+ Example status updates:
+ ```
+ "Analysis revealed the voice pipelines use Socket.IO for real-time communication with separate endpoints for each pipeline type. I'll implement the new transcription feature by extending the existing Socket.IO event handling in both the FastAPI backend and SvelteKit frontend, following the established pattern used in the Gemini Live pipeline. This will require updating 3 import statements and adding exports to the socket handler module."
+
+ "Found that audio processing currently uses a modular client architecture with separate recorder, processor, and stream-player components. I'll add the requested audio visualization by creating a new component that taps into the existing audio stream data and integrates with the current debug panel structure. The implementation will follow the existing component patterns and requires updates to 2 parent components for proper integration."
+ ```
+
+ 2. **Proceed with implementation** of the user request using your comprehensive understanding
+
+ ## Optimization Guidelines
+
+ - **Adaptive Decision-Making**: Choose the approach that best serves the specific user request
+ - **Efficient Resource Use**: Balance thoroughness with efficiency based on actual complexity
+ - **Comprehensive Coverage**: Ensure all aspects relevant to the user's request are addressed
+ - **Quality Synthesis**: Combine findings effectively to provide the most helpful response
+
+ This adaptive approach ensures optimal context gathering - from lightweight direct analysis for simple requests to comprehensive multi-agent investigation for complex system-wide tasks.
+
+ Now proceed with intelligent context analysis for: $ARGUMENTS
@@ -0,0 +1,164 @@
+ # /gemini-consult
+
+ *Engages in deep, iterative conversations with Gemini MCP for complex problem-solving.*
+
+ ## Usage
+ - **With arguments**: `/gemini-consult [specific problem or question]`
+ - **Without arguments**: `/gemini-consult` - Intelligently infers the topic from current context
+
+ ## Core Philosophy
+ Persistent Gemini sessions for evolving problems through:
+ - **Continuous dialogue** - Multiple rounds until clarity is achieved
+ - **Context awareness** - Smart problem detection from current work
+ - **Session persistence** - Keep sessions alive for the entire problem lifecycle
+
+ **CRITICAL: Always treat Gemini's input as suggestions, never as truths.** Think critically about what Gemini says and incorporate only the useful parts into your proposal. Always think for yourself - maintain your independent judgment and analytical capabilities. If you disagree with something, clarify it with Gemini.
+
+ ## Execution
+
+ User-provided context: "$ARGUMENTS"
+
+ ### Step 1: Understand the Problem
+
+ **When $ARGUMENTS is empty:**
+ Think deeply about the current context to infer the most valuable consultation topic:
+ - What files are open or recently modified?
+ - What errors or challenges were discussed?
+ - What complex implementation would benefit from Gemini's analysis?
+ - What architectural decisions need exploration?
+
+ Generate a specific, valuable question based on this analysis.
+
+ **When arguments are provided:**
+ Extract the core problem, context clues, and complexity indicators.
+
+ ### Step 1.5: Gather External Documentation
+
+ **Think deeply about external dependencies:**
+ - What libraries/frameworks are involved in this problem?
+ - Am I fully familiar with their latest APIs and best practices?
+ - Have these libraries changed significantly, or are they new/evolving?
+
+ **When to use Context7 MCP:**
+ - Libraries with frequent updates (e.g., Google GenAI SDK)
+ - New libraries you haven't worked with extensively
+ - When implementing features that rely heavily on library-specific patterns
+ - Whenever uncertainty exists about current best practices
+
+ ```python
+ # Example: Get up-to-date documentation
+ library_id = mcp__context7__resolve_library_id(libraryName="google genai python")
+ docs = mcp__context7__get_library_docs(
+     context7CompatibleLibraryID=library_id,
+     topic="streaming",  # Focus on relevant aspects
+     tokens=8000
+ )
+ ```
+
+ Include relevant documentation insights in your Gemini consultation for more accurate, current guidance.
+
+ ### Step 2: Initialize Gemini Session
+
+ **CRITICAL: Always attach foundational files:**
+ ```python
+ foundational_files = [
+     "MCP-ASSISTANT-RULES.md",  # If it exists
+     "docs/ai-context/project-structure.md",
+     "docs/ai-context/docs-overview.md"
+ ]
+
+ session = mcp__gemini__consult_gemini(
+     specific_question="[Clear, focused question]",
+     problem_description="[Comprehensive context with constraints from CLAUDE.md]",
+     code_context="[Relevant code snippets]",
+     attached_files=foundational_files + [problem_specific_files],
+     file_descriptions={
+         "MCP-ASSISTANT-RULES.md": "Project vision and coding standards",
+         "docs/ai-context/project-structure.md": "Complete tech stack and file structure",
+         "docs/ai-context/docs-overview.md": "Documentation architecture",
+         # Add problem-specific descriptions
+     },
+     preferred_approach="[solution/review/debug/optimize/explain]"
+ )
+ ```
+
+ ### Step 3: Engage in Deep Dialogue
+
+ **Think deeply about how to maximize value from the conversation:**
+
+ 1. **Active Analysis**
+    - What assumptions did Gemini make?
+    - What needs clarification or deeper exploration?
+    - What edge cases or alternatives should be discussed?
+    - **If Gemini mentions external libraries:** Check Context7 MCP for current documentation to verify or supplement Gemini's guidance
+
+ 2. **Iterative Refinement**
+    ```python
+    follow_up = mcp__gemini__consult_gemini(
+        specific_question="[Targeted follow-up]",
+        session_id=session["session_id"],
+        additional_context="[New insights, questions, or implementation feedback]",
+        attached_files=[newly_relevant_files]
+    )
+    ```
+
+ 3. **Implementation Feedback Loop**
+    Share actual code changes and real-world results to refine the approach.
+
+ ### Step 4: Session Management
+
+ **Keep sessions open** - Don't close them immediately. Maintain them for the entire problem lifecycle.
+
+ **Only close when:**
+ - The problem is definitively solved and tested
+ - The topic is no longer relevant
+ - A fresh start would be more beneficial
+
+ **Monitor sessions:**
+ ```python
+ active = mcp__gemini__list_sessions()
+ requests = mcp__gemini__get_gemini_requests(session_id="...")
+ ```
+
+ ## Key Patterns
+
+ ### Clarification Pattern
+ "You mentioned [X]. In our context of [project specifics], how does this apply to [specific concern]?"
+
+ ### Deep Dive Pattern
+ "Let's explore [aspect] further. What are the trade-offs given our [constraints]?"
+
+ ### Alternative Pattern
+ "What if we approached this as [alternative]? How would that affect [concern]?"
+
+ ### Progress Check Pattern
+ "I've implemented [changes]. Here's what happened: [results]. Should I adjust the approach?"
+
+ ## Best Practices
+
+ 1. **Think deeply** before each interaction - what will extract maximum insight?
+ 2. **Be specific** - Vague questions get vague answers
+ 3. **Show actual code** - Not descriptions
+ 4. **Challenge assumptions** - Don't accept unclear guidance
+ 5. **Document decisions** - Capture the "why" for future reference
+ 6. **Stay curious** - Explore alternatives and edge cases
+ 7. **Trust but verify** - Test all suggestions thoroughly
+
+ ## Implementation Approach
+
+ When implementing Gemini's suggestions:
+ 1. Start with the highest-impact changes
+ 2. Test incrementally
+ 3. Share results back with Gemini
+ 4. Iterate based on real-world feedback
+ 5. Document key insights in appropriate CONTEXT.md files
+
+ ## Remember
+
+ - This is a **conversation**, not a query service
+ - **Context is king** - More context yields better guidance
+ - **Gemini sees patterns you might miss** - Be open to unexpected insights
+ - **Implementation reveals truth** - Share what actually happens
+ - Treat Gemini as a **collaborative thinking partner**, not an oracle
+
+ The goal is deep understanding and optimal solutions through iterative refinement, not quick answers.