@vfarcic/dot-ai 0.43.0 → 0.45.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (49)
  1. package/README.md +33 -8
  2. package/dist/core/embedding-service.d.ts +80 -0
  3. package/dist/core/embedding-service.d.ts.map +1 -0
  4. package/dist/core/embedding-service.js +198 -0
  5. package/dist/core/index.d.ts +7 -0
  6. package/dist/core/index.d.ts.map +1 -1
  7. package/dist/core/index.js +15 -1
  8. package/dist/core/pattern-creation-session.d.ts +43 -0
  9. package/dist/core/pattern-creation-session.d.ts.map +1 -0
  10. package/dist/core/pattern-creation-session.js +312 -0
  11. package/dist/core/pattern-creation-types.d.ts +30 -0
  12. package/dist/core/pattern-creation-types.d.ts.map +1 -0
  13. package/dist/core/pattern-creation-types.js +8 -0
  14. package/dist/core/pattern-operations.d.ts +11 -0
  15. package/dist/core/pattern-operations.d.ts.map +1 -0
  16. package/dist/core/pattern-operations.js +74 -0
  17. package/dist/core/pattern-types.d.ts +17 -0
  18. package/dist/core/pattern-types.d.ts.map +1 -0
  19. package/dist/core/pattern-types.js +8 -0
  20. package/dist/core/pattern-vector-service.d.ts +97 -0
  21. package/dist/core/pattern-vector-service.d.ts.map +1 -0
  22. package/dist/core/pattern-vector-service.js +302 -0
  23. package/dist/core/schema.d.ts +43 -0
  24. package/dist/core/schema.d.ts.map +1 -1
  25. package/dist/core/schema.js +176 -9
  26. package/dist/core/vector-db-service.d.ts +81 -0
  27. package/dist/core/vector-db-service.d.ts.map +1 -0
  28. package/dist/core/vector-db-service.js +299 -0
  29. package/dist/interfaces/mcp.d.ts.map +1 -1
  30. package/dist/interfaces/mcp.js +10 -2
  31. package/dist/tools/index.d.ts +1 -0
  32. package/dist/tools/index.d.ts.map +1 -1
  33. package/dist/tools/index.js +6 -1
  34. package/dist/tools/organizational-data.d.ts +27 -0
  35. package/dist/tools/organizational-data.d.ts.map +1 -0
  36. package/dist/tools/organizational-data.js +470 -0
  37. package/dist/tools/recommend.d.ts.map +1 -1
  38. package/dist/tools/recommend.js +13 -2
  39. package/dist/tools/version.d.ts +26 -3
  40. package/dist/tools/version.d.ts.map +1 -1
  41. package/dist/tools/version.js +161 -8
  42. package/package.json +3 -1
  43. package/prompts/concept-extraction.md +91 -0
  44. package/prompts/doc-testing-test-section.md +78 -226
  45. package/prompts/resource-selection.md +4 -0
  46. package/prompts/resource-solution-ranking.md +66 -2
  47. package/shared-prompts/prd-create.md +4 -0
  48. package/shared-prompts/prd-start.md +68 -3
  49. package/shared-prompts/prd-update-progress.md +50 -4

package/prompts/doc-testing-test-section.md
@@ -2,7 +2,33 @@

 You are testing a specific section of documentation to validate both functionality AND accuracy. You must verify that instructions work AND that the documentation text truthfully describes what actually happens.

- **Important**: Skip content that has ignore comments containing "dotai-ignore" (e.g., `<!-- dotai-ignore -->`, `.. dotai-ignore`, `// dotai-ignore`). Do not generate issues or recommendations for ignored content.
+ **Important**:
+ - Skip content that has ignore comments containing "dotai-ignore" (e.g., `<!-- dotai-ignore -->`, `.. dotai-ignore`, `// dotai-ignore`). Do not generate issues or recommendations for ignored content.
+ - Look for testing hints in comments containing "dotai-test-hint" (e.g., `<!-- dotai-test-hint: use mcp__dot-ai__prompts to verify slash commands -->`, `.. dotai-test-hint: run command X to test claim Y`, `// dotai-test-hint: check actual behavior with tool Z`). Follow these hints when testing the associated content.
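
Since the ignore and hint markers are plain comments, a test harness can locate them with ordinary text search. A minimal sketch, assuming a hypothetical section file `./tmp/section.md` (the script is illustrative, not part of the package):

```bash
SECTION=./tmp/section.md

# Any dotai-test-hint comments to honor while testing?
grep -n "dotai-test-hint" "$SECTION" || echo "no testing hints found"

# Is any content marked to be skipped entirely?
if grep -q "dotai-ignore" "$SECTION"; then
  echo "dotai-ignore markers present - generate no issues/recommendations for that content"
fi
```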
+
+ ## CRITICAL MINDSET: User Behavior Simulation
+
+ **You are simulating a real user following this documentation step-by-step.**
+
+ ### User Journey Testing Requirements
+
+ **Follow documented workflows exactly as users would:**
+ - If docs say "Run this command to test" → Actually execute that command and verify it works
+ - If docs say "Navigate to Settings page" → Verify that page/option exists and is accessible
+ - If docs say "You should see output X" → Confirm you actually get output X
+ - If docs say "Click the Install button" → Verify that button exists and functions
+ - If docs say "This will automatically happen" → Test that it actually happens automatically
+
+ **Key User Scenarios to Simulate:**
+ 1. **Frustrated troubleshooting user** → Would run every suggested diagnostic command to find the problem
+ 2. **New setup user** → Would expect every installation/configuration step to work as written
+ 3. **Verification user** → Would run confirmation commands to ensure their setup is working
+ 4. **Integration user** → Would follow workflow examples expecting them to produce stated results
+
+ **Critical Testing Mindset Shifts:**
+ - **From**: "This looks like an example command" → **To**: "A user would actually run this - does it work?"
+ - **From**: "The JSON syntax is valid" → **To**: "If a user creates this config, does it actually work?"
+ - **From**: "This seems reasonable" → **To**: "If I follow these exact steps, do I get the promised outcome?"

 ## Section to Test
 **File**: {filePath}
@@ -13,186 +39,46 @@ You are testing a specific section of documentation to validate both functionali
 ## Your Task - Two-Phase Validation

 ### Phase 1: Execute and Test (Functional Validation)
- **Goal**: Verify that instructions, examples, and procedures actually work
-
- Execute everything testable in this section:
+ Execute everything testable as a real user would:
 - Follow step-by-step instructions exactly as written
- - Execute commands, code examples, or procedures
- - Test interactive elements (buttons, forms, interfaces)
- - Verify file operations, downloads, installations
- - Check that prerequisites are sufficient
+ - Execute commands, code examples, procedures (adapt for safety: use `./tmp/` for file operations, test endpoints for URLs, etc.)
+ - Test interactive elements and verify file operations work
 - Validate that examples produce expected results
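
The "adapt for safety" bullet above (spelled out in more detail in the guidance removed further down in this hunk) might look like the following shell sketch; the commands are hypothetical examples, not part of the package:

```bash
# Work in a local scratch directory instead of system paths
mkdir -p ./tmp/test-app && cd ./tmp/test-app

# Use a test endpoint instead of a production API
curl -s -H "Accept: application/json" https://httpbin.org/json

# Ensure containers clean up after themselves
docker run --rm alpine:3 echo "hello"

# Simulate destructive operations rather than running them
echo "Would restart service"
```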

 ### Phase 2: Analyze Claims vs Reality (Semantic Validation)
- **Goal**: Verify that the documentation text truthfully describes what actually happens
-
- **MANDATORY SEMANTIC ANALYSIS** - Check every claim in the documentation:
-
- □ **Difficulty Claims**: Does "easy," "simple," "straightforward" match actual complexity?
- □ **Automation Claims**: Does "automatically," "seamlessly," "instantly" match real behavior?
+ Check every claim in the documentation:
+ □ **Difficulty/Time Claims**: Does "easy," "simple," "quickly," "automatically" match reality?
 □ **Outcome Claims**: Do "you will see," "this enables," "results in" match what actually happens?
- □ **Time Claims**: Do "quickly," "immediately," "in seconds" match actual duration?
 □ **Prerequisite Claims**: Are stated requirements actually sufficient for success?
- □ **Success Claims**: Do "successful," "working," "ready" match actual end states?
 □ **User Experience Claims**: Would a typical user get the promised experience?
- □ **Code/Architecture Claims**: When documentation makes claims about code, files, or system architecture, validate them against the actual codebase
-
- ### Cross-File Terminology Validation (When Applicable)
-
- **When testing documentation that references related files**, validate terminology consistency:
-
- #### Terminology Consistency Check
- **For files that are part of a documentation set** (e.g., setup guides, user guides, API references):
-
- 1. **Identify Key Terms**: Extract important technical terms, feature names, and concepts from current section
- 2. **Find Related Files**: Identify documentation files that should use consistent terminology
- 3. **Cross-Reference Validation**: Check if the same concepts use identical terms across files
- 4. **Flag Inconsistencies**: Report terminology mismatches that could confuse users
-
- **Common Terminology Issues:**
- - Same feature called different names in different files
- - Inconsistent capitalization (e.g., "MCP Server" vs "mcp server")
- - Different terms for same concept (e.g., "slash commands" vs "command shortcuts")
- - Inconsistent format examples (e.g., `/dot-ai:name` vs `/mcp.dot-ai.name`)
-
- **How to Perform Cross-File Validation:**
- 1. **Extract key terms** from current section being tested
- 2. **Read related documentation files** mentioned in cross-references or logically related
- 3. **Compare terminology usage** across files for consistency
- 4. **Report discrepancies** that would confuse users navigating between files
-
- ### Code Analysis Validation (When Applicable)
-
- **When testing technical documentation in a code repository**, perform BOTH directions of validation:
-
- #### 1. Validate Documented Claims Against Code
- **File & Directory Claims:**
- - Check if claimed file paths actually exist (e.g., "src/core/discovery.ts")
- - Verify directory structure matches documentation claims
- - Validate that referenced configuration files exist where claimed
-
- **Component & Feature Claims:**
- - For architecture docs claiming "System has components A, B, C" - read the actual source code
- - Check if documented components/classes/functions actually exist in the codebase
- - Verify CLI commands exist if documentation claims they're available
-
- **Implementation Status Claims:**
- - For status markers (✅ IMPLEMENTED, 🔴 PLANNED) - verify against actual code
- - Check if "planned" features are already implemented but not updated in docs
-
- #### 2. Find Missing Documentation (Reverse Analysis)
- **Scan codebase to identify undocumented features:**
- - Read key source directories (src/, lib/, bin/, tools/) to find major components
- - Check package.json, CLI entry points, and main modules for implemented features
- - Look for significant classes, services, interfaces, or tools not mentioned in documentation
- - Identify recently added features that may not be reflected in architecture docs
-
- **For architecture/system documentation specifically:**
- - Compare documented system components against actual code organization
- - Look for major implemented subsystems missing from architecture diagrams
- - Check if all main interfaces/entry points are documented
-
- **How to Perform Code Analysis:**
- 1. **Forward validation**: For each documented claim, verify against actual code
- 2. **Reverse validation**: Scan actual code to find major features missing from docs
- 3. Use file reading tools to examine source code structure
- 4. Focus on major components that users would need to know about
- 5. Don't flag internal implementation details - focus on user-facing or architecturally significant features
-
- **For each code-related validation, ask:**
- - Does this documented claim match the actual code?
- - Are there major implemented features missing from this documentation section?
- - Would a developer/user be surprised by significant undocumented functionality?
-
- ## Universal Testing Approach
-
- ### Content Discovery
- Look for any testable content in this section:
- - **Commands/Scripts**: Terminal commands, code snippets, shell scripts
- - **Interactive Steps**: Click buttons, fill forms, navigate interfaces
- - **File Operations**: Create, modify, download, upload files
- - **Web Interactions**: Visit URLs, test API endpoints, verify web content
- - **Installation Procedures**: Software setup, dependency installation
- - **Configuration Steps**: Settings, environment setup, account creation
- - **Verification Steps**: Commands or actions that check if something worked
- - **Code Examples**: Runnable code that should produce specific outputs
- - **Troubleshooting**: Problem-solution pairs that can be validated
-
- ### Universal Functional Testing
- For any instruction found:
- 1. **Execute with adaptation** - Modify the steps to work safely in your current environment
- 2. **Verify actual outcomes** - Confirm the steps produce the described results
- 3. **Complete missing context** - Add authentication, permissions, dependencies, or setup as needed
- 4. **Test in safe isolation** - Use temporary directories, test accounts, or sandboxed environments
- 5. **Validate end-to-end** - Ensure the full workflow achieves its stated purpose
-
- ### Universal Semantic Validation
- For every claim or description:
- 1. **Accuracy**: Is the statement factually correct?
- 2. **Completeness**: Are there undocumented requirements or side effects?
- 3. **Precision**: Do vague terms like "automatically," "easily," "quickly" match reality?
- 4. **Outcome matching**: Do results match what's promised?
- 5. **User expectations**: Would following this meet the set expectations?
-
- ## Validation Patterns
-
- ### Pattern 1: Command/Code Claims
- **Documentation Pattern**: "Run X to do Y"
- - **Functional**: Execute command X (adapting for your environment) - does it run without errors?
- - **Semantic**: Does executing command X actually accomplish Y as described?
-
- ### Pattern 2: Step-by-Step Procedures
- **Documentation Pattern**: "Follow these steps to achieve Z"
- - **Functional**: Execute each step (adapting commands/actions for your environment)
- - **Semantic**: Do the executed steps actually lead to achieving Z as described?
-
- ### Pattern 3: Interactive Instructions
- **Documentation Pattern**: "Click A, then B will happen"
- - **Functional**: Perform the interaction (click, form submission, navigation, etc.)
- - **Semantic**: Does performing the action actually cause B to happen as claimed?
-
- ### Pattern 4: Expected Outputs
- **Documentation Pattern**: "You should see output like: [example]"
- - **Functional**: Execute the process and capture actual output
- - **Semantic**: Does the actual output match the documented example (accounting for environment differences)?
-
- ### Pattern 5: Capability Claims
- **Documentation Pattern**: "This feature enables you to X"
- - **Functional**: Use the feature to perform the claimed capability
- - **Semantic**: Does using the feature actually enable X as claimed?
-
- ## Execution Guidelines
-
- **PRIMARY GOAL**: Test what users will actually do when following the documentation.
-
- **MANDATORY TESTING APPROACH:**
- 1. **Execute documented examples first** - Always prioritize running the actual commands/procedures shown in the documentation
- 2. **Use help commands as supplements** - Help commands (`--help`, `man`, `info`) are valuable for understanding syntax or troubleshooting, but should not replace testing documented workflows
- 3. **Test real user workflows** - Focus on the actual commands and procedures users are instructed to follow
-
- **Examples of correct approach:**
- - If docs show `npm start` → Execute `npm start` (primary), use `npm --help` if needed for context
- - If docs show `make install PREFIX=/usr/local` → Execute this command (primary), use `make --help` if syntax is unclear
- - If docs show `./configure --enable-feature` → Execute this command (primary), check `./configure --help` if it fails
- - If docs show `pip install -r requirements.txt` → Execute this command (primary), use `pip --help` for troubleshooting
-
- **The key principle**: Test the documented workflows that users will actually follow, using help commands as tools for understanding rather than as substitutes for real testing.
- - **Execute with adaptation**: Modify commands/procedures to work safely in your environment
- - `npm install -g tool` → `npm install tool` (avoid global installs)
- - `curl api.prod.com/endpoint` → `curl httpbin.org/get` (use test endpoints)
- - `mkdir /usr/local/app` → `mkdir ./tmp/test-app` (use local tmp directory)
- - `cd /path/to/project` → `cd ./tmp/project` (work in local tmp directory)
- - **Create safe contexts**: Use `./tmp/` directory for all file operations and temporary work
- - **Complete incomplete examples**: Add missing parameters, authentication, or setup steps
- - `curl api.example.com` → `curl -H "Accept: application/json" httpbin.org/json`
- - `docker run image` → `docker run --rm -it image` (ensure cleanup)
- - `touch important-file` → `mkdir -p ./tmp && touch ./tmp/important-file` (create in tmp)
- - **Verify actual behavior**: Don't just check syntax - confirm the described outcomes occur
- - **Adapt destructive operations**: Transform dangerous commands into safe equivalents
- - `rm -rf /data` → `rm -rf ./tmp/test-data` (use local tmp directory)
- - `sudo systemctl restart service` → `echo "Would restart service"` (simulate when necessary)
- - Any file creation/modification → redirect to `./tmp/` directory
- - **Document adaptations**: Explain how you modified examples to make them testable
+ □ **Feature Claims**: Are described capabilities actually implemented in the codebase?
+ □ **Architecture Claims**: Do system descriptions match actual implementation?
+ □ **Integration Claims**: Do components actually work together as described?
+ □ **Status Claims**: Are features marked as "available" actually working vs. "planned"?
+
+ ### Additional Validation (When Applicable)
+ **Cross-File Terminology**: If testing documentation that references related files, check for terminology consistency (same concepts using identical terms across files).
+
+ **Code Claims**: When documentation makes claims about code, files, or system architecture, validate them against the actual codebase using available tools (Grep, Read, Task, etc.).
+
+ ## Testing Approach
+
+ ### Functional Testing (Execute Documentation)
+ **Execute documented examples first** - Always prioritize running the actual commands/procedures shown in the documentation, adapting for safety when needed. Use help commands only as supplements for understanding, not as substitutes for real testing.
+
+ ### Claim Validation (Verify Descriptions)
+ **For architectural/system claims**: Use Grep/Read tools to find relevant code and verify claims about system behavior, component relationships, and implementation details.
+
+ **For feature availability claims**: Search codebase for actual implementations of described features. Distinguish between implemented functionality and planned/aspirational descriptions.
+
+ **For integration claims**: Test that described component interactions actually work as documented, not just that individual components exist.
+
+ **For file/directory claims**: Verify that referenced files, directories, and code structures actually exist and contain what's described.
+
+ **Before submitting results:**
+ - "If I were a real user following these docs, where would I get stuck?"
+ - "Did I test the actual user workflows, not just validate syntax?"
+ - "Would a user following these steps get the experience the docs promise?"

 ## Result Format

@@ -203,71 +89,37 @@ Return your results as JSON in this exact format:
 "whatWasDone": "Brief summary of what you tested and executed in this section",
 "issues": [
 "Specific problem or issue you found while testing",
- "Another issue that prevents users from succeeding",
- "Documentation inaccuracy or missing information"
+ "Another issue that prevents users from succeeding"
 ],
 "recommendations": [
 "Specific actionable suggestion to fix an issue",
- "Improvement that would help users succeed",
- "Way to make documentation more accurate"
+ "Improvement that would help users succeed"
 ]
 }
 ```

- **Guidelines for each field:**
-
- **whatWasDone** (string):
- - Concise summary covering BOTH functional testing AND semantic analysis
- - Include what commands/procedures you executed (Phase 1)
- - Include what claims you analyzed (Phase 2)
- - Mention how many items you tested in both phases
- - Example: "Tested 4 installation commands - npm install, API key setup, and 2 verification commands. All executed successfully with minor adaptations. Analyzed 6 documentation claims including 'easy installation' and 'automatic verification' - found installation complexity matches claimed simplicity but verification requires manual interpretation."
-
- **Common Requirements for Both Issues and Recommendations:**
- - **MUST include precise location**: Section headings, specific text snippets, or element descriptions (NOT line numbers)
- - **MUST be immediately actionable**: Clear enough for someone else to locate and address
- - Be specific and actionable items only
- - Do NOT include positive assessments like "section works well" or "documentation is accurate"
- - Use empty arrays if nothing to report
- - Keep each item concise but clear
- - Focus on user impact and success
-
- **issues** (array of strings):
- - **Purpose**: Specific problems that prevent or hinder user success
- - **Include**: Both functional problems (doesn't work) and semantic problems (inaccurate descriptions)
- - **Must explain user impact**: What fails or misleads users
- - Examples:
- - "In 'Quick Start' section: npm install command requires global flag but documentation doesn't mention it"
- - "Under 'Verification' heading: Expected output 'Success: Ready' but actual output shows 'Status: OK'"
- - "The phrase 'automatically detects' in Prerequisites: Claims automatic detection but requires manual configuration file editing"
-
- **recommendations** (array of strings):
- - **Purpose**: Specific actionable improvements that would help users succeed
- - **Include**: Only concrete changes or additions to the documentation
- - **Must specify exact action**: What text to add, remove, or modify
- - Examples:
- - "In 'Quick Start' section: Change 'npm install' command to 'npm install --global'"
- - "Under 'Verification' heading: Update expected output example from 'Success: Ready' to 'Status: OK'"
- - "In Prerequisites section: Add note after 'automatically detects' phrase: 'Requires manual editing of config.json file before detection works'"
+ **Guidelines:**

- **Important**:
- - Use only this JSON format - do not include additional text before or after
- - Arrays can be empty if no issues or recommendations found
- - Keep strings concise but informative
- - Focus on user impact rather than technical details
+ **whatWasDone** (string): Concise summary covering BOTH functional testing AND semantic analysis - what commands/procedures you executed and what claims you analyzed.
+
+ **issues** (array): CRITICAL PROBLEMS that prevent users from succeeding (broken functionality, incorrect information, missing required steps). Include precise location and explain user impact.
+
+ **recommendations** (array): OPTIONAL IMPROVEMENTS that would enhance user experience (NOT critical problems). Must be genuinely optional - user can succeed without these changes. Do NOT repeat anything from issues array.
+
+ **ANTI-DUPLICATION RULES**: If something is broken/incorrect → issues only. If something could be enhanced but works fine → recommendations only. Never put the same concept in both arrays.
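
To illustrate the shape being asked for, a result payload could be sanity-checked with a small filter; a minimal sketch assuming the response was saved to a hypothetical `./tmp/result.json` and that `jq` is available:

```bash
# Non-zero exit unless the payload has the three expected fields with the right types
jq -e '
  (.whatWasDone | type == "string") and
  (.issues | type == "array") and
  (.recommendations | type == "array")
' ./tmp/result.json
```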

 ## Instructions

- **CRITICAL**: You must complete BOTH phases for comprehensive testing:
+ Complete BOTH phases for comprehensive testing:

 ### Phase 1 Execution Checklist:
- 1. **Identify all testable content** - discover commands, procedures, examples
- 2. **Execute everything** - run commands, test procedures, verify examples
- 3. **Document what actually happens** - capture real outcomes vs expected
+ 1. Identify all testable content - discover commands, procedures, examples
+ 2. Execute everything - run commands, test procedures, verify examples
+ 3. Document what actually happens - capture real outcomes vs expected

 ### Phase 2 Analysis Checklist:
- 1. **Find all claims** - scan text for promises, expectations, descriptions
- 2. **Evaluate each claim** - does reality match what's written?
- 3. **Check user perspective** - would a typical user get the promised experience?
+ 1. Find all claims - scan text for promises, expectations, descriptions
+ 2. Evaluate each claim - does reality match what's written?
+ 3. Check user perspective - would a typical user get the promised experience?

- **Both phases are mandatory** - functional testing without semantic analysis misses critical user experience gaps. Your goal is ensuring users get both working instructions AND accurate expectations about what will actually happen.
+ Both phases are mandatory - functional testing without semantic analysis misses critical user experience gaps. Your goal is ensuring users get both working instructions AND accurate expectations about what will actually happen.

package/prompts/resource-selection.md
@@ -8,11 +8,15 @@ You are a Kubernetes expert. Given this user intent and available resources, sel
 ## Available Resources
 {resources}

+ ## Organizational Patterns
+ {patterns}
+
 ## Instructions

 Select all resources that could be relevant for this intent. Consider:
 - Direct relevance to the user's needs
 - Common Kubernetes patterns and best practices
+ - **Organizational patterns** and resource suggestions from your organization's best practices (see above)
 - Resource relationships and combinations
 - Production deployment patterns
 - Complex multi-component solutions

package/prompts/resource-solution-ranking.md
@@ -8,6 +8,11 @@ You are a Kubernetes expert helping to determine which resource(s) best meet a u
 ## Available Resources
 {resources}

+ ## Organizational Patterns
+ {patterns}
+
+ **Note**: If no organizational patterns are provided above, this means pattern matching is unavailable (Vector DB not configured). Focus on pure Kubernetes resource analysis and recommendations.
+
 ## Instructions

 Analyze the user's intent and determine the best solution(s). **Provide multiple alternative approaches** ranked by effectiveness, such as:
@@ -15,6 +20,19 @@ Analyze the user's intent and determine the best solution(s). **Provide multiple
 - A combination of resources that can actually integrate and work together to create a complete solution
 - Different approaches with varying complexity and capabilities

+ **Organizational Patterns**: Multiple organizational patterns may be provided, each addressing different aspects of the deployment:
+
+ - **Generic Application Patterns**: Apply to all applications (networking, monitoring, security)
+ - **Architectural Patterns**: Apply to specific architectural styles (stateless, microservice, etc.)
+ - **Infrastructure Patterns**: Apply to specific integrations (database, messaging, etc.)
+ - **Operational Patterns**: Apply to specific operational requirements (scaling, schema management, etc.)
+
+ **Pattern Composition Strategy**:
+ - **Combine relevant patterns** - A single solution can be influenced by multiple patterns
+ - **Prioritize by specificity** - More specific patterns should have higher influence than generic ones
+ - **Layer pattern guidance** - Generic patterns provide baseline, specific patterns add requirements
+ - **Avoid conflicts** - If patterns conflict, prioritize user intent and technical accuracy
+
 **IMPORTANT**: Always provide at least 2-3 different solution alternatives when possible, even if some score lower than others. Users benefit from seeing multiple options to choose from.

 ## Validation Requirements
@@ -67,7 +85,24 @@ Analyze the user's intent and determine the best solution(s). **Provide multiple
 "score": 95,
 "description": "Complete application deployment with networking",
 "reasons": ["Provides full application lifecycle", "Includes network access"],
- "analysis": "Detailed explanation of schema analysis and why this solution meets the user's needs"
+ "analysis": "Detailed explanation of schema analysis and why this solution meets the user's needs",
+ "patternInfluences": [
+ {
+ "patternId": "stateless-app-pattern-123",
+ "description": "Stateless application deployment pattern",
+ "influence": "high",
+ "matchedTriggers": ["stateless app", "web application"],
+ "matchedConcept": "stateless application"
+ },
+ {
+ "patternId": "network-policy-pattern-456",
+ "description": "Standard networking and security pattern",
+ "influence": "medium",
+ "matchedTriggers": ["application", "deployment"],
+ "matchedConcept": "generic application"
+ }
+ ],
+ "usedPatterns": true
 },
 {
 "type": "single",
@@ -81,10 +116,39 @@ Analyze the user's intent and determine the best solution(s). **Provide multiple
 "score": 75,
 "description": "Basic application deployment",
 "reasons": ["Simple deployment option", "Lower complexity"],
- "analysis": "Alternative approach with reduced functionality but simpler setup"
+ "analysis": "Alternative approach with reduced functionality but simpler setup",
+ "patternInfluences": [],
+ "usedPatterns": false
 }
 ]
 }
 ```

+ ## Pattern Influence Tracking
+
+ For each solution, you MUST include pattern influence information:
+
+ **If organizational patterns influenced this solution:**
+ - Set `"usedPatterns": true`
+ - Include `"patternInfluences"` array with:
+ - `patternId`: Use the pattern's ID from the organizational patterns section
+ - `description`: Brief description of the pattern
+ - `influence`: Rate as "high", "medium", or "low" based on how much the pattern shaped this solution
+ - `matchedTriggers`: Which pattern triggers matched the user's intent
+
+ **If no patterns influenced this solution (or no patterns available):**
+ - Set `"usedPatterns": false`
+ - Use empty array: `"patternInfluences": []`
+
+ **Pattern Influence Guidelines:**
+ - **High influence**: Pattern directly suggested these specific resources or architecture
+ - **Medium influence**: Pattern informed the approach but didn't dictate specific resources
+ - **Low influence**: Pattern provided general guidance but minimal impact on final solution
+
+ **Multiple Pattern Handling:**
+ - **Include all relevant patterns** that influenced the solution, even if slightly
+ - **Use different influence levels** to show relative importance of each pattern
+ - **Match concept context** - Reference which deployment concept led to each pattern match
+ - **Show composition** - Demonstrate how multiple patterns work together
+
 **IMPORTANT**: In your analysis field, explicitly explain which schema fields enable each requirement from the user intent. If a requirement cannot be fulfilled by available schema fields, explain this and score accordingly.
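
As a usage illustration, the pattern-influence fields described above can be pulled out of a saved ranking response for review. A minimal sketch assuming the JSON was written to a hypothetical `./tmp/ranking.json` and that `jq` is installed (the wrapper key around the solution array is not shown in this hunk, so the filter searches for any object carrying the fields):

```bash
# Collect score, usedPatterns, and the IDs of influencing patterns from every solution object
jq '[.. | objects | select(has("patternInfluences"))
     | {score, usedPatterns, patterns: [.patternInfluences[].patternId]}]' ./tmp/ranking.json
```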

package/shared-prompts/prd-create.md
@@ -22,6 +22,8 @@ Ask the user to describe the feature idea to understand the core concept and sco
 ### Step 2: Create GitHub Issue FIRST
 Create the GitHub issue immediately to get the issue ID. This ID is required for proper PRD file naming.

+ **IMPORTANT: Add the "PRD" label to the issue for discoverability.**
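
For illustration, if the issue is created with the GitHub CLI, the label can be applied at creation time or added afterwards; a minimal sketch with a hypothetical title and issue number:

```bash
# Apply the PRD label when creating the issue...
gh issue create --title "PRD: My Feature" --body "Initial PRD stub" --label "PRD"

# ...or add it to an existing issue afterwards
gh issue edit 42 --add-label "PRD"
```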
+
 ### Step 3: Create PRD File with Correct Naming
 Create the PRD file using the actual GitHub issue ID: `prds/[issue-id]-[feature-name].md`

@@ -114,6 +116,8 @@ Work through the PRD template focusing on project management, milestone tracking
 **Priority**: [High/Medium/Low]
 ```

+ **Don't forget to add the "PRD" label to the issue after creation.**
+
 **Issue Update (after PRD file created):**
 ```markdown
 ## PRD: [Feature Name]

package/shared-prompts/prd-start.md
@@ -18,9 +18,71 @@ You are helping initiate active implementation work on a specific Product Requir
 4. **Identify Starting Point** - Determine the best first implementation task
 5. **Begin Implementation** - Launch into actual development work

- ## Step 1: PRD Selection
+ ## Step 0: Context Awareness Check

- **Ask the user which PRD they want to start implementing:**
+ **FIRST: Check if PRD context is already clear from recent conversation:**
+
+ **Skip detection/analysis if recent conversation shows:**
+ - **Recent PRD work discussed** - "We just worked on PRD 29", "Just completed PRD update", etc.
+ - **Specific PRD mentioned** - "PRD #X", "MCP Prompts PRD", etc.
+ - **PRD-specific commands used** - Recent use of `prd-update-progress`, `prd-start` with specific PRD
+ - **Clear work context** - Discussion of specific features, tasks, or requirements for a known PRD
+
+ **If context is clear:**
+ - Skip to Step 2 (PRD Readiness Validation) using the known PRD
+ - Use conversation history to understand current state and recent progress
+ - Proceed directly with readiness validation based on known PRD status
+
+ **If context is unclear:**
+ - Continue to Step 1 (PRD Detection) for full analysis
+
+ ## Step 1: Smart PRD Detection (Only if Context Unclear)
+
+ **Auto-detect the target PRD using these context clues (in priority order):**
+
+ 1. **Git Branch Analysis** - Check current branch name for PRD patterns:
+ - `feature/prd-12-*` → PRD 12
+ - `prd-13-*` → PRD 13
+ - `feature/prd-*` → Extract PRD number
+
+ 2. **Recent Git Commits** - Look at recent commit messages for PRD references:
+ - "fix: PRD 12 documentation" → PRD 12
+ - "feat: implement prd-13 features" → PRD 13
+
+ 3. **Git Status Analysis** - Check modified/staged files for PRD clues:
+ - Modified `prds/12-*.md` → PRD 12
+ - Changes in feature-specific directories
+
+ 4. **Available PRDs Discovery** - List all PRDs in `prds/` directory:
+ - `prds/12-documentation-testing.md`
+ - `prds/13-cicd-documentation-testing.md`
+
+ 5. **Fallback to User Choice** - Only if context detection fails, ask user to specify
+
+ **PRD Detection Implementation:**
+ ```bash
+ # Use these tools to gather context:
+ # 1. Check git branch: gitStatus shows current branch
+ # 2. Check git status: Look for modified PRD files
+ # 3. List PRDs: Use LS or Glob to find prds/*.md files
+ # 4. Recent commits: Use Bash 'git log --oneline -n 5' for recent context
+ ```
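
A minimal shell sketch of the detection heuristics listed above, assuming the standard `git` CLI (the extracted number is only a hint and still goes through the readiness validation that follows):

```bash
# 1. Try the current branch name (e.g. feature/prd-12-documentation-testing -> 12)
PRD=$(git branch --show-current | grep -oE 'prd-[0-9]+' | grep -oE '[0-9]+' | head -n1)

# 2. Fall back to recent commit messages (e.g. "fix: PRD 12 documentation")
[ -z "$PRD" ] && PRD=$(git log --oneline -n 5 | grep -oiE 'prd[ -]?[0-9]+' | grep -oE '[0-9]+' | head -n1)

# 3. Fall back to modified PRD files in git status (e.g. prds/12-*.md)
[ -z "$PRD" ] && PRD=$(git status --porcelain | grep -oE 'prds/[0-9]+-' | grep -oE '[0-9]+' | head -n1)

echo "${PRD:-no PRD detected - ask the user}"
```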
+
+ **Detection Logic:**
+ - **High Confidence**: Branch name matches PRD pattern (e.g., `feature/prd-12-documentation-testing`)
+ - **Medium Confidence**: Modified PRD files in git status or recent commits mention PRD
+ - **Low Confidence**: Multiple PRDs available, use heuristics (most recent, largest)
+ - **No Context**: Present available options to user
+
+ **Example Detection Outputs:**
+ ```markdown
+ 🎯 **Auto-detected PRD 12** (Documentation Testing)
+ - Branch: `feature/prd-12-documentation-testing` ✅
+ - Modified files: `prds/12-documentation-testing.md` ✅
+ - Recent commits mention PRD 12 features ✅
+ ```
+
+ **If context detection fails, ask the user:**

 ```markdown
 ## Which PRD would you like to start implementing?
@@ -33,7 +95,10 @@ Execute `dot-ai:prds-get` prompt to see all available PRDs organized by priority
 **Your choice**: [Wait for user input]
 ```

- Once the user provides the PRD number, proceed to load and validate that specific PRD.
+ **Once PRD is identified:**
+ - Read the PRD file from `prds/[issue-id]-[feature-name].md`
+ - Analyze completion status across all sections
+ - Identify patterns in completed vs remaining work
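
Completion status can be summarized mechanically from the PRD's checkboxes; a minimal sketch assuming a hypothetical file `prds/12-documentation-testing.md` with standard markdown checklist items:

```bash
PRD_FILE=prds/12-documentation-testing.md

# Count completed vs. remaining checklist items
DONE=$(grep -c '\- \[x\]' "$PRD_FILE")
TODO=$(grep -c '\- \[ \]' "$PRD_FILE")
echo "$DONE done, $TODO remaining"
```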

 ## Step 2: PRD Readiness Validation


package/shared-prompts/prd-update-progress.md
@@ -13,12 +13,13 @@ You are helping update an existing Product Requirements Document (PRD) based on
 ## Process Overview

 1. **Identify Target PRD** - Determine which PRD to update
- 2. **Analyze Git Changes** - Review commits and file changes since last update
- 3. **Map Changes to PRD Items** - Intelligently connect code changes to requirements
+ 2. **Context-First Progress Analysis** - Use conversation context first, Git analysis as fallback
+ 3. **Map Changes to PRD Items** - Intelligently connect work to requirements
 4. **Propose Updates** - Suggest checkbox completions and requirement changes
 5. **User Confirmation** - Verify proposals and handle edge cases
 6. **Update PRD** - Apply changes and add work log entry
 7. **Flag Divergences** - Alert when actual work differs from planned work
+ 8. **Commit Progress Updates** - Preserve progress checkpoint

 ## Step 1: Smart PRD Identification

@@ -36,9 +37,23 @@ You are helping update an existing Product Requirements Document (PRD) based on
 - If only one PRD file recently modified → Use that PRD
 - If multiple PRDs possible → Ask user to clarify

- ## Step 2: Git Change Analysis
+ ## Step 2: Context-First Progress Analysis

- Use git tools to understand what work was completed:
+ **PRIORITY: Use conversation context first before Git analysis**
+
+ ### Conversation Context Analysis (FAST - Use First)
+ **If recent conversation shows clear work completion:**
+ - **Recently discussed implementations**: "Just completed X", "Implemented Y", "Built Z"
+ - **Todo list context**: Check TodoWrite tool for completed/in-progress items
+ - **File creation mentions**: "Created file X", "Added Y functionality"
+ - **Test completion references**: "Tests passing", "All X tests complete"
+ - **User confirmations**: "That works", "Implementation complete", "Ready for next step"
+
+ **Use conversation context when available - it's faster and more accurate than Git parsing**
+
+ ### Git Change Analysis (FALLBACK - Use Only If Context Unclear)
+
+ **Only use git tools when conversation context is insufficient:**

 ### Commit Analysis
 ```bash
@@ -231,3 +246,34 @@ When applying updates:
 3. **Update status sections** to reflect current phase
 4. **Preserve unchecked items** that still need work
 5. **Update completion percentages** realistically
+
+ ## Step 8: Commit Progress Updates
+
+ After successfully updating the PRD, commit all changes to preserve the progress checkpoint:
+
+ ### Commit Implementation Work
+ ```bash
+ # MANDATORY: Stage ALL files - implementation work AND PRD updates together
+ # DO NOT selectively add only PRD files - commit everything as one atomic unit
+ git add .
+
+ # Verify what will be committed
+ git status
+
+ # Create comprehensive commit with PRD reference
+ git commit -m "feat(prd-X): implement [brief description of completed work]
+
+ - [Brief list of key implementation achievements]
+ - Updated PRD checkboxes for completed items
+ - Added work log entry with progress summary
+
+ Progress: X% complete - [next major milestone]"
+ ```
+
+ ### Commit Message Guidelines
+ - **Reference PRD number**: Always include `prd-X` in commit message
+ - **Descriptive summary**: Brief but clear description of what was implemented
+ - **Progress indication**: Include completion status and next steps
+ - **Evidence-based**: Only commit when there's actual implementation progress
+
+ **Note**: Do NOT push commits unless explicitly requested by the user. Commits preserve local progress checkpoints without affecting remote branches.