@nano-step/skill-manager 5.3.0 → 5.4.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,290 +0,0 @@
- ---
- name: feature-analysis
- description: "Deep code analysis of any feature or service before writing docs, diagrams, or making changes. Enforces read-everything-first discipline. Traces exact execution paths, data transformations, guard clauses, bugs, and gaps between existing docs and actual code. Produces a validated Mermaid diagram and structured analysis output. Language and framework agnostic."
- compatibility: "OpenCode"
- metadata:
-   version: "2.0.0"
- tools:
-   required:
-     - Read (every file in the feature)
-     - Bash (find all files, run mermaid validator)
-   uses:
-     - mermaid-validator skill (validate any diagram produced)
- triggers:
-   - "analyze [feature]"
-   - "how does X work"
-   - "trace the flow of"
-   - "understand X"
-   - "what does X do"
-   - "deep dive into"
-   - "working on X - understand it first"
-   - "update docs/brain for"
- ---
-
- # Feature Analysis Skill
-
- A disciplined protocol for deeply analyzing any feature in any codebase before producing docs, diagrams, or making changes. Framework-agnostic. Language-agnostic.
-
- ---
-
- ## The Core Rule
-
- **READ EVERYTHING. PRODUCE NOTHING. THEN SYNTHESIZE.**
-
- Do not write a single diagram node, doc line, or description until every file in the feature has been read. Every time you produce output before reading all files, you will miss something.
-
- ---
-
- ## Phase 1: Discovery — Find Every File
-
- Before reading anything, map the full file set.
-
- ```bash
- # Find all source files for the feature
- find <feature-dir> -type f | sort
-
- # Check imports to catch shared utilities, decorators, helpers
- grep -r "import\|require" <feature-dir> | grep -v node_modules | sort -u
- ```
-
- **Read in dependency order (bottom-up — foundations first):**
-
- 1. **Entry point / bootstrap** — port, env vars, startup config
- 2. **Schema / model files** — DB schema, columns, nullable, indexes, types
- 3. **Utility / helper files** — every function, every transformation, every constant
- 4. **Decorator / middleware files** — wrapping logic, side effects, return value handling
- 5. **Infrastructure services** — cache, lock, queue, external connections
- 6. **Core business logic** — the main service/handler files
- 7. **External / fetch services** — HTTP calls, filters applied, error handling
- 8. **Entry controllers / routers / handlers** — HTTP method, route, params, return
- 9. **Wiring files** — module/DI config, middleware registration
-
- **Do not skip any file. Do not skim.**
-
- ---
-
- ## Phase 2: Per-File Checklist
-
- For each file, answer these questions before moving to the next.
-
- ### Entry point / bootstrap
- - [ ] What port or address? (default? env override?)
- - [ ] Any global middleware, pipes, interceptors, or lifecycle hooks?
-
- ### Schema / model files
- - [ ] Table/collection name
- - [ ] Every field: type, nullable, default, constraints, indexes
- - [ ] Relations / references to other entities
-
- ### Utility / helper files
- - [ ] Every exported function — what does it do, step by step?
- - [ ] For transformations: what inputs? what outputs? what edge cases handled?
- - [ ] Where is this function called? (grep for usages)
- - [ ] How many times is it called within a single method? (once per batch? once per item?)
-
- ### Decorator / middleware files
- - [ ] What does it wrap?
- - [ ] What side effects before / after the original method?
- - [ ] **Does it `return` the result of the original method?** (missing `return` = silent discard bug)
- - [ ] Does it use try/finally? What runs in finally?
- - [ ] What happens on the early-exit path?
-
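The missing-`return` check in the list above is the highest-value item here. A minimal TypeScript sketch of the bug (the `withLock` wrapper and `job` function are hypothetical names for illustration, not from any analyzed codebase):

```typescript
type Fn<T> = (...args: unknown[]) => T;

// Broken wrapper: calls the original method but never returns its result.
function withLockBroken<T>(fn: Fn<T>): Fn<T | undefined> {
  return (...args) => {
    // acquireLock() would go here
    try {
      fn(...args); // BUG: `return` is missing, so the result is computed and discarded
    } finally {
      // releaseLock() would go here
    }
    return undefined; // caller always receives undefined
  };
}

// Fixed wrapper: propagates the result; the finally block still runs after the return.
function withLockFixed<T>(fn: Fn<T>): Fn<T> {
  return (...args) => {
    try {
      return fn(...args); // FIX: propagate the original return value
    } finally {
      // releaseLock() would go here
    }
  };
}

const job = () => ({ updated: 3, inserted: 1 });

const brokenResult = withLockBroken(job)();
const fixedResult = withLockFixed(job)();
console.log(brokenResult); // undefined
console.log(fixedResult);  // { updated: 3, inserted: 1 }
```

The broken variant type-checks and throws nothing, which is why this bug survives review: only auditing the wrapper's return path catches it.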
- ### Core business logic files
- - [ ] Every method: signature, return type
- - [ ] For each method: trace every line — no summarizing
- - [ ] Accumulator variables — where initialized, where incremented, where returned
- - [ ] Loop structure: sequential or parallel?
- - [ ] Every external call: what service/module, what args, what returned
- - [ ] Guard clauses: every early return / continue / throw
- - [ ] Every branch in conditionals
-
- ### External / fetch service files
- - [ ] Exact URLs or endpoints (hardcoded or env?)
- - [ ] Filters applied to response data (which calls filter, which don't?)
- - [ ] Error handling on external calls
-
- ### Entry controllers / routers / handlers
- - [ ] HTTP method (GET vs POST — don't assume)
- - [ ] Route path
- - [ ] What core method is called?
- - [ ] What is returned?
-
- ### Wiring / module files
- - [ ] What is imported / registered?
- - [ ] What is exported / exposed?
-
- ---
-
- ## Phase 3: Execution Trace
-
- After reading all files, produce a numbered step-by-step trace of the full execution path. This is not prose — it is a precise trace.
-
- **Format:**
- ```
- 1. [HTTP METHOD] /route → HandlerName.methodName()
- 2. HandlerName.methodName() → ServiceName.methodName()
- 3. @DecoratorName: step A (e.g. acquire lock, check cache)
- 4.   → if condition X: early return [what is returned / not returned]
- 5. ServiceName.methodName():
- 6.   step 1: call externalService.fetchAll() → parallel([fetchA(), fetchB()])
- 7.     fetchA(): GET https://... → returns all items (no filter)
- 8.     fetchB(): GET https://... → filter(x => x.field !== null) → returns filtered
- 9.   step 2: parallel([processItems(a, 'typeA'), processItems(b, 'typeB')])
- 10. processItems(items, type):
- 11.   init: totalUpdated = 0, totalInserted = 0
- 12.   for loop (sequential): i = 0 to items.length, step batchSize
- 13.     batch = items.slice(i, i + batchSize)
- 14.     { updated, inserted } = await processBatch(batch)
- 15.     totalUpdated += updated; totalInserted += inserted
- 16.   return { total: items.length, updated: totalUpdated, inserted: totalInserted }
- 17. processBatch(batch):
- 18.   guard: if batch.length === 0 → return { updated: 0, inserted: 0 }
- 19.   step 1: names = batch.map(item => transform(item.field)) ← called ONCE per batch
- 20.   step 2: existing = repo.find(WHERE field IN names)
- 21.   step 3: map = existing.reduce(...)
- 22.   step 4: for each item in batch:
- 23.     value = transform(item.field) ← called AGAIN per item
- 24.     ...decision tree...
- 25.   repo.save(itemsToSave)
- 26.   return { updated, inserted }
- 27. @DecoratorName finally: releaseLock()
- 28. BUG: decorator does not return result → caller receives undefined
- ```
-
- **Key things to call out in the trace:**
- - When a utility function is called more than once (note the count and context)
- - Every accumulator variable (where init, where increment, where return)
- - Every guard clause / early exit
- - Sequential vs parallel (for loop vs Promise.all / asyncio.gather / goroutines)
- - Any discarded return values
-
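The batch-loop shape from steps 10–26 of the format example, reduced to a runnable TypeScript sketch (all names — `transform`, `processItems`, `processBatch` — are illustrative, not from a real codebase). It demonstrates the three things the trace must surface: the empty-batch guard, the accumulator flow (init → increment → return), and the utility being called once per batch *and* again per item:

```typescript
// Instrumented so the double call to transform() is observable.
const transformCalls: string[] = [];

function transform(field: string): string {
  transformCalls.push(field);
  return field.trim().toLowerCase();
}

function processBatch(batch: { field: string }[], existing: Set<string>) {
  // guard clause: empty batch exits early
  if (batch.length === 0) return { updated: 0, inserted: 0 };

  // step 1: transform called ONCE per batch item via map
  const names = batch.map((item) => transform(item.field));
  void names; // would drive the repo lookup (WHERE field IN names)

  let updated = 0;
  let inserted = 0;
  for (const item of batch) {
    // step 4: transform called AGAIN per item inside the loop
    const value = transform(item.field);
    if (existing.has(value)) updated += 1;
    else inserted += 1;
  }
  return { updated, inserted };
}

function processItems(items: { field: string }[], batchSize: number, existing: Set<string>) {
  // accumulators: initialized here, incremented per batch, returned at the end
  let totalUpdated = 0;
  let totalInserted = 0;
  for (let i = 0; i < items.length; i += batchSize) { // sequential, not parallel
    const { updated, inserted } = processBatch(items.slice(i, i + batchSize), existing);
    totalUpdated += updated;
    totalInserted += inserted;
  }
  return { total: items.length, updated: totalUpdated, inserted: totalInserted };
}

const items = [{ field: "A " }, { field: "b" }, { field: "C" }];
const result = processItems(items, 2, new Set(["a", "b"]));
console.log(result);                // { total: 3, updated: 2, inserted: 1 }
console.log(transformCalls.length); // 6 — twice per item, not once
```

A trace that merely summarized `processBatch` as "transforms and saves items" would hide all three of these facts.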
- ---
-
- ## Phase 4: Data Transformations Audit
-
- For every utility/transformation function used:
-
- | Function | What it does (step by step) | Called where | Called how many times |
- |----------|----------------------------|--------------|----------------------|
- | `transformFn(x)` | 1. step A 2. step B 3. step C | methodName | TWICE: once in step N (batch), once per item in loop |
-
- ---
-
- ## Phase 5: Gap Analysis — Docs vs Code
-
- Compare existing docs/brain files against what the code actually does:
-
- | Claim in docs | What code actually does | Verdict |
- |---------------|------------------------|---------|
- | "POST /endpoint" | `@Get()` in controller | ❌ Wrong |
- | "Port 3000" | `process.env.PORT \|\| 4001` in entrypoint | ❌ Wrong |
- | "function converts X" | Also does Y (undocumented) | ⚠️ Incomplete |
- | "returns JSON result" | Decorator discards return value | ❌ Bug |
-
- ---
-
- ## Phase 6: Produce Outputs
-
- Only now, after phases 1–5 are complete, produce:
-
- ### 6a. Structured Analysis Document
-
- ```markdown
- ## Feature Analysis: [Feature Name]
- Repo: [repo] | Date: [date]
-
- ### Files Read
- - `path/to/controller.ts` — entry point, GET /endpoint, calls ServiceA.run()
- - `path/to/service.ts` — core logic, orchestrates fetch + batch loop
- - [... every file ...]
-
- ### Execution Trace
- [numbered trace from Phase 3]
-
- ### Data Transformations
- [table from Phase 4]
-
- ### Guard Clauses & Edge Cases
- - processBatch: empty batch guard → returns {0,0} immediately
- - fetchItems: filters out items where field === null
- - LockManager: if lock not acquired → returns void immediately (no error thrown)
-
- ### Bugs / Issues Found
- - path/to/decorator.ts line N: `await originalMethod.apply(this, args)` missing `return`
-   → result is discarded, caller always receives undefined
- - [any others]
-
- ### Gaps: Docs vs Code
- [table from Phase 5]
-
- ### Files to Update
- - [ ] `.agents/_repos/[repo].md` — update port, endpoint method, transformation description
- - [ ] `.agents/_domains/[domain].md` — if architecture changed
- ```
-
- ### 6b. Mermaid Diagram
-
- Write the diagram. Then **immediately run the validator before doing anything else.**
-
- If you have the mermaid-validator skill:
- ```bash
- node /path/to/project/scripts/validate-mermaid.mjs [file.md]
- ```
-
- Otherwise validate manually — common syntax errors:
- - Labels with `()` must be wrapped in `"double quotes"`: `A["method()"]`
- - No `\n` in node labels — use `<br/>` or shorten
- - No HTML entities (`&amp;`, `&gt;`) in labels — use literal characters
- - `end` is a reserved word in Mermaid — use `END` or `done` as node IDs
-
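A small fragment that follows all four rules above at once (illustrative node names only): quoted labels around `()`, `<br/>` instead of `\n`, literal characters, and `done` rather than `end` as a node ID.

```mermaid
flowchart TD
    A["ServiceName.methodName()"] --> B{"batch empty?"}
    B -- yes --> earlyExit["return { updated: 0, inserted: 0 }"]
    B -- no --> C["transform(item.field)<br/>trim + lowercase"]
    C --> done["repo.save(itemsToSave)"]
```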
- If errors → fix → re-run. Do not proceed until clean.
-
- **Diagram must include:**
- - Every step from the execution trace
- - Data transformation nodes (show what the function does, not just its name)
- - Guard clauses as decision nodes
- - Parallel vs sequential clearly distinguished
- - Bugs annotated inline (e.g. "BUG: result discarded")
-
- ### 6c. Doc / Brain File Updates
-
- Update relevant docs with:
- - Corrected facts (port, endpoint method, etc.)
- - The validated Mermaid diagram
- - Data transformation table
- - Known bugs section
-
- ---
-
- ## Anti-Patterns (What This Skill Prevents)
-
- | Anti-pattern | What gets missed | Rule violated |
- |---|---|---|
- | Drew diagram before reading utility files | Transformation called twice — not shown | READ EVERYTHING FIRST |
- | Trusted existing docs for endpoint method | GET vs POST wrong in docs | GAP ANALYSIS required |
- | Summarized service method instead of tracing | Guard clause (empty batch) missed | TRACE NOT SUMMARIZE |
- | Trusted existing docs for port/config | Wrong values | Verify entry point |
- | Read decorator without checking return | Silent result discard bug | RETURN VALUE AUDIT |
- | Merged H1/H2 paths into shared loop node | Sequential vs parallel distinction lost | TRACE LOOP STRUCTURE |
- | Assumed filter applies to all fetches | One fetch had no filter — skipped items | READ EVERY FETCH FILE |
-
- ---
-
- ## Quick Reference Checklist
-
- Before producing any output, verify:
-
- - [ ] Entry point read — port/address confirmed
- - [ ] All schema/model files read — every field noted
- - [ ] All utility files read — every transformation step documented
- - [ ] All decorator/middleware files read — return value audited
- - [ ] All core service files read — every method traced line by line
- - [ ] All fetch/external services read — filters noted (which have filters, which don't)
- - [ ] All controller/router/handler files read — HTTP method confirmed (not assumed)
- - [ ] All wiring/module files read — dependency graph understood
- - [ ] Utility functions: call count per method noted
- - [ ] All guard clauses documented
- - [ ] Accumulator variables traced (init → increment → return)
- - [ ] Loop structure confirmed (sequential vs parallel)
- - [ ] Existing docs compared against code (gap analysis done)
- - [ ] Mermaid diagram validated before saving
@@ -1,15 +0,0 @@
- {
-   "name": "feature-analysis",
-   "version": "2.0.0",
-   "description": "Deep code analysis of any feature or service before writing docs, diagrams, or making changes. Enforces read-everything-first discipline with execution tracing, data transformation audits, and gap analysis.",
-   "compatibility": "OpenCode",
-   "agent": null,
-   "commands": [],
-   "tags": [
-     "analysis",
-     "code-review",
-     "documentation",
-     "mermaid",
-     "tracing"
-   ]
- }
@@ -1,76 +0,0 @@
- ---
- name: rri-t-testing
- description: RRI-T QA methodology skill. Execute 5-phase testing: PREPARE, DISCOVER, STRUCTURE, EXECUTE, ANALYZE. Use before release, creating test cases, or QA review.
- ---
-
- # RRI-T Testing Skill
-
- Execute comprehensive QA testing using the Reverse Requirements Interview - Testing (RRI-T) methodology.
-
- ## When to Use
-
- - Testing any feature before release
- - Creating test cases for new features
- - Performing thorough QA review
- - Running stress tests and edge case analysis
-
- ## Input
-
- | Param | Req | Description |
- |-------|-----|-------------|
- | feature | Yes | Feature name in kebab-case |
- | phase | No | `prepare`, `discover`, `structure`, `execute`, `analyze` |
- | dimensions | No | `ui-ux,api,performance,security,data,infra,edge-cases` |
-
- ## 5 Phases
-
- 1. **PREPARE** — Read specs, set up output dir
- 2. **DISCOVER** — Interview 5 personas for test scenarios
- 3. **STRUCTURE** — Format as Q-A-R-P-T test cases
- 4. **EXECUTE** — Run tests, capture screenshots
- 5. **ANALYZE** — Calculate coverage, apply release gates
-
- ## Output
-
- ```
- /ai/test-case/rri-t/{feature-name}/
- ├── 01-prepare.md
- ├── 02-discover.md
- ├── 03-structure.md
- ├── 04-execute.md
- ├── 05-analyze.md
- └── summary.md
- ```
-
- ## Templates
-
- Use the templates bundled in this skill's `assets/` directory:
- - `rri-t-persona-interview.md` — Persona interviews
- - `rri-t-test-case.md` — Q-A-R-P-T format
- - `rri-t-coverage-dashboard.md` — 7-dimension tracking
- - `rri-t-stress-matrix.md` — 8-axis stress testing
-
- ## 7 Dimensions
-
- UI/UX | API | Performance | Security | Data Integrity | Infrastructure | Edge Cases
-
- ## 5 Personas
-
- End User | Business Analyst | QA Destroyer | DevOps Tester | Security Auditor
-
- ## Release Gates
-
- | GO | NO-GO |
- |----|-------|
- | All 7 dims >= 70% | Any dim < 50% |
- | 5/7 >= 85% | >2 P0 FAILs |
- | Zero P0 FAIL | Critical MISSING |
-
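The gate table above can be sketched as a small TypeScript helper. This is an illustration of the stated rules, not part of the skill: `releaseGate` is a hypothetical name, `coverage` is assumed to hold one percentage per dimension (7 entries), and the `"HOLD"` state for "neither GO nor NO-GO" (e.g. 1–2 P0 FAILs) is my assumption, since the table does not say what happens between the two columns.

```typescript
// Hypothetical helper implementing the GO / NO-GO table above.
function releaseGate(
  coverage: number[],      // assumed: coverage % per dimension, 7 entries
  p0FailCount: number,
  criticalMissing: boolean
): "GO" | "NO-GO" | "HOLD" {
  const noGo =
    coverage.some((c) => c < 50) || // any dimension below 50%
    p0FailCount > 2 ||              // more than 2 P0 FAILs
    criticalMissing;                // critical MISSING item
  if (noGo) return "NO-GO";

  const go =
    coverage.every((c) => c >= 70) &&              // all 7 dims >= 70%
    coverage.filter((c) => c >= 85).length >= 5 && // at least 5/7 >= 85%
    p0FailCount === 0;                             // zero P0 FAIL
  return go ? "GO" : "HOLD"; // in-between state: assumption, hold for review
}

console.log(releaseGate([90, 88, 86, 85, 85, 72, 70], 0, false)); // "GO"
console.log(releaseGate([90, 88, 86, 85, 85, 45, 70], 0, false)); // "NO-GO"
```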
- ## Guardrails
-
- 1. Test all 7 dimensions
- 2. Use all 5 personas
- 3. Document everything
- 4. PAINFUL is a valid outcome (not a failure)
- 5. Follow release gates
- 6. Test Vietnamese locale
@@ -1,101 +0,0 @@
- # RRI-T Coverage Dashboard — {feature_name}
-
- Feature: {feature_name}
- Date: {date}
- Release Gate Status: {release_gate_status}
- Release Version: {release_version}
- Owner: {owner}
- Prepared By: {prepared_by}
-
- ## Release Gate Criteria
-
- | Rule | Criteria | Status |
- | --- | --- | --- |
- | RG-1 | All 7 dimensions >= 70% coverage | {rg1_status} |
- | RG-2 | At least 5/7 dimensions >= 85% coverage | {rg2_status} |
- | RG-3 | Zero P0 items in FAIL state | {rg3_status} |
-
- ## Dimension Coverage
-
- | Dimension | Total | PASS | FAIL | PAINFUL | MISSING | Coverage % | Gate |
- | --- | --- | --- | --- | --- | --- | --- | --- |
- | D1: UI/UX | {d1_total} | {d1_pass} | {d1_fail} | {d1_painful} | {d1_missing} | {d1_coverage} | {d1_gate} |
- | D2: API | {d2_total} | {d2_pass} | {d2_fail} | {d2_painful} | {d2_missing} | {d2_coverage} | {d2_gate} |
- | D3: Performance | {d3_total} | {d3_pass} | {d3_fail} | {d3_painful} | {d3_missing} | {d3_coverage} | {d3_gate} |
- | D4: Security | {d4_total} | {d4_pass} | {d4_fail} | {d4_painful} | {d4_missing} | {d4_coverage} | {d4_gate} |
- | D5: Data Integrity | {d5_total} | {d5_pass} | {d5_fail} | {d5_painful} | {d5_missing} | {d5_coverage} | {d5_gate} |
- | D6: Infrastructure | {d6_total} | {d6_pass} | {d6_fail} | {d6_painful} | {d6_missing} | {d6_coverage} | {d6_gate} |
- | D7: Edge Cases | {d7_total} | {d7_pass} | {d7_fail} | {d7_painful} | {d7_missing} | {d7_coverage} | {d7_gate} |
-
- Legend: ✅ PASS | ❌ FAIL | ⚠️ PAINFUL | ☐ MISSING
-
- ## Priority Breakdown
-
- | Priority | Total | PASS | FAIL | PAINFUL | MISSING | Coverage % | Gate |
- | --- | --- | --- | --- | --- | --- | --- | --- |
- | P0 | {p0_total} | {p0_pass} | {p0_fail} | {p0_painful} | {p0_missing} | {p0_coverage} | {p0_gate} |
- | P1 | {p1_total} | {p1_pass} | {p1_fail} | {p1_painful} | {p1_missing} | {p1_coverage} | {p1_gate} |
- | P2 | {p2_total} | {p2_pass} | {p2_fail} | {p2_painful} | {p2_missing} | {p2_coverage} | {p2_gate} |
- | P3 | {p3_total} | {p3_pass} | {p3_fail} | {p3_painful} | {p3_missing} | {p3_coverage} | {p3_gate} |
-
- ## Summary Metrics
-
- - Total Test Cases: {total_tc}
- - Overall Coverage %: {overall_coverage}
- - Dimensions Passing Gate: {dimensions_passing_gate}
- - P0 FAIL Count: {p0_fail_count}
- - P0 PAINFUL Count: {p0_painful_count}
- - MISSING Count: {missing_count}
- - Latest Update: {latest_update}
- - Notes: {summary_notes}
- - Risks: {summary_risks}
-
- ## FAIL Items
-
- | TC ID | Priority | Dimension | Description | Assigned To |
- | --- | --- | --- | --- | --- |
- | {fail_tc_id_1} | {fail_priority_1} | {fail_dimension_1} | {fail_description_1} | {fail_assigned_to_1} |
- | {fail_tc_id_2} | {fail_priority_2} | {fail_dimension_2} | {fail_description_2} | {fail_assigned_to_2} |
- | {fail_tc_id_3} | {fail_priority_3} | {fail_dimension_3} | {fail_description_3} | {fail_assigned_to_3} |
- | {fail_tc_id_4} | {fail_priority_4} | {fail_dimension_4} | {fail_description_4} | {fail_assigned_to_4} |
- | {fail_tc_id_5} | {fail_priority_5} | {fail_dimension_5} | {fail_description_5} | {fail_assigned_to_5} |
-
- ## PAINFUL Items
-
- | TC ID | Priority | Dimension | Description | UX Impact |
- | --- | --- | --- | --- | --- |
- | {painful_tc_id_1} | {painful_priority_1} | {painful_dimension_1} | {painful_description_1} | {painful_ux_impact_1} |
- | {painful_tc_id_2} | {painful_priority_2} | {painful_dimension_2} | {painful_description_2} | {painful_ux_impact_2} |
- | {painful_tc_id_3} | {painful_priority_3} | {painful_dimension_3} | {painful_description_3} | {painful_ux_impact_3} |
- | {painful_tc_id_4} | {painful_priority_4} | {painful_dimension_4} | {painful_description_4} | {painful_ux_impact_4} |
- | {painful_tc_id_5} | {painful_priority_5} | {painful_dimension_5} | {painful_description_5} | {painful_ux_impact_5} |
-
- ## MISSING Items
-
- | TC ID | Priority | Dimension | Description | User Need |
- | --- | --- | --- | --- | --- |
- | {missing_tc_id_1} | {missing_priority_1} | {missing_dimension_1} | {missing_description_1} | {missing_user_need_1} |
- | {missing_tc_id_2} | {missing_priority_2} | {missing_dimension_2} | {missing_description_2} | {missing_user_need_2} |
- | {missing_tc_id_3} | {missing_priority_3} | {missing_dimension_3} | {missing_description_3} | {missing_user_need_3} |
- | {missing_tc_id_4} | {missing_priority_4} | {missing_dimension_4} | {missing_description_4} | {missing_user_need_4} |
- | {missing_tc_id_5} | {missing_priority_5} | {missing_dimension_5} | {missing_description_5} | {missing_user_need_5} |
-
- ## Regression Test List
-
- | Test ID | Title | Dimension | Priority | Status |
- | --- | --- | --- | --- | --- |
- | {regression_id_1} | {regression_title_1} | {regression_dimension_1} | {regression_priority_1} | {regression_status_1} |
- | {regression_id_2} | {regression_title_2} | {regression_dimension_2} | {regression_priority_2} | {regression_status_2} |
- | {regression_id_3} | {regression_title_3} | {regression_dimension_3} | {regression_priority_3} | {regression_status_3} |
- | {regression_id_4} | {regression_title_4} | {regression_dimension_4} | {regression_priority_4} | {regression_status_4} |
- | {regression_id_5} | {regression_title_5} | {regression_dimension_5} | {regression_priority_5} | {regression_status_5} |
-
- ## Sign-off
-
- | Role | Name | Decision | Notes |
- | --- | --- | --- | --- |
- | QA Lead | {qa_lead_name} | {qa_lead_decision} | {qa_lead_notes} |
- | Dev Lead | {dev_lead_name} | {dev_lead_decision} | {dev_lead_notes} |
- | Product | {product_name} | {product_decision} | {product_notes} |
-
- Decision Legend: {approve_label}/{reject_label}