@nano-step/skill-manager 5.2.1 → 5.3.0
- package/dist/utils.d.ts +1 -1
- package/dist/utils.js +1 -1
- package/package.json +1 -1
- package/skills/feature-analysis/SKILL.md +290 -0
- package/skills/feature-analysis/skill.json +15 -0
- package/skills/mermaid-validator/SKILL.md +163 -0
- package/skills/mermaid-validator/skill.json +15 -0
- package/skills/nano-brain/AGENTS_SNIPPET.md +46 -0
- package/skills/nano-brain/SKILL.md +77 -0
- package/skills/nano-brain/skill.json +15 -0
- package/skills/pdf/SKILL.md +303 -0
- package/skills/pdf/skill.json +17 -0
- package/skills/rtk/SKILL.md +103 -0
- package/skills/rtk/skill.json +15 -0
package/dist/utils.d.ts
CHANGED
package/dist/utils.js
CHANGED
```diff
@@ -13,7 +13,7 @@ exports.writeText = writeText;
 const path_1 = __importDefault(require("path"));
 const os_1 = __importDefault(require("os"));
 const fs_extra_1 = __importDefault(require("fs-extra"));
-exports.MANAGER_VERSION = "5.2.1";
+exports.MANAGER_VERSION = "5.3.0";
 async function detectOpenCodePaths() {
     const homeConfig = path_1.default.join(os_1.default.homedir(), ".config", "opencode");
     const cwd = process.cwd();
```
package/package.json
CHANGED

package/skills/feature-analysis/SKILL.md
ADDED

---
name: feature-analysis
description: "Deep code analysis of any feature or service before writing docs, diagrams, or making changes. Enforces read-everything-first discipline. Traces exact execution paths, data transformations, guard clauses, bugs, and gaps between existing docs and actual code. Produces a validated Mermaid diagram and structured analysis output. Language and framework agnostic."
compatibility: "OpenCode"
metadata:
  version: "2.0.0"
tools:
  required:
    - Read (every file in the feature)
    - Bash (find all files, run mermaid validator)
  uses:
    - mermaid-validator skill (validate any diagram produced)
triggers:
  - "analyze [feature]"
  - "how does X work"
  - "trace the flow of"
  - "understand X"
  - "what does X do"
  - "deep dive into"
  - "working on X - understand it first"
  - "update docs/brain for"
---

# Feature Analysis Skill

A disciplined protocol for deeply analyzing any feature in any codebase before producing docs, diagrams, or making changes. Framework-agnostic. Language-agnostic.

---

## The Core Rule

**READ EVERYTHING. PRODUCE NOTHING. THEN SYNTHESIZE.**

Do not write a single diagram node, doc line, or description until every file in the feature has been read. Every time you produce output before reading all files, you will miss something.

---

## Phase 1: Discovery — Find Every File

Before reading anything, map the full file set.

```bash
# Find all source files for the feature
find <feature-dir> -type f | sort

# Check imports to catch shared utilities, decorators, helpers
grep -r "import\|require" <feature-dir> | grep -v node_modules | sort -u
```

**Read in dependency order (bottom-up — foundations first):**

1. **Entry point / bootstrap** — port, env vars, startup config
2. **Schema / model files** — DB schema, columns, nullable, indexes, types
3. **Utility / helper files** — every function, every transformation, every constant
4. **Decorator / middleware files** — wrapping logic, side effects, return value handling
5. **Infrastructure services** — cache, lock, queue, external connections
6. **Core business logic** — the main service/handler files
7. **External / fetch services** — HTTP calls, filters applied, error handling
8. **Entry controllers / routers / handlers** — HTTP method, route, params, return
9. **Wiring files** — module/DI config, middleware registration

**Do not skip any file. Do not skim.**

---

## Phase 2: Per-File Checklist

For each file, answer these questions before moving to the next.

### Entry point / bootstrap
- [ ] What port or address? (default? env override?)
- [ ] Any global middleware, pipes, interceptors, or lifecycle hooks?

### Schema / model files
- [ ] Table/collection name
- [ ] Every field: type, nullable, default, constraints, indexes
- [ ] Relations / references to other entities

### Utility / helper files
- [ ] Every exported function — what does it do, step by step?
- [ ] For transformations: what inputs? what outputs? what edge cases handled?
- [ ] Where is this function called? (grep for usages)
- [ ] How many times is it called within a single method? (once per batch? once per item?)

### Decorator / middleware files
- [ ] What does it wrap?
- [ ] What side effects before / after the original method?
- [ ] **Does it `return` the result of the original method?** (missing `return` = silent discard bug)
- [ ] Does it use try/finally? What runs in finally?
- [ ] What happens on the early-exit path?

### Core business logic files
- [ ] Every method: signature, return type
- [ ] For each method: trace every line — no summarizing
- [ ] Accumulator variables — where initialized, where incremented, where returned
- [ ] Loop structure: sequential or parallel?
- [ ] Every external call: what service/module, what args, what returned
- [ ] Guard clauses: every early return / continue / throw
- [ ] Every branch in conditionals

### External / fetch service files
- [ ] Exact URLs or endpoints (hardcoded or env?)
- [ ] Filters applied to response data (which calls filter, which don't?)
- [ ] Error handling on external calls

### Entry controllers / routers / handlers
- [ ] HTTP method (GET vs POST — don't assume)
- [ ] Route path
- [ ] What core method is called?
- [ ] What is returned?

### Wiring / module files
- [ ] What is imported / registered?
- [ ] What is exported / exposed?

---

## Phase 3: Execution Trace

After reading all files, produce a numbered step-by-step trace of the full execution path. This is not prose — it is a precise trace.

**Format:**
```
1. [HTTP METHOD] /route → HandlerName.methodName()
2. HandlerName.methodName() → ServiceName.methodName()
3. @DecoratorName: step A (e.g. acquire lock, check cache)
4. → if condition X: early return [what is returned / not returned]
5. ServiceName.methodName():
6.   step 1: call externalService.fetchAll() → parallel([fetchA(), fetchB()])
7.     fetchA(): GET https://... → returns all items (no filter)
8.     fetchB(): GET https://... → filter(x => x.field !== null) → returns filtered
9.   step 2: parallel([processItems(a, 'typeA'), processItems(b, 'typeB')])
10. processItems(items, type):
11.   init: totalUpdated = 0, totalInserted = 0
12.   for loop (sequential): i = 0 to items.length, step batchSize
13.     batch = items.slice(i, i + batchSize)
14.     { updated, inserted } = await processBatch(batch)
15.     totalUpdated += updated; totalInserted += inserted
16.   return { total: items.length, updated: totalUpdated, inserted: totalInserted }
17. processBatch(batch):
18.   guard: if batch.length === 0 → return { updated: 0, inserted: 0 }
19.   step 1: names = batch.map(item => transform(item.field)) ← called ONCE per batch
20.   step 2: existing = repo.find(WHERE field IN names)
21.   step 3: map = existing.reduce(...)
22.   step 4: for each item in batch:
23.     value = transform(item.field) ← called AGAIN per item
24.     ...decision tree...
25.   repo.save(itemsToSave)
26.   return { updated, inserted }
27. @DecoratorName finally: releaseLock()
28. BUG: decorator does not return result → caller receives undefined
```
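
The batch loop in steps 10–26 condenses into a small runnable sketch. Every name here (`transform`, `process_batch`, the half-overlap stand-in for the repo lookup) is a hypothetical illustration of the traced pattern, not code from this package:

```python
# Hypothetical sketch: sequential batches, accumulators, an empty-batch guard,
# and a transform that runs once per item in the batch map AND again per item.
call_count = 0  # how many times transform() runs (the Phase 4 audit column)

def transform(value):
    global call_count
    call_count += 1
    return value.strip().lower()

def process_batch(batch):
    # Guard clause: empty batch short-circuits with zero counts
    if not batch:
        return {"updated": 0, "inserted": 0}
    # Step 1: transform runs once per item while building the batch lookup
    names = [transform(item) for item in batch]
    existing = set(names[: len(names) // 2])  # stand-in for repo.find(field IN names)
    updated = inserted = 0
    # Step 4: transform runs AGAIN per item inside the decision loop
    for item in batch:
        value = transform(item)
        if value in existing:
            updated += 1
        else:
            inserted += 1
    return {"updated": updated, "inserted": inserted}

def process_items(items, batch_size=2):
    # Accumulators: initialized once, incremented per batch, returned at the end
    total_updated = total_inserted = 0
    for i in range(0, len(items), batch_size):  # sequential for loop, not parallel
        result = process_batch(items[i : i + batch_size])
        total_updated += result["updated"]
        total_inserted += result["inserted"]
    return {"total": len(items), "updated": total_updated, "inserted": total_inserted}

totals = process_items([" A ", "b", " C ", "d", "e"])
```

Running the Phase 4 audit against this sketch would record `transform` as called twice per item — once in the batch map, once in the per-item loop — exactly the double-call the audit table is meant to surface.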

**Key things to call out in the trace:**
- When a utility function is called more than once (note the count and context)
- Every accumulator variable (where init, where increment, where return)
- Every guard clause / early exit
- Sequential vs parallel (for loop vs Promise.all / asyncio.gather / goroutines)
- Any discarded return values

---

## Phase 4: Data Transformations Audit

For every utility/transformation function used:

| Function | What it does (step by step) | Called where | Called how many times |
|----------|----------------------------|--------------|----------------------|
| `transformFn(x)` | 1. step A 2. step B 3. step C | methodName | TWICE: once in step N (batch), once per item in loop |

---

## Phase 5: Gap Analysis — Docs vs Code

Compare existing docs/brain files against what the code actually does:

| Claim in docs | What code actually does | Verdict |
|---------------|------------------------|---------|
| "POST /endpoint" | `@Get()` in controller | ❌ Wrong |
| "Port 3000" | `process.env.PORT \|\| 4001` in entrypoint | ❌ Wrong |
| "function converts X" | Also does Y (undocumented) | ⚠️ Incomplete |
| "returns JSON result" | Decorator discards return value | ❌ Bug |

---

## Phase 6: Produce Outputs

Only now, after phases 1–5 are complete, produce:

### 6a. Structured Analysis Document

```markdown
## Feature Analysis: [Feature Name]
Repo: [repo] | Date: [date]

### Files Read
- `path/to/controller.ts` — entry point, GET /endpoint, calls ServiceA.run()
- `path/to/service.ts` — core logic, orchestrates fetch + batch loop
- [... every file ...]

### Execution Trace
[numbered trace from Phase 3]

### Data Transformations
[table from Phase 4]

### Guard Clauses & Edge Cases
- processBatch: empty batch guard → returns {0,0} immediately
- fetchItems: filters items where field === null
- LockManager: if lock not acquired → returns void immediately (no error thrown)

### Bugs / Issues Found
- path/to/decorator.ts line N: `await originalMethod.apply(this, args)` missing `return`
  → result is discarded, caller always receives undefined
- [any others]

### Gaps: Docs vs Code
[table from Phase 5]

### Files to Update
- [ ] `.agents/_repos/[repo].md` — update port, endpoint method, transformation description
- [ ] `.agents/_domains/[domain].md` — if architecture changed
```

### 6b. Mermaid Diagram

Write the diagram. Then **immediately run the validator before doing anything else.**

If you have the mermaid-validator skill:
```bash
node /path/to/project/scripts/validate-mermaid.mjs [file.md]
```

Otherwise validate manually — common syntax errors:
- Labels with `()` must be wrapped in `"double quotes"`: `A["method()"]`
- No `\n` in node labels — use `<br/>` or shorten
- No HTML entities (`&amp;`, `&gt;`) in labels — use literal characters
- `end` is a reserved word in Mermaid — use `END` or `done` as node IDs

If errors → fix → re-run. Do not proceed until clean.

**Diagram must include:**
- Every step from the execution trace
- Data transformation nodes (show what the function does, not just its name)
- Guard clauses as decision nodes
- Parallel vs sequential clearly distinguished
- Bugs annotated inline (e.g. "BUG: result discarded")

### 6c. Doc / Brain File Updates

Update relevant docs with:
- Corrected facts (port, endpoint method, etc.)
- The validated Mermaid diagram
- Data transformation table
- Known bugs section

---

## Anti-Patterns (What This Skill Prevents)

| Anti-pattern | What gets missed | Rule violated |
|---|---|---|
| Drew diagram before reading utility files | Transformation called twice — not shown | READ EVERYTHING FIRST |
| Trusted existing docs for endpoint method | GET vs POST wrong in docs | GAP ANALYSIS required |
| Summarized service method instead of tracing | Guard clause (empty batch) missed | TRACE NOT SUMMARIZE |
| Trusted existing docs for port/config | Wrong values | Verify entry point |
| Read decorator without checking return | Silent result discard bug | RETURN VALUE AUDIT |
| Merged H1/H2 paths into shared loop node | Sequential vs parallel distinction lost | TRACE LOOP STRUCTURE |
| Assumed filter applies to all fetches | One fetch had no filter — skipped items | READ EVERY FETCH FILE |

---

## Quick Reference Checklist

Before producing any output, verify:

- [ ] Entry point read — port/address confirmed
- [ ] All schema/model files read — every field noted
- [ ] All utility files read — every transformation step documented
- [ ] All decorator/middleware files read — return value audited
- [ ] All core service files read — every method traced line by line
- [ ] All fetch/external services read — filters noted (which have filters, which don't)
- [ ] All controller/router/handler files read — HTTP method confirmed (not assumed)
- [ ] All wiring/module files read — dependency graph understood
- [ ] Utility functions: call count per method noted
- [ ] All guard clauses documented
- [ ] Accumulator variables traced (init → increment → return)
- [ ] Loop structure confirmed (sequential vs parallel)
- [ ] Existing docs compared against code (gap analysis done)
- [ ] Mermaid diagram validated before saving
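
The "missing `return` = silent discard" bug that the return-value audit targets reduces to a single keyword. A minimal Python illustration with a hypothetical lock decorator (lock bookkeeping elided, not code from any real codebase):

```python
import functools

def with_lock_buggy(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        try:
            fn(*args, **kwargs)  # BUG: result computed, then silently discarded
        finally:
            pass  # releaseLock() would run here
    return wrapper

def with_lock_fixed(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        try:
            return fn(*args, **kwargs)  # the only difference: `return`
        finally:
            pass  # releaseLock() would run here
    return wrapper

@with_lock_buggy
def sync_buggy():
    return {"updated": 3, "inserted": 1}

@with_lock_fixed
def sync_fixed():
    return {"updated": 3, "inserted": 1}
```

`sync_buggy()` returns `None` even though the wrapped body returned a dict — exactly the failure the execution trace annotates at step 28, and the reason the decorator checklist asks the question explicitly.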

package/skills/feature-analysis/skill.json
ADDED

```json
{
  "name": "feature-analysis",
  "version": "2.0.0",
  "description": "Deep code analysis of any feature or service before writing docs, diagrams, or making changes. Enforces read-everything-first discipline with execution tracing, data transformation audits, and gap analysis.",
  "compatibility": "OpenCode",
  "agent": null,
  "commands": [],
  "tags": [
    "analysis",
    "code-review",
    "documentation",
    "mermaid",
    "tracing"
  ]
}
```

package/skills/mermaid-validator/SKILL.md
ADDED

---
name: mermaid-validator
description: "Validate and write correct Mermaid diagrams. Run the validator script before finalizing any .md file containing a mermaid block. Enforces syntax rules that prevent parse errors."
compatibility: "OpenCode"
metadata:
  version: "2.0.0"
tools:
  required:
    - bash (node scripts/validate-mermaid.mjs)
---

# Mermaid Validator Skill

## MANDATORY WORKFLOW

**Any time you write or edit a Mermaid diagram, you MUST:**

1. Write the diagram
2. Run the validator
3. Fix any errors reported
4. Re-run until clean

**Never** mark a documentation task complete if the validator reports errors.

---

## Validator Script Setup (Per Project)

This skill expects a zero-dependency Node.js validator script at `scripts/validate-mermaid.mjs` in the project root.

If the script doesn't exist yet, create it — see the reference implementation in any project that has already set this up, or ask to scaffold it.

```bash
# Validate all markdown files (default: scans .agents/ directory)
node scripts/validate-mermaid.mjs

# Validate a specific file
node scripts/validate-mermaid.mjs path/to/file.md

# Validate a whole directory
node scripts/validate-mermaid.mjs path/to/dir/
```

**Expected clean output:**
```
Scanned 12 file(s), 3 mermaid block(s).
✅ All diagrams passed.
```

**Error output example:**
```
❌ path/to/file.md
  Line 47 [no-literal-newline]: Literal \n inside node/edge label — use <br/> or rewrite as plain text
  > B --> C[ServiceName.method\n@Decorator]
```

---

## Mermaid Syntax Rules (Mandatory Reference)

### ✅ Safe — No quoting needed
```
Letters, digits, spaces, hyphens, underscores, colons, slashes, dots, angle brackets
```

### ⚠️ Requires `"double quotes"` around the whole label
| Character | Wrong | Right |
|-----------|-------|-------|
| Parentheses `()` | `A[label (detail)]` | `A["label (detail)"]` |
| Percent `%` | `A[100%]` | `A["100%"]` |
| Ampersand `&` | `A[foo & bar]` | `A["foo & bar"]` |
| Hash `#` | `A[#tag]` | `A["#tag"]` |
| At-sign `@` | `A[@lock]` | `A["@lock"]` |

### ❌ Never use inside a diagram block
| Pattern | Wrong | Fix |
|---------|-------|-----|
| Literal `\n` in label | `A[Line1\nLine2]` | `A["Line1<br/>Line2"]` or just `A[Line1 Line2]` |
| HTML entities | `A[foo &amp; bar]` | `A["foo & bar"]` |
| HTML numeric entities | `A[&#40;parens&#41;]` | `A["(parens)"]` |
| Reserved word `end` as node ID | `end[task]` | `End[task]` |

### Edge label quoting
```
A -- simple text --> B             ✅ fine
A -- "text with (parens)" --> B    ✅ quoted
A -- text with (parens) --> B      ❌ breaks
```

### Node shape reference
```
A[Rectangle]
A(Rounded)
A([Stadium])      ← OK to have ( inside [ here — this is shape syntax
A{Diamond}
A[(Cylinder/DB)]
A((Circle))
A>Asymmetric]
```

### Mermaid entity codes (inside `"quoted"` labels only)
```
#40; = (    #41; = )    #35; = #    #37; = %
```

---

## Writing a Mermaid Diagram — Checklist

Before saving any diagram, mentally check each line:

- [ ] No `\n` inside any label (bracket, brace, or paren)
- [ ] No `&#NN;` or `&amp;` HTML entities
- [ ] Any label containing `()` is wrapped in `"double quotes"`
- [ ] Node IDs are alphanumeric + underscore only (no hyphens in ID itself)
- [ ] No node ID named `end` (lowercase)
- [ ] Edge labels containing special chars are `"quoted"`
- [ ] Diagram has at least one node and one valid statement

Then run the validator. If it passes, you're done.

---

## Common Diagram Patterns

### Service method with decorator
```mermaid
flowchart TD
    A[Controller] --> B["Service.method - @Decorator key ttl"]
```
Note: `@` is safe after the first non-@ character. Put the whole label in quotes to be safe.

### Lock/cache decision
```mermaid
flowchart TD
    A --> B{"Redis SET NX EX - key - TTL 1800s"}
    B -- Lock held --> C([Return void])
    B -- Lock acquired --> D[Continue]
```

### DB node (cylinder)
```mermaid
flowchart TD
    A --> B[(database.table)]
```

### Parallel execution
```mermaid
flowchart TD
    A["Promise.all"] --> B[Task 1]
    A --> C[Task 2]
```

### Sequential batch loop
```mermaid
flowchart TD
    A[Start loop] --> B["for i = 0 to items.length step batchSize"]
    B --> C["batch = items.slice(i, i + batchSize)"]
    C --> D["processBatch(batch)"]
    D --> E{More batches?}
    E -- Yes --> B
    E -- No --> F[Return totals]
```
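
For a sense of what a validator like this checks, three of the label rules can be approximated in a few lines of Python. This is an illustrative sketch under simplified assumptions (one label per line, anchored `end` check), not the `validate-mermaid.mjs` implementation:

```python
import re

def lint_mermaid_line(line):
    """Flag three common Mermaid label errors: literal \\n in a label,
    unquoted parentheses inside a [...] label, and a lowercase `end` node ID."""
    errors = []
    # Rule: no literal \n inside a [...] label
    if re.search(r'\[[^\]]*\\n', line):
        errors.append("no-literal-newline")
    # Rule: parentheses inside a plain [...] label must be quoted.
    # A([Stadium]) and A[(Cylinder)] are shape syntax, so a group that
    # starts with "(" is skipped; a quoted label never matches [^\]"]*.
    m = re.search(r'\[([^\]"]*)\]', line)
    if m and "(" in m.group(1) and not m.group(1).startswith("("):
        errors.append("quote-parens")
    # Rule: lowercase `end` is reserved and cannot open a node definition
    if re.match(r'\s*end\s*[\[\{\(]', line):
        errors.append("reserved-end")
    return errors
```

A real validator would also walk fenced ```mermaid blocks in a file, track line numbers for its error report, and cover the entity and edge-label rules; the sketch only shows the per-line rule shape.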

package/skills/mermaid-validator/skill.json
ADDED

```json
{
  "name": "mermaid-validator",
  "version": "2.0.0",
  "description": "Validate and write correct Mermaid diagrams. Run the validator script before finalizing any .md file containing a mermaid block. Enforces syntax rules that prevent parse errors.",
  "compatibility": "OpenCode",
  "agent": null,
  "commands": [],
  "tags": [
    "mermaid",
    "validation",
    "diagrams",
    "documentation",
    "syntax"
  ]
}
```

package/skills/nano-brain/AGENTS_SNIPPET.md
ADDED

<!-- OPENCODE-MEMORY:START -->
<!-- Managed block - do not edit manually. Updated by: npx nano-brain init -->

## Memory System (nano-brain)

This project uses **nano-brain** for persistent context across sessions.

### Quick Reference

nano-brain supports two access methods. Try MCP first; if unavailable, use CLI.

| I want to... | MCP Tool | CLI Fallback |
|--------------|----------|--------------|
| Recall past work on a topic | `memory_query("topic")` | `npx nano-brain query "topic"` |
| Find exact error/function name | `memory_search("exact term")` | `npx nano-brain query "exact term"` |
| Explore a concept semantically | `memory_vsearch("concept")` | `npx nano-brain query "concept"` |
| Save a decision for future sessions | `memory_write("decision context")` | Create file in `~/.nano-brain/memory/` |
| Check index health | `memory_status` | `npx nano-brain status` |

### Session Workflow

**Start of session:** Check memory for relevant past context before exploring the codebase.
```
# MCP (if available):
memory_query("what have we done regarding {current task topic}")

# CLI fallback:
npx nano-brain query "what have we done regarding {current task topic}"
```

**End of session:** Save key decisions, patterns discovered, and debugging insights.
```
# MCP (if available):
memory_write("## Summary\n- Decision: ...\n- Why: ...\n- Files: ...")

# CLI fallback: create a markdown file
# File: ~/.nano-brain/memory/YYYY-MM-DD-summary.md
```

### When to Search Memory vs Codebase

- **"Have we done this before?"** → `memory_query` or `npx nano-brain query` (searches past sessions)
- **"Where is this in the code?"** → grep / ast-grep (searches current files)
- **"How does this concept work here?"** → Both (memory for past context + grep for current code)

<!-- OPENCODE-MEMORY:END -->

package/skills/nano-brain/SKILL.md
ADDED

# nano-brain

Persistent memory for AI coding agents. Hybrid search (BM25 + semantic + LLM reranking) across past sessions, codebase, notes, and daily logs.

## Slash Commands

| Command | When |
|---------|------|
| `/nano-brain-init` | First-time workspace setup |
| `/nano-brain-status` | Health check, embedding progress |
| `/nano-brain-reindex` | After branch switch, pull, or major changes |

## When to Use Memory

**Before work:** Recall past decisions, patterns, debugging insights, cross-session context.
**After work:** Save key decisions, architecture choices, non-obvious fixes, domain knowledge.

## Access Methods: MCP vs CLI

nano-brain can be accessed via **MCP tools** (when the MCP server is configured) or **CLI** (always available).

**Detection:** Try calling the `memory_status` MCP tool first. If it fails with "MCP server not found", fall back to CLI.

### MCP Tools (preferred when available)

| Need | MCP Tool |
|------|----------|
| Exact keyword (error msg, function name) | `memory_search` via `skill_mcp(mcp_name="nano-brain", tool_name="memory_search")` |
| Conceptual ("how does auth work") | `memory_vsearch` via `skill_mcp(mcp_name="nano-brain", tool_name="memory_vsearch")` |
| Best quality, complex question | `memory_query` via `skill_mcp(mcp_name="nano-brain", tool_name="memory_query")` |
| Retrieve specific doc | `memory_get` / `memory_multi_get` |
| Save insight or decision (append to daily log) | `memory_write` via `skill_mcp(mcp_name="nano-brain", tool_name="memory_write")` |
| Set/update a keyed memory (overwrites previous) | `memory_set` via `skill_mcp(mcp_name="nano-brain", tool_name="memory_set")` |
| Delete a keyed memory | `memory_delete` via `skill_mcp(mcp_name="nano-brain", tool_name="memory_delete")` |
| List all keyed memories | `memory_keys` via `skill_mcp(mcp_name="nano-brain", tool_name="memory_keys")` |
| Check health | `memory_status` via `skill_mcp(mcp_name="nano-brain", tool_name="memory_status")` |
| Rescan source files | `memory_index_codebase` |
| Refresh all indexes | `memory_update` |

### CLI Fallback (always available)

When the MCP server is not available, use the CLI via the Bash tool:

| Need | CLI Command |
|------|-------------|
| Best quality search (hybrid: BM25 + vector + reranking) | `npx nano-brain query "search terms"` |
| Search with collection filter | `npx nano-brain query "terms" -c codebase` |
| Search with more/fewer results | `npx nano-brain query "terms" -n 20` |
| Show full content of results | `npx nano-brain query "terms" --full` |
| Check health & stats | `npx nano-brain status` |
| Initialize workspace | `npx nano-brain init --root=/path/to/workspace` |
| Generate embeddings | `npx nano-brain embed` |
| Harvest sessions | `npx nano-brain harvest` |
| List collections | `npx nano-brain collection list` |

**CLI limitations vs MCP:**
- CLI only has `query` (unified hybrid search) — no separate `search` (BM25-only) or `vsearch` (vector-only)
- CLI cannot `write` notes — use MCP or manually create files in `~/.nano-brain/memory/`
- CLI cannot `get` specific docs by ID — use `query` with specific terms instead

**Default:** Use `npx nano-brain query "..."` — it combines BM25 + vector + reranking for best results.

## Collection Filtering

Works with both MCP and CLI (`-c` flag):

- `codebase` — source files only
- `sessions` — past AI sessions only
- `memory` — curated notes only
- Omit — search everything (recommended)

## Memory vs Native Tools

Memory excels at **recall and semantics** — past sessions, conceptual search, cross-project knowledge.
Native tools (grep, ast-grep, glob) excel at **precise code patterns** — exact matches, AST structure.

**They are complementary.** Use both.

package/skills/nano-brain/skill.json
ADDED

```json
{
  "name": "nano-brain",
  "version": "1.0.0",
  "description": "Persistent memory for AI coding agents. Hybrid search (BM25 + semantic + LLM reranking) across past sessions, codebase, notes, and daily logs.",
  "compatibility": "OpenCode",
  "agent": null,
  "commands": [],
  "tags": [
    "memory",
    "persistence",
    "search",
    "context",
    "sessions"
  ]
}
```
@@ -0,0 +1,303 @@
---
name: pdf
description: "Comprehensive PDF manipulation toolkit for extracting text and tables, creating new PDFs, merging/splitting documents, and handling forms. Use when filling PDF forms or programmatically processing, generating, or analyzing PDF documents."
compatibility: "OpenCode"
metadata:
  author: openclaw/skillmd
  version: "1.0.0"
---

# PDF Processing Guide

## When This Skill Activates

Activate when the user asks to:
- Extract text or tables from PDFs
- Create, merge, split, or rotate PDFs
- Add watermarks or password protection
- OCR scanned PDFs
- Fill PDF forms
- Convert PDFs to text

## Quick Start
```python
from pypdf import PdfReader, PdfWriter

# Read a PDF
reader = PdfReader("document.pdf")
print(f"Pages: {len(reader.pages)}")

# Extract text
text = ""
for page in reader.pages:
    text += page.extract_text()
```

## Python Libraries

### pypdf - Basic Operations

#### Merge PDFs
```python
from pypdf import PdfWriter, PdfReader

writer = PdfWriter()
for pdf_file in ["doc1.pdf", "doc2.pdf", "doc3.pdf"]:
    reader = PdfReader(pdf_file)
    for page in reader.pages:
        writer.add_page(page)

with open("merged.pdf", "wb") as output:
    writer.write(output)
```

#### Split PDF
```python
reader = PdfReader("input.pdf")
for i, page in enumerate(reader.pages):
    writer = PdfWriter()
    writer.add_page(page)
    with open(f"page_{i+1}.pdf", "wb") as output:
        writer.write(output)
```

#### Rotate Pages
```python
reader = PdfReader("input.pdf")
writer = PdfWriter()

page = reader.pages[0]
page.rotate(90)  # Rotate 90 degrees clockwise
writer.add_page(page)

with open("rotated.pdf", "wb") as output:
    writer.write(output)
```

#### Extract Metadata
```python
reader = PdfReader("document.pdf")
meta = reader.metadata
print(f"Author: {meta.author}")
print(f"Title: {meta.title}")
print(f"Subject: {meta.subject}")
print(f"Creator: {meta.creator}")
```

### pdfplumber - Text and Table Extraction

#### Extract Text
```python
import pdfplumber

with pdfplumber.open("document.pdf") as pdf:
    for page in pdf.pages:
        text = page.extract_text()
        print(text)
```

#### Extract Tables
```python
with pdfplumber.open("document.pdf") as pdf:
    for i, page in enumerate(pdf.pages):
        tables = page.extract_tables()
        for j, table in enumerate(tables):
            print(f"Table {j+1} on page {i+1}:")
            for row in table:
                print(row)
```

#### Extract Tables to DataFrame
```python
import pdfplumber
import pandas as pd

with pdfplumber.open("document.pdf") as pdf:
    page = pdf.pages[0]
    table = page.extract_table()
    df = pd.DataFrame(table[1:], columns=table[0])
    print(df)
```

### reportlab - Create PDFs

#### Simple PDF
```python
from reportlab.lib.pagesizes import letter
from reportlab.pdfgen import canvas

c = canvas.Canvas("hello.pdf", pagesize=letter)
width, height = letter

c.drawString(100, height - 100, "Hello World!")
c.line(100, height - 140, 400, height - 140)
c.save()
```

#### Multi-page with Platypus
```python
from reportlab.lib.pagesizes import letter
from reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer
from reportlab.lib.styles import getSampleStyleSheet

doc = SimpleDocTemplate("report.pdf", pagesize=letter)
styles = getSampleStyleSheet()
story = []

story.append(Paragraph("Report Title", styles['Title']))
story.append(Spacer(1, 12))
story.append(Paragraph("This is the body text.", styles['Normal']))

doc.build(story)
```

## Command-Line Tools

### pdftotext (poppler-utils)
```bash
# Extract text
pdftotext input.pdf output.txt

# Preserve layout
pdftotext -layout input.pdf output.txt

# Specific pages
pdftotext -f 1 -l 5 input.pdf output.txt
```

### qpdf
```bash
# Merge PDFs
qpdf --empty --pages file1.pdf file2.pdf -- merged.pdf

# Extract a page range
qpdf input.pdf --pages . 1-5 -- pages1-5.pdf

# Rotate pages
qpdf input.pdf output.pdf --rotate=+90:1

# Remove password
qpdf --password=mypassword --decrypt encrypted.pdf decrypted.pdf

# Linearize (optimize for web)
qpdf --linearize input.pdf output.pdf
```

## Common Tasks

### OCR Scanned PDFs
```python
import pytesseract
from pdf2image import convert_from_path

images = convert_from_path('scanned.pdf')
text = ""
for i, image in enumerate(images):
    text += f"Page {i+1}:\n"
    text += pytesseract.image_to_string(image)
    text += "\n\n"
```

### Add Watermark
```python
from pypdf import PdfReader, PdfWriter

watermark = PdfReader("watermark.pdf").pages[0]
reader = PdfReader("document.pdf")
writer = PdfWriter()

for page in reader.pages:
    page.merge_page(watermark)
    writer.add_page(page)

with open("watermarked.pdf", "wb") as output:
    writer.write(output)
```

### Password Protection
```python
from pypdf import PdfReader, PdfWriter

reader = PdfReader("input.pdf")
writer = PdfWriter()

for page in reader.pages:
    writer.add_page(page)

writer.encrypt("userpassword", "ownerpassword")

with open("encrypted.pdf", "wb") as output:
    writer.write(output)
```

### Fill PDF Forms
```python
from pypdf import PdfReader, PdfWriter

reader = PdfReader("form.pdf")
writer = PdfWriter()
writer.append(reader)

# Get form field names
fields = reader.get_fields()
for name, field in fields.items():
    print(f"Field: {name}, Type: {field.get('/FT')}")

# Fill fields
writer.update_page_form_field_values(
    writer.pages[0],
    {"field_name": "value", "another_field": "another_value"}
)

with open("filled_form.pdf", "wb") as output:
    writer.write(output)
```

### PDF to Images
```python
from pdf2image import convert_from_path

# Convert all pages
images = convert_from_path('document.pdf', dpi=300)
for i, image in enumerate(images):
    image.save(f'page_{i+1}.png', 'PNG')

# Convert specific pages
images = convert_from_path('document.pdf', first_page=1, last_page=3)
```

## Installation Commands

```bash
# Core libraries
pip install pypdf pdfplumber reportlab

# OCR support
pip install pytesseract pdf2image
# Also needs: apt-get install tesseract-ocr poppler-utils

# CLI tools
apt-get install poppler-utils qpdf

# All at once
pip install pypdf pdfplumber reportlab pytesseract pdf2image
```

## Quick Reference

| Task | Best Tool | Command/Code |
|------|-----------|--------------|
| Read/extract text | pdfplumber | `page.extract_text()` |
| Extract tables | pdfplumber | `page.extract_tables()` |
| Merge PDFs | pypdf | `writer.add_page(page)` |
| Split PDFs | pypdf | One page per PdfWriter |
| Rotate pages | pypdf | `page.rotate(90)` |
| Create PDFs | reportlab | Canvas or Platypus |
| Fill forms | pypdf | `update_page_form_field_values()` |
| Add watermark | pypdf | `page.merge_page(watermark)` |
| Password protect | pypdf | `writer.encrypt()` |
| OCR scanned PDFs | pytesseract + pdf2image | Convert to image first |
| CLI text extract | poppler-utils | `pdftotext input.pdf` |
| CLI merge/split | qpdf | `qpdf --empty --pages ...` |
| PDF to images | pdf2image | `convert_from_path()` |
| Extract metadata | pypdf | `reader.metadata` |
@@ -0,0 +1,17 @@
{
  "name": "pdf",
  "version": "1.0.0",
  "description": "PDF manipulation toolkit \u2014 extract text/tables, create, merge, split, rotate, OCR, fill forms, watermark, and password protect",
  "compatibility": "OpenCode",
  "agent": null,
  "commands": [],
  "tags": [
    "pdf",
    "extract",
    "merge",
    "ocr",
    "forms",
    "reportlab",
    "pypdf"
  ]
}
@@ -0,0 +1,103 @@
---
name: rtk
description: "Token optimizer for AI coding agents. Wraps common CLI commands with rtk to reduce LLM token consumption by 60-90%."
compatibility: "OpenCode with RTK binary installed"
metadata:
  author: Sisyphus
  version: "1.0.0"
---

# RTK — Rust Token Killer

**Purpose**: Reduce token consumption by 60-90% on shell commands. RTK filters and compresses command output before it reaches your context.

## When This Skill Activates

This skill is loaded automatically. Once active, **always** prefix supported commands with `rtk`.

## Command Mappings (MANDATORY)

When running shell commands via the Bash tool, use the `rtk` prefix for these commands:

| Instead of | Use | Savings |
|-----------|-----|---------|
| `git status` | `rtk git status` | ~80% |
| `git diff` | `rtk git diff` | ~75% |
| `git log` | `rtk git log` | ~80% |
| `git add/commit/push/pull` | `rtk git add/commit/push/pull` | ~90% |
| `ls` / `ls -la` | `rtk ls` | ~80% |
| `cat file` | `rtk read file` | ~70% |
| `grep pattern .` | `rtk grep pattern .` | ~80% |
| `rg pattern` | `rtk grep pattern .` | ~80% |
| `npm test` / `cargo test` / `pytest` | `rtk test <cmd>` | ~90% |
| `npm run build` / `cargo build` | `rtk err <cmd>` | ~80% |
| `gh pr list/view` | `rtk gh pr list/view` | ~70% |
| `docker ps` | `rtk docker ps` | ~80% |
| `eslint` / `tsc` | `rtk lint` / `rtk tsc` | ~80% |

## Searching Inside `node_modules` / Ignored Directories

By default, `rtk grep` respects `.gitignore` rules — meaning `node_modules`, `.nuxt`, `dist`, etc. are **excluded**. This is the right behavior 99% of the time.

When you **need** to search inside ignored directories (debugging a library, checking an API signature, tracing a dependency bug):

```bash
# Search all files including node_modules (--no-ignore bypasses .gitignore)
rtk grep "defineStore" . --no-ignore

# Search a specific package only (combine --no-ignore with --glob)
rtk grep "defineStore" . --no-ignore --glob 'node_modules/pinia/**'
```

**What does NOT work:**
- `rtk grep "pattern" node_modules/pinia/` — still excluded even with a direct path
- `rtk grep "pattern" . --glob 'node_modules/**'` — a glob alone doesn't override .gitignore

**Key flag: `--no-ignore`** — this is the ONLY way to search ignored directories with rtk grep.

### Other useful `rtk grep` flags

```bash
rtk grep "pattern" . -t ts     # Filter by file type (ts, py, rust, etc.)
rtk grep "pattern" . -m 100    # Increase max results (default: 50)
rtk grep "pattern" . -u        # Ultra-compact mode (even fewer tokens)
rtk grep "pattern" . -l 120    # Max line length before truncation (default: 80)
```

## Commands to NOT Wrap

Do NOT prefix these with `rtk` (unsupported or counterproductive):

- `npx`, `npm install`, `pip install` (package managers)
- `node`, `python3`, `ruby` (interpreters)
- `nano-brain`, `openspec`, `opencode` (custom tools)
- Heredocs (`<<EOF`)
- Piped commands (`cmd1 | cmd2`) — wrap only the first command if applicable
- Commands already prefixed with `rtk`

## How RTK Works

```
Without RTK: git status → 50 lines raw output → 2,000 tokens
With RTK:    rtk git status → "3 modified, 1 untracked ✓" → 200 tokens
```

RTK runs the real command, then filters/compresses the output. The agent sees a compact summary instead of verbose raw output.
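The same compression idea can be sketched with plain POSIX tools. This is illustrative only — rtk does this filtering internally, and the sample status lines below are made up:

```shell
# Sketch: summarize verbose output into a one-line count, like rtk does.
# The printf stands in for raw `git status --porcelain` output.
printf 'M  a.txt\nM  b.txt\n?? c.txt\n' > /tmp/rtk_demo_status.txt
modified=$(grep -c '^M' /tmp/rtk_demo_status.txt)
untracked=$(grep -c '^??' /tmp/rtk_demo_status.txt)
echo "$modified modified, $untracked untracked"   # → 2 modified, 1 untracked
```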

## Detection

Before using RTK commands, verify it's installed:
```bash
rtk --version
```

If `rtk` is not found, skip this skill — run commands normally without the `rtk` prefix.
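One way to script that fallback is a small wrapper. The `run` helper below is hypothetical, not part of rtk — it just prefixes commands with `rtk` when the binary is on PATH:

```shell
# Hypothetical `run` helper: prefix with rtk when installed, else run as-is.
if command -v rtk >/dev/null 2>&1; then
  run() { rtk "$@"; }
else
  run() { "$@"; }
fi

run echo "ready"   # falls back to plain `echo` when rtk is absent
```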

## Token Savings Reference

Typical 30-min coding session:
- Without RTK: ~150,000 tokens
- With RTK: ~45,000 tokens
- **Savings: ~70%**

Biggest wins: test output (`rtk test` — 90%), git operations (`rtk git` — 80%), file reading (`rtk read` — 70%).
@@ -0,0 +1,15 @@
{
  "name": "rtk",
  "version": "1.0.0",
  "description": "Token optimizer for AI coding agents. Wraps common CLI commands with rtk to reduce LLM token consumption by 60-90%.",
  "compatibility": "OpenCode with RTK binary installed",
  "agent": null,
  "commands": [],
  "tags": [
    "rtk",
    "token-saving",
    "optimization",
    "cli",
    "productivity"
  ]
}