codeforge-dev 1.5.7 → 1.7.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (80)
  1. package/.devcontainer/.env +2 -1
  2. package/.devcontainer/CHANGELOG.md +55 -9
  3. package/.devcontainer/CLAUDE.md +65 -15
  4. package/.devcontainer/README.md +67 -6
  5. package/.devcontainer/config/keybindings.json +5 -0
  6. package/.devcontainer/config/main-system-prompt.md +63 -2
  7. package/.devcontainer/config/settings.json +25 -6
  8. package/.devcontainer/devcontainer.json +23 -7
  9. package/.devcontainer/features/README.md +21 -7
  10. package/.devcontainer/features/ccburn/README.md +60 -0
  11. package/.devcontainer/features/ccburn/devcontainer-feature.json +38 -0
  12. package/.devcontainer/features/ccburn/install.sh +174 -0
  13. package/.devcontainer/features/ccstatusline/README.md +22 -21
  14. package/.devcontainer/features/ccstatusline/devcontainer-feature.json +1 -1
  15. package/.devcontainer/features/ccstatusline/install.sh +48 -16
  16. package/.devcontainer/features/claude-code/config/settings.json +60 -24
  17. package/.devcontainer/features/mcp-qdrant/devcontainer-feature.json +1 -1
  18. package/.devcontainer/features/mcp-reasoner/devcontainer-feature.json +1 -1
  19. package/.devcontainer/plugins/devs-marketplace/plugins/auto-formatter/scripts/__pycache__/format-on-stop.cpython-314.pyc +0 -0
  20. package/.devcontainer/plugins/devs-marketplace/plugins/auto-formatter/scripts/format-on-stop.py +21 -6
  21. package/.devcontainer/plugins/devs-marketplace/plugins/auto-linter/scripts/__pycache__/lint-file.cpython-314.pyc +0 -0
  22. package/.devcontainer/plugins/devs-marketplace/plugins/auto-linter/scripts/lint-file.py +7 -10
  23. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/REVIEW-RUBRIC.md +440 -0
  24. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/agents/architect.md +190 -0
  25. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/agents/bash-exec.md +173 -0
  26. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/agents/claude-guide.md +155 -0
  27. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/agents/dependency-analyst.md +248 -0
  28. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/agents/doc-writer.md +233 -0
  29. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/agents/explorer.md +235 -0
  30. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/agents/generalist.md +125 -0
  31. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/agents/git-archaeologist.md +242 -0
  32. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/agents/migrator.md +195 -0
  33. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/agents/perf-profiler.md +265 -0
  34. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/agents/refactorer.md +209 -0
  35. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/agents/researcher.md +195 -0
  36. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/agents/security-auditor.md +289 -0
  37. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/agents/spec-writer.md +284 -0
  38. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/agents/statusline-config.md +188 -0
  39. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/agents/test-writer.md +245 -0
  40. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/hooks/hooks.json +12 -0
  41. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/scripts/__pycache__/guard-readonly-bash.cpython-314.pyc +0 -0
  42. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/scripts/__pycache__/redirect-builtin-agents.cpython-314.pyc +0 -0
  43. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/scripts/__pycache__/skill-suggester.cpython-314.pyc +0 -0
  44. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/scripts/__pycache__/syntax-validator.cpython-314.pyc +0 -0
  45. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/scripts/__pycache__/verify-no-regression.cpython-314.pyc +0 -0
  46. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/scripts/__pycache__/verify-tests-pass.cpython-314.pyc +0 -0
  47. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/scripts/guard-readonly-bash.py +611 -0
  48. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/scripts/redirect-builtin-agents.py +83 -0
  49. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/scripts/skill-suggester.py +85 -2
  50. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/scripts/syntax-validator.py +9 -4
  51. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/scripts/verify-no-regression.py +221 -0
  52. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/scripts/verify-tests-pass.py +176 -0
  53. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/claude-agent-sdk/SKILL.md +599 -0
  54. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/claude-agent-sdk/references/sdk-typescript-reference.md +954 -0
  55. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/git-forensics/SKILL.md +276 -0
  56. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/git-forensics/references/advanced-commands.md +332 -0
  57. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/git-forensics/references/investigation-playbooks.md +319 -0
  58. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/performance-profiling/SKILL.md +341 -0
  59. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/performance-profiling/references/interpreting-results.md +235 -0
  60. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/performance-profiling/references/tool-commands.md +395 -0
  61. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/refactoring-patterns/SKILL.md +344 -0
  62. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/refactoring-patterns/references/safe-transformations.md +247 -0
  63. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/refactoring-patterns/references/smell-catalog.md +332 -0
  64. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/security-checklist/SKILL.md +277 -0
  65. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/security-checklist/references/owasp-patterns.md +269 -0
  66. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/security-checklist/references/secrets-patterns.md +253 -0
  67. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/specification-writing/SKILL.md +288 -0
  68. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/specification-writing/references/criteria-patterns.md +245 -0
  69. package/.devcontainer/plugins/devs-marketplace/plugins/code-directive/skills/specification-writing/references/ears-templates.md +239 -0
  70. package/.devcontainer/plugins/devs-marketplace/plugins/protected-files-guard/scripts/__pycache__/guard-protected.cpython-314.pyc +0 -0
  71. package/.devcontainer/plugins/devs-marketplace/plugins/protected-files-guard/scripts/guard-protected.py +40 -39
  72. package/.devcontainer/scripts/setup-aliases.sh +10 -20
  73. package/.devcontainer/scripts/setup-config.sh +2 -0
  74. package/.devcontainer/scripts/setup-plugins.sh +38 -46
  75. package/.devcontainer/scripts/setup-projects.sh +175 -0
  76. package/.devcontainer/scripts/setup-symlink-claude.sh +36 -0
  77. package/.devcontainer/scripts/setup-update-claude.sh +11 -8
  78. package/.devcontainer/scripts/setup.sh +4 -2
  79. package/package.json +1 -1
  80. package/.devcontainer/scripts/setup-irie-claude.sh +0 -32
@@ -0,0 +1,319 @@
# Git Investigation Playbooks

Step-by-step playbooks for common git forensic investigations.

## Contents

- [Playbook 1: Finding When a Bug Was Introduced](#playbook-1-finding-when-a-bug-was-introduced)
- [Playbook 2: Finding Deleted Code](#playbook-2-finding-deleted-code)
- [Playbook 3: Tracing a Line's History](#playbook-3-tracing-a-lines-history)
- [Playbook 4: Recovering Lost Work](#playbook-4-recovering-lost-work)
- [Playbook 5: Identifying Hot Spots and Code Ownership](#playbook-5-identifying-hot-spots-and-code-ownership)
- [Playbook 6: Understanding a Complex Merge Conflict](#playbook-6-understanding-a-complex-merge-conflict)

---

## Playbook 1: Finding When a Bug Was Introduced

**Scenario:** A feature that used to work is now broken. You need to find the exact commit that introduced the regression.

### Step 1: Establish the boundary

```bash
# Find a known-good commit (e.g., last release tag)
git tag --list "v*" --sort=-version:refname | head -5

# Verify the good commit is actually good
git stash  # save current work
git checkout v2.1.0
# run the failing test or reproduce the bug manually
# if it passes, this is your good commit
```

### Step 2: Start bisect

```bash
git bisect start
git bisect bad HEAD     # current commit is bad
git bisect good v2.1.0  # last known good commit
```

### Step 3: Test each checkout

Git will check out a commit roughly in the middle. Test it:

```bash
# Option A: Manual testing
pytest tests/test_feature.py -x
# If it passes: git bisect good
# If it fails: git bisect bad

# Option B: Automated
git bisect run pytest tests/test_feature.py -x
```

### Step 4: Analyze the result

```bash
# Git outputs the first bad commit
# Example: abc1234 is the first bad commit

# Examine the commit
git show abc1234
git show abc1234 --stat

# Understand the context
git log --oneline abc1234~5..abc1234
```

### Step 5: Clean up

```bash
git bisect reset
git stash pop  # restore your work
```

---

## Playbook 2: Finding Deleted Code

**Scenario:** A function or class that used to exist has been deleted. You need to find when it was removed and why.

### Step 1: Search for when the code was last present

```bash
# Find commits that changed the number of occurrences of the string
git log -S "def calculate_tax" --oneline

# Output shows commits where the function was added or removed
# git log lists newest first: the FIRST commit removed it; the LAST added it
```

### Step 2: Examine the removal commit

```bash
# Show the commit that removed the function
git show abc1234

# See the full file at the commit BEFORE removal
git show abc1234^:path/to/file.py

# See the diff to understand what replaced it
git diff abc1234^..abc1234 -- path/to/file.py
```

### Step 3: Find the file if it was renamed or moved

```bash
# If the file itself was deleted, find when
git log --diff-filter=D --summary | grep "path/to/file.py"

# If the function moved to another file, search with -G
git log -G "def calculate_tax" --oneline --all
```

### Step 4: Recover the deleted code

```bash
# Get the file content from before the deletion
git show abc1234^:path/to/file.py > recovered_file.py

# Or extract just the function manually
git show abc1234^:path/to/file.py | grep -A 50 "def calculate_tax"
```

---

## Playbook 3: Tracing a Line's History

**Scenario:** You need to understand why a specific line of code exists -- who wrote it, when, and in what context.

### Step 1: Initial blame

```bash
# Blame with whitespace and move detection
git blame -w -M -C path/to/file.py

# Focus on the specific line range
git blame -w -M -C -L 42,42 path/to/file.py
```

### Step 2: Go deeper if the blame shows a bulk change

If blame points to a formatting or refactoring commit:

```bash
# Blame at the commit BEFORE the bulk change
git blame -w -M -C -L 42,42 abc1234^ -- path/to/file.py

# Or use .git-blame-ignore-revs to skip bulk commits automatically
git blame --ignore-revs-file .git-blame-ignore-revs path/to/file.py
```
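
The ignore-revs file itself is plain text: one full 40-character commit hash per line, with `#` starting a comment. A minimal sketch (the hash and comment are placeholders):

```text
# .git-blame-ignore-revs
# repo-wide reformat (skip in blame)
a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0
```

Setting `git config blame.ignoreRevsFile .git-blame-ignore-revs` applies it to every `git blame` automatically.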

### Step 3: Read the full commit context

```bash
# See the full commit that introduced the line
git show def5678

# See the PR/issue if the commit message references one
# e.g., "Fix #123" or "Closes #456"
git log --format="%H %s" | grep "#123"
```

### Step 4: See all changes to this line over time

```bash
# Log of all commits that touched this line range
git log -L 42,42:path/to/file.py

# This shows the line's evolution across commits, including the diff at each step
```

---

## Playbook 4: Recovering Lost Work

**Scenario:** You accidentally ran `git reset --hard`, deleted a branch, or lost commits.

### Step 1: Don't panic -- check the reflog

```bash
# See recent HEAD movements
git reflog

# Look for the commit you lost
# Example output:
# abc1234 HEAD@{0}: reset: moving to HEAD~3     ← the reset that lost your work
# def5678 HEAD@{1}: commit: add user validation ← your lost commit
# 789abcd HEAD@{2}: commit: fix login bug       ← another lost commit
```

### Step 2: Verify the lost commit

```bash
# Check the commit contents
git show def5678
git show def5678 --stat
```

### Step 3: Recover

```bash
# Option A: Create a branch at the lost commit (safest)
git branch recovery def5678

# Option B: Cherry-pick the commit onto your current branch
git cherry-pick def5678

# Option C: Reset to the lost commit (if you want to restore the full state)
git reset --hard def5678
```

### Step 4: If reflog doesn't help

```bash
# Find unreachable objects (last resort)
git fsck --unreachable --no-reflogs

# This lists unreachable commits, blobs, and trees
# Look for "unreachable commit" entries
# Examine them with git show
```

---

## Playbook 5: Identifying Hot Spots and Code Ownership

**Scenario:** You're new to a codebase and need to understand which files change most and who knows them best.

### Step 1: Find frequently changed files

```bash
# Most changed files in the last 6 months (grep drops the blank separator lines)
git log --since="6 months ago" --pretty=format: --name-only | grep -v '^$' | sort | uniq -c | sort -rn | head -20
```

### Step 2: Find who knows each area

```bash
# Top contributors overall
git shortlog -sn --no-merges --since="6 months ago"

# Top contributors for a specific directory
git shortlog -sn --no-merges -- src/auth/

# Who last touched each file in a directory
for f in src/auth/*.py; do
  echo "$f: $(git log -1 --format='%an (%ar)' -- "$f")"
done
```

### Step 3: Find coupling (files that change together)

```bash
# Files that frequently appear in the same commit
git log --pretty=format: --name-only | grep -v '^$' | sort | uniq -c | sort -rn | head -30

# Look for pairs: if file A and file B always change together,
# they may have hidden coupling that should be made explicit
```
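
The pipeline above counts single files; spotting pairs still takes eyeballing. A minimal Python sketch of the pair analysis, assuming you've captured `git log --pretty=format: --name-only` output as text (one blank-line-separated group of filenames per commit; the `log` string below is synthetic):

```python
from collections import Counter
from itertools import combinations

def cochange_pairs(log_text: str) -> Counter:
    """Count file pairs that appear in the same commit.

    Expects blank-line-separated groups of filenames, as produced by
    `git log --pretty=format: --name-only` (one group per commit).
    """
    pairs: Counter = Counter()
    for block in log_text.split("\n\n"):
        files = sorted({line for line in block.splitlines() if line.strip()})
        pairs.update(combinations(files, 2))
    return pairs

# Synthetic example: a.py and b.py change together in two commits
log = "a.py\nb.py\n\na.py\nb.py\nc.py\n\nc.py\n"
top = cochange_pairs(log).most_common(1)
print(top)  # [(('a.py', 'b.py'), 2)]
```

Pairs with a high co-change count relative to each file's individual churn are the ones worth investigating.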

### Step 4: Identify aging code

```bash
# Files not modified in over a year
git log --diff-filter=M --since="1 year ago" --pretty=format: --name-only | sort -u > recent.txt
git ls-files | sort > all.txt
comm -23 all.txt recent.txt | head -30
rm recent.txt all.txt
```

---

## Playbook 6: Understanding a Complex Merge Conflict

**Scenario:** You have a merge conflict and need to understand the history of both sides before resolving.

### Step 1: See what both branches changed

```bash
# Changes on your branch since diverging from main
git log --oneline main..HEAD

# Changes on main since your branch diverged
git log --oneline HEAD..main

# The common ancestor
git merge-base HEAD main
```

### Step 2: Understand the conflicting file's history

```bash
# History on your branch
git log --oneline main..HEAD -- path/to/conflicted/file.py

# History on main
git log --oneline HEAD..main -- path/to/conflicted/file.py
```

### Step 3: See the three-way diff

```bash
# During a merge conflict, git stores three versions:
# :1: = common ancestor (base)
# :2: = your version (ours)
# :3: = their version (theirs)

git show :1:path/to/file.py > base.py
git show :2:path/to/file.py > ours.py
git show :3:path/to/file.py > theirs.py

# Compare
diff3 ours.py base.py theirs.py
```

### Step 4: Resolve with context

Understanding both sides' intent (from the commit messages in Step 1) helps you resolve the conflict correctly rather than just picking one side.

@@ -0,0 +1,341 @@
---
name: performance-profiling
description: >-
  This skill should be used when the user asks to "profile this code",
  "find the bottleneck", "optimize performance", "measure execution time",
  "check memory usage", "create a flamegraph", "benchmark this function",
  "find memory leaks", "reduce latency", "run a performance test",
  or discusses profiling tools, flamegraphs, benchmarking methodology,
  cProfile, py-spy, scalene, Chrome DevTools performance,
  memory profiling, or hot path analysis.
version: 0.1.0
---

# Performance Profiling

## Mental Model

Performance work follows one rule: **measure first, optimize second**. Bottlenecks are almost never where you think they are. Developers consistently misjudge performance by 10-100x -- the "obviously slow" nested loop is often fast, while the "simple" database query is the real bottleneck.

The profiling workflow is:
1. **Establish a baseline** -- measure current performance with a reproducible benchmark
2. **Profile** -- identify where time and memory are actually spent
3. **Hypothesize** -- form a specific theory about the bottleneck
4. **Optimize** -- make one targeted change
5. **Measure again** -- verify the optimization actually helped
6. **Compare** -- did the change improve the baseline? By how much?

Without this discipline, you'll waste time optimizing code that doesn't matter, introduce complexity without measurable benefit, and have no proof that your changes helped.

**Amdahl's Law** sets the ceiling: if a function consumes 5% of total runtime, making it infinitely fast saves only 5%. Focus on the biggest bars in the profile first.
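
The ceiling is easy to compute. A quick sketch of the standard formula, where `p` is the fraction of total runtime the target function consumes and `s` is the speedup applied to it:

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Overall speedup when a fraction p of runtime is accelerated by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# Accelerating a 5%-of-runtime function 100x barely moves the needle...
print(round(amdahl_speedup(0.05, 100), 3))  # 1.052
# ...while a modest 2x on a 50% hot path gives 1.33x overall
print(round(amdahl_speedup(0.50, 2), 3))    # 1.333
```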

---

## Python Profiling

### cProfile (built-in, deterministic)

cProfile instruments every function call. It shows call count, cumulative time, and per-call time:

```bash
# Profile a script
python -m cProfile -s cumtime myapp.py

# Profile and save to a file for analysis
python -m cProfile -o profile.prof myapp.py

# Analyze the saved profile
python -c "
import pstats
p = pstats.Stats('profile.prof')
p.sort_stats('cumulative')
p.print_stats(20)  # top 20 functions
"
```

**Tradeoff:** cProfile adds ~30% overhead and measures wall-clock time. It's deterministic (traces every call), so it catches everything but distorts timing for very fast functions.

### py-spy (sampling, low overhead)

py-spy samples the call stack without modifying the target process. It can attach to running processes:

```bash
# Record a flamegraph (SVG)
py-spy record -o flamegraph.svg -- python myapp.py

# Attach to a running process
py-spy record -o flamegraph.svg --pid 12345

# Top-like live view
py-spy top --pid 12345

# Profile for a specific duration
py-spy record --duration 30 -o flamegraph.svg --pid 12345
```

**Tradeoff:** Sampling misses very short functions but has near-zero overhead. Ideal for production profiling.

### scalene (CPU + memory + GPU)

Scalene profiles CPU time, memory allocation, and memory usage simultaneously. It distinguishes Python time from native (C) time:

```bash
# Profile a script
scalene myapp.py

# Profile with specific options
scalene --cpu --memory --reduced-profile myapp.py

# Profile a specific region (in code)
# from scalene import scalene_profiler
# scalene_profiler.start()
# ... code to profile ...
# scalene_profiler.stop()
```

### memory_profiler (line-by-line memory)

```python
from memory_profiler import profile
import pandas as pd

@profile
def process_data() -> pd.DataFrame:
    data = pd.read_csv("large.csv")            # Line 5: +500 MiB
    filtered = data[data["active"]]            # Line 6: +200 MiB
    result = filtered.groupby("region").sum()  # Line 7: +50 MiB
    del data, filtered                         # Line 8: -700 MiB
    return result
```

```bash
python -m memory_profiler myapp.py
```

### line_profiler (line-by-line CPU)

```python
# Decorate functions to profile (kernprof injects the @profile builtin)
@profile
def expensive_function():
    result = []                    # 0.0%
    for item in large_list:        # 2.1%
        parsed = parse(item)       # 45.3% <-- hot line
        if validate(parsed):       # 12.7%
            result.append(parsed)  # 0.4%
    return result
```

```bash
kernprof -l -v myapp.py
```

> **Deep dive:** See `references/tool-commands.md` for the full command reference per language and tool.

---

## JavaScript / Node.js Profiling

### V8 Profiler (`--prof`)

Node's built-in V8 profiler generates a log that can be processed into a human-readable report:

```bash
# Generate a V8 profile log
node --prof app.js

# Process the log into readable output
node --prof-process isolate-*.log > processed.txt
```

### clinic.js

A suite of profiling tools for Node.js:

```bash
# Install
npm install -g clinic

# Doctor: overall health check (event loop, GC, I/O)
clinic doctor -- node app.js

# Flame: flamegraph
clinic flame -- node app.js

# Bubbleprof: async flow visualization
clinic bubbleprof -- node app.js
```

### Chrome DevTools

For both browser and Node.js profiling:

```bash
# Start Node with inspector
node --inspect app.js

# Or break on first line
node --inspect-brk app.js
```

Then open `chrome://inspect` in Chrome:
- **Performance tab:** Record a profile, see flamechart, call tree, and bottom-up views
- **Memory tab:** Take heap snapshots, record allocation timelines, detect leaks

### Lighthouse (Web Performance)

```bash
# CLI audit
npx lighthouse https://example.com --output json --output html

# Key metrics: FCP, LCP, TTI, TBT, CLS
# Target: Performance score > 90
```

---

## System Profiling

When the bottleneck isn't in your code but in the system:

```bash
# Wall-clock time, user CPU, system CPU
time python myapp.py

# Process-level resource usage (live)
htop           # interactive process viewer
htop -p 12345  # monitor specific PID

# I/O statistics
iostat -x 1    # disk I/O per device, every 1 second

# CPU performance counters (Linux)
perf stat python myapp.py
# Counts: cycles, instructions, cache misses, branch misses

# System call tracing
strace -c python myapp.py    # summary of syscall time
strace -e trace=network app  # only network syscalls
```

**Interpreting `time` output:**
- **real** > **user** + **sys** → I/O bound (waiting for disk, network, or sleep)
- **user** >> **sys** → CPU bound in userspace (computation)
- **sys** >> **user** → CPU bound in kernel (many syscalls, context switches)
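
The three bullets above can be sketched as a quick classifier. The thresholds are assumptions for illustration: wall clock "dwarfing" CPU time is read as 1.5x, and `>>` as more than 4x:

```python
def classify_time(real: float, user: float, sys: float) -> str:
    """Rough workload classification from `time` output, all values in seconds."""
    if real > (user + sys) * 1.5:  # wall clock dwarfs total CPU time
        return "I/O bound"
    if user > sys * 4:             # ">>" read as more than 4x
        return "CPU bound (userspace)"
    if sys > user * 4:
        return "CPU bound (kernel)"
    return "mixed"

print(classify_time(real=10.0, user=1.0, sys=0.5))  # I/O bound
print(classify_time(real=5.2, user=5.0, sys=0.1))   # CPU bound (userspace)
```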

---

## Benchmarking Methodology

Benchmarks must be reproducible, statistically sound, and isolated from noise.

### CLI Benchmarking with hyperfine

```bash
# Basic benchmark with warmup
hyperfine --warmup 3 'python myapp.py'

# Compare two implementations
hyperfine --warmup 3 'python v1.py' 'python v2.py'

# With a parameter list (-L; use -P for numeric min/max scans)
hyperfine --warmup 3 -L size 100,1000,10000 'python bench.py --size {size}'

# Export results
hyperfine --warmup 3 --export-json results.json 'python myapp.py'
```

hyperfine automatically detects outliers, calculates mean/median/stddev, and warns about statistical issues.

### Python Benchmarking with pytest-benchmark

```python
# The benchmark fixture is injected by pytest-benchmark -- no import needed
def test_sort_performance(benchmark) -> None:
    data = list(range(10000, 0, -1))
    result = benchmark(sorted, data)
    assert result == list(range(1, 10001))


def test_json_parse_performance(benchmark) -> None:
    """Benchmark with setup to exclude data preparation from timing."""
    import json
    payload = json.dumps({"users": [{"id": i, "name": f"user_{i}"} for i in range(1000)]})
    result = benchmark(json.loads, payload)
    assert len(result["users"]) == 1000
```

```bash
pytest --benchmark-only --benchmark-sort=mean
pytest --benchmark-compare       # compare against saved baseline
pytest --benchmark-save=baseline # save current results
```

### Benchmarking Rules

1. **Warmup runs** -- JIT compilers, caches, and OS page faults all affect the first run. Always include warmup.
2. **Multiple iterations** -- A single measurement is noise. Run at least 10 iterations and report mean, median, and stddev.
3. **Isolate variables** -- Change one thing at a time. Benchmark before and after each optimization.
4. **Control the environment** -- Close other applications, disable turbo boost for CPU benchmarks, use consistent hardware.
5. **Statistical significance** -- If the difference is less than 2x the standard deviation, it's probably noise.
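
Rule 5 in miniature. This is only the rule of thumb above, not a formal test (reach for a t-test when the decision matters); the timing lists are synthetic:

```python
import statistics

def probably_noise(before: list[float], after: list[float]) -> bool:
    """Apply the 2x-stddev rule of thumb to two sets of timings."""
    diff = abs(statistics.mean(before) - statistics.mean(after))
    spread = max(statistics.stdev(before), statistics.stdev(after))
    return diff < 2 * spread

baseline = [10.2, 10.4, 10.1, 10.3, 10.2]   # seconds
optimized = [10.0, 10.3, 10.1, 10.2, 10.1]  # within noise
print(probably_noise(baseline, optimized))  # True

faster = [8.1, 8.0, 8.2, 8.1, 8.0]          # clearly better
print(probably_noise(baseline, faster))     # False
```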

---

## Interpreting Results

### Reading Flamegraphs

Flamegraphs aggregate sampled call stacks. The x-axis is **not** time -- frames are merged and sorted alphabetically. Width represents the proportion of total samples.

- **Wide bars at the top** = functions that consume a lot of CPU directly
- **Wide bars at the bottom** = functions that call expensive children
- **Plateaus** (flat tops) = functions where time is spent in the function itself, not its children
- **Look for:** the widest bars at the top of the graph -- these are your hot paths

### Identifying Hot Paths

A hot path is the sequence of function calls that consumes the most cumulative time:

1. Sort by cumulative time (`cumtime` in cProfile)
2. Find the top-level function with the highest cumulative time
3. Follow its callees -- which child function consumes the most?
4. Repeat until you reach a leaf function

The hot path tells you where optimization effort will have the most impact.
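
The steps above can be run end to end with `cProfile` and `pstats` from the standard library. Here on a deliberately slow toy workload (`slow_parse` and `pipeline` are invented for the example):

```python
import cProfile
import io
import pstats

def slow_parse(item: str) -> str:
    return "".join(sorted(item)) * 50  # artificial hot leaf

def pipeline() -> list[str]:
    return [slow_parse(f"record-{i}") for i in range(20_000)]

profiler = cProfile.Profile()
profiler.enable()
pipeline()
profiler.disable()

# Steps 1-2: sort by cumulative time and print the top entries;
# then follow the widest callee down (step 3) in the printed table
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
print(report)
```

In the output, `pipeline` tops the `cumtime` column and `slow_parse` sits just below it -- the hot path here is two hops deep.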

### Memory Leak Patterns

Signs of a memory leak:
- Memory usage grows linearly with time/requests
- `gc.collect()` doesn't reclaim memory
- Heap snapshots show growing object counts for a specific type

Common causes:
- **Unbounded caches** -- dictionaries that grow forever. Fix: use `functools.lru_cache(maxsize=N)` or TTL-based caching.
- **Event listener accumulation** -- listeners added but never removed. Fix: use weak references or explicit cleanup.
- **Reference cycles holding large objects** -- cycles keep memory alive until the cyclic GC runs (and before Python 3.4, cycles with `__del__` were never collected). Fix: use `weakref` to break the cycle.
- **Global state accumulation** -- appending to module-level lists. Fix: scope the collection to the request/session lifecycle.
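
The first fix above in miniature: an unbounded memo dict grows with every distinct key, while `functools.lru_cache` with a cap evicts least-recently-used entries:

```python
from functools import lru_cache

leaky_cache: dict[int, int] = {}

def leaky_square(n: int) -> int:
    if n not in leaky_cache:
        leaky_cache[n] = n * n  # grows forever: one entry per distinct n
    return leaky_cache[n]

@lru_cache(maxsize=1024)
def bounded_square(n: int) -> int:
    return n * n  # LRU eviction caps the cache at 1024 entries

for i in range(100_000):
    leaky_square(i)
    bounded_square(i)

print(len(leaky_cache))                      # 100000
print(bounded_square.cache_info().currsize)  # 1024
```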
318
+
319
+ > **Deep dive:** See `references/interpreting-results.md` for annotated examples of profiler output and how to read them.
320
+
321
+ ---
322
+
323
+ ## Ambiguity Policy
324
+
325
+ These defaults apply when the user does not specify a preference. State the assumption when making a choice:
326
+
327
+ - **Profiler choice:** Default to py-spy for Python (low overhead, flamegraph output), clinic.js for Node.js, and Chrome DevTools for browser. Use cProfile when the user needs exact call counts.
328
+ - **Benchmark iterations:** Default to at least 10 iterations with 3 warmup runs. Increase for sub-millisecond operations.
329
+ - **Metric focus:** Default to wall-clock time. Switch to CPU time when I/O is deliberately excluded. Switch to memory when the user mentions "memory", "leak", or "OOM".
330
+ - **Optimization scope:** Optimize only the identified hot path. Do not refactor surrounding code for "consistency" unless it's part of the hot path.
331
+ - **Baseline requirement:** Always establish a baseline measurement before optimizing. Refuse to optimize without one -- "it feels slow" is not a baseline.
332
+ - **Reporting:** Report absolute numbers (ms, MB) alongside relative improvements (%). A 50% improvement from 2ms to 1ms matters less than a 10% improvement from 10s to 9s.
333
+
334
+ ---
335
+
336
+ ## Reference Files
337
+
338
+ | File | Contents |
339
+ |------|----------|
340
+ | `references/tool-commands.md` | Full command reference for Python, JavaScript, and system profiling tools with all flags and options |
341
+ | `references/interpreting-results.md` | How to read profiler output: annotated cProfile tables, flamegraph walkthroughs, memory timeline interpretation, and benchmark result analysis |