@leejungkiin/awkit 1.3.8 → 1.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (33)
  1. package/bin/awk.js +204 -52
  2. package/core/AGENTS.md +38 -0
  3. package/core/GEMINI.md.bak +126 -199
  4. package/package.json +1 -1
  5. package/skills/awf-session-restore/SKILL.md +12 -2
  6. package/skills/brainstorm-agent/SKILL.md +11 -8
  7. package/skills/gitnexus/gitnexus-cli/SKILL.md +82 -0
  8. package/skills/gitnexus/gitnexus-debugging/SKILL.md +89 -0
  9. package/skills/gitnexus/gitnexus-exploring/SKILL.md +78 -0
  10. package/skills/gitnexus/gitnexus-guide/SKILL.md +64 -0
  11. package/skills/gitnexus/gitnexus-impact-analysis/SKILL.md +97 -0
  12. package/skills/gitnexus/gitnexus-refactoring/SKILL.md +121 -0
  13. package/skills/nm-memory-sync/SKILL.md +14 -1
  14. package/skills/orchestrator/SKILL.md +0 -38
  15. package/skills/ship-to-code/SKILL.md +115 -0
  16. package/skills/single-flow-task-execution/SKILL.md +409 -0
  17. package/skills/single-flow-task-execution/code-quality-reviewer-prompt.md +20 -0
  18. package/skills/single-flow-task-execution/implementer-prompt.md +78 -0
  19. package/skills/single-flow-task-execution/spec-reviewer-prompt.md +61 -0
  20. package/skills/symphony-enforcer/SKILL.md +36 -18
  21. package/skills/trello-sync/SKILL.md +25 -17
  22. package/templates/CODEBASE.md +26 -42
  23. package/templates/configs/trello-config.json +2 -2
  24. package/templates/project-identity/android.json +10 -0
  25. package/templates/project-identity/backend-nestjs.json +10 -0
  26. package/templates/project-identity/expo.json +10 -0
  27. package/templates/project-identity/ios.json +10 -0
  28. package/templates/project-identity/web-nextjs.json +10 -0
  29. package/templates/workflow_dual_mode_template.md +5 -5
  30. package/workflows/_uncategorized/conductor-codex.md +125 -0
  31. package/workflows/_uncategorized/conductor.md +97 -0
  32. package/workflows/_uncategorized/trello-sync.md +52 -0
  33. package/workflows/quality/visual-debug.md +66 -12
@@ -0,0 +1,409 @@
---
name: single-flow-task-execution
description: Use when executing implementation plans, handling multiple independent tasks, or doing structured task-by-task development with review gates in Antigravity.
---

# Single-Flow Task Execution

Execute plans by working through one task at a time, with a two-stage review after each: spec compliance review first, then code quality review.

**Core principle:** One task at a time + two-stage review (spec, then quality) = high quality, disciplined iteration.

## Antigravity Execution Model

Antigravity does NOT support parallel coding subagents. All work happens in a single execution thread.

**Rules:**

1. **One active task only** — never work on multiple tasks simultaneously.
2. **One execution thread only** — no parallel dispatch.
3. **No parallel coding subagents** — Antigravity does not have `Task(...)`.
4. **Browser automation** may use `browser_subagent` in isolated steps.
5. **Track progress** via the Symphony task system. Update task status at each state change.
6. **Use `task_boundary`** to clearly delineate each unit of work.

## When to Use

```dot
digraph when_to_use {
    "Have implementation plan?" [shape=diamond];
    "Tasks mostly independent?" [shape=diamond];
    "Multiple problems to solve?" [shape=diamond];
    "single-flow-task-execution" [shape=box];
    "executing-plans" [shape=box];
    "Manual execution or brainstorm first" [shape=box];

    "Have implementation plan?" -> "Tasks mostly independent?" [label="yes"];
    "Have implementation plan?" -> "Manual execution or brainstorm first" [label="no"];
    "Tasks mostly independent?" -> "single-flow-task-execution" [label="yes"];
    "Tasks mostly independent?" -> "Manual execution or brainstorm first" [label="no - tightly coupled"];
    "Multiple problems to solve?" -> "single-flow-task-execution" [label="yes - work through them sequentially"];
    "Multiple problems to solve?" -> "Manual execution or brainstorm first" [label="no - single task"];
}
```

**Use when:**

- You have an implementation plan with multiple independent tasks
- 2+ test files are failing with different root causes (work through them one at a time)
- Multiple subsystems are broken independently
- Each problem can be understood without context from the others
- Structured execution with quality gates is needed

**Don't use when:**

- Failures are related (fixing one might fix others) — investigate together first
- Tasks are tightly coupled and need full system understanding
- It's a single simple task that doesn't need review structure

**vs. Executing Plans (worktree-based):**

- Same session (no context switch)
- Fresh `task_boundary` per task (clean scope)
- Two-stage review after each task: spec compliance first, then code quality
- Faster iteration (no human in the loop between tasks)

## The Process

```dot
digraph process {
    rankdir=TB;

    subgraph cluster_per_task {
        label="Per Task";
        "Execute implementation (./implementer-prompt.md)" [shape=box];
        "Questions about requirements?" [shape=diamond];
        "Answer questions, provide context" [shape=box];
        "Implement, test, commit, self-review" [shape=box];
        "Run spec compliance review (./spec-reviewer-prompt.md)" [shape=box];
        "Spec confirms code matches spec?" [shape=diamond];
        "Fix spec gaps" [shape=box];
        "Run code quality review (./code-quality-reviewer-prompt.md)" [shape=box];
        "Code quality approved?" [shape=diamond];
        "Fix quality issues" [shape=box];
        "Mark task complete via Symphony" [shape=box];
    }

    "Read plan, extract all tasks with full text, note context" [shape=box];
    "More tasks remain?" [shape=diamond];
    "Run final code review for entire implementation" [shape=box];
    "Complete Symphony task and present next steps" [shape=box, style=filled, fillcolor=lightgreen];

    "Read plan, extract all tasks with full text, note context" -> "Execute implementation (./implementer-prompt.md)";
    "Execute implementation (./implementer-prompt.md)" -> "Questions about requirements?";
    "Questions about requirements?" -> "Answer questions, provide context" [label="yes"];
    "Answer questions, provide context" -> "Execute implementation (./implementer-prompt.md)";
    "Questions about requirements?" -> "Implement, test, commit, self-review" [label="no"];
    "Implement, test, commit, self-review" -> "Run spec compliance review (./spec-reviewer-prompt.md)";
    "Run spec compliance review (./spec-reviewer-prompt.md)" -> "Spec confirms code matches spec?";
    "Spec confirms code matches spec?" -> "Fix spec gaps" [label="no"];
    "Fix spec gaps" -> "Run spec compliance review (./spec-reviewer-prompt.md)" [label="re-review"];
    "Spec confirms code matches spec?" -> "Run code quality review (./code-quality-reviewer-prompt.md)" [label="yes"];
    "Run code quality review (./code-quality-reviewer-prompt.md)" -> "Code quality approved?";
    "Code quality approved?" -> "Fix quality issues" [label="no"];
    "Fix quality issues" -> "Run code quality review (./code-quality-reviewer-prompt.md)" [label="re-review"];
    "Code quality approved?" -> "Mark task complete via Symphony" [label="yes"];
    "Mark task complete via Symphony" -> "More tasks remain?";
    "More tasks remain?" -> "Execute implementation (./implementer-prompt.md)" [label="yes"];
    "More tasks remain?" -> "Run final code review for entire implementation" [label="no"];
    "Run final code review for entire implementation" -> "Complete Symphony task and present next steps";
}
```
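The per-task control flow above can be sketched as a plain loop. This is an illustrative sketch only: `implement`, `spec_review`, `quality_review`, and `fix` stand in for the prompt-template steps and are not real Antigravity or Symphony APIs.

```python
def run_task(task, implement, spec_review, quality_review, fix):
    """Drive one task through implement -> spec gate -> quality gate.

    Reviewers return a list of issues; an empty list means the gate
    passes. Each gate loops fix -> re-review until clean, mirroring
    the re-review edges in the diagram.
    """
    implement(task)
    while issues := spec_review(task):      # spec compliance gate first
        fix(task, issues)
    while issues := quality_review(task):   # code quality gate second
        fix(task, issues)
    return "done"
```

Note the ordering this encodes: the quality gate is never entered while the spec gate still reports issues.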

## UI-First Task Ordering (Gate 4 Three-Phase — v12.3)

When a task set includes UI components (COMPLEX or MODERATE), tasks MUST be ordered in three phases:

### Phase A: Infrastructure Tasks
```
Priority: Execute FIRST
Examples:
- Add dependencies (Gradle, SPM, CocoaPods)
- Create project structure (packages, modules, DI)
- Set up navigation skeleton (NavGraph, Router)
- Configure build variants, signing
Gate: App MUST build successfully → proceed
```

### Phase B: UI Shell Tasks (Mock Data)
```
Priority: Execute SECOND, BEFORE any logic tasks
Examples:
- Build all screen layouts with static/mock data
- Implement navigation between screens
- Add animations, transitions, loading/empty/error states
- Wire up UI components (no real API/DB calls)
Gate: 🧪 USER TEST CHECKPOINT — user must test UI on device
→ Present test guidance (see symphony-enforcer TP1.7)
→ User confirms UI OK → proceed to Phase C
→ User reports issue → fix → re-checkpoint
```

### Phase C: Logic Tasks (Per Feature)
```
Priority: Execute LAST, after UI is confirmed
Examples:
- Replace mock data with real API/DB calls
- Implement business logic, validation
- Add error handling, retry, offline support
- Wire up hardware features (camera, GPS, sensors)
Gate: 🧪 USER TEST CHECKPOINT per feature (batch small ones)
→ Especially important for hardware-dependent features
```

### Task Sorting Rule
```
When creating task list from implementation plan:
1. Tag each task: [INFRA] [UI] [LOGIC]
2. Sort: INFRA first → UI second → LOGIC last
3. Within each phase: respect dependency ordering
4. Between phases: MANDATORY checkpoint where indicated
```
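The sorting rule amounts to a stable sort on the phase tag. A minimal sketch (the task records and names here are hypothetical, not part of the skill):

```python
# Phase precedence: infrastructure, then UI shell, then logic.
PHASE_ORDER = {"INFRA": 0, "UI": 1, "LOGIC": 2}

def sort_tasks(tasks):
    """Order tagged tasks INFRA -> UI -> LOGIC. Python's sort is stable,
    so the plan's dependency order within each phase is preserved."""
    return sorted(tasks, key=lambda task: PHASE_ORDER[task["tag"]])

plan = [
    {"name": "screen layouts (mock data)", "tag": "UI"},
    {"name": "add dependencies", "tag": "INFRA"},
    {"name": "real API calls", "tag": "LOGIC"},
    {"name": "navigation skeleton", "tag": "INFRA"},
]
ordered = sort_tasks(plan)
# Both INFRA tasks come first, still in their original relative order.
```

The mandatory checkpoints between phases are a process step, not something the sort itself can enforce.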

## Task Decomposition

When facing multiple problems (e.g., 5 test failures across 3 files):

### 1. Identify Independent Domains

Group failures by what's broken:

- File A tests: User authentication flow
- File B tests: Data validation logic
- File C tests: API response handling

Each domain is independent — fixing authentication doesn't affect validation tests.
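Grouping failures into independent domains can be as simple as bucketing by file. An illustrative sketch (the failure records, file names, and test names are made up):

```python
from collections import defaultdict

def group_by_file(failures):
    """Bucket test failures by file; each bucket becomes one task unit."""
    domains = defaultdict(list)
    for failure in failures:
        domains[failure["file"]].append(failure["test"])
    return dict(domains)

failures = [
    {"file": "user-auth.test.ts", "test": "rejects expired token"},
    {"file": "validation.test.ts", "test": "flags empty email"},
    {"file": "user-auth.test.ts", "test": "refreshes session"},
]
domains = group_by_file(failures)  # two domains -> two task units
```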

### 2. Create Task Units

Each task gets:

- **Specific scope:** One test file or subsystem
- **Clear goal:** Make these tests pass / implement this feature
- **Constraints:** Don't change unrelated code
- **Expected output:** Summary of what changed and verification results

### 3. Execute Sequentially with Review

Work through each task one at a time using the full review cycle.

### 4. Review and Integrate

After all tasks:

- Run full test suite to verify no regressions
- Check for conflicts between task changes
- Run final code review on entire implementation

## Task Brief Structure

For each task, prepare:

```
task_boundary:
  description: "Implement Task N: [task name]"
  prompt: |
    ## Task Description
    [FULL TEXT of task from plan — paste it here]

    ## Context
    [Where this fits, dependencies, architectural context]

    ## Constraints
    - Only modify [specific files/directories]
    - Follow existing patterns in the codebase
    - Write tests for new functionality

    ## Verification
    - Run: [specific test command]
    - Expected: [what success looks like]
```

**Key:** Provide full task text and context upfront. Don't make the task boundary re-read the plan file.

## Review Templates

This skill includes prompt templates for structured reviews:

- **`./implementer-prompt.md`** — Template for implementation task boundaries
- **`./spec-reviewer-prompt.md`** — Template for spec compliance review (did we build what was requested?)
- **`./code-quality-reviewer-prompt.md`** — Template for code quality review (is it well-built?)

**Review order matters:** Always run spec compliance FIRST, then code quality. There's no point reviewing code quality if the implementation doesn't match the spec.

## Checkpoint Pattern

At logical boundaries (after each task, at major milestones), report:

- **What changed** — files modified, features implemented
- **What verification ran** — test results, lint results
- **What remains** — remaining tasks, known issues

Update Symphony task progress with current status.
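A checkpoint report is just those three lists rendered consistently. A sketch under loose assumptions (field names are illustrative; this is not a Symphony API):

```python
def checkpoint_report(changed, verified, remaining):
    """Render the what-changed / what-ran / what-remains checkpoint."""
    def row(label, items):
        # Empty sections are reported explicitly rather than omitted.
        return f"{label}: " + (", ".join(items) or "none")
    return "\n".join([
        row("What changed", changed),
        row("Verification", verified),
        row("What remains", remaining),
    ])
```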

## Common Mistakes

**Task scoping:**

- **Bad:** "Fix all the tests" — loses focus
- **Good:** "Fix user-auth.test.ts failures" — clear scope

**Context:**

- **Bad:** "Fix the validation bug" — unclear where
- **Good:** Paste error messages, test names, relevant code paths

**Constraints:**

- **Bad:** No constraints — the task might refactor everything
- **Good:** "Only modify src/auth/ directory"

**Output:**

- **Bad:** "Fix it" — no visibility into what changed
- **Good:** "Report: root cause, changes made, test results"

**Reviews:**

- **Bad:** "It works, move on" — quality debt
- **Good:** Implement → spec review → quality review → next task

## Example Workflow

```
You: I'm using single-flow-task-execution to execute this plan.

[Read plan file: docs/plans/feature-plan.md]
[Extract all 5 tasks with full text and context]
[Create Symphony tasks for tracking]

--- Task 1: Hook installation script ---

[Prepare task brief with full text + context]
[Execute implementation following ./implementer-prompt.md structure]

Questions: "Should the hook be installed at user or system level?"
Answer: "User level (~/.config/superpowers/hooks/)"

Implementation:
- Implemented install-hook command
- Added tests, 5/5 passing
- Self-review: Found I missed the --force flag, added it
- Committed

[Run spec compliance review following ./spec-reviewer-prompt.md]
Spec review: Spec compliant — all requirements met, nothing extra

[Run code quality review following ./code-quality-reviewer-prompt.md]
Code review: Strengths: Good test coverage, clean. Issues: None. Approved.

[Mark Task 1 complete in Symphony]

--- Task 2: Recovery modes ---

[Prepare task brief with full text + context]
[Execute implementation]

Implementation:
- Added verify/repair modes
- 8/8 tests passing
- Self-review: All good
- Committed

[Run spec compliance review]
Spec review: Issues found:
- Missing: Progress reporting (spec says "report every 100 items")
- Extra: Added --json flag (not requested)

[Fix issues: remove --json flag, add progress reporting]
[Run spec compliance review again]
Spec review: Spec compliant now

[Run code quality review]
Code review: Issue (Important): Magic number (100) should be a constant

[Fix: extract PROGRESS_INTERVAL constant]
[Run code quality review again]
Code review: Approved

[Mark Task 2 complete in Symphony]

... [Continue through remaining tasks] ...

[After all tasks complete]
[Run final code review on entire implementation]
Final review: All requirements met, ready to merge

[Complete Symphony task and present next steps]
Done!
```

## Red Flags

**Never:**

- Start implementation on the main/master branch without explicit user consent
- Skip reviews (spec compliance OR code quality)
- Proceed with unfixed review issues
- Work on multiple tasks simultaneously
- Skip scene-setting context (the task needs to understand where it fits)
- Accept "close enough" on spec compliance (reviewer found issues = not done)
- Skip review loops (reviewer found issues = fix = review again)
- Let self-review replace actual review (both are needed)
- **Start code quality review before spec compliance passes** (wrong order)
- Move to the next task while either review has open issues

**If you have questions about requirements:**

- Ask clearly and wait for answers
- Don't guess or make assumptions
- Better to ask upfront than rework later

**If the reviewer finds issues:**

- Fix them
- Run the reviewer again
- Repeat until approved
- Don't skip the re-review

## Completion

Before claiming all work is done:

1. Ensure all Symphony tasks are marked `done` or `cancelled`
2. Run the full test/validation command
3. Verify no regressions across all tasks
4. Summarize evidence (test output, review approvals)

## Advantages

**Structured execution:**

- Clear task boundaries prevent scope creep
- Review gates catch issues early (cheaper than debugging later)
- Progress tracking provides visibility

**Quality gates:**

- Self-review catches obvious issues before handoff
- Two-stage review: spec compliance prevents over/under-building; code quality ensures maintainability
- Review loops ensure fixes actually work

**Efficiency:**

- Provide full task text upfront (no re-reading plan files)
- Controller curates exactly what context is needed
- Questions are surfaced before work begins (not after)
- Sequential execution avoids conflicts between tasks

## Integration

**Required workflow skills:**

- **`~/.gemini/antigravity/skills/symphony-orchestrator/SKILL.md`** — Task tracking and lifecycle
- **`~/.gemini/antigravity/skills/symphony-enforcer/SKILL.md`** — Enforce task discipline

**Should also use:**

- **test-driven-development** — Follow TDD for each task
- **verification-before-completion** — Final verification checklist
@@ -0,0 +1,20 @@
# Code Quality Reviewer Prompt Template

Use this template when running the code quality review step in single-flow mode.

**Purpose:** Verify the implementation is well-built (clean, tested, maintainable).

**Only proceed after the spec compliance review passes.**

```
task_boundary:
  Use template at requesting-code-review/code-reviewer.md

  WHAT_WAS_IMPLEMENTED: [from implementer's report]
  PLAN_OR_REQUIREMENTS: Task N from [plan-file]
  BASE_SHA: [commit before task]
  HEAD_SHA: [current commit]
  DESCRIPTION: [task summary]
```

**Code reviewer returns:** Strengths, Issues (Critical/Important/Minor), Assessment
@@ -0,0 +1,78 @@
# Implementer Task Template

Use this template when executing an implementation task in single-flow mode.

```
task_boundary:
  description: "Implement Task N: [task name]"
  prompt: |
    You are implementing Task N: [task name]

    ## Task Description

    [FULL TEXT of task from plan - paste it here]

    ## Context

    [Scene-setting: where this fits, dependencies, architectural context]

    ## Before You Begin

    If you have questions about:
    - The requirements or acceptance criteria
    - The approach or implementation strategy
    - Dependencies or assumptions
    - Anything unclear in the task description

    **Ask them now.** Raise any concerns before starting work.

    ## Your Job

    Once you're clear on requirements:
    1. Implement exactly what the task specifies
    2. Write tests (following TDD if task says to)
    3. Verify implementation works
    4. Commit your work
    5. Self-review (see below)
    6. Report back

    Work from: [directory]

    **While you work:** If you encounter something unexpected or unclear, **ask questions**.
    It's always OK to pause and clarify. Don't guess or make assumptions.

    ## Before Reporting Back: Self-Review

    Review your work with fresh eyes. Ask yourself:

    **Completeness:**
    - Did I fully implement everything in the spec?
    - Did I miss any requirements?
    - Are there edge cases I didn't handle?

    **Quality:**
    - Is this my best work?
    - Are names clear and accurate (match what things do, not how they work)?
    - Is the code clean and maintainable?

    **Discipline:**
    - Did I avoid overbuilding (YAGNI)?
    - Did I only build what was requested?
    - Did I follow existing patterns in the codebase?

    **Testing:**
    - Do tests actually verify behavior (not just mock behavior)?
    - Did I follow TDD if required?
    - Are tests comprehensive?

    If you find issues during self-review, fix them now before reporting.

    ## Report Format

    When done, report:
    - What you implemented
    - What you tested and test results
    - Files changed
    - Self-review findings (if any)
    - Any issues or concerns
```
@@ -0,0 +1,61 @@
# Spec Compliance Reviewer Prompt Template

Use this template when running a spec compliance review step in single-flow mode.

**Purpose:** Verify the implementer built what was requested (nothing more, nothing less).

```
task_boundary:
  description: "Review spec compliance for Task N"
  prompt: |
    You are reviewing whether an implementation matches its specification.

    ## What Was Requested

    [FULL TEXT of task requirements]

    ## What Implementer Claims They Built

    [From implementer's report]

    ## CRITICAL: Do Not Trust the Report

    The implementer finished suspiciously quickly. Their report may be incomplete,
    inaccurate, or optimistic. You MUST verify everything independently.

    **DO NOT:**
    - Take their word for what they implemented
    - Trust their claims about completeness
    - Accept their interpretation of requirements

    **DO:**
    - Read the actual code they wrote
    - Compare the actual implementation to requirements line by line
    - Check for missing pieces they claimed to implement
    - Look for extra features they didn't mention

    ## Your Job

    Read the implementation code and verify:

    **Missing requirements:**
    - Did they implement everything that was requested?
    - Are there requirements they skipped or missed?
    - Did they claim something works but not actually implement it?

    **Extra/unneeded work:**
    - Did they build things that weren't requested?
    - Did they over-engineer or add unnecessary features?
    - Did they add "nice to haves" that weren't in the spec?

    **Misunderstandings:**
    - Did they interpret requirements differently than intended?
    - Did they solve the wrong problem?
    - Did they implement the right feature but in the wrong way?

    **Verify by reading code, not by trusting the report.**

    Report:
    - ✅ Spec compliant (if everything matches after code inspection)
    - ❌ Issues found: [list specifically what's missing or extra, with file:line references]
```