@tekyzinc/gsd-t 2.31.17 → 2.31.18
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

@@ -17,6 +17,30 @@ If status is not VERIFIED:
 
 If `--force` flag provided, proceed with warning in archive.
 
+## Step 1.5: Smoke Test Artifact Gate (MANDATORY — Categories 2 and 7)
+
+Before archiving, verify that high-risk features have testable artifacts. This gate catches what code review and unit tests cannot.
+
+**Scan this milestone's domains for any of the following:**
+- Audio capture/playback, speech recognition/synthesis
+- GPU/WebGPU/WebGL compute or rendering
+- ML inference, model loading, quantized model execution
+- Background workers, service workers, IPC channels
+- Native APIs (camera, bluetooth, filesystem, microphone)
+- WebAssembly modules
+- Any feature whose only prior "test" was manual user interaction
+
+**For each high-risk feature found:**
+
+1. Check that a smoke test script exists (in `scripts/`, `tests/`, or `.gsd-t/smoke-tests/`)
+2. Check that the script was run and passed (evidence in token-log.md, CI output, or a `.gsd-t/smoke-tests/{feature}.md` file with run results)
+3. If manual steps remain unavoidable: `.gsd-t/smoke-tests/{feature}.md` must exist documenting exact steps and confirming they passed
+
+**If any high-risk feature lacks a smoke test artifact → BLOCK completion.**
+Do not proceed to archiving. Create the smoke test now, run it, confirm it passes, then continue.
+
+> This gate exists because complete-milestone is the last opportunity to catch "shipped blind" features before they become user-facing bugs requiring 15 debug sessions to resolve.
+
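The artifact checks above can be sketched as a tiny harness. This is an illustrative sketch, not part of the released package: the script name, the `feature`/`steps` shape, and the example check are invented, and the report string is one possible body for a `.gsd-t/smoke-tests/{feature}.md` file.

```javascript
// Hypothetical smoke-test harness (illustrative; not shipped with @tekyzinc/gsd-t).
// Runs each runtime check and produces the artifact text that Step 1.5 requires.
function runSmokeTest(feature, steps) {
  const results = steps.map(({ name, check }) => {
    try {
      // A check must exercise real runtime behavior and return true on success.
      return { name, status: check() === true ? "PASS" : "FAIL" };
    } catch (err) {
      return { name, status: `FAIL (${err.message})` };
    }
  });
  const passed = results.every((r) => r.status === "PASS");
  const report = [
    `# Smoke test: ${feature}`,
    `Run: ${new Date().toISOString()}`,
    "",
    ...results.map((r) => `- ${r.name}: ${r.status}`),
    "",
    `Overall: ${passed ? "PASS" : "FAIL"}`,
  ].join("\n");
  return { passed, report };
}

// Invented example check; a real one would exercise the feature end to end.
const { passed, report } = runSmokeTest("audio-capture", [
  { name: "encoder produces non-empty output", check: () => [1, 2, 3].length > 0 },
]);
console.log(report); // write this text to .gsd-t/smoke-tests/audio-capture.md
```

A gate like this is only evidence if the report file is committed alongside the milestone, which is why the rules above look for the artifact rather than a verbal claim that testing happened.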
 ## Step 2: Gap Analysis Gate
 
 After verification passes, run a gap analysis against `docs/requirements.md` scoped to this milestone's deliverables:
package/commands/gsd-t-debug.md CHANGED

@@ -71,6 +71,27 @@ The contract didn't specify something it should have. Symptoms:
 
 → Update the contract, then fix implementations on both sides.
 
+## Step 2.5: Reproduce First (MANDATORY — Category 5)
+
+**A fix attempt without a reproduction script is a guess, not a fix.**
+
+Before touching any code:
+
+1. **Write a reproduction script** that demonstrates the bug. Automate as much as possible:
+   - Unit/integration bug → write a failing test that proves the bug exists
+   - UI/audio/GPU/worker bug (not fully automatable) → write the closest possible script: a headless probe, a log-based trigger, a mock that replicates the failure path. Document the manual remainder explicitly.
+   - If you cannot write any form of reproduction → you do not yet understand the bug. Keep investigating until you can.
+
+2. **Run the reproduction** and confirm it fails before attempting any fix.
+
+3. **Never close a debug session with "ready for testing."** A session closes only when the reproduction script passes. If manual steps remain, document them explicitly and confirm they passed.
+
+4. **Log the reproduction script path** in `.gsd-t/progress.md` Decision Log: what it tests, how to run it, what passing looks like.
+
+> This rule exists because code review cannot detect silent runtime failures (GPU compute shaders, audio context state, worker message drops). Only execution proves correctness.
+
+---
+
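One minimal shape for such a reproduction, sketched in plain Node with a wholly hypothetical bug (the `parseDuration` function and its failing input are invented for illustration; a real reproduction would target the actual failing code path):

```javascript
// Illustrative reproduction script in the spirit of Step 2.5.
// Implementation under investigation: drops the minutes component (the bug).
function parseDuration(text) {
  const m = /^(\d+)m(\d+)s$/.exec(text);
  if (!m) throw new Error(`unparseable: ${text}`);
  return Number(m[2]); // BUG: should be Number(m[1]) * 60 + Number(m[2])
}

// The reproduction: states the expected value and proves the current code misses it.
function reproduce() {
  const got = parseDuration("2m05s");
  const want = 125; // 2 * 60 + 5
  return { got, want, reproduced: got !== want };
}

const r = reproduce();
console.log(r.reproduced
  ? `REPRODUCED: expected ${r.want}, got ${r.got}`
  : "NOT REPRODUCED: behavior is already correct");
```

Run it and confirm it reports REPRODUCED before fixing; the same script falling through to the NOT REPRODUCED branch afterwards is what lets the session close.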
 ## Step 3: Debug (Solo or Team)
 
 ### Deviation Rules
@@ -84,13 +105,14 @@ When you encounter unexpected situations during the fix:
 **3-attempt limit**: If your fix doesn't work after 3 attempts, log to `.gsd-t/deferred-items.md` and stop trying.
 
 ### Solo Mode
-1. Reproduce the issue
+1. Reproduce the issue — **reproduction script must exist before step 2** (see Step 2.5)
 2. Trace through the relevant domain(s)
 3. Check contract compliance at each boundary
 4. Identify root cause
 5. **Destructive Action Guard**: If the fix requires destructive or structural changes (dropping tables, removing columns, changing schema, replacing architecture patterns, removing working modules) → STOP and present the change to the user with what exists, what will change, what will break, and a safe migration path. Wait for explicit approval.
 6. Fix and test — **adapt the fix to existing structures**, not the other way around
 7. Update contracts if needed
+8. **Category 6 — Bug Isolation Check**: After applying the fix, run the FULL test suite and all smoke tests — not just the reproduction script. Do not assume the bug was isolated. A fix that resolves one failure frequently uncovers adjacent failures. Every test must pass before the session closes.
 
 ### Team Mode (for complex cross-domain bugs)
 ```
@@ -17,6 +17,63 @@ If `.gsd-t/` doesn't exist, create the full directory structure:
 └── progress.md
 ```
 
+## Step 1.5: Assumption Audit (MANDATORY — complete before domain work begins)
+
+Before partitioning, surface and lock down all assumptions baked into the requirements. Unexamined assumptions become architectural decisions no one approved.
+
+Work through each category below. For every match found, write the explicit disposition into the affected domain's `constraints.md` and into the Decision Log in `.gsd-t/progress.md`.
+
+---
+
+### Category 1: External Reference Assumptions
+
+Scan requirements for any external project, file, component, library, or URL mentioned by name or path. For each one found, explicitly confirm which disposition applies — and lock it in the contract before any domain touches it:
+
+| Disposition | Meaning |
+|-------------|---------|
+| `USE` | Import and depend on it — treat as a dependency |
+| `INSPECT` | Read source for patterns only — do not import or copy code |
+| `BUILD` | Build equivalent functionality from scratch — do not read or use it |
+
+**No external reference survives partition without a locked disposition.**
+
+Trigger phrases to watch for: "reference X", "like X", "similar to Y", "see W for how it handles Z", any file path or project name, any URL.
+
+> If Level 3 (Full Auto): state the inferred disposition and reason; lock it unless it's ambiguous.
+> If ambiguous (e.g., "reference X" could mean USE or INSPECT): pause and ask the user before proceeding.
+
+---
+
+### Category 3: Black Box Assumptions
+
+Any component, module, or library **not written in this milestone** that a domain will call, import, or depend on → the agent that executes that domain must read its source before treating it as correct. This includes internal project modules written in a previous milestone.
+
+For each such component identified:
+1. Name it explicitly in the domain's `constraints.md` under a `## Must Read Before Using` section
+2. List the specific functions or behaviors the domain depends on
+3. The execute agent is prohibited from treating it as a black box — it must read the listed items before implementing
+
+---
+
+### Category 4: User Intent Assumptions
+
+Scan requirements for ambiguous language. Flag every instance where intent could be interpreted more than one way. Common patterns:
+
+- "like X" / "similar to Y" — does this mean the same UX, the same architecture, or just the same concept?
+- "the way X handles it" — inspiration, direct port, or behavioral equivalent?
+- "reference Z" — does this mean read it, use it, or replicate it?
+- "build something that does W" — from scratch, or using an existing library?
+- Any requirement where a reasonable developer could make two different implementation choices
+
+For each ambiguous item:
+1. State the two (or more) possible interpretations explicitly
+2. State which interpretation you are locking in and why
+3. If genuinely unclear: pause and ask the user — do not infer and proceed
+
+> **Rule**: Ambiguous intent that reaches execute unresolved becomes a wrong assumption. Resolve it here or pay for it in debug sessions.
+
+---
+
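As one possible outcome of this audit, a domain's `constraints.md` might record dispositions like the following. All project and file names here are invented for illustration; only the section headings come from the rules above.

```markdown
## External Reference Dispositions

| Reference | Disposition | Rationale |
|-----------|-------------|-----------|
| `wavecodec` (npm) | USE | Requirement says "use wavecodec for decoding" |
| `../prototype/mixer.js` | INSPECT | "See how it handles fades" reads as patterns only |

## Must Read Before Using

- `src/audio/ring-buffer.js` (milestone 2): `push()`, overflow behavior when full
```

Writing the rationale next to each disposition is what makes the lock auditable later: a debug session can check whether the execute agent honored the disposition or silently reinterpreted it.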
 ## Step 2: Identify Domains
 
 Decompose the milestone into 2-5 independent domains. Each domain should:
package/commands/gsd-t-verify.md CHANGED

@@ -24,6 +24,37 @@ Run the full test audit directly:
 
 Verification cannot complete if any test fails or critical contract gaps remain.
 
+## Step 2.5: High-Risk Domain Gate (MANDATORY — Categories 2 and 7)
+
+Before running standard verification dimensions, check whether this milestone involves any high-risk domain:
+
+**High-risk domains**: audio capture/playback, GPU/WebGPU/WebGL, ML/inference/model loading, background workers, native APIs (camera, bluetooth, filesystem), IPC, WebAssembly, real-time data streams.
+
+**If any high-risk domain is present:**
+
+### Category 2 — Technology Reliability Gate
+Initialization success does not prove runtime correctness. These technologies can initialize cleanly and fail silently at runtime (compute shader errors, audio context state loss, worker message drops, inference failures).
+
+For each high-risk domain:
+1. A **smoke test script** must exist that exercises actual runtime behavior — not just initialization
+2. The smoke test must have been run and passed
+3. "It initialized without throwing" is NOT a passing smoke test
+4. If no smoke test exists → create one now before proceeding with any other verification dimension
+5. Smoke test failure → verification FAIL (not WARN)
+
+### Category 7 — Manual QA as Test Gate
+"The user will manually test it" is not a test artifact. Scan the milestone's domains for any feature whose acceptance criteria relies solely on manual user testing.
+
+For each such feature:
+1. A smoke test script must exist that automates as much of the verification as possible
+2. Any remaining manual steps must be explicitly documented in `.gsd-t/smoke-tests/{feature}.md` with exact steps and expected outcomes
+3. The documented manual steps must have been executed and passed (noted in the file)
+4. If neither automated smoke test nor documented manual procedure exists → verification FAIL
+
+> These gates exist because the pre-commit checklist "did you run the affected tests?" is meaningless when the only test is "user presses Ctrl+Space." That is not a test. It is hope.
+
+---
+
 ## Step 3: Define Verification Dimensions
 
 Standard dimensions (adjust based on project):
package/package.json CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "@tekyzinc/gsd-t",
-  "version": "2.31.17",
+  "version": "2.31.18",
   "description": "GSD-T: Contract-Driven Development for Claude Code — 46 slash commands with backlog management, impact analysis, test sync, milestone archival, and PRD generation",
   "author": "Tekyz, Inc.",
   "license": "MIT",