qualia-framework 4.1.0 → 4.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,238 @@
1
+ ---
2
+ name: qualia-postmortem
3
+ description: "Self-healing AI layer — when /qualia-verify returns FAIL, identify which agent/rule/skill should have caught the failure and propose a delta to that file so the same class of bug never recurs. Trigger on 'postmortem', 'why did the framework miss this', 'self-heal', 'qualia-postmortem', or auto-invoked by /qualia-verify on FAIL with --auto."
4
+ allowed-tools:
5
+ - Bash
6
+ - Read
7
+ - Write
8
+ - Edit
9
+ - Grep
10
+ - Glob
11
+ ---
12
+
13
+ # /qualia-postmortem — Self-Healing AI Layer
14
+
15
+ When the verifier finds a gap, fixing the **code** alone is not enough.
16
+ The framework let the bug through — that means a rule, agent prompt,
17
+ plan-checker check, or skill instruction was insufficient. This skill
18
+ runs after a verify FAIL and asks: *which file in the AI layer should
19
+ have caught this, and what delta would catch the next one like it?*
20
+
21
+ This is Cole Medin's **pillar 5** from the parallel-worktrees playbook
22
+ (NotebookLM 2026-04-25): "anytime we encounter a bug in a pull request,
23
+ we don't just fix the bug and move on, we fix the underlying system that
24
+ allowed for the bug." Without this loop, the same class of bug ships in
25
+ PR-3, PR-7, PR-11 of every project.
26
+
27
+ ## When to run
28
+
29
+ - **Manually:** `/qualia-postmortem` after any verify FAIL where the gap
30
+ feels preventable — i.e. a rule could have flagged it, a builder
31
+ instruction could have steered around it, a plan-checker rule could
32
+ have rejected the plan that produced it.
33
+ - **Auto:** `/qualia-verify --auto` will invoke this skill on every FAIL
34
+ before the gap-closure loop fires. The postmortem write happens before
35
+ the user re-plans, so the next planner spawn benefits from the
36
+ updated AI layer immediately.
37
+
38
+ ## Inputs
39
+
40
+ - `--phase N` (default: current phase from STATE.md)
41
+ - `--apply` (optional) — apply the proposed delta to disk. Without
42
+ `--apply`, the skill writes `.planning/phase-{N}-postmortem.md` for
43
+ human review and stops there.
44
+ - `--report-only` (optional) — just emit the analysis, write nothing.
45
+
46
+ If invoked from `/qualia-verify --auto`, default to writing the
47
+ postmortem report but **not** applying — applying touches the AI layer
48
+ itself, which is high-stakes. The user reviews and types `/qualia-learn`
49
+ or applies manually.
50
+
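The flags above compose; a minimal parsing sketch (the variable names `PHASE`, `APPLY`, `REPORT_ONLY` are illustrative, not part of the skill contract):

```shell
# Illustrative parsing of the three inputs. Real arguments come from the
# skill runner; `set --` here only demos the shape.
set -- --phase 3 --apply
PHASE=""; APPLY="false"; REPORT_ONLY="false"
while [ $# -gt 0 ]; do
  case "$1" in
    --phase) PHASE="$2"; shift 2 ;;
    --apply) APPLY="true"; shift ;;
    --report-only) REPORT_ONLY="true"; shift ;;
    *) shift ;;   # ignore anything unrecognized
  esac
done
# --report-only always wins: analysis only, nothing written.
[ "$REPORT_ONLY" = "true" ] && APPLY="false"
echo "phase=${PHASE:-current} apply=$APPLY report_only=$REPORT_ONLY"
```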
51
+ ## Process
52
+
53
+ ### 1. Load the failure
54
+
55
+ ```bash
56
+ node ~/.claude/bin/qualia-ui.js banner postmortem 2>/dev/null || true
57
+
58
+ PHASE="${PHASE:-$(node ~/.claude/bin/state.js check 2>/dev/null | node -e 'let s=""; process.stdin.on("data",d=>s+=d).on("end",()=>{try{console.log(JSON.parse(s).phase)}catch{console.log("")}})')}"
59
+ [ -z "$PHASE" ] && { echo "QUALIA: Could not resolve current phase. Pass --phase N explicitly."; exit 1; }
60
+
61
+ VERIFY_FILE=".planning/phase-${PHASE}-verification.md"
62
+ PLAN_FILE=".planning/phase-${PHASE}-plan.md"
63
+
64
+ [ -f "$VERIFY_FILE" ] || { echo "QUALIA: $VERIFY_FILE missing — nothing to post-mortem."; exit 1; }
65
+ [ -f "$PLAN_FILE" ] || echo "QUALIA: $PLAN_FILE missing — analysis will be partial."
66
+ ```
67
+
68
+ Read both files. The verification report tells you *what failed*. The
69
+ plan tells you *what was supposed to happen*. The gap between them is
70
+ where the AI layer fell short.
71
+
72
+ ### 2. Read the AI layer
73
+
74
+ The framework's prompts live in three places. Load all three so you can
75
+ match a finding to the file most likely responsible:
76
+
77
+ ```bash
78
+ # Agent prompts (planner, builder, plan-checker, verifier, etc.)
79
+ ls ~/.claude/agents/
80
+ # Skill descriptions (qualia-plan, qualia-build, qualia-verify, etc.)
81
+ ls ~/.claude/skills/
82
+ # Project rules + grounding
83
+ ls ~/.claude/rules/
84
+ # Already-curated knowledge
85
+ node ~/.claude/bin/knowledge.js
86
+ ```
87
+
88
+ You don't read all of them — you read the ones whose job description
89
+ matches the failure. Use this lookup:
90
+
91
+ | Failure shape | Likely AI-layer owner |
92
+ |---|---|
93
+ | Builder produced a stub / placeholder | `agents/builder.md` (Read Before Write, no-stub rule) |
94
+ | Plan listed Validation: `test -f file.ts` only — missed behavior check | `agents/plan-checker.md` (Rule 8: at least one grep-match or command-exit per task) |
95
+ | Wave 2 task ran before wave 1 committed | `agents/planner.md` (dependency graph) |
96
+ | Build passed locally, broke in CI | `rules/deployment.md` or a missing pre-deploy-gate scan |
97
+ | RLS missing on new table | `rules/security.md` + `agents/builder.md` (security persona handling) |
98
+ | Design regression — fonts off, contrast fail | `rules/frontend.md` + `skills/qualia-design/SKILL.md` |
99
+ | Migration unsafe (DROP without IF EXISTS, etc.) | `hooks/migration-guard.js` |
100
+ | Verifier missed it | `agents/verifier.md` — most embarrassing case, address with extra care |
101
+
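Before diving in, a mechanical first pass can shortlist owner candidates by grepping the AI layer for terms from the finding — a sketch (the `triage` helper is hypothetical; the directories match the listing in step 2):

```shell
# List files under the given roots that mention a term from the finding.
# Output is a shortlist, not a verdict — the table above stays authoritative.
triage() {
  term="$1"; shift
  grep -ril "$term" "$@" 2>/dev/null
}
# e.g.: triage "service_role" ~/.claude/agents ~/.claude/rules ~/.claude/skills
```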
102
+ ### 3. Diagnose
103
+
104
+ For **each** gap in the verification report (process them one at a time
105
+ when there are multiple), produce a four-field analysis:
106
+
107
+ ```markdown
108
+ ## Gap: {gap title from verification report}
109
+
110
+ **Severity:** {CRITICAL | HIGH | MEDIUM | LOW} (from verifier's rubric)
111
+ **Owner file:** `agents/builder.md` (or rules/X.md, skills/Y/SKILL.md, etc.)
112
+ **Why it slipped:**
113
+ Quote the relevant section of the owner file. Show the gap. The owner
114
+ file SHOULD have prevented this — either the rule wasn't there, or it
115
+ was there but not enforceable, or a rule was there but the agent was
116
+ free to ignore it.
117
+
118
+ **Proposed delta:**
119
+ Concrete diff for the owner file. New line, edited section, or rule
120
+ addition. Keep it minimal — one new sentence is better than a new
121
+ paragraph. The goal is "the next planner/builder spawn catches this
122
+ class of bug."
123
+ ```
124
+
125
+ ### 4. Write the postmortem report
126
+
127
+ ```bash
128
+ cat > .planning/phase-${PHASE}-postmortem.md <<'EOF'
129
+ # Phase {N} Postmortem
130
+
131
+ **Phase:** {N}
132
+ **Verify result:** FAIL ({gap_count} gaps)
133
+ **Date:** {ISO date}
134
+ **Run ID:** {short uid}
135
+
136
+ ## Findings
137
+
138
+ {one ## Gap section per gap, format from step 3}
139
+
140
+ ## Cumulative AI-layer drift
141
+
142
+ {If multiple postmortems exist for this project, group recurring owner
143
+ files: "agents/builder.md has been flagged 3 times this milestone — its
144
+ no-stub rule may need reinforcement or wave-2 stubs need a hard hook."}
145
+
146
+ ## Apply?
147
+
148
+ To apply all proposed deltas:
149
+ /qualia-postmortem --apply --phase {N}
150
+
151
+ To save the recurring patterns to knowledge instead (recommended for
152
+ project-spanning lessons):
153
+ /qualia-learn (pick the relevant entries from this report)
154
+
155
+ EOF
156
+ ```
157
+
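The "Cumulative AI-layer drift" section need not rely on recall — a sketch that counts implicated owner files across past reports, assuming each report keeps the `**Owner file:**` field from step 3:

```shell
# Rank owner files by how often past postmortems implicated them.
drift_summary() {
  grep -h '^\*\*Owner file:\*\*' "${1:-.planning}"/phase-*-postmortem.md 2>/dev/null \
    | sed 's/^\*\*Owner file:\*\* *//' \
    | tr -d '`' \
    | sort | uniq -c | sort -rn
}
drift_summary   # highest count first; three+ hits suggests a rule needs reinforcement
```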
158
+ ### 5. (Optional) Apply
159
+
160
+ If invoked with `--apply`, walk each "Proposed delta" and use the Edit
161
+ tool to make the literal change to the owner file. After every edit, run
162
+ the framework's own type/test gates (`node --test tests/runner.js` if you
163
+ modified anything in `bin/`, `agents/`, or `rules/`) to confirm no
164
+ regression.
165
+
166
+ If a proposed delta is to a `~/.claude/agents/X.md` file (the installed
167
+ copy), edit that copy directly — the user re-running the installer will
168
+ overwrite it next release, so also flag a TODO in the postmortem report
169
+ saying "this delta should be PR'd back to the framework repo at
170
+ `agents/X.md`" so it survives reinstall.
171
+
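That TODO can be appended mechanically — a sketch (the `note_pr_todo` helper and its arguments are illustrative):

```shell
# Append a reinstall-survival TODO to the postmortem report whenever a
# delta was applied to an installed copy rather than the framework repo.
note_pr_todo() {
  owner="$1"; report="$2"
  printf -- '- [ ] TODO: PR this delta back to the framework repo at `%s` — the installed copy is overwritten on reinstall\n' \
    "$owner" >> "$report"
}
# e.g.: note_pr_todo "agents/builder.md" ".planning/phase-3-postmortem.md"
```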
172
+ ### 6. Promote durable lessons to the knowledge layer
173
+
174
+ For lessons that apply across projects (e.g. "Supabase RLS must be in
175
+ the same migration as the table — applying it later creates a window
176
+ where data is unprotected"), append to the curated tier so future
177
+ builders pick it up:
178
+
179
+ ```bash
180
+ node ~/.claude/bin/knowledge.js append \
181
+ --type pattern \
182
+ --title "{lesson title}" \
183
+ --body "{lesson body}" \
184
+ --project "{project name from .planning/PROJECT.md or 'general' if cross-project}" \
185
+ --context "Postmortem from phase ${PHASE}, verify FAIL on {date}"
186
+ ```
187
+
188
+ ### 7. Summarize
189
+
190
+ ```
191
+ ⬢ Postmortem complete — phase {N}
192
+ {G} gaps analyzed
193
+ Owner files implicated:
194
+ - agents/builder.md (gap 1, gap 3)
195
+ - rules/security.md (gap 2)
196
+ Report: .planning/phase-${PHASE}-postmortem.md
197
+ {if --apply: deltas applied; framework reinstall TODO'd}
198
+ {if no --apply: review and run --apply OR /qualia-learn the patterns}
199
+ ```
200
+
201
+ ## Style
202
+
203
+ - **Be charitable.** The framework didn't fail because someone was
204
+ careless. It failed because a contract wasn't tight enough. Frame
205
+ every finding as "the contract was X; it should have been Y."
206
+ - **Keep deltas surgical.** A 2-line addition to a rule is durable; a
207
+ paragraph rewrite is brittle. Smaller deltas survive future framework
208
+ updates.
209
+ - **Don't reach.** If a gap genuinely doesn't map to an AI-layer file
210
+ (e.g. it's an external service outage), say so explicitly — don't
211
+ invent a rule to retroactively own it.
212
+ - **Rate limit yourself.** Don't propose more than 3 deltas per
213
+ postmortem. If there are 8 gaps, the top 3 by severity get deltas; the
214
+ rest get noted but not delta'd. Otherwise the AI layer becomes a museum
215
+ of edge cases.
216
+
217
+ ## Anti-patterns
218
+
219
+ - **Re-fixing the same code as the verifier.** This skill is about the
220
+ AI layer, not the codebase. The gap-closure loop (`/qualia-plan
221
+ {N} --gaps`) handles the code. Postmortem only touches `agents/`,
222
+ `rules/`, `skills/`, and the knowledge layer.
223
+ - **Auto-applying deltas to `~/.claude/agents/X.md` without flagging a
224
+ framework PR TODO.** That edit lasts until the next reinstall. Always
225
+ note the TODO so the lesson reaches the framework repo.
226
+ - **Promoting every postmortem finding to knowledge.** Most are
227
+ project-specific contract tightening — they belong in the project's
228
+ AI-layer files, not in cross-project knowledge. Only generalizable
229
+ patterns get appended via `knowledge.js`.
230
+
231
+ ## Output contract
232
+
233
+ The skill writes `.planning/phase-{N}-postmortem.md` and either applies
234
+ deltas (with `--apply`) or stops with a summary that points the user at
235
+ the next step (`--apply` to apply, or `/qualia-learn` to promote
236
+ specific patterns). No follow-up prompts. Idempotent: re-running on the
237
+ same phase appends `### Re-run {timestamp}` to the existing report
238
+ rather than overwriting.
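The append-don't-overwrite contract can be sketched as follows (`ensure_report` is a hypothetical helper; step 4 supplies the real template body):

```shell
# Re-running on the same phase appends a timestamped section; the first
# run creates the file (step 4 writes the full template).
ensure_report() {
  report="$1"
  if [ -f "$report" ]; then
    printf '\n### Re-run %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" >> "$report"
  else
    printf '# Phase postmortem\n' > "$report"
  fi
}
```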
@@ -80,10 +80,23 @@ Each session report gets a stable, sequential client-side identifier that travel
80
80
 
81
81
  ```bash
82
82
  # --dry-run: peek without incrementing
83
- if [ "$DRY_RUN" = "true" ]; then
84
- CLIENT_REPORT_ID=$(node ~/.claude/bin/state.js next-report-id --peek 2>/dev/null | node -e "process.stdout.write(JSON.parse(require('fs').readFileSync(0,'utf8')).report_id||'')")
85
- else
86
- CLIENT_REPORT_ID=$(node ~/.claude/bin/state.js next-report-id 2>/dev/null | node -e "process.stdout.write(JSON.parse(require('fs').readFileSync(0,'utf8')).report_id||'')")
83
+ # Wrap the pipe in try/catch so a state.js failure (missing tracking.json,
84
+ # corrupt JSON) produces a clear error instead of silently becoming "".
85
+ PEEK_FLAG=""
86
+ [ "$DRY_RUN" = "true" ] && PEEK_FLAG="--peek"
87
+ CLIENT_REPORT_ID=$(node ~/.claude/bin/state.js next-report-id $PEEK_FLAG 2>/dev/null | node -e "
88
+ try {
89
+ const raw = require('fs').readFileSync(0,'utf8');
90
+ if (!raw.trim()) process.exit(2);
91
+ const j = JSON.parse(raw);
92
+ if (!j.report_id) process.exit(3);
93
+ process.stdout.write(j.report_id);
94
+ } catch (e) { process.exit(1); }
95
+ ")
96
+
97
+ if [ -z "$CLIENT_REPORT_ID" ]; then
98
+ node ~/.claude/bin/qualia-ui.js fail "Could not obtain report ID from state.js — is .planning/tracking.json valid?"
99
+ exit 1
87
100
  fi
88
101
  ```
89
102
 
@@ -113,54 +126,73 @@ ERP_ENABLED=$(node -e "try{const c=JSON.parse(require('fs').readFileSync(require
113
126
 
114
127
  API_KEY=$(cat ~/.claude/.erp-api-key 2>/dev/null)
115
128
  REPORT_FILE=".planning/reports/report-{date}.md"
116
- SUBMITTED_BY=$(git config user.name)
129
+ SUBMITTED_BY=$(git config user.name || echo "unknown")
117
130
  SUBMITTED_AT=$(date -u +%Y-%m-%dT%H:%M:%SZ)
118
131
 
132
+ # Guard: ERP upload requires a non-empty API key. Without this check, curl
133
+ # would POST with "Authorization: Bearer " (blank bearer) and the server
134
+ # returns a generic 401 that is hard to diagnose.
135
+ if [ "$ERP_ENABLED" = "true" ] && [ -z "$API_KEY" ] && [ "$DRY_RUN" != "true" ]; then
136
+ node ~/.claude/bin/qualia-ui.js warn "ERP API key missing (~/.claude/.erp-api-key is empty or unreadable). Skipping upload."
137
+ node ~/.claude/bin/qualia-ui.js info "Ask Fawzi for the ERP key, save to ~/.claude/.erp-api-key, then re-run /qualia-report --upload-only."
138
+ ERP_ENABLED="false"
139
+ fi
140
+
119
141
  # Build structured JSON payload from tracking.json (matches ERP contract /api/v1/reports)
120
142
  # v4: include milestone_name, milestones[], team_id, project_id, git_remote,
121
143
  # session_started_at, last_pushed_at, build_count, deploy_count — the ERP
122
144
  # uses these to render the project tree (milestone → phases → unphased) correctly.
123
145
  # v4.0.4: client_report_id carries the QS-REPORT-NN identifier.
124
- PAYLOAD=$(node -e "
125
- const fs = require('fs');
126
- const t = JSON.parse(fs.readFileSync('.planning/tracking.json', 'utf8'));
127
- const notes = fs.readFileSync('$REPORT_FILE', 'utf8').substring(0, 60000);
128
- const commits = [];
129
- try {
130
- const { spawnSync } = require('child_process');
131
- const r = spawnSync('git', ['log', '--oneline', '--since=8 hours ago', '--format=%h'], { encoding: 'utf8', timeout: 3000 });
132
- if (r.stdout) commits.push(...r.stdout.trim().split('\n').filter(Boolean));
133
- } catch {}
134
- console.log(JSON.stringify({
135
- project: t.project || require('path').basename(process.cwd()),
136
- project_id: t.project_id || '',
137
- team_id: t.team_id || '',
138
- git_remote: t.git_remote || '',
139
- client: t.client || '',
140
- client_report_id: '$CLIENT_REPORT_ID',
141
- milestone: t.milestone || 1,
142
- milestone_name: t.milestone_name || '',
143
- milestones: Array.isArray(t.milestones) ? t.milestones : [],
144
- phase: t.phase,
145
- phase_name: t.phase_name,
146
- total_phases: t.total_phases,
147
- status: t.status,
148
- tasks_done: t.tasks_done || 0,
149
- tasks_total: t.tasks_total || 0,
150
- verification: t.verification || 'pending',
151
- gap_cycles: (t.gap_cycles || {})[String(t.phase)] || 0,
152
- build_count: t.build_count || 0,
153
- deploy_count: t.deploy_count || 0,
154
- deployed_url: t.deployed_url || '',
155
- session_started_at: t.session_started_at || '',
156
- last_pushed_at: t.last_pushed_at || '',
157
- lifetime: t.lifetime || {},
158
- commits: commits,
159
- notes: notes,
160
- submitted_by: '$SUBMITTED_BY',
161
- submitted_at: '$SUBMITTED_AT'
162
- }));
163
- ")
146
+ # Build payload. Pass user-controlled values (SUBMITTED_BY, CLIENT_REPORT_ID,
147
+ # SUBMITTED_AT, REPORT_FILE) via env vars instead of shell interpolation — a
148
+ # single quote or backslash in git user.name would otherwise break the node -e
149
+ # script silently. process.env.* is inert to shell metacharacters.
150
+ PAYLOAD=$(
151
+ SUBMITTED_BY="$SUBMITTED_BY" \
152
+ SUBMITTED_AT="$SUBMITTED_AT" \
153
+ CLIENT_REPORT_ID="$CLIENT_REPORT_ID" \
154
+ REPORT_FILE="$REPORT_FILE" \
155
+ node -e "
156
+ const fs = require('fs');
157
+ const t = JSON.parse(fs.readFileSync('.planning/tracking.json', 'utf8'));
158
+ const notes = fs.readFileSync(process.env.REPORT_FILE, 'utf8').substring(0, 60000);
159
+ const commits = [];
160
+ try {
161
+ const { spawnSync } = require('child_process');
162
+ const r = spawnSync('git', ['log', '--oneline', '--since=8 hours ago', '--format=%h'], { encoding: 'utf8', timeout: 3000 });
163
+ if (r.stdout) commits.push(...r.stdout.trim().split('\n').filter(Boolean));
164
+ } catch {}
165
+ console.log(JSON.stringify({
166
+ project: t.project || require('path').basename(process.cwd()),
167
+ project_id: t.project_id || '',
168
+ team_id: t.team_id || '',
169
+ git_remote: t.git_remote || '',
170
+ client: t.client || '',
171
+ client_report_id: process.env.CLIENT_REPORT_ID,
172
+ milestone: t.milestone || 1,
173
+ milestone_name: t.milestone_name || '',
174
+ milestones: Array.isArray(t.milestones) ? t.milestones : [],
175
+ phase: t.phase,
176
+ phase_name: t.phase_name,
177
+ total_phases: t.total_phases,
178
+ status: t.status,
179
+ tasks_done: t.tasks_done || 0,
180
+ tasks_total: t.tasks_total || 0,
181
+ verification: t.verification || 'pending',
182
+ gap_cycles: (t.gap_cycles || {})[String(t.phase)] || 0,
183
+ build_count: t.build_count || 0,
184
+ deploy_count: t.deploy_count || 0,
185
+ deployed_url: t.deployed_url || '',
186
+ session_started_at: t.session_started_at || '',
187
+ last_pushed_at: t.last_pushed_at || '',
188
+ lifetime: t.lifetime || {},
189
+ commits: commits,
190
+ notes: notes,
191
+ submitted_by: process.env.SUBMITTED_BY || 'unknown',
192
+ submitted_at: process.env.SUBMITTED_AT
193
+ }));
194
+ "
195
+ )
164
196
 
165
197
  # --dry-run: print payload and stop (no POST, no commit, no increment already handled in step 4)
166
198
  if [ "$DRY_RUN" = "true" ]; then
@@ -199,7 +231,11 @@ if [ "$ERP_ENABLED" = "true" ]; then
199
231
 
200
232
  # 401 / 422 are permanent failures — no retry.
201
233
  if [ "$HTTP_CODE" = "401" ] || [ "$HTTP_CODE" = "422" ]; then
202
- node ~/.claude/bin/qualia-ui.js warn "ERP rejected report (HTTP $HTTP_CODE). Ask Fawzi."
234
+ if [ "$HTTP_CODE" = "401" ]; then
235
+ node ~/.claude/bin/qualia-ui.js warn "ERP auth failed (HTTP 401) — API key in ~/.claude/.erp-api-key is invalid or revoked. Ask Fawzi for a fresh key."
236
+ else
237
+ node ~/.claude/bin/qualia-ui.js warn "ERP rejected payload (HTTP 422) — schema validation failed. Response body:"
238
+ fi
203
239
  echo "$BODY" | head -3
204
240
  break
205
241
  fi
@@ -29,8 +29,9 @@ node ~/.claude/bin/qualia-ui.js banner review
29
29
  ### 0. Load Context
30
30
 
31
31
  ```bash
32
- cat ~/.claude/knowledge/common-fixes.md 2>/dev/null
33
- cat ~/.claude/knowledge/learned-patterns.md 2>/dev/null
32
+ node ~/.claude/bin/knowledge.js
33
+ node ~/.claude/bin/knowledge.js load fixes
34
+ node ~/.claude/bin/knowledge.js load patterns
34
35
  ```
35
36
 
36
37
  Detect project shape:
@@ -1,6 +1,6 @@
1
1
  ---
2
2
  name: qualia-ship
3
- description: "Deploy to production — quality gates, commit, push, deploy, verify. Use when ready to go live."
3
+ description: "Deploy to production — state-guard, full security scan, quality gates, commit, push, deploy, verify. Trigger on 'deploy', 'ship it', 'go live', 'push to prod', 'launch', 'release to production'."
4
4
  allowed-tools:
5
5
  - Bash
6
6
  - Read
@@ -18,6 +18,35 @@ Full deployment pipeline with quality gates.
18
18
  node ~/.claude/bin/qualia-ui.js banner ship
19
19
  ```
20
20
 
21
+ ### 0. State Guard — refuse to ship from an invalid state
22
+
23
+ `/qualia-ship` is a terminal operation — it writes a deployed tag, bumps counters, and produces a verified URL. It must NEVER run on an unpolished, unverified, or malformed project.
24
+
25
+ ```bash
26
+ STATE=$(node ~/.claude/bin/state.js check 2>/dev/null)
27
+ if [ -z "$STATE" ]; then
28
+ node ~/.claude/bin/qualia-ui.js fail "No project loaded. Run /qualia-new first or cd to a Qualia-managed project."
29
+ exit 1
30
+ fi
31
+
32
+ STATUS=$(echo "$STATE" | node -e "try{const d=JSON.parse(require('fs').readFileSync(0,'utf8'));process.stdout.write(d.status||'')}catch{}")
33
+ VERIFICATION=$(echo "$STATE" | node -e "try{const d=JSON.parse(require('fs').readFileSync(0,'utf8'));process.stdout.write(d.verification||'')}catch{}")
34
+
35
+ # Valid ship-from states:
36
+ # polished — /qualia-polish ran cleanly; ready for deploy
37
+ # verified+pass — final phase verified; skipping polish is allowed for hotfixes
38
+ # Anything else (setup, planned, built, shipped, handed_off, verified+fail) is refused.
39
+ if [ "$STATUS" != "polished" ] && ! { [ "$STATUS" = "verified" ] && [ "$VERIFICATION" = "pass" ]; }; then
40
+ node ~/.claude/bin/qualia-ui.js fail "Cannot ship from state '$STATUS' (verification: ${VERIFICATION:-none})."
41
+ node ~/.claude/bin/qualia-ui.js info "Run /qualia-polish first, or /qualia-verify {phase} if verification is still pending."
42
+ node ~/.claude/bin/qualia-ui.js info "Override: add --force to the skill invocation (hotfix escape hatch, use with care)."
43
+ # The --force escape hatch exists for production hotfixes where the polished
44
+ # state was never reached. The operator is expected to have read and
45
+ # understood the pending verification findings.
46
+ exit 1
47
+ fi
48
+ ```
49
+
21
50
  ### 1. Quality Gates
22
51
 
23
52
  Run in sequence. Auto-fix failures (up to 2 attempts).
@@ -34,12 +63,49 @@ On failure:
34
63
  3. Re-run the gate
35
64
  4. If still failing after 2 attempts: tell the employee, suggest `/qualia-debug`
36
65
 
37
- ### 2. Security Check
66
+ ### 2. Security Check — full depth
67
+
68
+ Shallow grep on `service_role` alone was missing hardcoded keys, tracked `.env` files, and dangerous DOM injection. Match the CRITICAL checks from `/qualia-review` exactly so the two skills agree.
38
69
 
39
70
  ```bash
40
- # service_role in client code?
41
- grep -r "service_role" app/ components/ src/ 2>/dev/null | grep -v node_modules | grep -v ".server."
42
- # Should be ZERO matches
71
+ SEC_FAIL=0
72
+
73
+ # CRITICAL: service_role in client-facing code
74
+ HITS=$(grep -rn "service_role" --include="*.ts" --include="*.tsx" --include="*.js" --include="*.jsx" app/ components/ src/ lib/ 2>/dev/null | grep -v node_modules | grep -v "\.server\.\|[\\/]server[\\/]\|[\\/]app[\\/]api[\\/]\|route\.\|middleware\.")
75
+ if [ -n "$HITS" ]; then
76
+ node ~/.claude/bin/qualia-ui.js fail "service_role leaked to client code:"
77
+ echo "$HITS" | head -5
78
+ SEC_FAIL=1
79
+ fi
80
+
81
+ # CRITICAL: hardcoded secrets
82
+ HITS=$(grep -rn "sk_live\|sk_test\|SUPABASE_SERVICE_ROLE\|eyJhbGciOi" --include="*.ts" --include="*.tsx" --include="*.js" app/ components/ src/ lib/ 2>/dev/null | grep -v node_modules | grep -v "\.env")
83
+ if [ -n "$HITS" ]; then
84
+ node ~/.claude/bin/qualia-ui.js fail "Hardcoded secret found:"
85
+ echo "$HITS" | head -5
86
+ SEC_FAIL=1
87
+ fi
88
+
89
+ # CRITICAL: dangerouslySetInnerHTML / eval
90
+ HITS=$(grep -rn "dangerouslySetInnerHTML\|eval(" --include="*.ts" --include="*.tsx" --include="*.js" app/ components/ src/ 2>/dev/null | grep -v node_modules)
91
+ if [ -n "$HITS" ]; then
92
+ node ~/.claude/bin/qualia-ui.js fail "Dangerous innerHTML/eval pattern:"
93
+ echo "$HITS" | head -5
94
+ SEC_FAIL=1
95
+ fi
96
+
97
+ # CRITICAL: .env files tracked in git
98
+ HITS=$(git ls-files | grep -iE "(^|/)\.env" | grep -v "\.example\|\.template\|\.sample")
99
+ if [ -n "$HITS" ]; then
100
+ node ~/.claude/bin/qualia-ui.js fail ".env files tracked in git:"
101
+ echo "$HITS"
102
+ SEC_FAIL=1
103
+ fi
104
+
105
+ if [ $SEC_FAIL -ne 0 ]; then
106
+ node ~/.claude/bin/qualia-ui.js fail "Security check failed. Fix findings above or run /qualia-review for full audit."
107
+ exit 1
108
+ fi
43
109
  ```
44
110
 
45
111
  ### 3. Git
@@ -64,29 +130,44 @@ wrangler deploy # Cloudflare Workers
64
130
 
65
131
  ### 5. Post-Deploy Verification
66
132
 
67
- ```bash
68
- # HTTP 200
69
- curl -s -o /dev/null -w "%{http_code}" {domain}
70
-
71
- # Latency under 500ms
72
- curl -s -o /dev/null -w "%{time_total}" {domain}
133
+ Read the deployed URL from `tracking.json.deployed_url` — set by the deploy tool's output parser, or passed via `--url` to this skill. Do NOT use a `{domain}` placeholder — that expects the LLM to hallucinate the URL, which is exactly the kind of silent failure this pipeline exists to prevent.
73
134
 
74
- # Auth endpoint responds
75
- curl -s -o /dev/null -w "%{http_code}" {domain}/api/auth/callback
135
+ ```bash
136
+ # Read URL from tracking.json (set by /qualia-handoff or previous ship), or
137
+ # let the operator pass it as an argument. Never assume a placeholder.
138
+ URL=$(node -e "try{const t=JSON.parse(require('fs').readFileSync('.planning/tracking.json','utf8'));process.stdout.write(t.deployed_url||'')}catch{}")
139
+ if [ -z "$URL" ]; then
140
+ node ~/.claude/bin/qualia-ui.js warn "No deployed_url in tracking.json — parse it from the deploy command output (vercel/supabase/wrangler all print the URL on success)."
141
+ node ~/.claude/bin/qualia-ui.js info "Re-run with: /qualia-ship --url https://your-site.com"
142
+ exit 1
143
+ fi
144
+
145
+ # HTTP 200 + latency under 500ms (combined)
146
+ RESP=$(curl -sS -o /dev/null -w "%{http_code} %{time_total}" --max-time 15 "$URL")
147
+ HTTP_CODE=$(echo "$RESP" | awk '{print $1}')
148
+ LATENCY=$(echo "$RESP" | awk '{print $2}')
149
+
150
+ if [ "$HTTP_CODE" != "200" ]; then
151
+ node ~/.claude/bin/qualia-ui.js fail "Post-deploy check failed: HTTP $HTTP_CODE at $URL"
152
+ exit 1
153
+ fi
154
+
155
+ # Auth endpoint (best-effort — not every project has one)
156
+ AUTH_CODE=$(curl -sS -o /dev/null -w "%{http_code}" --max-time 10 "$URL/api/auth/callback" 2>/dev/null)
76
157
  ```
77
158
 
78
159
  ### 6. Report
79
160
 
80
161
  ```bash
81
162
  node ~/.claude/bin/qualia-ui.js divider
82
- node ~/.claude/bin/qualia-ui.js ok "URL: {production url}"
83
- node ~/.claude/bin/qualia-ui.js ok "Status: HTTP 200"
84
- node ~/.claude/bin/qualia-ui.js ok "Latency: {time}ms"
85
- node ~/.claude/bin/qualia-ui.js ok "Auth endpoint responds"
163
+ node ~/.claude/bin/qualia-ui.js ok "URL: $URL"
164
+ node ~/.claude/bin/qualia-ui.js ok "Status: HTTP $HTTP_CODE"
165
+ node ~/.claude/bin/qualia-ui.js ok "Latency: ${LATENCY}s"
166
+ [ "$AUTH_CODE" = "200" ] || [ "$AUTH_CODE" = "401" ] && node ~/.claude/bin/qualia-ui.js ok "Auth endpoint responds (HTTP $AUTH_CODE)"
86
167
  ```
87
168
 
88
169
  ```bash
89
- node ~/.claude/bin/state.js transition --to shipped --deployed-url {production url}
170
+ node ~/.claude/bin/state.js transition --to shipped --deployed-url "$URL"
90
171
  ```
91
172
  Do NOT manually edit STATE.md or tracking.json — state.js handles both.
92
173
 
@@ -19,6 +19,7 @@ Spawn a verifier agent to check if the phase goal was achieved. Does NOT trust b
19
19
  `/qualia-verify` — verify the current built phase
20
20
  `/qualia-verify {N}` — verify specific phase
21
21
  `/qualia-verify {N} --auto` — verify + auto-chain: PASS → next phase (or milestone close); FAIL → gap closure; gap limit → halt with escalation
22
+ `/qualia-verify {N} --adversarial` — run a SECOND verifier in fresh context with an adversarial prompt ("find what's wrong, not what's right"). Union the findings. Recommended for high-stakes phases (Handoff milestone, payment/auth/migration code) where a biased single-pass review would silently approve a bad change. v4.3.0+.
22
23
 
23
24
  ## Process
24
25
 
@@ -80,6 +81,52 @@ Drive the running dev server and test the routes this phase touched. Append a '#
80
81
 
81
82
  Wait for both the main verifier and the QA browser agent before moving to step 3. If Playwright MCP is unavailable, the QA browser agent returns BLOCKED — that's not a phase failure, just a note in the report.
82
83
 
84
+ ### 2c. Adversarial Second Opinion (--adversarial flag, optional)
85
+
86
+ When `--adversarial` is in the args, OR when the current milestone is
87
+ `Handoff` OR the phase plan touches files matching `auth|payment|migration|rls|service_role`, spawn a SECOND verifier in fresh context with an
88
+ adversarial prompt. This is the "kid-grading-their-own-homework"
89
+ mitigation — a single verifier instance trained on the same rubric the
90
+ planner+builder optimized against gets ~70% fewer real findings than a
91
+ fresh-context adversarial pass (Cole Medin, NotebookLM 2026-04-25, citing
92
+ PR-acceptance studies).
93
+
94
+ ```bash
95
+ node ~/.claude/bin/qualia-ui.js spawn verifier "Adversarial pass — find what's wrong"
96
+ ```
97
+
98
+ ```
99
+ Agent(prompt="
100
+ Read your role: @~/.claude/agents/verifier.md
101
+ Grounding + rubrics: @~/.claude/rules/grounding.md
102
+
103
+ You are an ADVERSARIAL reviewer. Your job is to find what's WRONG with
104
+ this phase, not to confirm it works. Assume the previous verifier missed
105
+ something. Use the same Severity Rubric, the same evidence-citation
106
+ requirement, but bias your search toward edge cases the cooperative
107
+ verifier would skip:
108
+ • What untested error path exists?
109
+ • What input would crash this?
110
+ • What concurrent access pattern is unhandled?
111
+ • What downstream consumer breaks if this contract changes?
112
+ • Where is a security assumption (auth, RLS, secrets) implicit
113
+ instead of enforced?
114
+
115
+ Project conventions: @.planning/PROJECT.md
116
+ Phase plan: @.planning/phase-{N}-plan.md
117
+ Cooperative verifier's report (do NOT re-find what they found, find
118
+ what they MISSED): @.planning/phase-{N}-verification.md
119
+
120
+ Append a '## Adversarial Findings' section to the verification file.
121
+ Empty section is fine if you genuinely found nothing — better that than
122
+ inventing findings to look productive.
123
+ ", subagent_type="qualia-verifier", description="Adversarial verify phase {N}")
124
+ ```
125
+
126
+ Findings from the adversarial pass merge into the main verification
127
+ report. The combined PASS/FAIL is the union: if either pass found a
128
+ CRITICAL or HIGH gap, the phase is FAIL.
129
+
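The union rule can be enforced mechanically — a sketch, assuming gap findings carry a `**Severity:**` line (an assumption; match it to the verifier's actual field name):

```shell
# FAIL if either pass recorded a CRITICAL or HIGH finding in the merged report.
union_verdict() {
  if grep -Eq '\*\*Severity:\*\* *(CRITICAL|HIGH)' "$1" 2>/dev/null; then
    echo "FAIL"
  else
    echo "PASS"
  fi
}
# e.g.: union_verdict ".planning/phase-3-verification.md"
```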
83
130
  ### 3. Present Results
84
131
 
85
132
  Read the verification report. Present:
@@ -102,6 +149,19 @@ Then for each gap:
102
149
  node ~/.claude/bin/qualia-ui.js fail "{gap description}"
103
150
  ```
104
151
 
152
+ **Self-healing layer (v4.3.0+):** before re-planning the gaps, run a
153
+ postmortem so the framework itself learns from the miss. This is Cole
154
+ Medin's pillar 5: don't just fix the bug, fix the AI-layer file that
155
+ should have caught it. The postmortem writes a report to
156
+ `.planning/phase-{N}-postmortem.md` for review — it does NOT auto-apply
157
+ deltas to agents/rules unless the user runs `/qualia-postmortem --apply`
158
+ explicitly. Without this loop, the same class of bug ships in PR-3, PR-7,
159
+ PR-11 of the next project.
160
+
161
+ ```
162
+ /qualia-postmortem --phase {N}
163
+ ```
164
+
105
165
  End:
106
166
  ```bash
107
167
  node ~/.claude/bin/qualia-ui.js end "PHASE {N} GAPS FOUND" "/qualia-plan {N} --gaps"
@@ -407,7 +407,7 @@
407
407
  <p class="cmd-group-note">When you don't know what to do next.</p>
408
408
  <div class="commands">
409
409
  <div class="cmd"><span class="cmd-name">/qualia</span><span class="cmd-desc">Smart router &mdash; reads project state, classifies your situation, tells you the exact next command. Use whenever you're unsure about your next step.</span></div>
410
- <div class="cmd"><span class="cmd-name">/qualia-idk</span><span class="cmd-desc">Alias for /qualia. The smart router handles all 'idk', 'what now', 'I'm stuck' situations.</span></div>
410
+ <div class="cmd"><span class="cmd-name">/qualia-idk</span><span class="cmd-desc">Diagnostic intelligence &mdash; spawns two isolated scans (planning + codebase) in parallel, cross-references against your confusion, explains the situation in plain language with a concrete next step. Use when something feels off or you need to understand what's going on.</span></div>
411
411
  <div class="cmd"><span class="cmd-name">/qualia-help</span><span class="cmd-desc">Open the Qualia Framework reference guide in the browser. A beautiful themed HTML page with all commands, rules, services, and the road.</span></div>
412
412
  </div>
413
413
  </div>