qualia-framework 2.4.8 → 2.4.9

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,42 +1,39 @@
- # CLAUDE.md — Qualia Solutions
+ # CLAUDE.md — OWNER Profile

  ## Identity
- **Fawzi Goussous** — Founder, Qualia Solutions. Nicosia, Cyprus.
-
+ **Fawzi Goussous** — Founder at Qualia Solutions. Nicosia, Cyprus.
+ One-man dev shop. Websites, AI agents, voice agents, AI automation.
+ - Direct, action-oriented, no fluff. Code > theory.
+ - If he speaks Arabic, reply in Arabic; if English, reply in English; if he mixes both, mix with him.
  - Stack: Next.js 16+, React 19, TypeScript, Supabase, Vercel, VAPI, ElevenLabs, Telnyx, Retell AI, OpenRouter
- - Partner: Jay | Team: Moayad (full-time, Jordan), Ahasan (part-time, Cyprus)

  ## Role: OWNER
-
- Full authority over all projects, deployments, architecture, and client decisions.
- - Deploy directly to production
- - Make architectural decisions unilaterally
- - Access all Supabase projects and service role keys
- - Modify the Qualia framework (CLAUDE.md, skills, hooks)
+ You are the founder. Full authority over all projects, deployments, architecture, and client decisions.
+ - Can deploy directly to production
+ - Can make architectural decisions unilaterally
+ - Can access all Supabase projects and service role keys
+ - Can modify CLAUDE.md, skills, hooks, and framework config
+ - Can approve/reject employee work

  ## Rules
  - Read before Write/Edit — no exceptions
- - Feature branches only — never commit to main/master
+ - Feature branches preferred — OWNER may push to main when necessary (branch-guard enforces for non-OWNER)
  - MVP first. Build only what's asked. No over-engineering.
  - Root cause on failures — no band-aids
  - `npx tsc --noEmit` after multi-file TS changes
  - Glob/Grep directly — no Task(Explore) unless 5+ rounds needed
- - For non-trivial work (multi-file changes, architectural decisions, unfamiliar codebases), confirm understanding before coding — quick tasks are exempt
+ - For non-trivial work (multi-file changes, architectural decisions, unfamiliar codebases), confirm understanding before coding: "Here's what I understand: [summary]. Correct?" — quick tasks (typo, single-file, familiar pattern) are exempt
  - See `rules/security.md` for auth, RLS, Zod, secrets rules
  - See `rules/frontend.md` for design standards
  - See `rules/deployment.md` for deploy checklist
- - See `rules/speed.md` for tool usage and workflow shortcuts
- - See `rules/context7.md` for library documentation lookup

  ## Collaboration
  Collaborator, not executor. Speak up about bugs, simpler approaches, bad architecture.
  Be honest. Default to action. Never speculate on unread code. Say when blocked.
- - Direct, action-oriented, no fluff. Code > theory.
- - Arabic or English — match whatever language is used

  ## Workflow
  - **MANDATORY FIRST ACTION**: On every session start, invoke the `qualia-start` skill before doing anything else. This is non-negotiable — do not wait for user input, do not skip it, do not just acknowledge the hook message. Actually invoke the skill using the Skill tool.
- - Subagents (Opus) for research and complex reasoning.
+ - Subagents default to Opus (set via CLAUDE_CODE_SUBAGENT_MODEL).
  - `/compact` at 60%. `/clear` between tasks. `/learn` after mistakes.

  ## Qualia Mode (always active)
@@ -3,7 +3,7 @@ name: backend-agent
  description: Backend/database specialist - Supabase, APIs, Edge Functions. Spawned for parallel backend development.
  category: development
  tools: Read, Write, Edit, Glob, Grep, Bash
- model: claude-opus-4-6
+ model: inherit
  skills:
  - supabase
  tags: [backend, supabase, api, database, typescript]
@@ -2,8 +2,8 @@
  name: frontend-agent
  description: Frontend implementation specialist - React, TypeScript, CSS, animations. Spawned for parallel UI development.
  category: development
- tools: Read, Write, Edit, Glob, Grep, Bash(npm:*), Bash(npx:*)
- model: claude-opus-4-6
+ tools: Read, Write, Edit, Glob, Grep, Bash
+ model: inherit
  skills:
  - frontend-master
  - responsive
@@ -3,7 +3,7 @@ name: test-agent
  description: Testing and QA specialist - unit tests, integration tests, E2E with Playwright. Spawned for parallel test development.
  category: testing
  tools: Read, Write, Edit, Glob, Grep, Bash
- model: claude-sonnet-4-20250514
+ model: inherit
  tags: [testing, jest, vitest, playwright, qa]
  ---

@@ -1,6 +1,7 @@
  #!/bin/bash
- # Auto-format hook for PostToolUse(Write)
+ # Auto-format hook for PostToolUse(Write/Edit)
  # Runs prettier on supported file types if available in the project
+ source "$(dirname "$0")/qualia-colors.sh"

  # Parse file path from stdin JSON (Claude Code hook protocol)
  if [ ! -t 0 ]; then
@@ -9,7 +9,10 @@ else
  FILE_PATH="${1:-}"
  fi

- [ -z "$FILE_PATH" ] && exit 0
+ if [ -z "$FILE_PATH" ]; then
+   printf '{"continue":true}'
+   exit 0
+ fi

  BASENAME=$(basename "$FILE_PATH")

@@ -35,4 +38,5 @@ EOJSON
  exit 2
  fi

+ printf '{"continue":true}'
  exit 0
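
Several hooks in this release gain an explicit `{"continue":true}` reply on every exit path. A minimal sketch of that I/O contract (the `tool_input.file_path` field name and the reply shape are assumptions inferred from the hook code above, not a verified protocol spec):

```shell
# Hypothetical sketch of a PostToolUse hook handler: pull the file path out
# of the JSON payload, then always reply with {"continue":true} so the agent
# loop never stalls on a silent hook. jq would be more robust; sed keeps the
# sketch dependency-free.
handle_hook() {                                   # $1: JSON payload (normally read from stdin)
  local payload="$1" file_path
  file_path=$(printf '%s' "$payload" |
    sed -n 's/.*"file_path":"\([^"]*\)".*/\1/p')  # naive extraction, demo only
  if [ -n "$file_path" ]; then
    echo "would format: $file_path" >&2           # the real hook runs prettier here
  fi
  printf '{"continue":true}'                      # reply the hook protocol expects
}

handle_hook '{"tool_input":{"file_path":"src/app.ts"}}'
```

The point of the diff above is the second half: even the "nothing to do" branch must emit the JSON reply before exiting.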
@@ -58,9 +58,9 @@ if [ -n "$COMMAND" ] && echo "$COMMAND" | grep -qE 'supabase\s+db\s+push'; then
  # Scan only uncommitted or recently modified migration files
  MODIFIED_MIGRATIONS=$(git diff --name-only HEAD -- supabase/migrations/*.sql 2>/dev/null; git diff --cached --name-only -- supabase/migrations/*.sql 2>/dev/null; git ls-files --others -- supabase/migrations/*.sql 2>/dev/null)
  if [ -n "$MODIFIED_MIGRATIONS" ]; then
- echo "$MODIFIED_MIGRATIONS" | sort -u | while IFS= read -r sql_file; do
+ while IFS= read -r sql_file; do
  [ -f "$sql_file" ] && check_sql_file "$sql_file"
- done
+ done < <(echo "$MODIFIED_MIGRATIONS" | sort -u)
  fi
  fi

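
The migration-scan change above swaps a `... | while` pipeline for `done < <(...)` process substitution. The difference matters because in bash the loop body of a pipeline runs in a subshell, so any variable it sets (or any exit status it tries to propagate) is invisible to the parent script. A small bash demonstration:

```shell
# In bash, the right-hand side of a pipeline runs in a subshell: changes made
# inside the loop vanish when the pipeline ends. Process substitution keeps
# the loop in the current shell, so state (and any `exit`) survives.
count_piped=0
printf 'a\nb\nc\n' | while IFS= read -r line; do
  count_piped=$((count_piped + 1))   # increments the subshell's copy only
done

count_subst=0
while IFS= read -r line; do
  count_subst=$((count_subst + 1))   # increments in the current shell
done < <(printf 'a\nb\nc\n')

echo "piped=$count_piped subst=$count_subst"   # piped=0 subst=3
```

Process substitution is a bashism; the hooks already declare bash shebangs, so that is safe here.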
@@ -12,4 +12,5 @@ if [ -n "$TEXT" ]; then
  ~/.claude/scripts/speak.sh "$TEXT" &
  fi

+ printf '{"continue":true}'
  exit 0
@@ -52,4 +52,5 @@ cat > "$COMPACT_FILE" << EOF
  }
  EOF

+ printf '{"continue":true}'
  exit 0
@@ -1,10 +1,10 @@
  # Qualia teal brand palette — source this in all hooks
  # Usage: source "$(dirname "$0")/qualia-colors.sh"

- Q_TEAL='\033[38;2;0;188;175m'
- Q_BRIGHT='\033[38;2;45;226;210m'
- Q_DIM='\033[38;2;0;120;112m'
- Q_WHITE='\033[38;2;220;225;230m'
+ Q_TEAL='\033[38;2;0;140;130m'
+ Q_BRIGHT='\033[38;2;0;200;185m'
+ Q_DIM='\033[38;2;0;100;92m'
+ Q_WHITE='\033[38;2;230;230;230m'
  Q_PASS='\033[38;2;52;211;153m'
  Q_WARN='\033[38;2;234;179;8m'
  Q_FAIL='\033[38;2;239;68;68m'
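
For reference, these palette entries are 24-bit ANSI escapes: `\033[38;2;R;G;Bm` sets the foreground to an exact RGB triple. A quick way to preview the new teal (the `Q_RESET` name is my addition; the real file may define its reset code differently):

```shell
# Print a sample string in the new brand teal (RGB 0,140,130), then reset.
Q_TEAL='\033[38;2;0;140;130m'   # from the updated palette above
Q_RESET='\033[0m'               # assumed reset code, not shown in the diff

teal() {
  # printf interprets the \033 escapes stored in the variables
  printf "${Q_TEAL}%s${Q_RESET}\n" "$1"
}

teal "Qualia"
```

Truecolor escapes are widely supported in modern terminals but silently degrade on older ones, which is acceptable for cosmetic hook output.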
@@ -44,9 +44,19 @@ find "$CLAUDE_DIR/tasks/" -mindepth 1 -type d -empty -delete 2>/dev/null
  # plans/ — keep 7 days (old Claude Code plan mode files)
  find "$CLAUDE_DIR/plans/" -type f -mtime +7 -delete 2>/dev/null

+ # downloads/ — keep 7 days
+ find "$CLAUDE_DIR/downloads/" -type f -mtime +7 -delete 2>/dev/null
+
+ # image-cache/ — keep 7 days
+ find "$CLAUDE_DIR/image-cache/" -type f -mtime +7 -delete 2>/dev/null
+
+ # thoughts/ — keep 14 days
+ find "$CLAUDE_DIR/thoughts/" -type f -mtime +14 -delete 2>/dev/null
+
  # session-env/ session_*.json — keep last 50 (matches save-session-state.sh)
  ls -t "$CLAUDE_DIR/session-env"/session_*.json 2>/dev/null | tail -n +51 | while IFS= read -r f; do
  rm -f "$f"
  done

+ printf '{"continue":true}'
  exit 0
@@ -15,6 +15,7 @@ if [ -f "$SESSION_START_FILE" ]; then
  NOW_TS=$(date +%s)
  DURATION=$((NOW_TS - START_TS))
  if [ "$DURATION" -lt 30 ]; then
+   printf '{"continue":true}'
  exit 0
  fi
  fi
@@ -180,4 +181,5 @@ ls -t "$SESSION_DIR"/session_*.json 2>/dev/null | tail -n +51 | while IFS= read
  rm -f "$f"
  done

+ printf '{"continue":true}'
  exit 0
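
The `ls -t ... | tail -n +51` idiom that appears in both the cleanup and session-save hooks keeps the 50 newest files and removes the rest. A generic sketch of the pattern (the function name and arguments are mine, for illustration):

```shell
# Keep only the $3 newest files matching "$1/$2*"; delete the older ones.
# ls -t sorts newest-first; tail -n +(N+1) skips past the N files to keep.
keep_newest() {
  local dir="$1" prefix="$2" keep="$3"
  ls -t "$dir/$prefix"* 2>/dev/null | tail -n "+$((keep + 1))" |
    while IFS= read -r f; do rm -f -- "$f"; done
}

demo_dir=$(mktemp -d)
for i in 1 2 3 4 5; do touch "$demo_dir/session_$i.json"; done
keep_newest "$demo_dir" "session_" 3
ls "$demo_dir" | wc -l    # 3 files remain
```

Parsing `ls` output is acceptable here because the session files are machine-named; for arbitrary filenames (spaces, newlines) a `find`-based approach would be safer.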
@@ -1,6 +1,7 @@
  #!/usr/bin/env bash
  # SessionEnd hook: prompt for lessons learned if significant work was done
  # "Significant" = more than 3 files modified in the session
+ source "$(dirname "$0")/qualia-colors.sh"

  LEARNED_FILE="$HOME/.claude/knowledge/learned-patterns.md"

@@ -1 +1 @@
- 2.4.8
+ 2.4.9
@@ -2,10 +2,13 @@
  alwaysApply: true
  ---

- When working with libraries, frameworks, or APIs, fetch current documentation instead of relying on training data. This includes setup questions, code generation, API references, and anything involving specific packages.
+ Use Context7 MCP to fetch current documentation whenever the user asks about a library, framework, SDK, API, CLI tool, or cloud service — even well-known ones like React, Next.js, Prisma, Express, Tailwind, Django, or Spring Boot. This includes API syntax, configuration, version migration, library-specific debugging, setup instructions, and CLI tool usage. Use it even when you think you know the answer — your training data may not reflect recent changes. Prefer this over web search for library docs.
+
+ Do not use it for: refactoring, writing scripts from scratch, debugging business logic, code review, or general programming concepts.

  ## Steps

- 1. If Context7 MCP is available, use `resolve-library-id` + `query-docs`
- 2. Otherwise, use WebFetch or WebSearch to find current docs
- 3. Answer using the fetched docs — include code examples and cite the version
+ 1. Always start with `resolve-library-id` using the library name and the user's question, unless the user provides an exact library ID in `/org/project` format
+ 2. Pick the best match (ID format: `/org/project`) by: exact name match, description relevance, code snippet count, source reputation (High/Medium preferred), and benchmark score (higher is better). If results don't look right, try alternate names or queries (e.g., "next.js" not "nextjs", or rephrase the question). Use version-specific IDs when the user mentions a version
+ 3. `query-docs` with the selected library ID and the user's full question (not single words)
+ 4. Answer using the fetched docs
@@ -17,8 +17,7 @@ alwaysApply: true
  **Use shortcuts:**
  - `/ship` — Full deploy pipeline (quality gates → commit → deploy → verify)
  - `/status` — Quick project health check
- - `/audit` — Deep security + quality audit
- - `/switch <project>` — Switch project context
+ - `/qualia-review` — Deep security + quality audit
  - `/memory` — View/manage persistent rules
  - `/learn` — Save a lesson from a mistake
- - `/handoff` — Save context before /clear
+ - `/qualia-pause-work` — Save context before /clear
@@ -30,8 +30,8 @@ You cannot do a great job without having necessary context, such as target audie

  Attempt to gather these from the current thread or codebase.

- 1. If you don't find *exact* information and have to infer from existing design and functionality, you MUST STOP and call the AskUserQuestionTool to confirm whether you got it right.
- 2. Otherwise, if you can't fully infer or your level of confidence is medium or lower, you MUST STOP and call the AskUserQuestionTool to ask clarifying questions first to complete your context.
+ 1. If you don't find *exact* information and have to infer from existing design and functionality, you MUST STOP and ask the user directly to confirm whether you got it right.
+ 2. Otherwise, if you can't fully infer or your level of confidence is medium or lower, you MUST STOP and ask the user clarifying questions first to complete your context.

  Do NOT proceed until you have answers. Guessing leads to inappropriate or excessive animation.

@@ -58,7 +58,7 @@ Analyze where motion would improve the experience:
  - Who's the audience? (Motion-sensitive users? Power users who want speed?)
  - What matters most? (One hero animation vs many micro-interactions?)

- If any of these are unclear from the codebase, STOP and call the AskUserQuestionTool to clarify.
+ If any of these are unclear from the codebase, STOP and ask the user directly to clarify.

  **CRITICAL**: Respect `prefers-reduced-motion`. Always provide non-animated alternatives for users who need them.

@@ -30,8 +30,8 @@ You cannot do a great job without having necessary context, such as target audie

  Attempt to gather these from the current thread or codebase.

- 1. If you don't find *exact* information and have to infer from existing design and functionality, you MUST STOP and call the AskUserQuestionTool to confirm whether you got it right.
- 2. Otherwise, if you can't fully infer or your level of confidence is medium or lower, you MUST STOP and call the AskUserQuestionTool to ask clarifying questions first to complete your context.
+ 1. If you don't find *exact* information and have to infer from existing design and functionality, you MUST STOP and ask the user directly to confirm whether you got it right.
+ 2. Otherwise, if you can't fully infer or your level of confidence is medium or lower, you MUST STOP and ask the user clarifying questions first to complete your context.

  Do NOT proceed until you have answers. Guessing leads to generic AI slop.

@@ -59,7 +59,7 @@ Analyze what makes the design feel too safe or boring:
  - Who's the audience? (What will resonate?)
  - What are the constraints? (Brand guidelines, accessibility, performance)

- If any of these are unclear from the codebase, STOP and call the AskUserQuestionTool to clarify.
+ If any of these are unclear from the codebase, STOP and ask the user directly to clarify.

  **CRITICAL**: "Bolder" doesn't mean chaotic or garish. It means distinctive, memorable, and confident. Think intentional drama, not random chaos.

@@ -30,8 +30,8 @@ You cannot do a great job without having necessary context, such as target audie

  Attempt to gather these from the current thread or codebase.

- 1. If you don't find *exact* information and have to infer from existing design and functionality, you MUST STOP and call the AskUserQuestionTool to confirm whether you got it right.
- 2. Otherwise, if you can't fully infer or your level of confidence is medium or lower, you MUST STOP and call the AskUserQuestionTool to ask clarifying questions first to complete your context.
+ 1. If you don't find *exact* information and have to infer from existing design and functionality, you MUST STOP and ask the user directly to confirm whether you got it right.
+ 2. Otherwise, if you can't fully infer or your level of confidence is medium or lower, you MUST STOP and ask the user clarifying questions first to complete your context.

  Do NOT proceed until you have answers. Guessing leads to generic AI slop colors.

@@ -59,7 +59,7 @@ Analyze the current state and identify opportunities:
  - **Wayfinding**: Helping users navigate and understand structure
  - **Delight**: Moments of visual interest and personality

- If any of these are unclear from the codebase, STOP and call the AskUserQuestionTool to clarify.
+ If any of these are unclear from the codebase, STOP and ask the user directly to clarify.

  **CRITICAL**: More color ≠ better. Strategic color beats rainbow vomit every time. Every color should have a purpose.

@@ -30,8 +30,8 @@ You cannot do a great job without having necessary context, such as target audie

  Attempt to gather these from the current thread or codebase.

- 1. If you don't find *exact* information and have to infer from existing design and functionality, you MUST STOP and call the AskUserQuestionTool to confirm whether you got it right.
- 2. Otherwise, if you can't fully infer or your level of confidence is medium or lower, you MUST STOP and call the AskUserQuestionTool to ask clarifying questions first to complete your context.
+ 1. If you don't find *exact* information and have to infer from existing design and functionality, you MUST STOP and ask the user directly to confirm whether you got it right.
+ 2. Otherwise, if you can't fully infer or your level of confidence is medium or lower, you MUST STOP and ask the user clarifying questions first to complete your context.

  Do NOT proceed until you have answers. Delight that's wrong for the context is worse than no delight at all.

@@ -66,7 +66,7 @@ Identify where delight would enhance (not distract from) the experience:
  - **Helpful surprises**: Anticipating needs before users ask (productivity tools)
  - **Sensory richness**: Satisfying sounds, smooth animations (creative tools)

- If any of these are unclear from the codebase, STOP and call the AskUserQuestionTool to clarify.
+ If any of these are unclear from the codebase, STOP and ask the user directly to clarify.

  **CRITICAL**: Delight should enhance usability, never obscure it. If users notice the delight more than accomplishing their goal, you've gone too far.

@@ -30,8 +30,8 @@ You cannot do a great job without having necessary context, such as target audie

  Attempt to gather these from the current thread or codebase.

- 1. If you don't find *exact* information and have to infer from existing design and functionality, you MUST STOP and call the AskUserQuestionTool to confirm whether you got it right.
- 2. Otherwise, if you can't fully infer or your level of confidence is medium or lower, you MUST STOP and call the AskUserQuestionTool to ask clarifying questions first to complete your context.
+ 1. If you don't find *exact* information and have to infer from existing design and functionality, you MUST STOP and ask the user directly to confirm whether you got it right.
+ 2. Otherwise, if you can't fully infer or your level of confidence is medium or lower, you MUST STOP and ask the user clarifying questions first to complete your context.

  Do NOT proceed until you have answers. Guessing leads to generic design.

@@ -59,7 +59,7 @@ Analyze what makes the design feel too intense:
  - What's working? (Don't throw away good ideas)
  - What's the core message? (Preserve what matters)

- If any of these are unclear from the codebase, STOP and call the AskUserQuestionTool to clarify.
+ If any of these are unclear from the codebase, STOP and ask the user directly to clarify.

  **CRITICAL**: "Quieter" doesn't mean boring or generic. It means refined, sophisticated, and easier on the eyes. Think luxury, not laziness.

@@ -30,8 +30,8 @@ You cannot do a great job without having necessary context, such as target audie

  Attempt to gather these from the current thread or codebase.

- 1. If you don't find *exact* information and have to infer from existing design and functionality, you MUST STOP and call the AskUserQuestionTool to confirm whether you got it right.
- 2. Otherwise, if you can't fully infer or your level of confidence is medium or lower, you MUST STOP and call the AskUserQuestionTool to ask clarifying questions first to complete your context.
+ 1. If you don't find *exact* information and have to infer from existing design and functionality, you MUST STOP and ask the user directly to confirm whether you got it right.
+ 2. Otherwise, if you can't fully infer or your level of confidence is medium or lower, you MUST STOP and ask the user clarifying questions first to complete your context.

  Do NOT proceed until you have answers. Simplifying the wrong things destroys usability.

@@ -59,7 +59,7 @@ Analyze what makes the design feel complex or cluttered:
  - What can be removed, hidden, or combined?
  - What's the 20% that delivers 80% of value?

- If any of these are unclear from the codebase, STOP and call the AskUserQuestionTool to clarify.
+ If any of these are unclear from the codebase, STOP and ask the user directly to clarify.

  **CRITICAL**: Simplicity is not about removing features - it's about removing obstacles between users and their goals. Every element should justify its existence.

@@ -83,7 +83,7 @@ Classify project type: `web` | `voice` | `mobile` | `agent` | `edge-functions` |

  ### Step 4: Spawn Wave 1 Agents (parallel)

- Based on mode, spawn agents in a **single message** with multiple Task() calls.
+ Based on mode, spawn agents in a **single message** with multiple Agent() calls.

  | Mode | Agents |
  |------|--------|
@@ -93,12 +93,12 @@ Based on mode, spawn agents in a **single message** with multiple Task() calls.
  | `backend` | backend-agent only |
  | `alignment` | general-purpose with alignment prompt |

- **CRITICAL**: Inline ALL planning context into each agent prompt. `@` references don't work across Task() boundaries.
+ **CRITICAL**: Inline ALL planning context into each agent prompt. `@` references don't work across Agent() boundaries.

  #### Frontend Agent Prompt

  ```
- Task(
+ Agent(
  prompt="You are optimizing a project's frontend. Read the planning docs and codebase rules below, then analyze the actual code.

  <planning>
@@ -148,7 +148,7 @@ For EVERY finding, output in this exact format:
  #### Backend Agent Prompt

  ```
- Task(
+ Agent(
  prompt="You are optimizing a project's backend. Read the planning docs and security rules below, then analyze the actual code.

  <planning>
@@ -201,7 +201,7 @@ For EVERY finding, output:
  #### Performance Oracle Prompt

  ```
- Task(
+ Agent(
  prompt="You are analyzing cross-cutting performance issues. Read the project context, then analyze the codebase.

  <planning>
@@ -250,7 +250,7 @@ For EVERY finding, output:
  After all Wave 1 agents return, spawn the architecture strategist with their combined findings:

  ```
- Task(
+ Agent(
  prompt="You are synthesizing optimization findings from 3 specialist agents. Look for cross-cutting architectural issues.

  <wave1_findings>
@@ -1,314 +0,0 @@
1
- ---
2
- name: qualia-production-check
3
- description: "Final client-handoff production audit — spawns 5+ specialist agents to check EVERYTHING before handing a project to a client. Frontend, backend, auth, UX, Supabase, silent errors, misconfigs, SEO, performance, accessibility, legal pages, error handling — the works. Use this skill whenever the user says 'production check', 'client ready', 'is it ready', 'final check', 'handoff check', 'production audit', 'ready for client', 'qualia-production-check', 'final audit', 'pre-handoff', or wants to verify a project is truly ready to give to a client."
4
- ---
5
-
6
- # Qualia Production Check — Client-Handoff Audit
7
-
8
- This is THE final check before handing a project to a client. Not a code review — a **production readiness audit** that checks everything a real user will experience.
9
-
10
- Spawns 5 specialist agents in parallel, each checking a different dimension. Then synthesizes into a structured verdict with actionable next steps.
11
-
12
- ## Usage
13
-
14
- - `/qualia-production-check` — Full audit (all 5 dimensions)
15
-
16
- ## Process
17
-
18
- ### Step 1: Load Context
19
-
20
- ```bash
21
- cat .planning/PROJECT.md 2>/dev/null || echo "NO_PROJECT"
22
- cat .planning/REQUIREMENTS.md 2>/dev/null || echo "NO_REQUIREMENTS"
23
- cat .planning/ROADMAP.md 2>/dev/null || echo "NO_ROADMAP"
24
- ```
25
-
26
- ```bash
27
- node -e "try{const p=require('./package.json');console.log(JSON.stringify({name:p.name,deps:Object.keys(p.dependencies||{}),devDeps:Object.keys(p.devDependencies||{})}))}catch(e){console.log('{}')}" 2>/dev/null
28
- ```
29
-
30
- ```bash
31
- ls -d app/ src/ pages/ components/ lib/ supabase/ public/ 2>/dev/null
32
- ```
33
-
34
- Read `~/.claude/rules/security.md` and `~/.claude/rules/frontend.md`.
35
-
36
- Detect project type: website, AI agent, voice agent, web app, mobile.
37
-
38
- Store all content — inline into agent prompts.
39
-
40
- ### Step 2: Spawn 5 Agents (parallel, single message)
41
-
42
- All agents get PROJECT.md + REQUIREMENTS.md inlined. Every finding must include: **What** | **Where** (file:line) | **Impact on client/users** | **Fix** | **Severity** (BLOCKER / WARNING / INFO).
43
-
44
- BLOCKER = client will see this and it's bad. WARNING = should fix but won't break. INFO = nice to have.
45
-
46
- #### Agent 1: User Experience & Frontend
47
-
48
- ```
49
- Task(
50
- prompt="You are auditing the user experience of a production web app that will be handed to a client.
51
-
52
- <planning>{PROJECT.md + REQUIREMENTS.md}</planning>
53
- <rules>{frontend.md}</rules>
54
-
55
- Check EVERY page in app/ — open each page.tsx and layout.tsx:
56
-
57
- 1. **First impression** — Does the homepage look professional? Distinctive design or generic AI slop?
58
- 2. **Navigation** — Can users find everything? Are all nav links working? No dead links?
59
- 3. **Loading states** — Every async operation shows feedback (skeleton, spinner, progress)
60
- 4. **Error states** — What happens when API fails? Network disconnects? Wrong URL?
61
- 5. **Empty states** — What do lists/tables show when empty? Helpful message or just blank?
62
- 6. **Forms** — Validation on submit? Clear error messages? Success feedback? Disabled state while submitting?
63
- 7. **Mobile responsive** — Check for fixed widths, overflow, touch targets too small, text readable
64
- 8. **Typography** — Consistent fonts, readable sizes, proper hierarchy, no tiny gray text
65
- 9. **Images** — Using next/image? Alt text? Proper sizing? Not stretched or pixelated?
66
- 10. **404 page** — Does app/not-found.tsx exist? Is it styled? Does it help the user?
67
- 11. **Favicon & metadata** — Title, description, OG tags, favicon all set?
68
- 12. **Console errors** — Grep for console.log/console.error left in production code
69
- 13. **Accessibility** — Alt text, ARIA labels, keyboard navigation, color contrast
70
-
71
- For EVERY finding: What | Where (file:line) | Impact on client | Fix | Severity (BLOCKER/WARNING/INFO)",
72
- subagent_type="frontend-agent",
73
- description="UX & frontend production audit"
74
- )
75
- ```
76
-
77
- #### Agent 2: Auth, Security & Data Protection
78
-
79
- ```
80
- Task(
81
- prompt="You are auditing authentication and security for client handoff.
82
-
83
- <planning>{PROJECT.md + REQUIREMENTS.md}</planning>
84
- <rules>{security.md}</rules>
85
-
86
- Check:
87
-
88
- 1. **Auth flow completeness** — Login, signup, logout, password reset ALL work? Email verification?
89
- 2. **Protected routes** — Every dashboard/admin/settings page checks auth? What happens if unauthenticated user visits?
90
- 3. **RLS policies** — EVERY Supabase table has Row Level Security enabled WITH policies. No table left unprotected.
91
- 4. **Service role exposure** — Grep for service_role or SUPABASE_SERVICE_ROLE_KEY in ANY client-side file (app/, components/, src/). Must be ZERO.
92
- 5. **Server-side auth** — All mutations use supabase.auth.getUser() server-side. No client-side mutations with user-supplied IDs.
93
- 6. **Input validation** — All user inputs validated with Zod or equivalent. No raw req.body usage.
94
- 7. **XSS prevention** — No dangerouslySetInnerHTML. No eval(). No user content rendered unsanitized.
95
- 8. **CORS** — Properly restricted, not wildcard '*' in production.
96
- 9. **Rate limiting** — Auth endpoints (login, signup, reset) have rate limiting.
97
- 10. **Secrets in code** — No hardcoded API keys, passwords, tokens in source files.
98
- 11. **Environment variables** — All secrets in env vars. NEXT_PUBLIC_ only for client-safe values.
99
- 12. **Session management** — Tokens expire and refresh properly. Logout clears session.
100
-
101
- For EVERY finding: What | Where (file:line) | Impact | Fix | Severity (BLOCKER/WARNING/INFO)",
102
- subagent_type="backend-agent",
103
- description="Auth & security production audit"
104
- )
105
- ```
106
-
107
- #### Agent 3: Backend, Database & API
108
-
109
- ```
110
- Task(
111
- prompt="You are auditing the backend and database for production readiness.
112
-
113
- <planning>{PROJECT.md + REQUIREMENTS.md}</planning>
114
-
115
- Check:
116
-
117
- 1. **API error handling** — Every API route/server action has try/catch. Errors return proper HTTP status codes with meaningful messages. No stack traces exposed to client.
118
- 2. **Database queries** — N+1 queries (Supabase calls in loops)? Missing indexes on filtered columns? Sequential queries that could be parallel?
119
- 3. **Server actions** — All data mutations use 'use server' actions, not client-side Supabase calls? Each checks auth first?
120
- 4. **Supabase connection** — Using server.ts for mutations, client.ts for reads? Connection pooling configured?
121
- 5. **Migrations** — All migrations applied? Types generated and up to date? Schema matches what code expects?
122
- 6. **Edge functions** — If supabase/functions/ exists: error handling, CORS, timeout protection, proper responses.
123
- 7. **Caching** — revalidatePath/revalidateTag after mutations? SWR/React Query configured with sensible stale times?
124
- 8. **Pagination** — Large data sets paginated? Not loading 10,000 rows into memory?
125
- 9. **File uploads** — If any: size limits, type validation, virus scanning, proper storage?
126
- 10. **Webhooks** — If any: signature verification, idempotency, error handling, retry logic?
127
- 11. **Background jobs** — Long-running operations handled async? Not blocking API responses?
128
- 12. **Monitoring** — Error tracking configured (Sentry or similar)? Logging meaningful events?
129
-
130
- For EVERY finding: What | Where (file:line) | Impact | Fix | Severity (BLOCKER/WARNING/INFO)",
131
- subagent_type="backend-agent",
132
- description="Backend & database production audit"
133
- )
134
- ```
135
-
136
- #### Agent 4: Performance & SEO
137
-
138
- ```
139
- Task(
140
- prompt="You are auditing performance and SEO for a client-facing production site.
141
-
142
- Check:
143
-
144
- 1. **Build succeeds** — Run npm run build mentally (check for obvious build errors in imports, missing modules)
145
- 2. **Bundle size** — Large library imports without tree-shaking? Barrel exports? Missing dynamic imports for heavy components (charts, editors, maps)?
146
- 3. **Images** — All using next/image? WebP/AVIF format? Proper width/height? Lazy loading below fold?
147
- 4. **Fonts** — Using next/font? No render-blocking external font loads?
148
- 5. **Core Web Vitals** — Largest Contentful Paint risks? Cumulative Layout Shift risks? Large unoptimized images above fold?
149
- 6. **SEO metadata** — Every page has title, description. Root layout has proper metadata. Open Graph tags for social sharing?
150
- 7. **Sitemap** — public/sitemap.xml exists? Lists all public pages?
151
- 8. **Robots.txt** — public/robots.txt exists? Not blocking important pages?
152
- 9. **Structured data** — JSON-LD for business info, breadcrumbs, or relevant schema?
153
- 10. **Canonical URLs** — Proper canonical tags to avoid duplicate content?
154
- 11. **Lighthouse hints** — Server components used where possible? No unnecessary 'use client' directives?
155
- 12. **API latency** — Any obvious slow queries or waterfalls in the data fetching pattern?
156
-
157
- For EVERY finding: What | Where (file:line) | Impact | Fix | Severity (BLOCKER/WARNING/INFO)",
158
- subagent_type="performance-oracle",
159
- description="Performance & SEO production audit"
160
- )
161
- ```
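Items 6 and 10 (per-page metadata and canonical URLs) usually reduce to one small helper. This sketch mirrors the shape of the App Router metadata object; the `example.com` origin and the field subset are assumptions:

```typescript
// Shape loosely mirrors Next.js's Metadata type — trimmed for the sketch
type PageMeta = {
  title: string
  description: string
  alternates: { canonical: string }
  openGraph: { title: string; description: string; url: string }
}

export function buildMetadata(path: string, title: string, description: string): PageMeta {
  const base = 'https://example.com' // assumed production origin
  const url = new URL(path, base).toString()
  return {
    title,
    description,
    alternates: { canonical: url }, // item 10: canonical tag avoids duplicate content
    openGraph: { title, description, url }, // item 6: OG tags for social sharing
  }
}
```

Each `page.tsx` then exports `metadata` (or `generateMetadata`) built from this helper, so the audit becomes a grep for pages that don't.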
162
-
163
- #### Agent 5: Completeness & Missing Features
164
-
165
- ```
166
- Task(
167
- prompt="You are checking if a project is COMPLETE — nothing missing that a client would expect.
168
-
169
- <planning>{PROJECT.md + REQUIREMENTS.md + ROADMAP.md}</planning>
170
-
171
- Check against requirements AND common expectations:
172
-
173
- 1. **Requirements coverage** — Every requirement in REQUIREMENTS.md marked complete: does the feature ACTUALLY work in code? Grep for routes, components, API endpoints that implement each requirement.
174
- 2. **Missing pages** — Common pages that clients expect: About, Contact, Privacy Policy, Terms of Service, 404, 500 error page. Which are missing?
175
- 3. **Missing functionality** — Based on project type:
176
- - Website: contact form works? Newsletter signup? Social links?
177
- - Web app: settings page? Profile management? Change password? Delete account?
178
- - AI agent: fallback responses? Rate limiting? Usage tracking?
179
- 4. **Email** — If the app sends emails: are they configured? Working? Proper templates? Not going to spam?
180
- 5. **Analytics** — Tracking configured? Google Analytics / Plausible / PostHog?
181
- 6. **Error boundaries** — app/error.tsx exists? Styled? Provides recovery action?
182
- 7. **Loading** — app/loading.tsx or Suspense boundaries on data-fetching pages?
183
- 8. **Favicon & branding** — Custom favicon (not Next.js default)? Proper app title? OG image?
184
- 9. **Legal compliance** — Cookie consent if in EU? Privacy policy if collecting data? Terms if SaaS?
185
- 10. **Mobile** — Site works on mobile? No horizontal scroll? Touch targets adequate?
186
- 11. **Cross-browser** — Any Safari- or Firefox-specific CSS issues? Webkit prefixes needed? (Skip IE — it's end-of-life.)
187
- 12. **Deployment config** — Vercel configured? Environment variables set? Domain connected?
188
-
189
- For EVERY finding: What | Where | Impact on client | Fix | Severity (BLOCKER/WARNING/INFO)",
190
- subagent_type="general-purpose",
191
- description="Completeness & missing features audit"
192
- )
193
- ```
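Item 6's error boundary follows the App Router convention — a minimal `app/error.tsx` sketch (a framework fragment; styling and error reporting omitted):

```tsx
// app/error.tsx — must be a client component per the App Router convention
'use client'

export default function Error({
  error,
  reset,
}: {
  error: Error & { digest?: string }
  reset: () => void
}) {
  return (
    <div>
      <h2>Something went wrong</h2>
      {/* Recovery action so the client isn't stranded on a dead page */}
      <button onClick={() => reset()}>Try again</button>
    </div>
  )
}
```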
194
-
195
- ### Step 3: Collect & Score
196
-
197
- After all 5 agents return:
198
-
199
- 1. Deduplicate (same file:line from multiple agents)
200
- 2. Group by severity: BLOCKER → WARNING → INFO
201
- 3. Count totals
202
- 4. Determine verdict:
203
- - **READY** — 0 blockers, 0 warnings (or all warnings are cosmetic)
204
- - **ALMOST** — 0 blockers, some warnings
205
- - **NOT READY** — has blockers
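The collect-and-score steps can be sketched as one small function — dedupe keys on file:line per step 1, and the "all warnings are cosmetic" carve-out is left out for brevity:

```typescript
type Severity = 'BLOCKER' | 'WARNING' | 'INFO'
type Finding = { severity: Severity; file: string; line: number }
type Verdict = 'READY' | 'ALMOST' | 'NOT READY'

export function scoreFindings(findings: Finding[]): {
  verdict: Verdict
  counts: Record<Severity, number>
} {
  // Step 1: deduplicate — same file:line reported by multiple agents counts once
  const unique = new Map<string, Finding>()
  for (const f of findings) unique.set(`${f.file}:${f.line}`, f)

  // Steps 2–3: group and count by severity
  const counts: Record<Severity, number> = { BLOCKER: 0, WARNING: 0, INFO: 0 }
  for (const f of unique.values()) counts[f.severity]++

  // Step 4: verdict rules
  const verdict: Verdict =
    counts.BLOCKER > 0 ? 'NOT READY' : counts.WARNING > 0 ? 'ALMOST' : 'READY'
  return { verdict, counts }
}
```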
206
-
207
- ### Step 4: Present Report
208
-
209
- Output as direct text (NOT via Bash):
210
-
211
- ```
212
- ◆ PRODUCTION CHECK — CLIENT HANDOFF AUDIT
213
- ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
214
-
215
- Project: {name}
216
- Date: {date}
217
- Verdict: {READY / ALMOST / NOT READY}
218
-
219
- BLOCKERS: {N} WARNINGS: {N} INFO: {N}
220
-
221
- ── BLOCKERS (must fix before handoff) ────────
222
- {If none: "None — no blockers found"}
223
-
224
- 1. [{dimension}] {finding}
225
- Location: {file:line}
226
- Client impact: {what the client/user will experience}
227
- Fix: {how to fix}
228
-
229
- 2. ...
230
-
231
- ── WARNINGS (should fix) ────────────────────
232
- {findings}
233
-
234
- ── INFO (nice to have) ──────────────────────
235
- {findings}
236
-
237
- ── DIMENSION SCORES ─────────────────────────
238
-
239
- UX & Frontend {PASS/ISSUES} ({N} findings)
240
- Auth & Security {PASS/ISSUES} ({N} findings)
241
- Backend & Data {PASS/ISSUES} ({N} findings)
242
- Performance & SEO {PASS/ISSUES} ({N} findings)
243
- Completeness {PASS/ISSUES} ({N} findings)
244
-
245
- ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
246
- ```
247
-
248
- ### Step 5: Save Report
249
-
250
- Write to `.planning/PRODUCTION-CHECK.md`:
251
-
252
- ```markdown
253
- ---
254
- date: YYYY-MM-DD HH:MM
255
- verdict: ready|almost|not_ready
256
- blockers: N
257
- warnings: N
258
- info: N
259
- dimensions:
260
- ux: {pass|issues}
261
- security: {pass|issues}
262
- backend: {pass|issues}
263
- performance: {pass|issues}
264
- completeness: {pass|issues}
265
- ---
266
-
267
- # Production Check — YYYY-MM-DD
268
-
269
- {Full report content}
270
- ```
271
-
272
- Commit:
273
- ```bash
274
- git add .planning/PRODUCTION-CHECK.md && git commit -m "docs: production check ({verdict}, {blockers} blockers)"
275
- ```
276
-
277
- ### Step 6: Actionable Next Steps
278
-
279
- Based on verdict:
280
-
281
- **If NOT READY (has blockers):**
282
- ```
283
- ## What's Next?
284
-
285
- This project has {N} blockers that clients WILL notice.
286
-
287
- 1. Fix blockers now — I'll fix them one by one with /qualia-quick
288
- 2. Create a fix milestone — /qualia-new-milestone to plan systematic fixes
289
- 3. See specific blocker — tell me which number to investigate
290
- ```
291
-
292
- **If ALMOST (warnings only):**
293
- ```
294
- ## What's Next?
295
-
296
- No blockers — the project works. {N} warnings to consider.
297
-
298
- 1. Fix warnings — I'll handle the quick ones now
299
- 2. Ship as-is — warnings won't break anything for the client
300
- 3. Run /qualia-design — polish the visual design before handoff
301
- ```
302
-
303
- **If READY:**
304
- ```
305
- ## What's Next?
306
-
307
- ✓ Production ready. No blockers, no warnings worth fixing.
308
-
309
- 1. Ship it — /ship
310
- 2. Run /qualia-design — final visual polish (optional)
311
- 3. Generate handoff docs — project summary for the client
312
- ```
313
-
314
- Wait for user to select. Act on their choice.
@@ -0,0 +1,75 @@
1
+ # Review Team
2
+
3
+ > Up to five specialist reviewers analyze code in parallel; results are synthesized into a unified report.
4
+
5
+ ## Agents
6
+
7
+ - **code-simplicity-reviewer**
8
+ - subagent_type: code-simplicity-reviewer
9
+ - role: Identify unnecessary complexity, premature abstractions, YAGNI violations, over-engineering
10
+ - focus: Code structure, abstractions, function complexity, dead code
11
+
12
+ - **performance-oracle**
13
+ - subagent_type: performance-oracle
14
+ - role: Identify performance bottlenecks, N+1 queries, memory leaks, missing indexes, bundle size issues
15
+ - focus: Database queries, API latency, rendering performance, caching opportunities
16
+
17
+ - **kieran-typescript-reviewer**
18
+ - subagent_type: kieran-typescript-reviewer
19
+ - role: TypeScript quality — strict types, naming conventions, pattern adherence, type safety gaps
20
+ - focus: Type definitions, generics usage, any/unknown, null handling, naming
21
+
22
+ - **security-auditor**
23
+ - subagent_type: security-auditor
24
+ - role: RLS policies, service_role exposure, auth patterns, input validation, secrets scanning, dependency vulnerabilities
25
+ - focus: Supabase security, auth flows, env var handling, XSS/injection prevention
26
+
27
+ - **red-team-qa** (optional — spawn when reviewing auth, payments, or user input)
28
+ - subagent_type: red-team-qa
29
+ - role: Adversarial QA — actively tries to break the implementation via edge cases, error paths, boundary conditions
30
+ - focus: Permission bypasses, unexpected inputs, race conditions, error handling gaps
31
+
32
+ ## Pattern
33
+
34
+ fan-out (4 core reviewers in parallel, plus red-team-qa when in scope) → synthesize into REVIEW-REPORT.md
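The fan-out step is essentially a `Promise.all` over independent reviewers followed by a synthesis pass — a minimal sketch, with the reviewer/finding shapes as assumptions:

```typescript
type Review = { reviewer: string; findings: string[] }
type Reviewer = () => Promise<Review>

// Fan-out: launch every reviewer at once; none depends on another's output
export async function runReviewTeam(reviewers: Reviewer[]): Promise<string> {
  const reviews = await Promise.all(reviewers.map((r) => r()))
  // Synthesize: one section per reviewer, in a single unified report
  const sections = reviews.map(
    (r) => `## ${r.reviewer}\n${r.findings.map((f) => `- ${f}`).join('\n')}`,
  )
  return `# Review Report\n\n${sections.join('\n\n')}`
}
```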
35
+
36
+ ## Shared Context
37
+
38
+ - .planning/STATE.md — what was built, current phase
39
+ - Recent git diff (last N commits relevant to the review scope)
40
+
41
+ ## Coordination Rules
42
+
43
+ - Each reviewer produces findings independently — no coordination needed
44
+ - Reviewers are read-only — they analyze and report, they don't fix
45
+ - Findings should include file:line references
46
+ - Each reviewer rates findings: CRITICAL / HIGH / MEDIUM / LOW
47
+
48
+ ## Output
49
+
50
+ REVIEW-REPORT.md in current directory with sections:
51
+
52
+ ```markdown
53
+ # Review Report
54
+
55
+ ## Summary
56
+ {Overall assessment — 1-2 sentences}
57
+
58
+ ## Simplicity Review
59
+ {From code-simplicity-reviewer}
60
+
61
+ ## Performance Review
62
+ {From performance-oracle}
63
+
64
+ ## TypeScript Quality Review
65
+ {From kieran-typescript-reviewer}
66
+
67
+ ## Security Review
68
+ {From security-auditor}
69
+
70
+ ## Action Items
71
+ | # | Severity | Finding | File:Line | Reviewer |
72
+ |---|----------|---------|-----------|----------|
73
+ | 1 | Critical | ... | ... | ... |
74
+ | 2 | High | ... | ... | ... |
75
+ ```
@@ -0,0 +1,86 @@
1
+ # Ship Team
2
+
3
+ > Quality gate → Deploy → Verify. Pipeline pattern — a failed quality gate or deploy aborts the run; verify reports issues without rolling back.
4
+
5
+ ## Agents
6
+
7
+ - **quality-gate**
8
+ - subagent_type: test-agent
9
+ - role: Run tsc, eslint, build. Ensure no type errors, lint violations, or build failures.
10
+ - commands: |
11
+ npx tsc --noEmit
12
+ npx eslint . (next lint was removed in Next.js 16)
13
+ npm run build (or next build)
14
+ - abort_on_fail: true
15
+
16
+ - **deploy**
17
+ - subagent_type: general-purpose
18
+ - role: Commit staged changes, push to remote, deploy to hosting platform
19
+ - commands: |
20
+ git add -A && git commit (if uncommitted changes)
21
+ git push origin {branch}
22
+ vercel --prod (default) OR wrangler deploy (if armenius)
23
+ - abort_on_fail: true
24
+
25
+ - **verify**
26
+ - subagent_type: test-agent
27
+ - role: Run 6-check post-deploy verification against production URL
28
+ - checks: |
29
+ 1. HTTP 200 — homepage loads
30
+ 2. Auth flow — login/signup endpoint responds
31
+ 3. Console errors — no critical JS errors
32
+ 4. API latency — key endpoints < 500ms
33
+ 5. SSL — valid certificate
34
+ 6. Build artifacts — no source maps exposed
35
+ - abort_on_fail: false (report issues but don't roll back)
36
+
37
+ ## Pattern
38
+
39
+ pipeline: quality-gate → deploy → verify
40
+
41
+ Each step must succeed before the next begins. If quality-gate fails, deployment is blocked. If deploy fails, verification is skipped.
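These abort semantics can be sketched as a tiny runner — the step names and boolean `run` signature are simplifications of the real agent invocations:

```typescript
type Step = { name: string; run: () => boolean; abortOnFail: boolean }

// Pipeline: run steps in order; a failing step with abortOnFail blocks the rest.
export function runPipeline(steps: Step[]): { completed: string[]; aborted?: string } {
  const completed: string[] = []
  for (const step of steps) {
    const ok = step.run()
    // quality-gate and deploy abort on failure; verify merely reports
    if (!ok && step.abortOnFail) return { completed, aborted: step.name }
    completed.push(step.name)
  }
  return { completed }
}
```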
42
+
43
+ ## Shared Context
44
+
45
+ - ~/.claude/knowledge/qualia-context.md — project inventory, deploy commands, Supabase refs
46
+ - .planning/STATE.md — current project state
47
+ - Project's local CLAUDE.md — project-specific deploy config
48
+
49
+ ## Coordination Rules
50
+
51
+ - quality-gate runs ALL checks before passing — partial pass is a fail
52
+ - deploy detects hosting platform from project context (Vercel default, Cloudflare for armenius)
53
+ - verify uses the production URL from deploy output
54
+ - If Supabase project: deploy also runs `supabase db push` if pending migrations exist
55
+
56
+ ## Output
57
+
58
+ SHIP-REPORT.md in current directory:
59
+
60
+ ```markdown
61
+ # Ship Report
62
+
63
+ **Date:** {date}
64
+ **Branch:** {branch}
65
+ **Deploy URL:** {url}
66
+
67
+ ## Quality Gate
68
+ - tsc: ✓ / ✗ ({error count})
69
+ - lint: ✓ / ✗ ({warning count})
70
+ - build: ✓ / ✗ ({duration})
71
+
72
+ ## Deployment
73
+ - Platform: Vercel / Cloudflare
74
+ - URL: {production url}
75
+ - Commit: {sha}
76
+
77
+ ## Verification
78
+ | Check | Status | Details |
79
+ |-------|--------|---------|
80
+ | HTTP 200 | ✓/✗ | {status code} |
81
+ | Auth flow | ✓/✗ | {details} |
82
+ | Console errors | ✓/✗ | {count} |
83
+ | API latency | ✓/✗ | {ms} |
84
+ | SSL | ✓/✗ | {expiry} |
85
+ | Source maps | ✓/✗ | {exposed?} |
86
+ ```
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "qualia-framework",
3
- "version": "2.4.8",
3
+ "version": "2.4.9",
4
4
  "description": "Qualia Solutions — Claude Code Framework",
5
5
  "bin": {
6
6
  "qualia-framework": "./bin/cli.js"
@@ -1,130 +0,0 @@
1
- ---
2
- name: qualia-workflow
3
- description: Qualia Solutions project conventions - structure, patterns, deployment checklist. Use when starting or auditing Qualia client projects.
4
- tags: [qualia, workflow, nextjs, supabase, vercel]
5
- ---
6
-
7
- # Qualia Solutions Conventions
8
-
9
- Project conventions for Qualia Solutions client work. For specific tooling, use dedicated skills (`/supabase`, `/voice-agent`, `/deploy`, etc.).
10
-
11
- ## Standard Tech Stack
12
-
13
- | Layer | Technology | Notes |
14
- |-------|------------|-------|
15
- | Frontend | Next.js 16+ + React 19 + TypeScript | App Router, Server Components |
16
- | Styling | Tailwind CSS + shadcn/ui | Custom themes per client |
17
- | Backend | Supabase | Postgres, Auth, RLS, Edge Functions |
18
- | Deployment | Vercel | Preview on PR, Production on main |
19
- | Voice AI | Retell AI + ElevenLabs | Call orchestration + TTS/voice cloning |
20
- | AI Models | OpenRouter | Model-flexible (Claude, Mistral, etc.) |
21
- | Payments | Stripe / HyperPay | Region-dependent |
22
-
23
- ## Project Structure
24
-
25
- ```
26
- project/
27
- ├── app/ # Next.js App Router
28
- │ ├── (auth)/ # Auth-required routes
29
- │ ├── (marketing)/ # Public pages
30
- │ ├── api/ # API routes (minimal, prefer server actions)
31
- │ └── layout.tsx # Root layout
32
- ├── components/
33
- │ ├── ui/ # shadcn/ui components
34
- │ ├── forms/ # Form components
35
- │ └── [feature]/ # Feature-specific components
36
- ├── lib/
37
- │ ├── supabase/ # Supabase clients (server/client)
38
- │ ├── utils.ts # Utility functions
39
- │ └── constants.ts # App constants
40
- ├── types/
41
- │ ├── database.ts # Generated from Supabase
42
- │ └── index.ts # App types
43
- ├── supabase/
44
- │ ├── migrations/ # SQL migrations
45
- │ └── functions/ # Edge functions
46
- ├── hooks/ # Custom React hooks
47
- ├── actions/ # Server actions
48
- └── public/ # Static assets
49
- ```
50
-
51
- ## Key Patterns
52
-
53
- - **Server Components first** -- only add `'use client'` when interactivity is needed
54
- - **Server Actions for mutations** -- always auth check first (`supabase.auth.getUser()`)
55
- - **Prefer server actions over API routes** -- fewer files, same security
56
- - **Generate types** after every migration: `npx supabase gen types typescript --linked > types/database.ts`
57
-
58
- ## Server Action Template
59
-
60
- ```typescript
61
- 'use server'
62
-
63
- import { revalidatePath } from 'next/cache'
64
- import { createClient } from '@/lib/supabase/server'
65
-
66
- export async function createProduct(formData: FormData) {
67
- const supabase = await createClient()
68
-
69
- // Auth check first
70
- const { data: { user }, error: authError } = await supabase.auth.getUser()
71
- if (authError || !user) throw new Error('Unauthorized')
72
-
73
- const { error } = await supabase
74
- .from('products')
75
- .insert({ name: formData.get('name') })
76
-
77
- if (error) throw new Error('Failed to create product')
78
-
79
- revalidatePath('/products')
80
- }
81
- ```
82
-
83
- ## Deployment Checklist
84
-
85
- 1. **Environment Variables**
86
- - [ ] All secrets in Vercel (not in code)
87
- - [ ] Different keys for preview vs production
88
- - [ ] NEXT_PUBLIC_ prefix only for client-safe vars
89
-
90
- 2. **Database**
91
- - [ ] Migrations applied
92
- - [ ] RLS policies on all tables
93
- - [ ] Indexes on frequently queried columns
94
- - [ ] Types generated and committed
95
-
96
- 3. **Security**
97
- - [ ] No exposed API keys
98
- - [ ] CORS configured
99
- - [ ] Rate limiting on API routes
100
- - [ ] Input validation everywhere
101
-
102
- 4. **Performance**
103
- - [ ] Images optimized (next/image)
104
- - [ ] Fonts optimized (next/font)
105
- - [ ] Bundle analyzed
106
- - [ ] Core Web Vitals passing
107
-
108
- ## New Feature Workflow
109
-
110
- ```
111
- 1. Database schema first (if needed)
112
- 2. Supabase migration
113
- 3. Generate types: npx supabase gen types typescript --linked > types/database.ts
114
- 4. Server Components for data display
115
- 5. Server Actions for mutations
116
- 6. Client Components only for interactivity
117
- ```
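Step 3's generated types are then consumed instead of hand-written interfaces — the usual pattern looks like this (the `products` table and the `@/types/database` path are illustrative):

```typescript
import type { Database } from '@/types/database'

// Row/Insert types derived from the generated schema — no duplicated definitions
type Product = Database['public']['Tables']['products']['Row']
type NewProduct = Database['public']['Tables']['products']['Insert']
```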
118
-
119
- ## External Tools Available
120
-
121
- - **Supabase**: Use `/supabase` skill (CLI-first) or Supabase MCP plugin
122
- - **Context7 MCP**: Library documentation lookup
123
- - **Retell AI**: Voice agent call orchestration
124
- - **ElevenLabs MCP**: Voice synthesis and cloning
125
- - **Telnyx MCP**: Phone numbers, messaging, calls
126
- - **Playwright MCP**: Browser automation and testing
127
- - **Sentry MCP**: Error tracking and monitoring
128
- - **Firecrawl MCP**: Web scraping and search
129
- - **Google Calendar MCP**: Calendar operations
130
- - **GitHub CLI** (`gh`): PRs, issues, code review (NOT MCP — use Bash)