@aslomon/effectum 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (39)
  1. package/README.md +633 -0
  2. package/bin/install.js +652 -0
  3. package/package.json +29 -0
  4. package/system/README.md +118 -0
  5. package/system/commands/build-fix.md +89 -0
  6. package/system/commands/cancel-ralph.md +90 -0
  7. package/system/commands/checkpoint.md +63 -0
  8. package/system/commands/code-review.md +120 -0
  9. package/system/commands/e2e.md +92 -0
  10. package/system/commands/plan.md +111 -0
  11. package/system/commands/ralph-loop.md +163 -0
  12. package/system/commands/refactor-clean.md +104 -0
  13. package/system/commands/tdd.md +84 -0
  14. package/system/commands/verify.md +71 -0
  15. package/system/stacks/generic.md +96 -0
  16. package/system/stacks/nextjs-supabase.md +114 -0
  17. package/system/stacks/python-fastapi.md +140 -0
  18. package/system/stacks/swift-ios.md +136 -0
  19. package/system/templates/AUTONOMOUS-WORKFLOW.md +1368 -0
  20. package/system/templates/CLAUDE.md.tmpl +141 -0
  21. package/system/templates/guardrails.md.tmpl +39 -0
  22. package/system/templates/settings.json.tmpl +201 -0
  23. package/workshop/knowledge/01-prd-template.md +275 -0
  24. package/workshop/knowledge/02-questioning-framework.md +209 -0
  25. package/workshop/knowledge/03-decomposition-guide.md +234 -0
  26. package/workshop/knowledge/04-examples.md +435 -0
  27. package/workshop/knowledge/05-quality-checklist.md +166 -0
  28. package/workshop/knowledge/06-network-map-guide.md +413 -0
  29. package/workshop/knowledge/07-prompt-templates.md +315 -0
  30. package/workshop/knowledge/08-workflow-modes.md +198 -0
  31. package/workshop/projects/_example-project/PROJECT.md +33 -0
  32. package/workshop/projects/_example-project/notes/decisions.md +15 -0
  33. package/workshop/projects/_example-project/notes/discovery-log.md +9 -0
  34. package/workshop/templates/PROJECT.md +25 -0
  35. package/workshop/templates/network-map.mmd +13 -0
  36. package/workshop/templates/prd.md +133 -0
  37. package/workshop/templates/requirements-map.md +48 -0
  38. package/workshop/templates/shared-contracts.md +89 -0
  39. package/workshop/templates/vision.md +66 -0
@@ -0,0 +1,163 @@
+ # /ralph-loop -- Self-Referential Agentic Loop for Autonomous Implementation
+
+ You enter an autonomous iteration loop. Each iteration: assess state, implement the next step, run quality gates, repeat -- until all criteria are met or max-iterations is reached.
+
+ ## Step 1: Parse Arguments
+
+ Parse `$ARGUMENTS` for these components:
+
+ 1. **Prompt text**: The complete work order (everything that is not a flag).
+ 2. **`--max-iterations N`**: Maximum number of iterations (required; suggest 30 as a reasonable starting point).
+ 3. **`--completion-promise "PHRASE"`**: The exact phrase to output when done (required).
+
+ If either `--max-iterations` or `--completion-promise` is missing, ask the user to provide them and wait.
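The flag handling above can be sketched as follows (a minimal illustration; `parseLoopArgs` and the `LoopArgs` shape are hypothetical names, not part of the command spec):

```typescript
interface LoopArgs {
  prompt: string;                    // everything that is not a flag
  maxIterations: number | null;      // null => must ask the user and wait
  completionPromise: string | null;  // null => must ask the user and wait
}

// Pull the two flags out of the raw $ARGUMENTS text and treat the
// remainder as the work-order prompt.
function parseLoopArgs(argumentsText: string): LoopArgs {
  let maxIterations: number | null = null;
  let completionPromise: string | null = null;

  let rest = argumentsText.replace(/--max-iterations\s+(\d+)/, (_m: string, n: string) => {
    maxIterations = Number(n);
    return "";
  });
  rest = rest.replace(/--completion-promise\s+"([^"]*)"/, (_m: string, phrase: string) => {
    completionPromise = phrase;
    return "";
  });

  return { prompt: rest.trim(), maxIterations, completionPromise };
}
```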
+
+ ## Step 2: Initialize State
+
+ Create the state file at `.claude/ralph-loop.local.md`:
+
+ ```yaml
+ ---
+ active: true
+ iteration: 0
+ max_iterations: [N]
+ completion_promise: "[PHRASE]"
+ started_at: [ISO 8601 timestamp]
+ errors_consecutive: 0
+ last_error: ""
+ ---
+
+ ## Original Prompt
+
+ [FULL PROMPT TEXT FROM $ARGUMENTS]
+
+ ## Progress Log
+
+ [Empty -- will be updated each iteration]
+ ```
+
+ Read `CLAUDE.md` for the project's build, test, lint, and type-check commands.
+
+ ## Step 3: Execute Iteration Loop
+
+ For each iteration (1 through max_iterations):
+
+ ### 3a. Read State
+
+ 1. Read `.claude/ralph-loop.local.md` for current iteration count, progress log, and error state.
+ 2. Check current project state: `git diff --stat`, existing files, latest test results.
+ 3. If a PRD is referenced in the prompt, re-read the acceptance criteria.
+
+ ### 3b. Determine Next Step
+
+ Based on the prompt, progress log, and current project state, determine the single next logical step to implement. Follow this general sequence:
+
+ 1. Data layer: migrations, schemas, types, validation.
+ 2. Backend: services, API routes, business logic.
+ 3. Frontend: components, pages, hooks.
+ 4. Tests: unit tests, integration tests (follow TDD where practical).
+ 5. E2E tests: critical user journeys.
+ 6. Polish: code review, cleanup, final verification.
+
+ If an `<iteration_plan>` was provided in the prompt, follow it as a roadmap.
+
+ ### 3c. Implement
+
+ Execute the next step. Write code, tests, configurations -- whatever the step requires. Follow project conventions from CLAUDE.md.
+
+ ### 3d. Run Quality Gates
+
+ After every significant change, run the project's quality gates:
+
+ 1. Build command.
+ 2. Type-check command.
+ 3. Test command.
+ 4. Lint command.
+
+ Record the results.
+
+ ### 3e. Update State
+
+ Update `.claude/ralph-loop.local.md`:
+
+ - Increment `iteration` counter.
+ - Add a progress log entry: what was done, what passed/failed.
+ - Update `errors_consecutive` (reset to 0 on success, increment on failure).
+ - Update `last_error` with the most recent error message (if any).
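The 3e bookkeeping reduces to a small pure update on the frontmatter counters. A sketch (function and field names are illustrative, mirroring the YAML keys above):

```typescript
interface LoopState {
  iteration: number;
  errorsConsecutive: number; // mirrors errors_consecutive
  lastError: string;         // mirrors last_error
}

// One 3e update: bump the iteration; reset the consecutive-error counter
// on success, increment it and record the message on failure.
function updateState(
  state: LoopState,
  result: { ok: boolean; error?: string },
): LoopState {
  return {
    iteration: state.iteration + 1,
    errorsConsecutive: result.ok ? 0 : state.errorsConsecutive + 1,
    lastError: result.ok ? "" : result.error ?? state.lastError,
  };
}
```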
+
+ ### 3f. Check Completion
+
+ Evaluate whether ALL of these conditions are met:
+
+ 1. Every acceptance criterion from the PRD/prompt is implemented and verified.
+ 2. All quality gates pass (build, types, tests, lint).
+ 3. The completion promise statement is 100% TRUE.
+
+ **If all conditions are met:**
+ Output the completion promise: `<promise>[COMPLETION PROMISE TEXT]</promise>`
+ Set `active: false` in the state file.
+ Write a final status report to `.claude/ralph-status.md`.
+ **STOP.**
+
+ **If conditions are NOT yet met:**
+ Continue to the next iteration.
+
+ ## Step 4: Error Recovery
+
+ Apply these escalation rules:
+
+ ### Build/Test Error
+
+ - Next iteration sees the error output and fixes it. This is normal -- continue.
+
+ ### Same Error 3 Consecutive Times
+
+ - The current approach is not working. Try a fundamentally different approach:
+   - Different algorithm or data structure.
+   - Different library or API.
+   - Simpler implementation that still satisfies the criteria.
+ - Document what was tried and why it failed in the progress log.
+
+ ### Same Error 5 Consecutive Times
+
+ - This is a blocker. Document it in `.claude/ralph-blockers.md`:
+   - What the error is.
+   - What approaches were tried.
+   - Why none worked.
+   - Suggested investigation paths.
+ - Move to the next task/acceptance criterion. Do not keep retrying.
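The two thresholds above map directly onto the `errors_consecutive` counter from the state file. A sketch of the decision (names illustrative):

```typescript
type Escalation = "continue" | "change-approach" | "document-blocker-and-move-on";

// Escalation policy: normal errors continue, 3 repeats force a different
// approach, 5 repeats become a documented blocker to skip past.
function escalate(errorsConsecutive: number): Escalation {
  if (errorsConsecutive >= 5) return "document-blocker-and-move-on";
  if (errorsConsecutive >= 3) return "change-approach";
  return "continue";
}
```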
+
+ ### 80% of Iterations Consumed
+
+ - Write a comprehensive status report to `.claude/ralph-status.md`:
+   - What is done (with test results).
+   - What remains.
+   - Active blockers.
+   - Suggested next steps.
+   - Estimated remaining work.
+ - Continue implementing, but prioritize the most impactful remaining work.
+
+ ### Flaky Test
+
+ - Investigate the root cause: stateful dependency, timing issue, test isolation problem.
+ - Do NOT retry blindly. Fix the underlying cause.
+ - If the root cause cannot be fixed in one iteration: document it as a known issue and skip to the next task.
+
+ ### Max Iterations Reached
+
+ - Set `active: false` in the state file.
+ - Write a final status report to `.claude/ralph-status.md`.
+ - Do NOT output the completion promise -- it would be dishonest.
+ - Report what was accomplished and what remains.
+
+ ## CRITICAL RULES
+
+ 1. **Honesty above all**: The completion promise may ONLY be output when the statement is 100% TRUE. Do NOT output false promises to escape the loop. If the build is broken, tests fail, or criteria are unmet, the promise MUST NOT be output.
+ 2. **One step per iteration**: Each iteration should accomplish one meaningful piece of work. Do not try to implement the entire feature in a single iteration.
+ 3. **Always run quality gates**: After every significant change, verify the build and tests. Do not accumulate changes without verification.
+ 4. **Self-awareness**: Read your own progress log to avoid repeating failed approaches. Learn from previous iterations.
+
+ ## Communication
+
+ Follow the language settings defined in CLAUDE.md for user-facing communication.
+ All code, state files, progress logs, and technical content in English.
@@ -0,0 +1,104 @@
+ # /refactor-clean -- Remove Dead Code and Improve Quality
+
+ You clean up the codebase by removing dead code and improving quality, without changing any observable behavior. Every change is validated by the test suite.
+
+ ## Step 1: Establish a Passing Baseline
+
+ 1. Read `CLAUDE.md` for the project's test command, build command, and lint command.
+ 2. Run the full test suite.
+ 3. **If any tests fail: STOP immediately.** Do not refactor a codebase with failing tests. Report the failures and suggest running `/build-fix` or `/tdd` first.
+ 4. Record the test count and pass rate as the baseline.
+
+ ## Step 2: Analyze the Codebase
+
+ Scan for refactoring opportunities across these categories:
+
+ ### Dead Code
+
+ - **Unused imports**: Imports that are not referenced anywhere in the file.
+ - **Unused variables and functions**: Declared but never called or read.
+ - **Unused files**: Files that are not imported by any other file.
+ - **Commented-out code blocks**: Old code left in comments (not explanatory comments).
+ - **Unreachable code**: Code after return/throw statements, never-true conditions.
+
+ ### Oversized Files
+
+ - Files exceeding 300 lines. Identify natural split points (separate concerns, distinct feature areas).
+
+ ### Duplicated Logic
+
+ - Functions or code blocks that appear in multiple places with minor variations. Identify candidates for extraction into shared utilities.
+
+ ### Overly Complex Functions
+
+ - Functions exceeding 40 lines. Identify extraction points (helper functions, early returns, guard clauses).
+
+ ### Inconsistent Naming
+
+ - Variables, functions, or files that don't follow the project's naming conventions (as defined in CLAUDE.md).
+
+ ## Step 3: Create a Refactoring Plan
+
+ Present the findings as a prioritized list:
+
+ ```
+ | # | Category | Location | Description | Risk |
+ | - | -------------- | --------------------------- | ------------------------------------ | ---- |
+ | 1 | Dead code | src/lib/utils/old-helper.ts | File not imported anywhere | Low |
+ | 2 | Oversized file | src/lib/auth/service.ts | 412 lines, split auth + session | Med |
+ | 3 | Duplication | src/components/form/*.tsx | Validation logic duplicated 3 times | Low |
+ | 4 | Complex fn | src/lib/billing/calc.ts | calculateTotal: 67 lines | Med |
+ ```
+
+ **Wait for user approval before proceeding.** The user may exclude specific items or reprioritize.
+
+ ## Step 4: Refactor One Thing at a Time
+
+ For each approved refactoring item:
+
+ 1. **Make the change**: Apply the refactoring (delete dead code, extract function, rename, split file).
+ 2. **Run the full test suite**: Verify all tests still pass.
+ 3. **If tests pass**: Move to the next item.
+ 4. **If tests fail**: Revert the change immediately. Investigate why the "dead" code or refactoring broke something. Report the finding -- it may indicate a test gap or hidden dependency.
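The apply/test/revert cycle above can be sketched as follows; the callbacks stand in for the real test suite and a `git checkout`-style revert, so this illustrates the control flow rather than a runnable tool:

```typescript
interface RefactorItem {
  description: string;
  apply: () => void; // performs the edit (delete, extract, rename, split)
}

// Apply one item at a time; keep it only if the full suite still passes,
// otherwise revert immediately and record it for investigation.
function refactorOneAtATime(
  items: RefactorItem[],
  runTests: () => boolean,
  revert: () => void,
): { applied: string[]; reverted: string[] } {
  const applied: string[] = [];
  const reverted: string[] = [];
  for (const item of items) {
    item.apply();
    if (runTests()) {
      applied.push(item.description);
    } else {
      revert(); // undo immediately; investigate before retrying
      reverted.push(item.description);
    }
  }
  return { applied, reverted };
}
```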
+
+ ### Refactoring Patterns
+
+ - **Remove dead code**: Delete the unused import, variable, function, or file. If a file is deleted, check for any remaining references.
+ - **Split oversized file**: Extract related functions/classes into a new file. Update all import paths. Ensure the public API surface remains identical.
+ - **Extract duplicated logic**: Create a shared utility function. Replace all instances with calls to the shared function. Verify behavior is identical.
+ - **Simplify complex functions**: Extract helper functions, use early returns to reduce nesting, separate concerns into distinct functions.
+ - **Fix naming**: Rename using the IDE/tool's rename capability or update all references manually. Verify no references are missed.
+
+ ## Step 5: Final Verification
+
+ After all refactoring is complete:
+
+ 1. Run the full test suite -- verify all tests pass (same count as baseline, no new failures).
+ 2. Run the build command -- verify it passes.
+ 3. Run the lint command -- verify it passes.
+ 4. Compare the codebase metrics:
+    - Total lines of code (before vs. after).
+    - Number of files (before vs. after).
+    - Largest file size (before vs. after).
+
+ ## Step 6: Report
+
+ Present a summary:
+
+ ```
+ | Metric | Before | After |
+ | --------------------- | ------ | ------ |
+ | Total source files | 87 | 84 |
+ | Total lines of code | 12,450 | 11,200 |
+ | Largest file (lines) | 412 | 245 |
+ | Tests passing | 142 | 142 |
+ | Build status | PASS | PASS |
+ | Lint status | PASS | PASS |
+ ```
+
+ List each refactoring that was applied and its impact. Suggest running `/verify` for a full quality gate check.
+
+ ## Communication
+
+ Follow the language settings defined in CLAUDE.md for user-facing communication.
+ All code, file paths, and technical content in English.
@@ -0,0 +1,84 @@
+ # /tdd -- Test-Driven Development: RED -> GREEN -> REFACTOR
+
+ You implement features using strict Test-Driven Development. Every piece of functionality starts with a failing test.
+
+ ## Step 1: Establish Context
+
+ 1. Read `CLAUDE.md` for the project's test framework, test conventions, and file organization.
+ 2. Read `$ARGUMENTS` for the feature context: PRD reference, specific functionality, or conversation context.
+ 3. If a PRD is referenced, read it to understand the acceptance criteria.
+ 4. Read existing test files to understand patterns: test structure, naming, helpers, fixtures, mocking approach.
+
+ ## Step 2: Identify the Next Unit of Work
+
+ Break the feature into small, independently testable pieces of functionality. Pick the next piece that:
+
+ - Has no unmet dependencies on other unimplemented pieces.
+ - Represents a single, clearly defined behavior.
+ - Can be expressed as one or more test assertions.
+
+ If all pieces are implemented, skip to Step 7.
+
+ ## Step 3: RED -- Write a Failing Test
+
+ 1. Create or open the test file following project conventions (colocated with source or in a tests directory).
+ 2. Write a test that describes the expected behavior of the next piece of functionality.
+    - Use descriptive test names that explain WHAT is being tested and WHAT the expected outcome is.
+    - Test one behavior per test case.
+    - Use the project's test framework and assertion style as specified in CLAUDE.md.
+ 3. Run the test using the project's test command (as specified in CLAUDE.md).
+ 4. **Verify it fails for the RIGHT reason:**
+    - The test should fail because the functionality does not exist yet (e.g., missing function, missing module, wrong return value).
+    - If it fails for the WRONG reason (import error, syntax error, misconfigured test), fix the test setup first.
+    - If it passes unexpectedly, the behavior already exists -- move to the next piece.
+
+ ## Step 4: GREEN -- Write Minimum Code to Pass
+
+ 1. Write the simplest implementation that makes the failing test pass.
+    - Follow project conventions, patterns, and architectural rules from CLAUDE.md.
+    - Use existing utilities, components, and services where applicable.
+    - Do NOT add functionality beyond what the test requires.
+ 2. Run the test using the project's test command.
+ 3. **Verify it passes.** If it still fails:
+    - Read the error output completely before attempting a fix.
+    - Apply a targeted fix (one change at a time).
+    - Re-run the test.
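As a worked RED->GREEN example, assume the next unit of work is a hypothetical `slugify` helper. The test is written first and fails because `slugify` does not exist; the implementation is then the minimum that makes it pass (in a Vitest project the test would live in `slugify.test.ts` and use `expect(...).toBe(...)`):

```typescript
// GREEN: simplest implementation that satisfies the test -- nothing more.
function slugify(title: string): string {
  return title
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumerics
    .replace(/^-|-$/g, "");      // strip leading/trailing hyphens
}

// RED (written first): the name states WHAT is tested and the expected outcome.
function testSlugifyProducesUrlSafeLowercase(): void {
  const actual = slugify("  Hello, World!  ");
  if (actual !== "hello-world") {
    throw new Error(`expected "hello-world", got "${actual}"`);
  }
}

testSlugifyProducesUrlSafeLowercase();
```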
+
+ ## Step 5: REFACTOR -- Improve While Green
+
+ With the test passing, improve the code quality:
+
+ 1. Extract common patterns into shared utilities or helpers.
+ 2. Improve naming for clarity.
+ 3. Reduce duplication (DRY) -- within the current feature and against existing code.
+ 4. Ensure functions stay under 40 lines and files under 300 lines.
+ 5. Ensure error handling follows project conventions (e.g., Result pattern).
+ 6. Run the full test suite (not just the current test) to verify nothing is broken.
+ 7. If any test breaks during refactoring: revert the refactoring change and investigate.
+
+ ## Step 6: Repeat
+
+ Go back to Step 2 and pick the next piece of functionality. Continue until all acceptance criteria from the PRD or feature description are covered by passing tests.
+
+ ## Step 7: Final Verification
+
+ When the feature is fully implemented:
+
+ 1. Run the complete test suite to confirm all tests pass.
+ 2. Run the project's build command to confirm compilation succeeds.
+ 3. Run the project's type-check command (if separate from build) to confirm type safety.
+ 4. Run the project's linter to confirm code style compliance.
+ 5. Report results: what was implemented, how many tests were written, and the pass/fail status of each quality gate.
+ 6. Suggest running `/verify` for the full quality gate check or `/e2e` for end-to-end tests.
+
+ ## Error Recovery
+
+ - **Import/module errors**: Check if the module path, export name, or package installation is correct.
+ - **Type errors**: Use type guards or validation schemas instead of type assertions.
+ - **Flaky tests**: Investigate the root cause (stateful dependency, timing, test isolation) -- do not retry blindly.
+ - **Same error 3 times**: Try a fundamentally different approach. Document what was tried and why it failed.
+
+ ## Communication
+
+ Follow the language settings defined in CLAUDE.md for user-facing communication.
+ All code, test names, and technical content in English.
@@ -0,0 +1,71 @@
+ # /verify -- Run All Quality Gates and Report Results
+
+ You run every quality gate for the project and report a clear pass/fail summary. You do NOT fix failures automatically.
+
+ ## Step 1: Determine Verification Scope
+
+ Parse `$ARGUMENTS` to determine the scope:
+
+ - **No arguments** (`/verify`): Standard verification -- build + types + lint + tests.
+ - **`quick`** (`/verify quick`): Minimal verification -- build + types only.
+ - **`full`** (`/verify full`): Complete verification -- all gates including E2E, debug checks, and file size analysis.
+
+ ## Step 2: Read Project Configuration
+
+ Read `CLAUDE.md` to identify the project's specific commands for each quality gate:
+
+ - Build command (e.g., compile, transpile, bundle).
+ - Type-check command (if separate from build).
+ - Lint command.
+ - Test command (unit + integration).
+ - E2E test command (for `full` scope only).
+
+ If CLAUDE.md does not specify a command for a gate, attempt to detect it from project configuration files (package.json, pyproject.toml, Makefile, Cargo.toml, etc.). If a gate cannot be detected, mark it as SKIPPED with a note.
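For a Node project, that fallback detection might look like this sketch (the script names are common conventions, not guarantees; a real implementation would read and parse `package.json` from disk):

```typescript
// Map each gate to the first matching package.json script name, or the
// SKIPPED sentinel when no candidate script exists.
function detectGates(
  packageJson: { scripts?: Record<string, string> },
): Record<string, string> {
  const scripts = packageJson.scripts ?? {};
  const pick = (...names: string[]): string =>
    names.find((n) => n in scripts) ?? "SKIPPED";
  return {
    build: pick("build"),
    typecheck: pick("typecheck", "type-check", "tsc"),
    lint: pick("lint"),
    test: pick("test"),
  };
}
```

Each detected script name would then be invoked through the project's package manager (e.g. `pnpm run <name>`).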
+
+ ## Step 3: Run Quality Gates
+
+ Run each gate sequentially. Collect the output and determine pass/fail for each.
+
+ ### Standard Gates (always run)
+
+ 1. **Build**: Run the project's build command. PASS = zero errors, zero exit code.
+ 2. **Type Check**: Run the project's type-check command (if separate from build). PASS = zero type errors.
+ 3. **Lint**: Run the project's lint command. PASS = zero errors (warnings are acceptable unless the project treats them as errors).
+ 4. **Tests**: Run the project's test suite (unit + integration). PASS = all tests pass, zero exit code.
+
+ ### Full Gates (only with the `full` argument)
+
+ 5. **E2E Tests**: Run the project's E2E test command. PASS = all tests pass.
+ 6. **Debug Statements**: Search production source code for debug statements:
+    - JavaScript/TypeScript: `console.log`, `console.debug`, `console.warn` (not in test files), `debugger`
+    - Python: `print(`, `breakpoint()`, `pdb.set_trace()`
+    - Other languages: adapt to common debug patterns.
+    - PASS = zero debug statements found in production code.
+ 7. **File Size**: Check all source files for length. Flag any file exceeding 300 lines. PASS = no oversized files.
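Gate 6 amounts to a pattern scan over production sources. A minimal sketch for the JavaScript/TypeScript patterns (a real scan would walk `src/` and exclude test files):

```typescript
// Debug-statement patterns from the list above (JS/TS only).
const DEBUG_PATTERNS: RegExp[] = [
  /\bconsole\.(log|debug|warn)\s*\(/,
  /\bdebugger\b/,
];

// Return the 1-based line numbers in `source` that contain a debug statement.
function findDebugStatements(source: string): number[] {
  const hits: number[] = [];
  source.split("\n").forEach((line, i) => {
    if (DEBUG_PATTERNS.some((p) => p.test(line))) hits.push(i + 1);
  });
  return hits;
}
```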
+
+ ## Step 4: Present Results
+
+ Present results as a clear table:
+
+ ```
+ | Gate | Status | Details |
+ | --------------- | ------- | -------------------------------- |
+ | Build | PASS | Completed in 12s |
+ | Type Check | PASS | 0 errors |
+ | Lint | FAIL | 3 errors in src/lib/auth.ts |
+ | Tests | PASS | 47/47 passed |
+ | E2E Tests | SKIPPED | Not requested (use /verify full) |
+ | Debug Statements| PASS | 0 found |
+ | File Size | WARNING | 2 files exceed 300 lines |
+ ```
+
+ ## Step 5: Summary
+
+ - **All gates PASS**: Report success. The project is in a clean state.
+ - **Any gate FAIL**: Report which gates failed with the specific error details (file, line, error message). List the failures in priority order (build errors first, then types, then lint, then tests).
+ - **Do NOT attempt to fix failures automatically.** Present the findings and ask the user what to do. Suggest `/build-fix` for build/type errors, or specific actions for other failures.
+
+ ## Communication
+
+ Follow the language settings defined in CLAUDE.md for user-facing communication.
+ All technical output (error messages, file paths, gate names) in English.
@@ -0,0 +1,96 @@
+ # Stack Preset: Generic
+
+ > Stack-agnostic baseline. Use this when no specific stack preset matches, or as a starting point for custom stacks.
+
+ ## TECH_STACK
+
+ ```
+ - Language: [SPECIFY — e.g., TypeScript, Python, Go, Rust, Java]
+ - Framework: [SPECIFY — e.g., Express, Django, Gin, Actix, Spring Boot]
+ - Database: [SPECIFY — e.g., PostgreSQL, MySQL, SQLite, MongoDB]
+ - Testing: [SPECIFY — e.g., Jest, pytest, go test, cargo test]
+ - Linting: [SPECIFY — e.g., ESLint, ruff, golangci-lint, clippy]
+ - Package Manager: [SPECIFY — e.g., npm, pip, go mod, cargo]
+ - Deployment: [SPECIFY — e.g., Docker, Kubernetes, serverless]
+ ```
+
+ ## ARCHITECTURE_PRINCIPLES
+
+ ```
+ - Separation of concerns: keep data access, business logic, and presentation in distinct layers.
+ - Dependency injection: pass dependencies explicitly. Avoid global state and singletons.
+ - Type safety: use the strongest type system your language offers. Avoid escape hatches (any, Object, interface{}).
+ - Validation at boundaries: validate all external input (API requests, env vars, file I/O) at the entry point.
+ - Error handling: use explicit error types or Result patterns. Never swallow errors silently.
+ - Configuration from environment: no hardcoded secrets or environment-specific values in source code.
+ - Immutability by default: prefer const/final/readonly. Mutate only when necessary.
+ - Tests are first-class: colocate tests with source code. Test behavior, not implementation.
+ ```
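Two of these principles -- explicit error handling and validation at boundaries -- combine naturally. A sketch in TypeScript (`parsePort` is a hypothetical example, not part of the preset):

```typescript
// Errors as values: callers must inspect `ok` before using the result,
// so a failure can never be silently swallowed.
type Result<T> = { ok: true; value: T } | { ok: false; error: string };

// Validate an external input (an env var) at the boundary, returning a
// Result instead of throwing or silently defaulting.
function parsePort(raw: string | undefined): Result<number> {
  if (raw === undefined) return { ok: false, error: "PORT is not set" };
  const n = Number(raw);
  if (!Number.isInteger(n) || n < 1 || n > 65535) {
    return { ok: false, error: `invalid port: ${raw}` };
  }
  return { ok: true, value: n };
}
```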
+
+ ## PROJECT_STRUCTURE
+
+ ````
+ ```
+ src/                 # Application source code
+   [domain]/          # Domain-specific modules
+     models/          # Data models / entities
+     services/        # Business logic
+     handlers/        # Request handlers / controllers
+     repositories/    # Data access layer
+   shared/            # Cross-cutting concerns
+     config/          # Configuration loading and validation
+     errors/          # Custom error types
+     middleware/      # Middleware / interceptors
+     utils/           # Pure utility functions
+ tests/               # Test files (mirrors src/ structure)
+ scripts/             # Build, deploy, and maintenance scripts
+ docs/                # Documentation (if needed)
+ ```
+ ````
+
+ ## QUALITY_GATES
+
+ ```
+ - Build: [BUILD_COMMAND] — 0 errors
+ - Types: [TYPE_CHECK_COMMAND] — 0 errors (if applicable)
+ - Tests: [TEST_COMMAND] — all pass, target 80%+ coverage
+ - Lint: [LINT_COMMAND] — 0 errors
+ - Format: [FORMAT_CHECK_COMMAND] — 0 differences
+ - No Debug Logs: 0 debug print/log statements in production code
+ - File Size: No file exceeds 300 lines
+ ```
+
+ ## FORMATTER
+
+ ```
+ [SPECIFY — e.g., prettier, ruff format, gofmt, rustfmt, google-java-format]
+ ```
+
+ ## FORMATTER_GLOB
+
+ ```
+ [SPECIFY — e.g., ts|tsx|js|jsx, py, go, rs, java]
+ ```
+
+ ## PACKAGE_MANAGER
+
+ ```
+ [SPECIFY — e.g., pnpm, uv, go mod, cargo, maven]
+ ```
+
+ ## STACK_SPECIFIC_GUARDRAILS
+
+ ```
+ - [Add project-specific guardrails here]
+ - [e.g., "Use X package manager, not Y"]
+ - [e.g., "Always validate input with Z library"]
+ - [e.g., "Follow existing patterns in src/domain/"]
+ ```
+
+ ## TOOL_SPECIFIC_GUARDRAILS
+
+ ```
+ - **Formatter runs automatically**: The PostToolUse hook auto-formats files. Don't run the formatter manually.
+ - **CHANGELOG is auto-updated**: The Stop hook handles CHANGELOG.md. Don't update it manually unless explicitly asked.
+ - **Lock files are protected**: Dependency lock files cannot be written to directly. Use package manager commands.
+ ```
@@ -0,0 +1,114 @@
+ # Stack Preset: Next.js + Supabase
+
+ > Full-stack web applications with Next.js, TypeScript, Tailwind CSS, and Supabase.
+
+ ## TECH_STACK
+
+ ```
+ - Next.js >= 16, App Router ONLY (never Pages Router, never getServerSideProps/getStaticProps)
+ - TypeScript strict mode, no `any`, no `as` casts (use type guards or Zod)
+ - Tailwind CSS v4 + Shadcn UI components
+ - Framer Motion for animations
+ - Supabase: DB, Auth, Storage, Edge Functions, Realtime
+ - Zod for ALL external data validation (API inputs, env vars, form data)
+ - Vitest + Testing Library for unit/integration tests
+ - Playwright for E2E tests
+ - pnpm (never npm/yarn unless project explicitly uses them)
+ - Vercel deployment, Docker Compose for local dev
+ ```
+
+ ## ARCHITECTURE_PRINCIPLES
+
+ ```
+ - AGENT-NATIVE: every feature MUST expose clean REST/RPC APIs. Backend is modular, extensible, automatable.
+ - MULTI-TENANT: include tenant_id/org_id from day one. Never assume single-tenant.
+ - Supabase RLS policies on EVERY table, no exceptions. Run security advisors after DDL changes.
+ - DB changes ONLY through migrations (apply_migration), never raw DDL.
+ - Generate TypeScript types from Supabase schema. Never hand-write DB types.
+ - End-to-end type safety: DB schema -> generated types -> Zod schemas -> API -> frontend.
+ - Components -> Features -> Services separation. No business logic in components.
+ - Server Components by default. Client Components only when needed (interactivity, hooks, browser APIs).
+ - Colocate: keep tests, types, and utils next to the code they serve.
+ - Zod validation on ALL external boundaries (API inputs, env vars, form data).
+ - Result pattern { data, error } for all operations that can fail. Never throw for expected errors.
+ ```
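The `{ data, error }` Result pattern from the last principle looks like this in practice. A sketch with hypothetical names (`PostInput`, `validatePostInput`); in the real stack the validation would be a Zod `safeParse` at the boundary:

```typescript
// Expected failures are returned, never thrown: callers branch on `error`.
type ServiceResult<T> = { data: T; error: null } | { data: null; error: string };

interface PostInput {
  title: string;
  tenantId: string; // multi-tenant from day one
}

// Boundary validation returning the { data, error } shape.
function validatePostInput(input: Partial<PostInput>): ServiceResult<PostInput> {
  if (!input.title || input.title.length > 200) {
    return { data: null, error: "title is required and must be <= 200 chars" };
  }
  if (!input.tenantId) {
    return { data: null, error: "tenantId is required" };
  }
  return { data: { title: input.title, tenantId: input.tenantId }, error: null };
}
```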
+
+ ## PROJECT_STRUCTURE
+
+ ````
+ ```
+ src/
+   app/               # Next.js App Router (routes, layouts, pages)
+     (auth)/          # Route groups
+     api/             # Route Handlers
+   components/
+     ui/              # Shadcn/base components
+     [feature]/       # Feature-specific components
+   lib/
+     supabase/        # Client, server client, middleware, types
+     [domain]/        # Domain services (e.g., lib/billing/, lib/agents/)
+   hooks/             # Custom React hooks
+   types/             # Shared TypeScript types
+   utils/             # Pure utility functions
+ supabase/
+   migrations/        # SQL migrations (timestamped)
+   functions/         # Edge Functions (Deno)
+ tests/
+   e2e/               # Playwright tests
+ ```
+ ````
+
+ ## QUALITY_GATES
+
+ ```
+ - Build: `pnpm build` — 0 errors
+ - Types: `tsc --noEmit` — 0 errors
+ - Tests: `pnpm vitest run` — all pass, 80%+ coverage
+ - Lint: `pnpm lint` — 0 errors
+ - E2E: `npx playwright test` — all pass (if applicable)
+ - Code Review: `/code-review` — no security issues
+ - RLS Check: Supabase security advisor — all tables have RLS policies
+ - No Debug Logs: 0 console.log in production code (`grep -r "console.log" src/`)
+ - Type Safety: No `any`, no `as` casts in source code
+ - File Size: No file exceeds 300 lines
+ ```
+
+ ## FORMATTER
+
+ ```
+ npx prettier --write
+ ```
+
+ ## FORMATTER_GLOB
+
+ ```
+ ts|tsx|js|jsx|json|css|md
+ ```
+
+ ## PACKAGE_MANAGER
+
+ ```
+ pnpm
+ ```
+
+ ## STACK_SPECIFIC_GUARDRAILS
+
+ ```
+ - **pnpm, not npm**: This project ecosystem uses pnpm exclusively. npm commands will create conflicting lock files.
+ - **Check DESIGN.md first**: Before any UI/design work, read DESIGN.md. Making design decisions without it causes inconsistencies.
+ - **createServerClient in Server Components**: Always use `createServerClient` in Server Components and Route Handlers.
+ - **createBrowserClient in Client Components**: Always use `createBrowserClient` in Client Components.
+ - **Protect routes with middleware**: Use Supabase Auth middleware for all authenticated routes.
+ - **Edge Functions validate with Zod**: All Edge Function inputs must be validated with Zod schemas.
+ - **Realtime over polling**: Use Supabase Realtime subscriptions for live data, never polling.
+ - **Migrations only**: DB changes ONLY through `apply_migration`. Never run raw DDL.
+ - **Generated types only**: TypeScript types for DB schema MUST be generated via `generate_typescript_types`. Never hand-write DB types.
+ ```
+
+ ## TOOL_SPECIFIC_GUARDRAILS
+
+ ```
+ - **Prettier runs automatically**: The PostToolUse hook auto-formats ts/tsx/js/jsx/json/css/md files. Don't run prettier manually — it wastes a tool call.
+ - **CHANGELOG is auto-updated**: The Stop hook handles CHANGELOG.md. Don't update it manually unless explicitly asked.
+ - **Lock files are protected**: pnpm-lock.yaml and package-lock.json cannot be written to directly. Use pnpm install/add commands.
+ ```