flight-rules 0.5.9
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +270 -0
- package/dist/commands/adapter.d.ts +35 -0
- package/dist/commands/adapter.js +308 -0
- package/dist/commands/init.d.ts +1 -0
- package/dist/commands/init.js +197 -0
- package/dist/commands/upgrade.d.ts +1 -0
- package/dist/commands/upgrade.js +255 -0
- package/dist/index.d.ts +2 -0
- package/dist/index.js +83 -0
- package/dist/utils/files.d.ts +75 -0
- package/dist/utils/files.js +245 -0
- package/dist/utils/interactive.d.ts +5 -0
- package/dist/utils/interactive.js +7 -0
- package/package.json +52 -0
- package/payload/.editorconfig +15 -0
- package/payload/AGENTS.md +247 -0
- package/payload/commands/dev-session.end.md +79 -0
- package/payload/commands/dev-session.start.md +54 -0
- package/payload/commands/impl.create.md +178 -0
- package/payload/commands/impl.outline.md +120 -0
- package/payload/commands/prd.clarify.md +139 -0
- package/payload/commands/prd.create.md +154 -0
- package/payload/commands/test.add.md +73 -0
- package/payload/commands/test.assess-current.md +75 -0
- package/payload/doc-templates/critical-learnings.md +21 -0
- package/payload/doc-templates/implementation/overview.md +27 -0
- package/payload/doc-templates/prd.md +49 -0
- package/payload/doc-templates/progress.md +19 -0
- package/payload/doc-templates/session-log.md +62 -0
- package/payload/doc-templates/tech-stack.md +101 -0
- package/payload/prompts/.gitkeep +0 -0
- package/payload/prompts/implementation/README.md +26 -0
- package/payload/prompts/implementation/plan-review.md +80 -0
- package/payload/prompts/prd/README.md +46 -0
- package/payload/prompts/prd/creation-conversational.md +46 -0

package/payload/commands/test.add.md
@@ -0,0 +1,73 @@

# Add Test

When the user invokes this command, help them create a new test file following the project's testing conventions.

## 1. Read Testing Context

First, read `docs/tech-stack.md` to understand:
- What testing framework is in use
- Where test files should be located
- Naming conventions for test files
- Test structure patterns and conventions
- Assertion and mocking approaches

If `docs/tech-stack.md` doesn't exist or the Testing section is empty, run the `/test.assess-current` workflow first, or ask the user to describe their testing setup.

## 2. Understand What to Test

Ask the user:

> "What would you like to test?"
>
> You can:
> - Point me to a specific file or function
> - Describe the behavior you want to test
> - Ask me to suggest areas that need test coverage

## 3. Analyze the Target

Once the user specifies what to test:
- Read the relevant source file(s)
- Understand the function/component/module behavior
- Identify key scenarios to test:
  - Happy path
  - Edge cases
  - Error conditions
  - Boundary conditions (if applicable)

## 4. Propose Test Plan

Before writing code, propose the test plan:

> **Proposed Tests for [target]**
>
> I'll create `[test file path]` with the following test cases:
>
> 1. **[test name]** - [what it verifies]
> 2. **[test name]** - [what it verifies]
> 3. ...
>
> Does this look right, or would you like to adjust the scope?

Wait for user confirmation before proceeding.

## 5. Create the Test File

Write the test file following the project's conventions from `docs/tech-stack.md`:
- Use the correct file location and naming
- Match the existing test structure style
- Use the project's assertion library
- Use the project's mocking approach if needed
- Include appropriate imports
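
Purely as an illustration, assuming a hypothetical Vitest project where tests are co-located as `*.test.ts` files (the real framework, location, and style come from `docs/tech-stack.md`), creating the file might look like:

```shell
# Hypothetical: write a co-located Vitest test for src/sum.ts
project=$(mktemp -d)          # stand-in for the real project root
mkdir -p "$project/src"
cat > "$project/src/sum.test.ts" <<'EOF'
import { describe, it, expect } from "vitest";
import { sum } from "./sum";

describe("sum", () => {
  it("adds two numbers (happy path)", () => {
    expect(sum(2, 3)).toBe(5);
  });

  it("handles negatives (edge case)", () => {
    expect(sum(-2, 3)).toBe(1);
  });
});
EOF
```

The scenario labels mirror step 3; `sum`, the path, and the test names are placeholders, not a prescription.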

## 6. Verify and Report

After creating the test:

> **Test created:** `[path to test file]`
>
> Run with: `[test command]`
>
> Would you like me to run the tests now to verify they pass?

package/payload/commands/test.assess-current.md
@@ -0,0 +1,75 @@

# Assess Current Testing Setup

When the user invokes this command, assess the project's testing environment and update `docs/tech-stack.md`.

## 1. Scan for Test Configuration

Look for test configuration files in the project root and common locations:

**JavaScript/TypeScript:**
- `vitest.config.ts`, `vitest.config.js`
- `jest.config.ts`, `jest.config.js`, `jest.config.mjs`
- `package.json` (check for `jest` or `vitest` config sections)

**Python:**
- `pytest.ini`, `pyproject.toml` (pytest section), `setup.cfg`
- `tox.ini`

**Other:**
- `*.test.*`, `*_test.*`, `*_spec.*` file patterns
- `tests/`, `test/`, `__tests__/`, `spec/` directories
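
To make the scan concrete, here is an illustrative sketch of the kind of probes an agent might run; `detect_test_config` and the fixture are hypothetical names, not part of Flight Rules:

```shell
# Sketch: probe a project root for common test configuration (hypothetical helper)
detect_test_config() {
  dir="$1"
  if ls "$dir"/vitest.config.* >/dev/null 2>&1; then echo "vitest"
  elif ls "$dir"/jest.config.* >/dev/null 2>&1; then echo "jest"
  elif [ -f "$dir/pytest.ini" ] || grep -q '\[tool.pytest' "$dir/pyproject.toml" 2>/dev/null; then echo "pytest"
  elif [ -d "$dir/tests" ] || [ -d "$dir/__tests__" ]; then echo "tests directory only"
  else echo "none detected"
  fi
}

# Demo against a throwaway fixture
fixture=$(mktemp -d)
touch "$fixture/vitest.config.ts"
detect_test_config "$fixture"   # prints: vitest
```

In practice the agent would read these files directly rather than run a script; the sketch only shows which signals matter and in what priority.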

## 2. Analyze Test Files

Examine existing test files to understand:
- **Naming conventions**: How are test files named?
- **Location pattern**: Co-located with source, or in separate directory?
- **Test structure**: describe/it blocks, test classes, function-based?
- **Assertion style**: expect(), assert, should
- **Mocking approach**: What mocking utilities are used?

## 3. Check for Test Scripts

Examine `package.json` (or equivalent) for:
- Test run commands
- Watch mode commands
- Coverage commands
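
A minimal sketch of that check, using a hypothetical fixture `package.json` (a real project's file would be inspected in place):

```shell
# Sketch: list test-related npm scripts from a hypothetical fixture
fixture=$(mktemp -d)
cat > "$fixture/package.json" <<'EOF'
{
  "scripts": {
    "build": "tsup",
    "test": "vitest run",
    "test:watch": "vitest",
    "test:coverage": "vitest run --coverage"
  }
}
EOF
# Extract every script whose name starts with "test"
grep -o '"test[^"]*": "[^"]*"' "$fixture/package.json"
```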

## 4. Update docs/tech-stack.md

Update the **Testing** section of `docs/tech-stack.md` with your findings:
- Framework name and version (if detectable)
- Test location and naming conventions
- Commands for running tests
- Patterns and conventions observed
- A representative example of test structure from the codebase

If `docs/tech-stack.md` doesn't exist, create it using the template from `.flight-rules/doc-templates/tech-stack.md`.

## 5. Report to User

Present a summary to the user covering:

> **Testing Assessment Complete**
>
> **Framework:** [detected framework]
> **Test Location:** [where tests live]
> **Run Command:** [how to run tests]
>
> **Observations:**
> - [key observations about testing patterns]
> - [any gaps or recommendations]
>
> I've updated `docs/tech-stack.md` with these findings.

If no testing setup is detected, report:

> **No Testing Setup Detected**
>
> I couldn't find a testing framework configured in this project.
>
> Would you like me to help you set up testing? If so, let me know:
> 1. What language/framework is this project using?
> 2. Do you have a preferred testing framework?

package/payload/doc-templates/critical-learnings.md
@@ -0,0 +1,21 @@

# Critical Learnings

A curated list of important insights, patterns, and decisions that should inform future work.

---

<!--
Add learnings as they emerge from sessions:

## Category Name

### Learning Title

**Context:** When/where this came up

**Insight:** What we learned

**Implication:** How this should affect future decisions
-->

package/payload/doc-templates/implementation/overview.md
@@ -0,0 +1,27 @@

# Implementation Overview

This document lists the major implementation areas for this project.

## Implementation Areas

<!--
Add implementation areas as you define them:

1. **Area Name** – Brief description
   - Status: 🔵 Planned / 🟡 In Progress / ✅ Complete
   - See: `1-area-name/`

2. **Another Area** – Brief description
   - Status: 🔵 Planned
   - See: `2-another-area/`
-->

## How to Use This

1. Each numbered area has its own directory: `{N}-{kebab-topic}/`
2. Inside each directory:
   - `index.md` – Overview and goals for that area
   - `{N}.{M}-topic.md` – Detailed specs for sub-areas
3. Status is tracked in the detailed spec files, not here

package/payload/doc-templates/prd.md
@@ -0,0 +1,49 @@

# Product Requirements Document

<!--
This template defines what you're building and why.
Fill in each section; delete comments when done.
-->

## Overview

<!--
What is this project? Why does it exist?
Describe the product/feature in 2-3 sentences.
-->

## Goals

<!--
What are the high-level goals this project aims to achieve?
List 3-5 specific, measurable outcomes.
-->

## Non-Goals

<!--
What is explicitly out of scope?
Listing non-goals prevents scope creep and clarifies focus.
-->

## User Stories

<!--
Who benefits from this, and how?
Format: "As a [type of user], I want [goal] so that [benefit]."
-->

## Constraints

<!--
What limitations affect this project?
Consider: timeline, budget, technology, dependencies, team capacity.
-->

## Success Criteria

<!--
How will you know this project succeeded?
Define observable, measurable outcomes.
-->

package/payload/doc-templates/progress.md
@@ -0,0 +1,19 @@

# Progress

A running log of sessions and milestones.

---

## Session Logs

<!--
Add session entries as you complete them:

### YYYY-MM-DD HH:MM

- What was accomplished
- Key decisions made
- See: [session_logs/YYYYMMDD_HHMM_session.md](session_logs/YYYYMMDD_HHMM_session.md)
-->

package/payload/doc-templates/session-log.md
@@ -0,0 +1,62 @@

# Session Log: YYYY-MM-DD HH:MM

<!--
Template for coding session documentation.
Agents create new session logs in docs/session_logs/ following this structure.
Naming convention: YYYYMMDD_HHMM_session.md
-->

## Session Goals

<!-- What did we set out to accomplish? -->

- Goal 1
- Goal 2

## Related Specs

<!-- Which Task Groups and Tasks does this session address? -->

- Task Group: `X.Y-topic.md`
- Tasks: X.Y.1, X.Y.2

## Summary

<!-- Brief overview of what was accomplished -->

## Implementation Details

<!-- Notable technical choices, patterns used, architecture decisions -->

## Key Decisions

<!-- Important decisions made during this session, especially any deviations from the original plan -->

## Challenges & Solutions

<!-- Problems encountered and how they were resolved -->

## Code Areas Touched

<!-- List of files/directories modified -->

- `path/to/file.ts`

## Spec Updates Needed

<!-- Any spec files that need to be updated based on what was actually implemented -->

- [ ] Update `X.Y-topic.md` Task X.Y.1 status to ✅ Complete

## Next Steps

<!-- What should happen in future sessions? -->

- Next step 1
- Next step 2

## Learnings

<!-- Insights worth promoting to critical-learnings.md -->

package/payload/doc-templates/tech-stack.md
@@ -0,0 +1,101 @@

# Tech Stack

This document describes the technical environment for this project. It serves as a reference for humans and agents when performing tech-dependent tasks.

---

## Testing

<!--
This section is the primary focus. Document your testing setup so that agents
can create appropriate tests without requiring you to re-explain the framework.

Use the `/test.assess-current` command to auto-populate this section.
-->

### Framework

<!--
What testing framework does this project use?
Examples: Vitest, Jest, pytest, RSpec, Go testing, etc.
-->

### Test Location & Naming

<!--
Where do test files live? What naming conventions are used?

Examples:
- Tests live in `tests/` directory, mirroring `src/` structure
- Test files named `*.test.ts` or `*.spec.ts`
- Co-located with source files as `ComponentName.test.tsx`
-->

### Running Tests

<!--
What commands are used to run tests?

Examples:
- `npm test` - Run all tests
- `npm run test:watch` - Watch mode
- `npm run test:coverage` - With coverage report
-->

### Patterns & Conventions

<!--
What patterns does this project follow for writing tests?

Consider documenting:
- Assertion style (expect, assert, should)
- Mocking approach (vi.mock, jest.mock, unittest.mock)
- Test organization (describe/it blocks, test classes)
- Fixtures or factories used
- Any custom test utilities
-->

### Example Test Structure

<!--
Provide a representative example of how tests are structured in this project.
This helps agents match the existing style.

```typescript
// Example test structure goes here
```
-->

---

## Runtime & Language

<!--
Brief notes on the runtime environment.
Examples: Node.js 20, Python 3.11, Go 1.21
-->

---

## Build Tools

<!--
Brief notes on build/bundling tools if relevant.
Examples: TypeScript + tsup, Vite, webpack, esbuild
-->

---

## Key Dependencies

<!--
List major frameworks or libraries that define the project's architecture.
Not an exhaustive list—just the ones that matter for understanding the codebase.

Examples:
- React 18 with React Router
- Express.js for API
- Prisma for database access
-->

package/payload/prompts/.gitkeep
File without changes

package/payload/prompts/implementation/README.md
@@ -0,0 +1,26 @@

# Implementation Prompts

Prompts to help create and review implementation plans.

## Available Prompts

| Prompt | Description |
|--------|-------------|
| `plan-review.md` | Critical review of an implementation plan by a second agent |

## Usage

Copy the prompt into your AI assistant, specifying which part of the implementation plan to review. The AI will read the relevant docs from `docs/implementation/` and provide structured feedback.

## Workflow

A typical workflow for using these prompts:

1. **Agent 1** creates an implementation plan (Area, Task Group, or Task specs)
2. **You** open a new session with **Agent 2** and paste the review prompt
3. **You** specify what to review (e.g., "Area 2", "Task Group 2.3", "Task 2.3.4")
4. **Agent 2** reads the implementation docs and provides critical feedback
5. **You** incorporate feedback and finalize the plan

This "two-agent review" pattern helps catch gaps, assumptions, and issues before implementation begins.

package/payload/prompts/implementation/plan-review.md
@@ -0,0 +1,80 @@

# Implementation Plan Review — Prompt

Use this prompt to have an AI assistant critically review an implementation plan created by another agent (or yourself).

---

## The Prompt

Copy everything below the line into your AI assistant:

---

You are a senior software architect and technical lead with deep experience shipping production systems. You're known for catching issues early, asking hard questions, and improving plans before a single line of code is written. Your job is to critically review part of this project's implementation plan.

**What to review:**

Review the following from `docs/implementation/`:

> **[SPECIFY WHAT TO REVIEW — examples below]**
> - `2-feature-name/` — review the entire Area
> - `2.3-task-group.md` — review a specific Task Group
> - `2.3.4` — review a specific Task within a Task Group
> - "All tasks marked 🔵 Planned in Area 2" — review by status

Read the specified implementation docs now before proceeding with your review.

**Your review philosophy:**

The goal is a *good enough* plan, not a perfect one. Implementation plans are guides, not scripts — some details are better discovered during coding. Resist the urge to add endless clarifications.

Ask yourself: "Could a competent developer proceed with this?" If yes, it's probably fine. Don't flag something just because it *could* be more detailed.

**What to look for:**

- Genuine blockers — things that would cause implementation to fail or go badly wrong
- Missing pieces that would force the developer to stop and ask questions
- Risks that aren't acknowledged
- Scope that doesn't match the stated goals

**What NOT to flag:**

- Stylistic preferences or "I would have written it differently"
- Details that are obvious to any experienced developer
- Edge cases that can be handled when encountered
- Optimizations that aren't relevant yet

**Review dimensions** (scan for issues, don't exhaustively analyze each):

1. **Can a developer proceed?** — Is it clear what to build and roughly how?
2. **Will it work?** — Are there technical red flags or missing dependencies?
3. **Does it match the goals?** — Is there scope creep or missing scope?
4. **What's the biggest risk?** — What's most likely to go wrong?

**Your output format:**

Organize your feedback as follows:

1. **Summary** — A 2-3 sentence overall assessment. End with a clear verdict: "Ready to implement", "Needs minor fixes", or "Needs rework"

2. **Critical Issues** (max 3) — True blockers only. These are problems that would cause implementation to fail or go seriously wrong. If there are no blockers, say "None" — don't invent issues to fill this section.

3. **Suggestions** (max 5) — Non-blocking improvements, ranked by impact. These are "would be better if" items, not "must fix" items.

4. **Questions** (if any) — Only include questions that would change your assessment. Skip this section if you have none.

**Important constraints:**

- If you find yourself listing more than 3 critical issues, you're probably being too strict. Re-evaluate what's truly a blocker.
- Suggestions beyond the top 5 aren't worth the iteration time. Be ruthless about what matters.
- "This could be clearer" is not a critical issue unless a developer literally couldn't proceed.
- Don't suggest adding detail just because you could imagine more detail existing.

**Tone guidance:**

- Be direct but constructive — the goal is a better plan, not criticism for its own sake
- If the plan is good, say so briefly and move on — don't pad your review
- When you identify an issue, suggest a fix, not just the problem

**Now read the specified implementation docs and provide your review.**

package/payload/prompts/prd/README.md
@@ -0,0 +1,46 @@

# PRD Prompts

Prompts and commands to help create Product Requirements Documents.

## Recommended: Use Commands

The preferred way to create and refine PRDs is through the commands in `.flight-rules/commands/`:

| Command | Description |
|---------|-------------|
| `/prd.create` | Create a new PRD (supports conversational interview or one-shot from description) |
| `/prd.clarify` | Refine specific sections of an existing PRD |

These commands integrate better with the Flight Rules workflow and automatically save output to `docs/prd.md`.

## Available Prompts

These standalone prompts can be copied into any AI assistant:

| Prompt | Description |
|--------|-------------|
| `creation-conversational.md` | Interactive interview that walks through each PRD section |

**Note:** The `creation-conversational.md` prompt is the foundation for the `/prd.create` command's conversational mode. Use the command for better workflow integration, or use the prompt directly if you prefer a standalone conversation.

## Usage

**With commands (recommended):**
- Invoke `/prd.create` to start creating a PRD
- Invoke `/prd.clarify` to refine specific sections

**With prompts:**
- Copy the prompt into your AI assistant and follow the conversation
- The template for the PRD is at `.flight-rules/doc-templates/prd.md`
- After completing the conversation, save the output to `docs/prd.md`

## Output

Both commands and prompts produce a completed PRD that follows the standard template structure:

- Overview
- Goals
- Non-Goals
- User Stories
- Constraints
- Success Criteria

package/payload/prompts/prd/creation-conversational.md
@@ -0,0 +1,46 @@

# PRD Creation — Conversational Prompt

Use this prompt to have an AI assistant interview you and help create a Product Requirements Document.

---

## The Prompt

Copy everything below the line into your AI assistant:

---

You are a senior product manager who has shipped multiple successful products. You're known for asking "why" until you truly understand the problem, and for pushing back when requirements are vague or unmeasurable. Your job is to interview me and help create a Product Requirements Document (PRD).

**How this works:**

1. You'll ask me about one section at a time
2. Ask 1-2 questions per section, with follow-ups if my answers need clarification
3. Summarize what you heard before moving to the next section
4. At the end, review for consistency and generate the complete PRD

**The PRD has 6 sections:**

1. Overview — What is this project? Why does it exist?
2. Goals — What are we trying to achieve?
3. Non-Goals — What is explicitly out of scope?
4. User Stories — Who benefits and how?
5. Constraints — What limitations affect this project?
6. Success Criteria — How will we know it worked?

**Your approach:**

- Push back when I'm vague — don't just accept my first answer
- Ask "why" and "how will you measure that" frequently
- If I say "I don't know," help me figure it out rather than moving on
- Don't let me skip Non-Goals — people always forget these and regret it later
- Challenge goals that sound like platitudes ("improve user experience" is not a goal)
- Insist that success criteria be specific and measurable

**Before generating the final PRD:**

- Check that goals and non-goals don't conflict
- Verify every success criterion is actually measurable
- Flag anything that feels incomplete or hand-wavy

**Start now.** Introduce yourself briefly, then begin with the Overview section.