devflow-prompts 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,124 @@
---
description: Review test plan for coverage and edge cases
---

# /devflow.4-rev-test-plan

## Description
Review the test plan to check coverage, traceability to the tech doc, and missing edge cases.

## Input Requirements

**Input Type: File Paths (Required)**
- Provide paths to your design file and test plan file
- **At least one filename MUST contain a task-id** (e.g., `FEAT-001-design.md`)
  - The task-id is needed to name output files
  - It can be anywhere in the filename: `FEAT-001`, `TASK-123`, `BUG-456`
- **Files can be located anywhere** (not limited to `/devflow-prompts`)
- Examples:
  - `~/FEAT-001-design.md ~/FEAT-001-test.md` ✅
  - `/tmp/TASK-123-spec.md /tmp/tests.md` ✅
  - `./design.md ./tests.md` ❌ (no task-id)
- AI will read both files and extract the task-id from the filenames
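The task-id convention above can be sketched as a small shell helper; `extract_task_id` is a hypothetical name, but the pattern is the `[A-Z]+-[0-9]+` convention these prompts rely on.

```shell
# Hypothetical sketch of task-id extraction, assuming the
# [A-Z]+-[0-9]+ convention described above. Prints the first
# matching token in the filename, or nothing if none is found.
extract_task_id() {
  basename "$1" | grep -oE '[A-Z]+-[0-9]+' | head -n 1
}
```

For example, `extract_task_id ~/FEAT-001-design.md` prints `FEAT-001`; an empty result means the filename carries no task-id.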
**Rule File**: `devflow.4-rev-test-plan.rule.md` from `/devflow-prompts/rules/` (Test plan review checklist)

## Input Validation

- **Design file**: Required, must exist
- **Test plan file**: Required, must exist
- **Coding rules**: `devflow.4-rev-test-plan.rule.md` is required

**If validation fails**: Stop immediately and output an error message.

## Output Format
1. **Coverage gaps** (map to tech doc sections/use cases)
2. **Missing negative/edge cases**
3. **Flaky test risks**
4. **Revised version**: Full updated `{#task-id}-test.md`

## Prompt

```
You are a Senior QA reviewer.
Review the test plan for completeness, traceability to tech doc, and missing edge cases.

INPUTS:
1) Tech doc
<<<
{USER INPUT 1 - Can be a file path OR design document content}
>>>

2) Current {#task-id}-test.md
<<<
{USER INPUT 2 - Can be a file path OR test plan content}
>>>

INSTRUCTIONS:

1. **Read Input Files**:
   - The user will provide file paths.
   - USE `view_file` tool to read them.
   - IF inputs are NOT file paths, ask user to provide file paths.
   - File paths can be anywhere in the filesystem, not limited to `/devflow-prompts`.

2. **Extract Task ID** (for file path input):
   - Extract task-id from filenames using pattern: `[A-Z]+-[0-9]+`
   - Check both filenames for task-id
   - Examples: `FEAT-001-design.md` → `FEAT-001`
   - If no task-id found, ask user to provide it

OUTPUT:
1) Coverage gaps (map to tech doc sections/use cases)
2) Missing negative/edge cases
3) Flaky test risks (timing, external deps)
4) Revised version:
   - Output ONLY the full updated markdown for {#task-id}-test.md (same template)

RULES:
- Keep headings unchanged.
- Do not add tests for features not described in tech doc.
```

## Review Checklist

### Coverage Check
- [ ] Each UC in tech doc has tests?
- [ ] Each API endpoint has tests?
- [ ] Each validation rule has tests?
- [ ] Each error case has tests?
- [ ] Security requirements have tests?

### Traceability
- [ ] Test cases clearly map to requirements?
- [ ] Test coverage matrix complete?

### Negative/Edge Cases
- [ ] Invalid inputs?
- [ ] Missing resources (404)?
- [ ] Permission denied (403)?
- [ ] Concurrent access?
- [ ] Timeout scenarios?
- [ ] Resource limits?

### Flaky Test Risks
- [ ] Tests depend on timing?
- [ ] Tests depend on external services?
- [ ] Tests depend on random data?
- [ ] Tests depend on execution order?

### Test Data Quality
- [ ] Fixtures realistic?
- [ ] Data matrix covers all scenarios?
- [ ] Setup/teardown clear?

## Usage Example

```bash
/devflow.4-rev-test-plan /devflow-prompts/specs/FEAT-001/FEAT-001-design.md /devflow-prompts/specs/FEAT-001/FEAT-001-test.md
```

## Next Steps
After review is done → run `/devflow.5-gen-test-code` to generate test code
@@ -0,0 +1,114 @@
---
description: Generate test code from test plan (git patch format)
---

# /devflow.5-gen-test-code

## Description
Generate test code from the test plan, output as a git unified diff (patch).

## Input Requirements

**Input Type: File Path (Required)**
- Provide the path to your test plan file
- **Filename MUST contain a task-id** (e.g., `FEAT-001-test.md`, `TASK-123.md`)
  - The task-id is needed to name the output test code patch
  - It can be anywhere in the filename: `FEAT-001`, `TASK-123`, `BUG-456`
- **File can be located anywhere** (not limited to `/devflow-prompts`)
- Examples:
  - `~/Documents/FEAT-001-test.md` ✅
  - `/tmp/TASK-123-spec.md` ✅
  - `./my-BUG-456.md` ✅
  - `~/test-plan.md` ❌ (no task-id)
- AI will automatically read the file.

**Task Mode (Optional, Default: NEW)**
- `NEW`: Generating tests for new features.
- `UPDATE`: Updating tests for modified code.
- AI will extract the task-id from the filename

**Rule File**: `devflow.5-gen-test-code.rule.md` from `/devflow-prompts/rules/` (Test code writing standards)

## Input Validation

- **Test plan file**: Required, must exist
- **File name**: Must contain a task-id matching `[A-Z]+-[0-9]+` (e.g., `FEAT-001-test.md`)
- **Coding rules**: `devflow.5-gen-test-code.rule.md` is required

**If validation fails**: Stop immediately and output an error message.
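The validation rules above can be sketched as a fail-fast check; `validate_test_plan` is a hypothetical name, and the error wording is illustrative.

```shell
# Hypothetical sketch of the input validation step: emit an error
# message and fail when the plan file is missing or lacks a task-id.
validate_test_plan() {
  plan="$1"
  if [ ! -f "$plan" ]; then
    echo "Error: test plan file not found: $plan" >&2
    return 1
  fi
  if ! basename "$plan" | grep -qE '[A-Z]+-[0-9]+'; then
    echo "Error: filename must contain a task-id (e.g. FEAT-001): $plan" >&2
    return 1
  fi
}
```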

## Output Format

**Git unified diff (patch)** - NOT markdown!

**Save to**: `/devflow-prompts/specs/{#task-id}/{#task-id}-test.patch`
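For illustration, the expected output is an ordinary unified diff; the file and test names below are hypothetical:

```diff
diff --git a/tests/test_login.py b/tests/test_login.py
new file mode 100644
--- /dev/null
+++ b/tests/test_login.py
@@ -0,0 +1,3 @@
+def test_login_happy_path():
+    # Arrange-Act-Assert structure per the test plan
+    ...
```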

## Prompt

```
You are a Senior SDET.
Implement tests exactly as described in the test plan and following test coding rules.

INPUT 1: {#task-id}-test.md
<<<
{USER INPUT HERE - Can be a file path OR test plan content}
>>>

INSTRUCTIONS:

1. **Read Input File**:
   - The user will provide a file path.
   - USE `view_file` tool to read its content.
   - IF input is NOT a file path, ask user to provide a file path.
   - File paths can be anywhere in the filesystem, not limited to `/devflow-prompts`.

2. **Extract Task ID** (for file path input):
   - Extract task-id from filename using pattern: `[A-Z]+-[0-9]+`
   - Examples: `FEAT-001-test.md` → `FEAT-001`
   - If no task-id found, ask user to provide it

3. **Check Task Mode**:
   - **IF Mode is UPDATE**:
     - Ensure existing tests are updated, NOT just new tests added.
     - Add regression tests to verify core logic remains intact.
   - **IF Mode is NEW**:
     - Focus on coverage for the new feature.

INPUT 2: Test coding rules (devflow.5-gen-test-code.rule.md)
<<<
{paste test coding rules}
>>>

OUTPUT FORMAT:
- Provide ONLY a git-style patch (unified diff).
- Include only changed/new files.
- No explanations outside the diff.

RULES:
- Follow test coding rules (naming, structure, mocking).
- Tests must be deterministic (no flaky timing).
- Cover happy path + negative + edge cases listed in test plan.
- Add minimal test helpers/fixtures if needed.
- Ensure tests adhere to all standards in test coding rules file.
- **For UPDATE Mode**:
  - Do NOT remove existing tests unless logic is explicitly deprecated.
  - If refactoring, tests must pass for both old (if compatible) and new logic.
```

## Rules/Guidelines
1. ✅ **Follow test rules** - Naming, structure, mocking conventions
2. ✅ **Deterministic** - No flaky timing, no random data
3. ✅ **Complete coverage** - Happy + negative + edge cases
4. ✅ **Minimal helpers** - Only add necessary helpers
5. ✅ **Clean assertions** - Clear, specific assertions

## Usage Example

```bash
/devflow.5-gen-test-code /devflow-prompts/specs/FEAT-001/FEAT-001-test.md
```

## Next Steps
After applying patch → run `/devflow.5-rev-test-code` to review
@@ -0,0 +1,131 @@
---
description: Review test code patch for reliability and correctness
---

# /devflow.5-rev-test-code

## Description
Review the test code patch to check correctness, reliability, and alignment with the test plan.

## Input Requirements

**Input Type: File Paths (Required)**
- Provide paths to your test plan file and test patch file
- **At least one filename MUST contain a task-id** (e.g., `FEAT-001-test.md`)
  - The task-id is needed to name output files
  - It can be anywhere in the filename: `FEAT-001`, `TASK-123`, `BUG-456`
- **Files can be located anywhere** (not limited to `/devflow-prompts`)
- Examples:
  - `~/FEAT-001-test.md ~/test.patch` ✅
  - `/tmp/TASK-123-spec.md /tmp/test-code.diff` ✅
  - `./tests.md ./patch.diff` ❌ (no task-id)
- AI will read both files and extract the task-id from the filenames

**Rule File**: `devflow.5-rev-test-code.rule.md` from `/devflow-prompts/rules/` (Test code review checklist)

## Input Validation

- **Test plan file**: Required, must exist
- **Generated test patch**: Required, must be a valid git diff
- **Coding rules**: `devflow.5-rev-test-code.rule.md` is required

**If validation fails**: Stop immediately and output an error message.
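The "valid git diff" requirement can be checked mechanically with git's own dry-run mode; `check_patch` is a hypothetical wrapper name.

```shell
# Hypothetical sketch: verify a patch is a well-formed git diff
# before review. `git apply --check` parses and dry-runs the patch
# without modifying any files; a nonzero exit means it is invalid.
check_patch() {
  git apply --check "$1"
}
```

It exits nonzero (with git's own diagnostic) for files that are not valid diffs, which fits the stop-and-report behavior above.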

## Output Format
1. **Findings** (grouped)
2. **Must-fix list**
3. **Corrected patch**: New unified diff with fixes

## Prompt

```
You are a Tech Lead + QA reviewer.
Review the test patch for correctness, reliability, and alignment with the test plan.

INPUTS:
1) Test plan
<<<
{USER INPUT 1 - Can be a file path OR test plan content}
>>>

2) Generated test patch (diff)
<<<
{USER INPUT 2 - Can be a file path OR test patch content}
>>>

INSTRUCTIONS:

1. **Read Input Files**:
   - The user will provide file paths.
   - USE `view_file` tool to read them.
   - IF inputs are NOT file paths, ask user to provide file paths.
   - File paths can be anywhere in the filesystem, not limited to `/devflow-prompts`.

2. **Extract Task ID** (for file path input):
   - Extract task-id from filenames using pattern: `[A-Z]+-[0-9]+`
   - Check both filenames for task-id
   - Examples: `FEAT-001-test.md` → `FEAT-001`
   - If no task-id found, ask user to provide it

OUTPUT:
1) Findings (grouped):
   - Missing/incorrect assertions
   - Not matching test plan cases
   - Flaky risks (timers, random, external calls)
   - Over-mocking / under-mocking
   - Test data problems
2) Must-fix list
3) Produce a corrected patch:
   - Output ONLY a new unified diff patch that fixes issues (no commentary outside diff)

RULES:
- Prefer minimal change.
- If a test is impossible with current design, propose the smallest seam (DI/interface) and include it in patch.
```

## Review Checklist

### Alignment with Test Plan
- [ ] All test cases implemented?
- [ ] Test levels correct (unit/integration/E2E)?
- [ ] Preconditions set up correctly?
- [ ] Expected outcomes verified?

### Assertions Quality
- [ ] Specific assertions (not just assertTrue)?
- [ ] All expected values checked?
- [ ] Error messages checked?
- [ ] Side effects verified?

### Reliability
- [ ] No timing dependencies?
- [ ] No random data?
- [ ] No external service calls (or properly mocked)?
- [ ] No execution order dependencies?

### Mocking
- [ ] Appropriate mocking (not over/under)?
- [ ] Mock expectations correct?
- [ ] Integration tests use real dependencies?

### Test Data
- [ ] Realistic fixtures?
- [ ] Setup/teardown clean?
- [ ] No data leakage between tests?

### Code Quality
- [ ] Test naming clear?
- [ ] Test structure AAA (Arrange-Act-Assert)?
- [ ] No code duplication?
- [ ] Helpers used appropriately?

## Usage Example

```bash
/devflow.5-rev-test-code /devflow-prompts/specs/FEAT-001/FEAT-001-test.md /devflow-prompts/specs/FEAT-001/FEAT-001-test.patch
```

## Next Steps
After reviewing test code → run `/devflow.6-analyze` to validate entire flow
@@ -0,0 +1,240 @@
---
description: Validate requirement → design → code → test flow, find inconsistencies
---

# /devflow.6-analyze

## Description
Validate the entire flow from requirement → tech design → code → test, and detect inconsistencies and gaps.

## Input Requirements

**Input Type: File Paths (Required)**
- Provide paths to your requirement, design, and test plan files
- **Task-id extraction**: AI will try to extract it from the filenames (optional)
  - If found, it will be used in the analysis report
  - If not found, the analysis will proceed without a task-id reference
- **Files can be located anywhere** (not limited to `/devflow-prompts`)
- Examples:
  - `~/FEAT-001-req.md ~/FEAT-001-design.md ~/FEAT-001-test.md` ✅ (with task-id)
  - `/tmp/requirement.txt /tmp/spec.md /tmp/tests.md` ✅ (without task-id)
- Feature code (optional: file path)
- Test code (optional: file path)
- AI will read all files
- **Task Mode**: `NEW` or `UPDATE` (Optional, defaults to `NEW`)

## Input Validation

- **Requirement file**: Required, must exist
- **Design file**: Required, must exist
- **Test plan file**: Required, must exist
- **Feature code**: Optional
- **Test code**: Optional

**If validation fails**: Stop immediately and output an error message.

## Output Format
1. **Alignment check** - Requirement ↔ Tech ↔ Code ↔ Test
2. **Inconsistencies found**
3. **Coverage gaps**
4. **Recommendations** (prioritized)

## Prompt

```
You are a Principal Engineer + QA Lead conducting a final review.
Validate alignment and surface inconsistencies across the entire implementation flow.

INPUTS:
1) Requirement: {#task-id}-requirement.md
<<<
{USER INPUT 1 - Can be a file path OR requirement content}
>>>

2) Tech Design: {#task-id}-design.md
<<<
{USER INPUT 2 - Can be a file path OR design content}
>>>

3) Test Plan: {#task-id}-test.md
<<<
{USER INPUT 3 - Can be a file path OR test plan content}
>>>

4) Feature Code (optional - key snippets)
<<<
{USER INPUT 4 - Can be a file path OR code snippets}
>>>

5) Test Code (optional - key snippets)
<<<
{USER INPUT 5 - Can be a file path OR test code snippets}
>>>

6) Task Mode (Optional): NEW | UPDATE

INSTRUCTIONS:

1. **Read Input Files**:
   - The user will provide file paths.
   - USE `view_file` tool to read them.
   - IF inputs are NOT file paths, ask user to provide file paths.
   - File paths can be anywhere in the filesystem, not limited to `/devflow-prompts`.

2. **Extract Task ID** (optional, for file path input):
   - Try to extract task-id from filenames using pattern: `[A-Z]+-[0-9]+`
   - If found, use it in analysis report title
   - If not found, proceed without task-id reference

TASKS:
1) Alignment check:
   - Does tech design implement all requirements?
   - Does code implement all tech design specs?
   - Do tests cover all requirements and tech design?

2) Find inconsistencies:
   - Contradictions between docs
   - Missing implementations
   - Over-implementations (not in requirement)

3) Coverage gaps:
   - Requirements not in tech design
   - Tech design not in code
   - Code not tested
   - Edge cases not handled
   - (**UPDATE Only**) Regression risks:
     - Are existing tests updated?
     - Is backward compatibility maintained?

4) Recommendations:
   - P0: Critical issues (blocking)
   - P1: Important issues (should fix)
   - P2: Nice-to-have improvements

OUTPUT:
- Structured report with findings and recommendations
- No code generation, this is read-only analysis
```

## Rules/Guidelines
1. ✅ **Read-only** - No code generation, only analyze
2. ✅ **Traceability** - Map requirements → design → code → test
3. ✅ **Inconsistencies** - Detect contradictions
4. ✅ **Coverage** - Check if coverage is sufficient
5. ✅ **Prioritize** - P0/P1/P2 recommendations

## Analysis Checklist

### Requirement → Tech Design
- [ ] Each FR has implementation in tech design?
- [ ] Each NFR is addressed?
- [ ] Open questions resolved?
- [ ] Assumptions validated?

### Tech Design → Code
- [ ] Each API endpoint implemented?
- [ ] Data model changes applied?
- [ ] Business logic implemented correctly?
- [ ] Security measures in place?
- [ ] Observability hooks added?

### Tech Design → Test Plan
- [ ] Each UC has test cases?
- [ ] Edge cases covered?
- [ ] Security tests present?
- [ ] Performance tests if needed?

### Test Plan → Test Code
- [ ] Each test case implemented?
- [ ] Coverage sufficient?
- [ ] Assertions correct?

### Cross-cutting Concerns
- [ ] Consistent naming across docs/code?
- [ ] Error handling consistent?
- [ ] Validation rules consistent?
- [ ] Data types consistent?

## Usage Example

```bash
# 1. Run with file paths (can be anywhere in filesystem)
/devflow.6-analyze \
  /path/to/FEAT-001-requirement.md \
  /path/to/FEAT-001-design.md \
  /path/to/FEAT-001-test.md

# 2. AI reads all files and analyzes
# 3. Receive comprehensive analysis report
```

## Example Output Structure

```markdown
# Analysis Report: FEAT-001

## 1. Alignment Check

### Requirement → Tech Design ✅
- All FRs mapped to tech design
- All NFRs addressed
- 2 open questions resolved in tech design

### Tech Design → Code ⚠️
- Issue: FR-3 rate limiting not fully implemented
- Missing: Observability metrics for login attempts

### Tech Design → Test Plan ✅
- All UCs have test cases
- Edge cases covered

### Test Plan → Test Code ⚠️
- Issue: TC-5 (concurrent login) not implemented
- Coverage: 85% (target: 90%)

## 2. Inconsistencies Found

### P0 (Critical)
- None

### P1 (Important)
- Tech design specifies 15-minute lock, code uses 10 minutes
- Requirement says "show remaining attempts", code doesn't implement

### P2 (Minor)
- Naming inconsistency: "locked_until" vs "lockUntil" in different files

## 3. Coverage Gaps

- FR-3 rate limiting: Partially implemented
- TC-5 concurrent login test: Not implemented
- Metrics for observability: Missing

## 4. Recommendations

### P0 (Must fix before deploy)
- None

### P1 (Should fix)
1. Fix lock duration: 10 min → 15 min (align with requirement)
2. Implement "remaining attempts" display
3. Implement TC-5 concurrent login test
4. Add metrics hooks

### P2 (Nice to have)
1. Standardize naming: use "locked_until" everywhere
2. Add more detailed logs for debugging
3. Consider adding integration test for full flow
```

## When to Run

- ✅ **Before final PR** - Catch issues early
- ✅ **After major changes** - Verify alignment still holds
- ✅ **Before deploy** - Final sanity check
- ✅ **During code review** - Help reviewers spot gaps

## Next Steps
After analyze → Fix issues by priority → Ready to deploy!
package/gen-chat.sh ADDED
@@ -0,0 +1,58 @@
#!/bin/bash

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SCRIPTS_DIR="$SCRIPT_DIR/scripts"

# Function to show usage
show_usage() {
  echo "Usage: $0 [ide]"
  echo "Available IDEs: vscode, cursor, antigravity"
  echo "If no argument is provided, the script attempts to auto-detect the IDE."
}

# 1. Start with provided argument (converted to lowercase)
IDE=$(echo "$1" | tr '[:upper:]' '[:lower:]')

# 2. If no argument, try auto-detection
if [ -z "$IDE" ]; then
  # Helper for detection (checks TERM_PROGRAM env var)
  TP=$(echo "$TERM_PROGRAM" | tr '[:upper:]' '[:lower:]')

  if [[ "$TP" == *"vscode"* ]]; then
    IDE="vscode"
  elif [[ "$TP" == *"cursor"* ]]; then
    IDE="cursor"
  # Fallback/Custom checks for Antigravity could go here if there's a known env var
  # For now, we rely on the argument for Antigravity unless explicitly known
  fi
fi

# 3. If still empty, error out
if [ -z "$IDE" ]; then
  echo "Error: Could not detect IDE and no argument provided."
  show_usage
  exit 1
fi

echo "Starting setup for IDE: $IDE"

case "$IDE" in
  vscode)
    TARGET_SCRIPT="$SCRIPTS_DIR/setup-devflow-chat-vscode.sh"
    ;;
  cursor)
    # Script name assumed to follow the same pattern as the other setup
    # scripts; cursor is advertised and auto-detected, so it needs a branch.
    TARGET_SCRIPT="$SCRIPTS_DIR/setup-devflow-chat-cursor.sh"
    ;;
  antigravity)
    TARGET_SCRIPT="$SCRIPTS_DIR/setup-devflow-chat-antigravity.sh"
    ;;
  *)
    echo "Error: Unsupported IDE '$IDE'"
    show_usage
    exit 1
    ;;
esac

if [ -f "$TARGET_SCRIPT" ]; then
  bash "$TARGET_SCRIPT"
else
  echo "Error: Setup script not found at $TARGET_SCRIPT"
  exit 1
fi
package/index.js ADDED
@@ -0,0 +1,4 @@
#!/usr/bin/env node

console.log("DevFlow Prompts is a CLI tool. Please run 'npx devflow-prompts init' or see README.md for usage.");
module.exports = {};
package/package.json ADDED
@@ -0,0 +1,25 @@
{
  "name": "devflow-prompts",
  "version": "1.0.0",
  "description": "DevFlow Prompts - Prompt Engineering & Workflow Automation",
  "main": "index.js",
  "bin": {
    "devflow-prompts": "./bin/cli.js"
  },
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [
    "devflow",
    "prompt",
    "workflow",
    "automation"
  ],
  "author": "Phong Le",
  "license": "ISC",
  "dependencies": {
    "fs-extra": "^11.2.0",
    "chalk": "^4.1.2",
    "inquirer": "^8.2.6"
  }
}
package/raws/.gitkeep ADDED
File without changes
@@ -0,0 +1,7 @@
# Request: change the project name and prompt prefix

Make the following changes to the prompt system:

1. **Rename the project**: Update the directory or package name to `devflow-prompts`.
2. **Change the prompt prefix**: Update the prefix of the prompt commands from the old name to `devflow`.
   - Example: rename to `devflow.1-gen-requirement`, `devflow.2-gen-design`, etc.