specrails-core 3.8.0 → 4.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/bin/specrails-core.js +23 -1
- package/bin/tui-installer.mjs +80 -38
- package/commands/enrich.md +21 -23
- package/docs/installation.md +0 -3
- package/install.sh +158 -45
- package/package.json +4 -2
- package/templates/agents/sr-developer.md +27 -4
- package/templates/agents/sr-reviewer.md +19 -4
- package/templates/claude-md/CLAUDE-quickstart.md +0 -4
- package/templates/commands/specrails/enrich.md +21 -23
- package/templates/commands/specrails/implement.md +114 -18
- package/templates/commands/specrails/propose-spec.md +2 -0
- package/templates/settings/integration-contract.json +0 -2
- package/templates/skills/sr-implement/SKILL.md +114 -18
- package/.claude/skills/openspec-apply-change/SKILL.md +0 -156
- package/.claude/skills/openspec-archive-change/SKILL.md +0 -114
- package/.claude/skills/openspec-bulk-archive-change/SKILL.md +0 -246
- package/.claude/skills/openspec-continue-change/SKILL.md +0 -118
- package/.claude/skills/openspec-explore/SKILL.md +0 -290
- package/.claude/skills/openspec-ff-change/SKILL.md +0 -101
- package/.claude/skills/openspec-new-change/SKILL.md +0 -74
- package/.claude/skills/openspec-onboard/SKILL.md +0 -529
- package/.claude/skills/openspec-sync-specs/SKILL.md +0 -138
- package/.claude/skills/openspec-verify-change/SKILL.md +0 -168
- package/prompts/analyze-codebase.md +0 -87
- package/prompts/generate-personas.md +0 -61
- package/prompts/infer-conventions.md +0 -72
- package/templates/commands/specrails/setup.md +0 -1533
package/.claude/skills/openspec-verify-change/SKILL.md
@@ -1,168 +0,0 @@
----
-name: openspec-verify-change
-description: Verify implementation matches change artifacts. Use when the user wants to validate that implementation is complete, correct, and coherent before archiving.
-license: MIT
-compatibility: Requires openspec CLI.
-metadata:
-  author: openspec
-  version: "1.0"
-  generatedBy: "1.1.1"
----
-
-Verify that an implementation matches the change artifacts (specs, tasks, design).
-
-**Input**: Optionally specify a change name. If omitted, check if it can be inferred from conversation context. If vague or ambiguous you MUST prompt for available changes.
-
-**Steps**
-
-1. **If no change name provided, prompt for selection**
-
-   Run `openspec list --json` to get available changes. Use the **AskUserQuestion tool** to let the user select.
-
-   Show changes that have implementation tasks (tasks artifact exists).
-   Include the schema used for each change if available.
-   Mark changes with incomplete tasks as "(In Progress)".
-
-   **IMPORTANT**: Do NOT guess or auto-select a change. Always let the user choose.
-
-2. **Check status to understand the schema**
-   ```bash
-   openspec status --change "<name>" --json
-   ```
-   Parse the JSON to understand:
-   - `schemaName`: The workflow being used (e.g., "spec-driven")
-   - Which artifacts exist for this change
-
-3. **Get the change directory and load artifacts**
-
-   ```bash
-   openspec instructions apply --change "<name>" --json
-   ```
-
-   This returns the change directory and context files. Read all available artifacts from `contextFiles`.
-
-4. **Initialize verification report structure**
-
-   Create a report structure with three dimensions:
-   - **Completeness**: Track tasks and spec coverage
-   - **Correctness**: Track requirement implementation and scenario coverage
-   - **Coherence**: Track design adherence and pattern consistency
-
-   Each dimension can have CRITICAL, WARNING, or SUGGESTION issues.
-
-5. **Verify Completeness**
-
-   **Task Completion**:
-   - If tasks.md exists in contextFiles, read it
-   - Parse checkboxes: `- [ ]` (incomplete) vs `- [x]` (complete)
-   - Count complete vs total tasks
-   - If incomplete tasks exist:
-     - Add CRITICAL issue for each incomplete task
-     - Recommendation: "Complete task: <description>" or "Mark as done if already implemented"
-
-   **Spec Coverage**:
-   - If delta specs exist in `openspec/changes/<name>/specs/`:
-     - Extract all requirements (marked with "### Requirement:")
-     - For each requirement:
-       - Search codebase for keywords related to the requirement
-       - Assess if implementation likely exists
-     - If requirements appear unimplemented:
-       - Add CRITICAL issue: "Requirement not found: <requirement name>"
-       - Recommendation: "Implement requirement X: <description>"
-
-6. **Verify Correctness**
-
-   **Requirement Implementation Mapping**:
-   - For each requirement from delta specs:
-     - Search codebase for implementation evidence
-     - If found, note file paths and line ranges
-     - Assess if implementation matches requirement intent
-   - If divergence detected:
-     - Add WARNING: "Implementation may diverge from spec: <details>"
-     - Recommendation: "Review <file>:<lines> against requirement X"
-
-   **Scenario Coverage**:
-   - For each scenario in delta specs (marked with "#### Scenario:"):
-     - Check if conditions are handled in code
-     - Check if tests exist covering the scenario
-   - If scenario appears uncovered:
-     - Add WARNING: "Scenario not covered: <scenario name>"
-     - Recommendation: "Add test or implementation for scenario: <description>"
-
-7. **Verify Coherence**
-
-   **Design Adherence**:
-   - If design.md exists in contextFiles:
-     - Extract key decisions (look for sections like "Decision:", "Approach:", "Architecture:")
-     - Verify implementation follows those decisions
-     - If contradiction detected:
-       - Add WARNING: "Design decision not followed: <decision>"
-       - Recommendation: "Update implementation or revise design.md to match reality"
-   - If no design.md: Skip design adherence check, note "No design.md to verify against"
-
-   **Code Pattern Consistency**:
-   - Review new code for consistency with project patterns
-   - Check file naming, directory structure, coding style
-   - If significant deviations found:
-     - Add SUGGESTION: "Code pattern deviation: <details>"
-     - Recommendation: "Consider following project pattern: <example>"
-
-8. **Generate Verification Report**
-
-   **Summary Scorecard**:
-   ```
-   ## Verification Report: <change-name>
-
-   ### Summary
-   | Dimension    | Status           |
-   |--------------|------------------|
-   | Completeness | X/Y tasks, N reqs|
-   | Correctness  | M/N reqs covered |
-   | Coherence    | Followed/Issues  |
-   ```
-
-   **Issues by Priority**:
-
-   1. **CRITICAL** (Must fix before archive):
-      - Incomplete tasks
-      - Missing requirement implementations
-      - Each with specific, actionable recommendation
-
-   2. **WARNING** (Should fix):
-      - Spec/design divergences
-      - Missing scenario coverage
-      - Each with specific recommendation
-
-   3. **SUGGESTION** (Nice to fix):
-      - Pattern inconsistencies
-      - Minor improvements
-      - Each with specific recommendation
-
-   **Final Assessment**:
-   - If CRITICAL issues: "X critical issue(s) found. Fix before archiving."
-   - If only warnings: "No critical issues. Y warning(s) to consider. Ready for archive (with noted improvements)."
-   - If all clear: "All checks passed. Ready for archive."
-
-**Verification Heuristics**
-
-- **Completeness**: Focus on objective checklist items (checkboxes, requirements list)
-- **Correctness**: Use keyword search, file path analysis, reasonable inference - don't require perfect certainty
-- **Coherence**: Look for glaring inconsistencies, don't nitpick style
-- **False Positives**: When uncertain, prefer SUGGESTION over WARNING, WARNING over CRITICAL
-- **Actionability**: Every issue must have a specific recommendation with file/line references where applicable
-
-**Graceful Degradation**
-
-- If only tasks.md exists: verify task completion only, skip spec/design checks
-- If tasks + specs exist: verify completeness and correctness, skip design
-- If full artifacts: verify all three dimensions
-- Always note which checks were skipped and why
-
-**Output Format**
-
-Use clear markdown with:
-- Table for summary scorecard
-- Grouped lists for issues (CRITICAL/WARNING/SUGGESTION)
-- Code references in format: `file.ts:123`
-- Specific, actionable recommendations
-- No vague suggestions like "consider reviewing"
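The task-completion pass in the removed skill above (read tasks.md, parse `- [ ]` vs `- [x]`, flag each incomplete task as CRITICAL) is mechanical enough to sketch. A minimal Python version, where the `task_completion` helper and the issue-dict shape are illustrative and not part of specrails-core:

```python
import re

# Matches "- [ ] task" (incomplete) and "- [x] task" (complete),
# tolerating leading indentation and an uppercase X.
CHECKBOX = re.compile(r"^\s*- \[( |x|X)\] +(.*)$")


def task_completion(tasks_md: str) -> dict:
    """Count complete vs. total tasks and flag incomplete ones as CRITICAL."""
    complete, total, issues = 0, 0, []
    for line in tasks_md.splitlines():
        m = CHECKBOX.match(line)
        if not m:
            continue  # prose, headings, and non-checkbox bullets are skipped
        total += 1
        if m.group(1).lower() == "x":
            complete += 1
        else:
            issues.append({
                "severity": "CRITICAL",
                "issue": f"Incomplete task: {m.group(2)}",
                "recommendation": (
                    f"Complete task: {m.group(2)} "
                    "(or mark as done if already implemented)"
                ),
            })
    return {"complete": complete, "total": total, "issues": issues}
```

On a tasks.md containing one checked and one unchecked item, this yields a 1/2 count with a single CRITICAL issue, which maps directly onto the "X/Y tasks" cell of the summary scorecard.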
package/prompts/analyze-codebase.md
@@ -1,87 +0,0 @@
-# Codebase Analysis Prompt
-
-Analyze this repository to understand its architecture, stack, and conventions.
-
-## What to detect
-
-### 1. Languages & Frameworks
-- Scan for language-specific files: `*.py`, `*.ts`, `*.tsx`, `*.js`, `*.go`, `*.rs`, `*.java`, `*.kt`, `*.rb`, `*.cs`, `*.swift`
-- Check dependency files: `requirements.txt`, `pyproject.toml`, `package.json`, `go.mod`, `Cargo.toml`, `pom.xml`, `Gemfile`, `*.csproj`
-- Identify frameworks from imports: FastAPI, Django, Flask, Express, Next.js, React, Vue, Angular, Spring, Gin, Actix, Rails, etc.
-
-### 2. Architecture Layers
-For each detected layer, identify:
-- **Name**: e.g., "Backend", "Frontend", "Core", "API", "Mobile", "CLI"
-- **Path**: e.g., `backend/`, `src/`, `frontend/`, `app/`, `lib/`
-- **Tech**: e.g., "FastAPI (Python)", "React + TypeScript", "Go package"
-- **Tag**: e.g., `[backend]`, `[frontend]`, `[core]`, `[mobile]`, `[test]`
-
-### 3. CI/CD Commands
-Parse `.github/workflows/*.yml`, `.gitlab-ci.yml`, `Jenkinsfile`, `Makefile`, `package.json` scripts to find:
-- **Lint** commands (ruff, eslint, golangci-lint, clippy, rubocop)
-- **Format** commands (ruff format, prettier, gofmt, rustfmt)
-- **Test** commands (pytest, vitest, jest, go test, cargo test)
-- **Build** commands (tsc, vite build, go build, cargo build)
-- **Type check** commands (tsc --noEmit, mypy, pyright)
-
-### 4. Conventions
-Read 3-5 representative source files from each layer to detect:
-- Naming style (snake_case, camelCase, PascalCase)
-- Import organization patterns
-- Error handling approach
-- Testing patterns (framework, mocking, fixtures)
-- API style (REST, GraphQL, tRPC, gRPC)
-- State management (if frontend)
-- Database access pattern (ORM, raw SQL, repository pattern)
-
-### 5. Project Warnings
-Look for:
-- Concurrency constraints
-- Authentication patterns
-- State management gotchas
-- Known test isolation issues
-- Environment-specific behavior
-
-## Output Format
-
-Return a structured analysis:
-
-```yaml
-project:
-  name: "project-name"
-  description: "One-line description"
-
-stack:
-  - layer: "Backend"
-    tech: "FastAPI (Python 3.11)"
-    path: "backend/"
-    tag: "[backend]"
-  - layer: "Frontend"
-    tech: "React 19 + TypeScript + Vite"
-    path: "frontend/"
-    tag: "[frontend]"
-
-ci:
-  backend:
-    lint: "ruff check ."
-    format: "ruff format --check ."
-    test: "pytest tests/ -q"
-  frontend:
-    lint: "npm run lint"
-    typecheck: "npx tsc --noEmit"
-    test: "npx vitest run"
-
-conventions:
-  backend:
-    - "Routes are thin: validate → call service → return model"
-    - "Services own business logic"
-    - "Repository pattern for data access"
-  frontend:
-    - "Functional components with hooks"
-    - "TanStack Query for server state"
-    - "Tailwind for styling"
-
-warnings:
-  - "Auth: All endpoints require authentication"
-  - "Tests: Fixtures must use scope='function'"
-```
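The dependency-file check in the removed analyze-codebase prompt reduces to testing a lookup table against the repository root. A rough Python sketch, where the file-to-language mapping is an assumption covering only the files the prompt lists (it does not attempt the glob patterns like `*.csproj`):

```python
from pathlib import Path

# Dependency file -> language it suggests, per the detection list above.
DEPENDENCY_FILES = {
    "requirements.txt": "Python",
    "pyproject.toml": "Python",
    "package.json": "JavaScript/TypeScript",
    "go.mod": "Go",
    "Cargo.toml": "Rust",
    "pom.xml": "Java",
    "Gemfile": "Ruby",
}


def detect_languages(repo_root: str) -> set[str]:
    """Return the languages suggested by dependency files in the repo root."""
    root = Path(repo_root)
    return {
        lang
        for name, lang in DEPENDENCY_FILES.items()
        if (root / name).is_file()
    }
```

A real pass would also scan source extensions and framework imports, as the prompt describes; this table only covers the first and cheapest signal.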
package/prompts/generate-personas.md
@@ -1,61 +0,0 @@
-# Persona Generation Prompt
-
-Generate Value Proposition Canvas personas for the target users of this project.
-
-## Input
-
-The user has described their target users in natural language. Use this description to:
-
-1. **Research the competitive landscape** using WebSearch:
-   - Search for existing tools these users currently use
-   - Find common pain points and frustrations in forums (Reddit, HN, product reviews)
-   - Identify feature gaps in competitor products
-   - Understand workflow patterns and daily routines
-
-2. **Generate a detailed VPC persona** for each user type that includes:
-
-### Profile
-- **Nickname + Role**: A memorable name and short role description
-- **Age range**: Typical demographic
-- **Key behaviors**: 4-5 bullet points describing how they work/interact with tools
-- **Tools currently used**: List of specific competitor products/tools
-- **Spending pattern**: If relevant (monthly spend, budget constraints)
-- **Mindset**: How they think about the problem space
-
-### Value Proposition Canvas
-
-#### Customer Jobs (6-8 entries)
-| Type | Job |
-|------|-----|
-| **Functional** | Concrete tasks they need to accomplish |
-| **Social** | How they want to be perceived by others |
-| **Emotional** | How they want to feel |
-
-#### Pains (6-8 entries, graded by severity)
-| Severity | Pain |
-|----------|------|
-| **Critical** | Major blockers — things that cause real frustration or wasted time |
-| **High** | Significant issues that affect daily workflow |
-| **Medium** | Annoyances that are tolerable but reduce satisfaction |
-| **Low** | Minor inconveniences |
-
-#### Gains (6-8 entries, graded by impact)
-| Impact | Gain |
-|--------|------|
-| **High** | Game-changing improvements that would make them switch tools |
-| **Medium** | Meaningful improvements to their workflow |
-| **Low** | Nice-to-haves |
-
-### Key Insight
-> The single most important unmet need that this project can uniquely address. This should be the intersection of a critical pain + a high-impact gain that no competitor handles well.
-
-### Sources
-List the actual URLs used during research (competitive analysis, forums, reviews, documentation).
-
-## Quality Criteria
-
-- **Grounded in research**: Every pain and gain should be traceable to real user feedback, not assumptions
-- **Specific, not generic**: "Spending 30 minutes swapping cards between decks" > "Managing multiple decks is hard"
-- **Actionable**: Each pain/gain should suggest a potential feature
-- **Differentiated**: Personas should have meaningfully different needs — if they're too similar, merge them
-- **Realistic**: Don't invent fantasy users — base on actual market segments
package/prompts/infer-conventions.md
@@ -1,72 +0,0 @@
-# Convention Inference Prompt
-
-Analyze source files to infer coding conventions and patterns used in this project.
-
-## How to Analyze
-
-For each detected layer (backend, frontend, core, etc.):
-
-1. **Read 3-5 representative files** from that layer
-   - Pick files that are central to the layer (routes, components, services, models)
-   - Prefer files with 50-200 lines (enough to show patterns without being overwhelming)
-   - Avoid auto-generated files, config files, or test files for primary analysis
-
-2. **Identify these patterns:**
-
-### Naming
-- Variable naming: snake_case, camelCase, PascalCase
-- Function naming: snake_case, camelCase
-- Class/Type naming: PascalCase, SCREAMING_SNAKE
-- File naming: kebab-case, snake_case, PascalCase
-- Test naming: `test_behavior`, `describe/it`, `Test_Behavior`
-
-### Code Organization
-- Import ordering (stdlib → third-party → local)
-- Module structure (one class per file, multiple, etc.)
-- Export patterns (named exports, default exports, barrel files)
-- Separation of concerns (where does business logic live?)
-
-### Error Handling
-- Exception types (custom exceptions, error codes, Result types)
-- Error propagation (throw, return errors, Result/Option)
-- API error format (structured error responses, HTTP status codes)
-- Validation approach (Pydantic, Zod, custom validators)
-
-### Testing
-- Test framework (pytest, vitest, jest, go test)
-- Test organization (mirrored structure, flat, grouped by type)
-- Mocking approach (unittest.mock, jest.mock, testify)
-- Fixture patterns (pytest fixtures, setup/teardown, factory functions)
-- Test isolation concerns (scope, cleanup, temp directories)
-
-### API Patterns
-- Style (REST, GraphQL, tRPC, gRPC)
-- Route organization (resource-based, feature-based)
-- Middleware usage (auth, logging, rate limiting)
-- Request/response types (Pydantic, TypeScript interfaces, protobuf)
-
-### State Management (frontend)
-- Server state (TanStack Query, SWR, Redux Query)
-- Client state (Context, Zustand, Redux, Jotai)
-- Form handling (React Hook Form, Formik, native)
-
-### Database/Storage
-- Access pattern (Repository, DAO, Active Record, raw queries)
-- ORM (SQLAlchemy, Prisma, GORM, Diesel)
-- Migration tool (Alembic, Prisma Migrate, golang-migrate)
-- Connection management (pool, per-request, singleton)
-
-## Output Format
-
-For each layer, produce a concise list of conventions suitable for a `.claude/rules/{layer}.md` file:
-
-```markdown
-# {Layer Name} Conventions
-
-- Convention 1: description
-- Convention 2: description
-- Convention 3: description
-...
-```
-
-Keep each convention to one line. Focus on patterns that a developer needs to follow when writing new code in this layer.
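The naming-style detection described in the removed infer-conventions prompt can be approximated with ordered regex checks. A sketch in Python; the `naming_style` helper and its patterns are hypothetical, covering only the styles the prompt names:

```python
import re

# Checked in order: SCREAMING_SNAKE before snake_case, PascalCase
# before camelCase, so the more specific pattern wins.
STYLES = [
    ("SCREAMING_SNAKE", re.compile(r"^[A-Z][A-Z0-9]*(_[A-Z0-9]+)+$")),
    ("snake_case", re.compile(r"^[a-z][a-z0-9]*(_[a-z0-9]+)+$")),
    ("kebab-case", re.compile(r"^[a-z][a-z0-9]*(-[a-z0-9]+)+$")),
    ("PascalCase", re.compile(r"^([A-Z][a-z0-9]+){2,}$")),
    ("camelCase", re.compile(r"^[a-z]+([A-Z][a-z0-9]*)+$")),
]


def naming_style(identifier: str) -> str:
    """Classify an identifier into one of the naming styles listed above."""
    for name, pattern in STYLES:
        if pattern.match(identifier):
            return name
    return "unknown"
```

Tallying this over the identifiers in a layer's 3-5 sample files gives a dominant style per category (variables, functions, classes, file names), which is what the one-line conventions in the output format would record.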