specrails-core 1.3.0 → 1.4.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "specrails-core",
-  "version": "1.3.0",
+  "version": "1.4.0",
   "description": "AI agent workflow system for Claude Code — installs 12 specialized agents, orchestration commands, and persona-driven product discovery into any repository",
   "bin": {
     "specrails-core": "bin/specrails-core.js"
@@ -0,0 +1,259 @@
---
name: "Agent Memory Inspector"
description: "Inspect and manage agent memory directories. Lists all sr-* agent memory stores, shows per-agent stats (file count, size, last modified), displays recent entries, and detects stale or orphaned files."
category: Workflow
tags: [workflow, memory, agents, maintenance, diagnostics]
---

Inspect agent memory directories under `.claude/agent-memory/sr-*/` for **{{PROJECT_NAME}}**. Show per-agent stats, recent entries, and actionable recommendations.

**Input:** `$ARGUMENTS` — optional:
- `<agent-name>` — inspect a specific agent's memory (e.g. `sr-developer`, `sr-reviewer`)
- `--stale <days>` — flag files not modified in more than N days as stale (default: 30)
- `--prune` — delete stale files after confirmation (prints the list first, then asks)

---

## Phase 0: Argument Parsing

Parse `$ARGUMENTS` to set runtime variables.

**Variables to set:**

- `AGENT_FILTER` — string or empty string. Default: `""` (inspect all agents).
- `STALE_DAYS` — integer. Default: `30`.
- `PRUNE_MODE` — boolean. Default: `false`.

**Parsing rules:**

1. Scan `$ARGUMENTS` for `--stale <N>`. If found, set `STALE_DAYS=<N>`. Validate that `<N>` is a positive integer; if not, print `Error: --stale requires a positive integer (e.g. --stale 14)` and stop. Strip from arguments.
2. Scan for `--prune`. If found, set `PRUNE_MODE=true`. Strip from arguments.
3. Remaining non-flag text (if any) is treated as `AGENT_FILTER`. Strip leading/trailing whitespace.

**Print active configuration:**

```
Scanning: <all agents | agent: AGENT_FILTER> | Stale threshold: STALE_DAYS days | Prune: yes/no
```

---
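The Phase 0 parsing rules above can be sketched in POSIX shell. This is an illustrative sketch only — `ARGS` stands in for `$ARGUMENTS`, and the sample value is hypothetical:

```shell
# Sketch of the Phase 0 rules; ARGS stands in for $ARGUMENTS.
ARGS="sr-developer --stale 14 --prune"

AGENT_FILTER=""
STALE_DAYS=30
PRUNE_MODE=false

set -- $ARGS
while [ $# -gt 0 ]; do
  case "$1" in
    --stale)
      case "$2" in
        ''|*[!0-9]*|0) echo "Error: --stale requires a positive integer (e.g. --stale 14)"; exit 1 ;;
        *) STALE_DAYS=$2; shift ;;   # consume the numeric value
      esac ;;
    --prune) PRUNE_MODE=true ;;
    *) AGENT_FILTER="$1" ;;          # remaining non-flag text
  esac
  shift
done

echo "Scanning: ${AGENT_FILTER:-all agents} | Stale threshold: $STALE_DAYS days | Prune: $PRUNE_MODE"
```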

## Phase 1: Discover Memory Directories

Glob all directories matching `.claude/agent-memory/sr-*/`.

If no directories are found:
```
No agent memory directories found under .claude/agent-memory/.

Agent memory is written by sr-* agents during the /sr:implement pipeline.
Run /sr:implement on a feature to generate your first memory entries.
```
Then stop.

If `AGENT_FILTER` is set, filter to only the directory `.claude/agent-memory/<AGENT_FILTER>/`. If that directory does not exist:
```
No memory directory found for agent: <AGENT_FILTER>

Available agents:
<list of discovered sr-* directory names>
```
Then stop.

Set `AGENT_DIRS` = list of matching directories (full paths), sorted alphabetically.

---
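The Phase 1 discovery step can be sketched with a plain glob; shell globs already expand in sorted order. A throwaway fixture tree stands in for a real repository:

```shell
# Sketch of Phase 1 discovery against a temporary fixture tree.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/.claude/agent-memory/sr-developer" \
         "$ROOT/.claude/agent-memory/sr-reviewer"

AGENT_DIRS=""
for d in "$ROOT"/.claude/agent-memory/sr-*/; do
  [ -d "$d" ] && AGENT_DIRS="$AGENT_DIRS $d"   # collect full paths, sorted by the glob
done

if [ -z "$AGENT_DIRS" ]; then
  echo "No agent memory directories found under .claude/agent-memory/."
else
  echo "Found:$AGENT_DIRS"
fi
rm -rf "$ROOT"
```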

## Phase 2: Collect Per-Agent Stats

For each directory in `AGENT_DIRS`, collect:

- `AGENT_NAME` — directory name (e.g. `sr-developer`)
- `FILE_COUNT` — total number of files (recursive, all types)
- `TOTAL_SIZE` — total size in bytes; display as human-readable (KB, MB)
- `LAST_MODIFIED` — ISO date of the most recently modified file
- `OLDEST_MODIFIED` — ISO date of the least recently modified file
- `STALE_FILES` — list of files not modified in more than `STALE_DAYS` days (full paths)
- `STALE_COUNT` — count of stale files

Use the current date to compute stale age. A file is stale if `(today - last_modified) > STALE_DAYS`.

Print a summary table after collecting all stats:

```
## Agent Memory Overview

| Agent | Files | Size | Last Modified | Stale (>STALE_DAYS days) |
|-------|-------|------|---------------|--------------------------|
| sr-developer | N | N KB | YYYY-MM-DD | N files |
| sr-reviewer | N | N KB | YYYY-MM-DD | N files |
| ... | ... | ... | ... | ... |

Total: N agents | N files | N KB
```

---
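The per-agent counters above map naturally onto `find`, `wc`, and `du`. A sketch against a fixture directory (assumes GNU coreutils for `touch -d`; `find -mtime +N` matches files strictly older than N days, matching the staleness rule):

```shell
# Sketch of Phase 2 per-agent stats; DIR stands in for one sr-* memory dir.
STALE_DAYS=30
DIR=$(mktemp -d)
printf 'fix A\n' > "$DIR/common-fixes.md"
printf 'note\n'  > "$DIR/notes.md"
touch -d '40 days ago' "$DIR/common-fixes.md"   # make one file stale (GNU touch)

FILE_COUNT=$(find "$DIR" -type f | wc -l)
TOTAL_SIZE=$(du -sh "$DIR" | cut -f1)
# A file is stale if (today - last_modified) > STALE_DAYS
STALE_COUNT=$(find "$DIR" -type f -mtime +"$STALE_DAYS" | wc -l)

echo "Files: $FILE_COUNT | Size: $TOTAL_SIZE | Stale: $STALE_COUNT"
rm -rf "$DIR"
```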

## Phase 3: Display Recent Entries

For each agent in `AGENT_DIRS`, show the 5 most recently modified files.

Print per agent:

```
### <agent-name>

Recent entries (5 most recent):

| File | Size | Last Modified |
|------|------|---------------|
| common-fixes.md | 2.1 KB | 2026-03-18 |
| ... | ... | ... |
```

If the agent directory has fewer than 5 files, show all of them.

If `AGENT_FILTER` is set (single-agent mode), show the full content of each file up to 50 lines. For files exceeding 50 lines, print the first 50 lines followed by:
```
... (N more lines — view full file at <relative-path>)
```

---
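"5 most recently modified" is a one-liner in shell: sort by mtime, take the head. A sketch with a seven-file fixture (GNU `touch -d` assumed):

```shell
# Sketch of Phase 3: newest-first listing, capped at 5 entries.
DIR=$(mktemp -d)
for i in 1 2 3 4 5 6 7; do
  echo "entry $i" > "$DIR/file$i.md"
  touch -d "$i days ago" "$DIR/file$i.md"   # file1 ends up newest
done

RECENT=$(ls -t "$DIR" | head -5)   # -t sorts by modification time, newest first
echo "$RECENT"
rm -rf "$DIR"
```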

## Phase 4: Orphan Detection

An **orphaned** memory directory is one whose agent name does not correspond to a known sr-agent persona.

Known sr-agent names (check for exact match):
`sr-architect`, `sr-developer`, `sr-test-writer`, `sr-reviewer`, `sr-frontend-reviewer`, `sr-backend-reviewer`, `sr-security-reviewer`, `sr-doc-sync`, `sr-product-manager`

For each directory in `AGENT_DIRS`, check whether its `AGENT_NAME` is in the known list. Collect non-matching directories as `ORPHANED_DIRS`.

If `ORPHANED_DIRS` is non-empty, print:

```
### Orphaned Memory Directories

The following directories do not match any known sr-agent name and may be left over from renamed or removed agents:

| Directory | Files | Size | Recommendation |
|-----------|-------|------|----------------|
| sr-old-agent | N | N KB | Review and delete if no longer needed |
```

If `ORPHANED_DIRS` is empty: skip this section entirely.

---
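The exact-match check against the known-agent list can be sketched with a `case` pattern; the sample directory names are hypothetical:

```shell
# Sketch of Phase 4: flag names missing from the known sr-agent list.
KNOWN="sr-architect sr-developer sr-test-writer sr-reviewer sr-frontend-reviewer \
sr-backend-reviewer sr-security-reviewer sr-doc-sync sr-product-manager"

ORPHANED_DIRS=""
for AGENT_NAME in sr-developer sr-old-agent sr-reviewer; do   # sample AGENT_DIRS names
  case " $KNOWN " in
    *" $AGENT_NAME "*) ;;                                     # exact word match: known agent
    *) ORPHANED_DIRS="$ORPHANED_DIRS $AGENT_NAME" ;;          # collect orphans
  esac
done
echo "Orphaned:$ORPHANED_DIRS"
```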

## Phase 5: Stale File Report

Collect all stale files across all agents (from Phase 2 `STALE_FILES` lists).

If no stale files exist:
```
No stale files found (threshold: STALE_DAYS days). Memory is up to date.
```
Skip the rest of Phase 5.

Otherwise, print:

```
### Stale Files (not modified in >STALE_DAYS days)

| Agent | File | Size | Last Modified | Age (days) |
|-------|------|------|---------------|------------|
| sr-developer | common-fixes.md | 1.2 KB | 2026-01-10 | 69 |
| ... | ... | ... | ... | ... |

N stale files total (N KB).
```

---

## Phase 6: Prune (if --prune)

Skip this phase if `PRUNE_MODE=false`.

If `PRUNE_MODE=true` and there are no stale files and no orphaned directories:
```
Nothing to prune. All memory files are within the STALE_DAYS-day threshold.
```
Then stop.

Otherwise, print the full list of files and directories that will be deleted:

```
## Files to Delete

The following N files will be permanently deleted:

Stale files:
- .claude/agent-memory/sr-developer/common-fixes.md (69 days old)
- ...

Orphaned directories:
- .claude/agent-memory/sr-old-agent/ (N files, N KB)

Proceed? [y/N]:
```

Wait for user input.

- If the user enters `y` or `Y`:
  - Delete each stale file individually.
  - Delete each orphaned directory recursively.
  - Print a confirmation for each deletion: `Deleted: <path>`
  - Print a summary:
    ```
    Pruned N files (N KB freed).
    ```
- If the user enters anything else (or presses Enter):
  - Print: `Prune cancelled. No files were deleted.`
  - Stop.

---
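The confirmation gate — default to "no", accept only `y`/`Y` — can be sketched as a small shell function; the piped answers below simply simulate user input:

```shell
# Sketch of the Phase 6 confirmation gate; input is piped in for the demo.
confirm_prune() {
  printf 'Proceed? [y/N]: ' >&2   # prompt on stderr so stdout carries the result
  read -r REPLY
  case "$REPLY" in
    y|Y) echo "pruning" ;;
    *)   echo "Prune cancelled. No files were deleted." ;;   # Enter or anything else
  esac
}

ANSWER_YES=$(echo "y" | confirm_prune)
ANSWER_NO=$(echo ""  | confirm_prune)
echo "$ANSWER_YES / $ANSWER_NO"
```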

## Phase 7: Recommendations

Print a final recommendations section based on findings:

```
## Recommendations

<one or more of the following, based on findings>
```

**Recommendation rules (print only applicable ones):**

1. **Prune stale data** — if `STALE_COUNT > 0` across any agent and `PRUNE_MODE=false`:
   ```
   - N stale files detected. Run `/sr:memory-inspect --prune` to remove them and free N KB.
   ```

2. **Investigate large memory** — if any single agent's `TOTAL_SIZE > 1 MB`:
   ```
   - <agent-name> memory exceeds 1 MB (TOTAL_SIZE). Consider reviewing large files:
     <list files over 100 KB>
   ```

3. **Orphaned directories** — if `ORPHANED_DIRS` is non-empty:
   ```
   - N orphaned director(y|ies) found. Review and delete manually if no longer needed.
   ```

4. **Empty memory directories** — if any agent directory has `FILE_COUNT = 0`:
   ```
   - <agent-name> memory directory is empty. It may be safe to delete:
     rm -rf .claude/agent-memory/<agent-name>/
   ```

5. **Gitignore advisory** — check whether `.claude/agent-memory` appears in `.gitignore`. If not:
   ```
   - Agent memory is local runtime state. Add to .gitignore:
     echo '.claude/agent-memory/' >> .gitignore
   ```

If no recommendations apply, print:
```
All agent memory looks healthy. No action required.
```
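The gitignore advisory in Phase 7's recommendation 5 reduces to a `grep -q` check. A sketch against a fixture `.gitignore` that lacks the entry:

```shell
# Sketch of recommendation 5: advise only when agent memory is not gitignored.
GITIGNORE=$(mktemp)
printf 'node_modules/\ndist/\n' > "$GITIGNORE"   # fixture without the entry

if grep -q '\.claude/agent-memory' "$GITIGNORE"; then
  ADVISE=false
else
  ADVISE=true   # entry missing: print the advisory
fi
rm -f "$GITIGNORE"
echo "Advise adding .gitignore entry: $ADVISE"
```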
@@ -0,0 +1,405 @@
---
name: "VPC Persona Drift Detector"
description: "Detect when user personas defined in the VPC are drifting from actual usage patterns. Compares persona Jobs/Pains/Gains against the product backlog, implemented features, and agent memory to surface alignment gaps and recommend VPC updates."
category: Product
tags: [product, vpc, personas, drift, alignment]
---

Analyze **{{PROJECT_NAME}}** for VPC persona drift — gaps between what persona definitions promise and what the product actually delivers. Produce a per-persona alignment score, drifted attributes, and concrete VPC update recommendations.

**Input:** `$ARGUMENTS` — optional flags:
- `--persona <names>` — comma-separated persona names to analyze. Default: all personas.
- `--verbose` — show full attribute lists in output (default: summarized).
- `--format json` — emit the drift report as JSON instead of Markdown.

---

## Phase 0: Argument Parsing

Parse `$ARGUMENTS` to set runtime variables.

**Variables to set:**

- `PERSONA_FILTER` — array of lowercased persona names, or `"all"`. Default: `"all"`.
- `VERBOSE` — boolean. Default: `false`.
- `FORMAT` — `"markdown"` or `"json"`. Default: `"markdown"`.

**Parsing rules:**

1. Scan `$ARGUMENTS` for `--persona <names>`. If found, split `<names>` on commas, lowercase each, set `PERSONA_FILTER=<array>`. Strip from arguments.
2. Scan for `--verbose`. If found, set `VERBOSE=true`. Strip from arguments.
3. Scan for `--format <value>`. If found: `json` sets `FORMAT="json"`; `markdown` keeps the default. Any other value: print `Error: unknown format "<value>". Valid: markdown, json` and stop.

**Print active configuration:**

```
Analyzing personas: <all | comma-separated list>
Format: <markdown|json>
Verbose: <yes|no>
```

---

## Phase 1: Load VPC Personas

Read the persona files to extract the VPC attribute definitions.

### Step 1a: Discover persona files

Glob for persona files using these paths in order (use the first that yields results):

1. `.claude/agents/` — look for `.md` files whose content includes a `## Value Proposition Canvas` section.
2. `{{PERSONA_DIR}}/` — project-level persona directory (set by installer).

If no persona files are found in either location:

```
Error: No VPC persona files found.
Expected location: .claude/agents/*.md or {{PERSONA_DIR}}/*.md
Each persona file must contain a ## Value Proposition Canvas section.
Run /setup to generate persona files from templates.
```

Stop.

### Step 1b: Parse each persona

For each discovered file, extract:

- `PERSONA_NAME` — from the `# Persona:` heading or frontmatter `name:` field.
- `PERSONA_ROLE` — from the profile table row `**Name**` (the role portion after "— The ").
- `JOBS` — rows from the `### Customer Jobs` table. Each row: `{ type, job }`.
- `PAINS` — rows from the `### Pains` table. Each row: `{ severity, pain }`.
- `GAINS` — rows from the `### Gains` table. Each row: `{ impact, gain }`.

If `PERSONA_FILTER` is not `"all"`, skip any persona whose lowercased name is not in `PERSONA_FILTER`.

Store parsed personas in `PERSONAS` (array of objects).

**Print after discovery:**

```
Found <N> persona(s): <Name1> (<Role1>), <Name2> (<Role2>), ...
```

If `PERSONA_FILTER` was applied and yielded 0 matches:

```
Error: No personas matched filter: <PERSONA_FILTER>. Check spelling and try again.
```

Stop.

---
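Pulling data rows out of a markdown table like `### Customer Jobs` can be sketched with `awk`: switch on the section heading, then skip the table's header and separator rows. The persona fixture below is hypothetical:

```shell
# Sketch of Step 1b: extract data rows from the "### Customer Jobs" table.
FILE=$(mktemp)
cat > "$FILE" <<'EOF'
# Persona: Alex

## Value Proposition Canvas

### Customer Jobs

| Type | Job |
|------|-----|
| functional | Ship features quickly |
| emotional | Feel confident in releases |

### Pains
EOF

JOBS=$(awk '/^### Customer Jobs/ { grab = 1; next }
            /^### /              { grab = 0 }
            grab && /^\|/        { if (++n > 2) print }   # skip header + separator rows
           ' "$FILE")
echo "$JOBS"
rm -f "$FILE"
```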

## Phase 2: Load Product Signals

Gather the three signal sources: backlog, implemented features, and agent memory.

### Step 2a: Backlog (requested features)

Load open/pending feature requests — these represent what the product *intends* to deliver.

1. **Cache:** Check whether `.claude/backlog-cache.json` exists and is valid JSON. If so, read all issues from it (`issues` map). Set `BACKLOG_SOURCE="cache"`.
2. **Live:** If no cache, run:
   ```bash
   {{BACKLOG_FETCH_ALL_CMD}}
   ```
   If the backlog provider is unavailable, set `BACKLOG_ITEMS=[]` and print:
   ```
   Warning: backlog provider unavailable. Backlog signal will be skipped.
   ```
3. Parse each backlog item to extract:
   - `title` — feature name.
   - `description` — feature description (first 300 chars).
   - `persona_scores` — per-persona scores from the Overview table (if present). Format: `{ "Alex": 3, "Sara": 5, "Kai": 0 }`.
   - `area` — from the `area:*` label.

Store in `BACKLOG_ITEMS`. Print: `Backlog loaded: <N> items (source: <cache|live>)`.

### Step 2b: Implemented features

Gather signals about what has *actually been built*.

Run the following in sequence (each is best-effort — continue even if any fails):

**i. Git log (last 90 days):**
```bash
git log --oneline --since="90 days ago" --no-merges 2>/dev/null
```
Extract commit subjects. Filter out pure chore/docs/test/ci commits (those whose subject starts with `chore:`, `docs:`, `test:`, `ci:`). Store in `COMMIT_MESSAGES`.

**ii. CHANGELOG.md / CHANGELOG:**
Check whether `CHANGELOG.md` or `CHANGELOG` exists at the repo root. If found, read the last 500 lines. Extract headings and bullet points as feature descriptions. Store in `CHANGELOG_ENTRIES`.

**iii. Closed backlog issues (if GH available):**
```bash
{{BACKLOG_FETCH_CLOSED_CMD}}
```
Parse closed items the same way as open backlog items. Store in `CLOSED_ITEMS`.

Build `IMPLEMENTED_FEATURES` = array of strings combining `COMMIT_MESSAGES` + `CHANGELOG_ENTRIES` + closed item titles. Deduplicate by lowercased text.

Print: `Implemented signals: <N> commits, <N> changelog entries, <N> closed items`.

### Step 2c: Agent memory usage patterns

Check whether `.claude/agent-memory/` exists. If it does, glob all `.md` files within it. For each file:
- Read the filename and first 200 chars of content.
- Extract any feature names, tool names, or workflow keywords mentioned.

Store extracted terms in `MEMORY_SIGNALS` (flat string array).

If the directory does not exist or is empty: set `MEMORY_SIGNALS=[]`. Print: `Agent memory: no signals found.`

Otherwise: Print: `Agent memory: <N> signals from <N> files.`

---
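The Step 2b merge — combine signal lists, deduplicate by lowercased text — can be sketched with `tr` and `sort -u`. Note this sketch normalizes everything to lowercase rather than preserving original casing; the sample signal values are hypothetical:

```shell
# Sketch of the Step 2b merge: combine signal lists, dedupe by lowercased text.
COMMIT_MESSAGES="Add VPC export
add vpc export
Fix stale-file pruning"
CHANGELOG_ENTRIES="Fix stale-file pruning"

IMPLEMENTED_FEATURES=$(printf '%s\n%s\n' "$COMMIT_MESSAGES" "$CHANGELOG_ENTRIES" \
  | tr 'A-Z' 'a-z' | sort -u)   # lowercase, then drop duplicates
echo "$IMPLEMENTED_FEATURES"
```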

## Phase 3: Drift Analysis — Per Persona

For each persona in `PERSONAS`, perform a full alignment analysis.

### Step 3a: Build a feature corpus

Create a combined text corpus:
```
CORPUS = BACKLOG_ITEMS titles + descriptions
       + IMPLEMENTED_FEATURES
       + MEMORY_SIGNALS
```

### Step 3b: Attribute matching

For each VPC attribute (Job, Pain, Gain), determine whether it is *addressed* by the corpus.

**Matching rule:** An attribute is considered addressed if at least one corpus entry contains 2+ meaningful keyword matches from the attribute text. Use semantic matching (synonyms count — e.g., "slow" matches "latency", "performance"). If exact matching is insufficient, use AI-assisted reasoning to determine relevance.

For each attribute, record:
- `addressed` — boolean: is this attribute addressed?
- `matched_by` — array of corpus items (up to 3) that most strongly address it.
- `match_confidence` — `"strong"` (3+ keywords or explicit mention), `"weak"` (2 keywords, indirect), `"none"`.

### Step 3c: Compute alignment scores

```
JOBS_ADDRESSED = count(jobs where addressed=true)
PAINS_RELIEVED = count(pains where addressed=true)
GAINS_CREATED  = count(gains where addressed=true)

JOBS_SCORE  = JOBS_ADDRESSED / total_jobs  (0.0–1.0)
PAINS_SCORE = PAINS_RELIEVED / total_pains (0.0–1.0)
GAINS_SCORE = GAINS_CREATED / total_gains  (0.0–1.0)

OVERALL_SCORE = (JOBS_SCORE + PAINS_SCORE + GAINS_SCORE) / 3
```

If a category has 0 attributes (e.g., no pains defined): exclude it from the OVERALL_SCORE denominator.

### Step 3d: Classify drift level

| Overall Score | Drift Level |
|---------------|-------------|
| ≥ 0.80 | Low |
| 0.60 – <0.80 | Medium |
| 0.40 – <0.60 | High |
| < 0.40 | Critical |

### Step 3e: Identify drifted attributes

A VPC attribute is **drifted** when `addressed=false`.

Rank drifted attributes by severity/impact weight:
- Pains with severity `critical` → weight 3
- Pains with severity `high` or Jobs/Gains with impact `high` → weight 2
- All others → weight 1

Sort drifted attributes by weight descending.

### Step 3f: Identify misaligned backlog items

A backlog item is **misaligned** for this persona when:
1. The item's `persona_scores` gives this persona a score of 0, AND
2. The item's description does not match any of this persona's VPC attributes (by the same matching rule as Step 3b).

OR when the item has no persona score data at all and its description does not semantically relate to any of this persona's Jobs/Pains/Gains.

### Step 3g: Generate VPC update recommendations

For each drifted attribute (weight ≥ 2), produce a concrete recommendation:

- If many features address a *different* pain than what's defined: "Consider updating the `<Pain>` attribute to reflect the observed pattern: [observed pattern]."
- If a Job is completely unaddressed across the product: "Either prioritize features addressing `<Job>`, or remove it from the VPC if no longer relevant."
- If a Gain is partially addressed: "Strengthen the `<Gain>` attribute description to capture the nuance being delivered by [feature(s)]."

Limit to top 5 recommendations per persona, sorted by weight descending.

Store per-persona results in `PERSONA_DRIFT` array.

---
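The Step 3c formulas and the Step 3d classification can be sketched with `awk` arithmetic; the counts below are hypothetical sample values:

```shell
# Sketch of Steps 3c/3d: score the category ratios, then classify drift.
JOBS_ADDRESSED=3; TOTAL_JOBS=4
PAINS_RELIEVED=1; TOTAL_PAINS=3
GAINS_CREATED=2;  TOTAL_GAINS=2

OVERALL=$(awk -v j="$JOBS_ADDRESSED" -v tj="$TOTAL_JOBS" \
              -v p="$PAINS_RELIEVED" -v tp="$TOTAL_PAINS" \
              -v g="$GAINS_CREATED"  -v tg="$TOTAL_GAINS" \
  'BEGIN { printf "%.2f", (j/tj + p/tp + g/tg) / 3 }')

# Classify drift level from OVERALL_SCORE (Step 3d table)
DRIFT=$(awk -v s="$OVERALL" 'BEGIN {
  if      (s >= 0.80) print "Low"
  else if (s >= 0.60) print "Medium"
  else if (s >= 0.40) print "High"
  else                print "Critical"
}')
echo "Overall: $OVERALL | Drift: $DRIFT"
```

With 3/4, 1/3, and 2/2 addressed, the overall score lands just under 0.70, which the table classifies as Medium drift.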

## Phase 4: Detect Cross-Persona Patterns

After all per-persona analyses are complete, look for systemic patterns.

**Over-represented persona:** If one persona's backlog items make up > 60% of total items, flag it:
```
⚠️ Over-representation detected: <PersonaName> drives <N>% of backlog items.
This may indicate under-investment in other personas' pain points.
```

**Under-served persona:** If a persona's OVERALL_SCORE < 0.40:
```
🚨 Critical drift for <PersonaName>: only <N>% of their VPC attributes are being addressed.
```

**Orphan backlog items:** Items with no persona linkage at all (neither from score data nor from semantic matching). Count them. If > 20% of total backlog, flag:
```
⚠️ <N> backlog items (<N>%) have no clear persona linkage.
Consider running /sr:update-product-driven-backlog to re-evaluate them.
```

Store in `CROSS_PERSONA_FINDINGS`.

---

## Phase 5: Build and Render Drift Report

### If FORMAT = "json"

Emit a single JSON object:

```json
{
  "schema_version": "1",
  "project": "{{PROJECT_NAME}}",
  "generated_at": "<ISO 8601 timestamp>",
  "personas": [
    {
      "name": "<PersonaName>",
      "role": "<Role>",
      "drift_level": "<Low|Medium|High|Critical>",
      "scores": {
        "jobs": <0.0–1.0>,
        "pains": <0.0–1.0>,
        "gains": <0.0–1.0>,
        "overall": <0.0–1.0>
      },
      "drifted_attributes": [
        { "category": "<job|pain|gain>", "text": "...", "weight": <1|2|3> }
      ],
      "misaligned_items": ["<title>", ...],
      "recommendations": ["..."]
    }
  ],
  "cross_persona_findings": ["..."],
  "summary": {
    "total_personas": <N>,
    "critical": <N>,
    "high": <N>,
    "medium": <N>,
    "low": <N>
  }
}
```

Stop after emitting JSON.

### If FORMAT = "markdown"

Render the full drift report:

```
## VPC Persona Drift Report — {{PROJECT_NAME}}
Generated: <YYYY-MM-DD HH:MM> | Backlog: <N> items | Implemented signals: <N>

### Summary

| Persona | Role | Jobs | Pains | Gains | Overall | Drift Level |
|---------|------|------|-------|-------|---------|-------------|
| <Name> | <Role> | <N%> | <N%> | <N%> | <N%> | 🟢 Low / 🟡 Medium / 🟠 High / 🔴 Critical |

<for each CROSS_PERSONA_FINDING: render the warning/flag block>

---
```

Then for each persona:

```
### Persona: <Name> — <Role>

**Drift Level:** 🟢/🟡/🟠/🔴 <Level> | **Alignment: <N>%** (Jobs: <N>%, Pains: <N>%, Gains: <N>%)

#### ✅ Addressed Attributes (<N> of <total>)

<if VERBOSE=true:>
| Category | Attribute | Confidence | Matched by |
|----------|-----------|------------|------------|
| Job | <text> | Strong | <feature1>, <feature2> |
| Pain | <text> | Weak | <feature1> |

<if VERBOSE=false:>
- **Jobs**: <N> of <total> addressed
- **Pains**: <N> of <total> relieved
- **Gains**: <N> of <total> created

#### ⚠️ Drifted Attributes (<N> unaddressed)

| Category | Attribute | Severity/Impact | Weight |
|----------|-----------|-----------------|--------|
| Pain | <text> | critical | ●●● |
| Job | <text> | high | ●● |
| Gain | <text> | medium | ● |

<if no drifted attributes:>
_No drifted attributes — all VPC definitions are reflected in the product._

#### ❌ Misaligned Backlog Items (<N> items)

<if items exist:>
| # | Title | Persona Score | Why Misaligned |
|---|-------|---------------|----------------|
| <number> | <title> | 0/5 | No matching VPC attribute |

<if no items:>
_All backlog items have clear VPC alignment for this persona._

#### 💡 Recommended VPC Updates

<numbered list of up to 5 recommendations>

---
```

After all personas:

```
### Next Steps

1. Review drifted attributes and decide: **update VPC** (if the product has legitimately evolved) or **add backlog items** (if the persona's needs are being neglected).
2. Run `/sr:update-product-driven-backlog` after updating personas to regenerate aligned feature ideas.
3. Re-run `/sr:vpc-drift` after one sprint to measure improvement.

_Generated by `/sr:vpc-drift` in {{PROJECT_NAME}} on <ISO date>_
```

---

## Phase 6: Save Snapshot (optional)

After rendering, write a drift snapshot to `.claude/vpc-drift-history/`:

1. Filename: `vpc-drift-<YYYY-MM-DD>.json`
2. Directory: `.claude/vpc-drift-history/` (create if absent, idempotent).
3. Content: the same JSON object described in Phase 5 (regardless of the FORMAT setting).

Print: `Snapshot saved: .claude/vpc-drift-history/vpc-drift-<YYYY-MM-DD>.json`

If the write fails: print `Warning: could not write drift snapshot. Continuing.` Do not abort.

**Housekeeping:** If `.claude/vpc-drift-history/` has more than 30 `.json` files, print:
```
Note: .claude/vpc-drift-history/ has <N> snapshots. Prune old ones with:
ls -t .claude/vpc-drift-history/ | tail -n +31 | xargs -I{} rm .claude/vpc-drift-history/{}
```
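The housekeeping threshold above can be sketched as a count-and-flag check; `HIST` (a temp directory) stands in for `.claude/vpc-drift-history/`, and the 32 fixture snapshots are hypothetical:

```shell
# Sketch of Phase 6 housekeeping: emit the prune note when snapshots exceed 30.
HIST=$(mktemp -d)                      # stands in for .claude/vpc-drift-history/
i=1
while [ "$i" -le 32 ]; do
  : > "$HIST/vpc-drift-2026-01-$i.json"
  i=$((i + 1))
done

COUNT=$(ls "$HIST"/*.json | wc -l)
NOTE=""
if [ "$COUNT" -gt 30 ]; then
  NOTE="Note: $HIST has $COUNT snapshots."
fi
rm -rf "$HIST"
echo "$NOTE"
```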