relay-workflow 2.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude/skills/relay-analyze/SKILL.md +6 -0
- package/.claude/skills/relay-analyze/workflow.md +108 -0
- package/.claude/skills/relay-brainstorm/SKILL.md +6 -0
- package/.claude/skills/relay-brainstorm/workflow.md +114 -0
- package/.claude/skills/relay-cleanup/SKILL.md +6 -0
- package/.claude/skills/relay-cleanup/workflow.md +53 -0
- package/.claude/skills/relay-design/SKILL.md +6 -0
- package/.claude/skills/relay-design/workflow.md +108 -0
- package/.claude/skills/relay-discover/SKILL.md +6 -0
- package/.claude/skills/relay-discover/workflow.md +56 -0
- package/.claude/skills/relay-help/SKILL.md +6 -0
- package/.claude/skills/relay-help/workflow.md +165 -0
- package/.claude/skills/relay-new-issue/SKILL.md +6 -0
- package/.claude/skills/relay-new-issue/workflow.md +90 -0
- package/.claude/skills/relay-notebook/SKILL.md +6 -0
- package/.claude/skills/relay-notebook/workflow.md +264 -0
- package/.claude/skills/relay-order/SKILL.md +6 -0
- package/.claude/skills/relay-order/workflow.md +51 -0
- package/.claude/skills/relay-plan/SKILL.md +6 -0
- package/.claude/skills/relay-plan/workflow.md +133 -0
- package/.claude/skills/relay-resolve/SKILL.md +6 -0
- package/.claude/skills/relay-resolve/workflow.md +135 -0
- package/.claude/skills/relay-review/SKILL.md +6 -0
- package/.claude/skills/relay-review/workflow.md +163 -0
- package/.claude/skills/relay-scan/SKILL.md +6 -0
- package/.claude/skills/relay-scan/workflow.md +103 -0
- package/.claude/skills/relay-setup/SKILL.md +6 -0
- package/.claude/skills/relay-setup/workflow.md +296 -0
- package/.claude/skills/relay-verify/SKILL.md +6 -0
- package/.claude/skills/relay-verify/workflow.md +100 -0
- package/LICENSE +21 -0
- package/README.md +374 -0
- package/package.json +43 -0
- package/tools/cli.js +186 -0
@@ -0,0 +1,90 @@ package/.claude/skills/relay-new-issue/workflow.md

# Relay: Create Issue

**Sequence**: **`/relay-new-issue`** → `/relay-scan` → `/relay-order` → `/relay-analyze` → `/relay-plan` → `/relay-review` → *implement* → `/relay-verify` → `/relay-notebook` → `/relay-resolve`

## How to invoke

There are two usage patterns:

**In-context** — you're already investigating the codebase and find something:
```
/relay-new-issue The batch processor silently drops episodes when the connection pool is exhausted.
```

**Cross-chat handoff** — you found something in another conversation and want to file it. Paste the relevant context so the AI can investigate from a cold start:
```
/relay-new-issue

Context from another session:
While testing voice ingestion, the extraction pipeline threw a
KeyError on `coherence_facts` when the LLM returned an empty list
instead of a dict. The error was in pipeline.py around line 340.
The episode was silently marked as complete despite the failure.

Create an issue for this.
```

---

Analyze the following and create a documentation file.

Topic: [DESCRIPTION — what you found, what you want filed]

Context (if from another session):
[Paste error messages, observations, file paths, or conversation
excerpts that describe the finding. If invoking in-context, this
section can be omitted.]

1. Investigate the topic in the codebase — read relevant files, understand
   the current state. If context is provided from another session, use it
   as a starting point but verify everything against the actual code.

2. Determine if this is an issue (bug, shortcoming, gap) or a feature
   (new capability, enhancement). If it's a feature, STOP — do not
   create a file. Skip to the Navigation section and direct the user
   to /relay-brainstorm.

3. Check the full docs landscape for existing coverage:
   - .relay/issues/ — is this already tracked?
   - .relay/features/ — is this already planned?
   - .relay/implemented/ — was this already addressed?
   - .relay/archive/issues/ and .relay/archive/features/ — was this
     previously resolved or attempted?
   If a matching file exists, update it rather than creating a duplicate.
   If an archived issue has regressed, create a new issue that references
   the archive file.

4. Write the analysis to:
   - .relay/issues/[descriptive_name].md if it's an issue
   - If step 2 determined this is a feature, do NOT create a file — skip
     to the Navigation section and direct the user to /relay-brainstorm.
     All features go through the brainstorm → design pipeline so they
     receive *Status: DESIGNED* metadata and are tracked by the prepare
     skills (/relay-scan and /relay-order).

5. Include:
   - *Created: [YYYY-MM-DD]*
   - Title and severity (P0 critical / P1 high / P2 medium / P3 low)
   - Problem statement: what's wrong or what's needed, and why it matters
   - Current state: exact file paths, line numbers, code snippets
   - Impact: what breaks, what's degraded, who's affected
   - Proposed fix: concrete remediation steps
   - Affected files: list of files that need changes
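Laid out as a file, the step 5 metadata might look like the following skeleton. This is an illustrative shape only — the problem text echoes the in-context example above, and every other detail (the file name, line range, fix) is hypothetical:

```
# Batch processor drops episodes on pool exhaustion

*Created: 2025-01-15*
**Severity**: P1 high

## Problem
The batch processor silently drops episodes when the connection pool is exhausted.

## Current State
- [hypothetical] processor.py, lines 120–145: pool-acquire failure is swallowed

## Impact
Episodes are lost without any error surfaced to the caller.

## Proposed Fix
Retry with backoff, then fail loudly and mark the episode as errored.

## Affected Files
- processor.py
```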

Output: New issue doc in .relay/issues/, or redirect to /relay-brainstorm

## Navigation
When finished, tell the user the next step based on the outcome:
- If the topic is a feature (no file was created):
  "This is a feature. Run **/relay-brainstorm** to explore and design it."
- If an issue file was created:
  "Next: run **/relay-scan** to update project status, then **/relay-order** to prioritize the work."

## Notes

- Replace `[DESCRIPTION]` with what you want analyzed — be as specific as possible
- Always check for existing docs first to avoid duplication
- Use descriptive filenames: `fuzzy_matching_gaps.md` not `issue_42.md`
- This skill does NOT update relay-status.md or relay-ordering.md — that is the responsibility of /relay-scan and /relay-order
- For cross-chat handoffs: include enough context that the AI can reproduce the finding without the original conversation
- All features (small or large) are redirected to /relay-brainstorm — only issues are filed directly by this skill
@@ -0,0 +1,264 @@ package/.claude/skills/relay-notebook/workflow.md

# Relay: Code — Create & Validate Verification Notebook

**Sequence**: `/relay-analyze` → `/relay-plan` → `/relay-review` → *implement* → `/relay-verify` → **`/relay-notebook`** → `/relay-resolve`

Create a verification notebook for each implemented and verified issue/feature
file in this phase, then RUN every cell and iterate until all cells pass.

## Prerequisites Check

Before proceeding, read the target item file(s) and verify that the **most
recent** ## Verification Report section has verdict COMPLETE (from
/relay-verify). Note: there may be multiple Verification Report sections
if re-verification occurred — check the last one. If the section is missing
or the last verdict is not COMPLETE, STOP and tell the user:
"No completed verification found. Run **/relay-verify** first."

## Part 0 — Environment Check

Before creating the notebook, verify that the notebook execution dependencies
are available:

1. Check if `nbclient`, `nbformat`, and `nbconvert` are importable:
   ```
   python3 -c "import nbclient, nbformat, nbconvert"
   ```
   Also check that IPython 7+ is available (required for top-level
   `await` directly in notebook cells):
   ```
   python3 -c "import IPython; v=tuple(int(x) for x in IPython.__version__.split('.')[:2]); assert v>=(7,0), f'IPython 7+ required for top-level await, found {IPython.__version__}'"
   ```
   If the IPython check fails, upgrade before proceeding:
   `pip install --upgrade ipython ipykernel`

2. If all checks pass, proceed to Part A.

3. If they are NOT available:
   a. Look for an existing virtual environment (`.venv/`, `venv/`, or
      check whether you are already running inside one via
      `sys.prefix != sys.base_prefix`).
   b. If a venv exists: activate it and run
      `pip install nbclient nbformat nbconvert`, then proceed to Part A.
   c. If no venv exists but Python 3 is available: ask the user before
      installing globally: "No virtual environment found. Install
      nbclient/nbformat/nbconvert into the system Python? (Or create a
      venv first with `python3 -m venv .venv`)"
      If the user approves, run `pip install nbclient nbformat nbconvert`,
      then proceed to Part A.
   d. If Python 3 is not available: tell the user
      "Python 3 is required for verification notebooks. Install Python 3
      and re-run this step." Do NOT proceed — this is a blocker.
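The import and venv checks in Part 0 can be combined into one small stdlib-only probe. A sketch, not part of the workflow — the function name is arbitrary:

```python
import importlib.util
import sys

def notebook_env_status():
    """Probe the notebook-execution environment: which required packages
    are missing, and whether we are inside a virtual environment."""
    required = ('nbclient', 'nbformat', 'nbconvert')
    # find_spec returns None for an importable-but-absent top-level module.
    missing = [m for m in required if importlib.util.find_spec(m) is None]
    # Inside a venv, sys.prefix points at the venv while base_prefix does not.
    in_venv = sys.prefix != sys.base_prefix
    return missing, in_venv

missing, in_venv = notebook_env_status()
print('missing:', missing or 'none', '| in venv:', in_venv)
```

An empty `missing` list corresponds to the "all checks pass" branch of step 2.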

## Part A — Create the Notebook

Location: .relay/notebooks/
Naming: use the SAME name as the issue/feature file, but with the .ipynb
extension instead of .md,
e.g., `delete_entity_not_atomic.md` → `delete_entity_not_atomic.ipynb`

For each issue/feature file in the phase:

1. HEADER CELL (markdown):
   - Title: `# [Item Title]: Verification`
   - Brief description of what was changed and what this notebook tests
   - Table of what changed (before/after, or list of changes)
   - Reference to the item file (use repo-root-relative paths so links
     work both before and after the notebook is archived):
     ```
     **Item file**: [.relay/issues/FILENAME.md](.relay/issues/FILENAME.md) or [.relay/features/FILENAME.md](.relay/features/FILENAME.md)
     (after resolution: [.relay/archive/issues/FILENAME.md](.relay/archive/issues/FILENAME.md) or [.relay/archive/features/FILENAME.md](.relay/archive/features/FILENAME.md))
     ```
   - Prerequisites (database, env vars, install steps)

2. SETUP CELL:
   - Read the "### Imports" section of .relay/relay-config.md and use it
     as the import block for this cell
   - Create the connection
   - Create isolated test fixtures (unique user, project, and namespace
     with a uuid tag)
   - Set up pass/fail tracking:
   ```python
   passed, failed = [], []

   # Use record() when you already have a boolean condition to check.
   # Example: record('entity exists', entity is not None)
   def record(name, condition, detail=''):
       if condition:
           passed.append(name)
           print(f'  PASS: {name}')
       else:
           failed.append(name)
           print(f'  FAIL: {name} -- {detail}')

   # Use run_test() when your test is a callable that might throw
   # an exception. It catches errors and records them as failures.
   # Example: await run_test('ingest works', lambda: m.ingest(ns_id, text))
   async def run_test(name, fn):
       """Wrapper that catches exceptions and records them as failures."""
       try:
           result = fn()
           if asyncio.iscoroutine(result):
               result = await result
           if isinstance(result, bool):
               record(name, result)
           elif result is None:
               record(name, True)  # No return = completed without error
           else:
               record(name, bool(result), f'returned falsy: {result!r}' if not result else '')
       except Exception as e:
           record(name, False, str(e))
   ```
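Used together in later cells, the two helpers look roughly like this. This is a condensed, self-contained rendering so the pattern is runnable as a plain script — inside a notebook the `await run_test(...)` calls run at top level instead of under `asyncio.run`:

```python
import asyncio

passed, failed = [], []

def record(name, condition, detail=''):
    # Append to the right bucket and print a visible PASS/FAIL line.
    (passed if condition else failed).append(name)
    print(f"  {'PASS' if condition else 'FAIL'}: {name}" + (f' -- {detail}' if detail else ''))

async def run_test(name, fn):
    # Call fn, await it if it returned a coroutine, turn exceptions into failures.
    try:
        result = fn()
        if asyncio.iscoroutine(result):
            result = await result
        record(name, result is None or bool(result))
    except Exception as e:
        record(name, False, str(e))

async def main():
    record('arithmetic still works', 2 + 2 == 4)                # boolean style
    await run_test('sync callable that raises', lambda: 1 / 0)  # exception -> FAIL
    await run_test('async callable', lambda: asyncio.sleep(0, result=True))

asyncio.run(main())
print(f'RESULTS: {len(passed)} passed, {len(failed)} failed')
```

Note how the failing callable is recorded and execution continues — this is why the guidelines below prefer `record()` over bare `assert`.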

**CRITICAL — Integration, not simulation**: The notebook MUST exercise the
project's real code — import the actual modules, connect to real backends,
call the public API, and verify observable behavior end-to-end. Do NOT
reimplement or simulate the fixed logic locally in the notebook. If a
notebook can run without the project's dependencies (database, services,
etc.), it is a unit test duplicate, not a verification notebook. Unit tests
already cover isolated logic — this step validates that the fix works in the
integrated system.

3. SEED DATA: Ingest realistic test content that exercises the changed code path.
   Read the "### Standard Fixtures" section of .relay/relay-config.md for
   the project's standard test content and domain conventions. Use those
   unless the specific change requires different data.

4. CORE VERIFICATION (one section per aspect of the change):
   - Each section has a markdown header explaining what it tests
   - Each test uses `record()` for pass/fail tracking
   - Tests should verify BEHAVIOR, not implementation details
   - Include the specific scenario that was broken/missing before the change

5. EDGE CASES:
   - Empty input, None values, boundary conditions
   - The specific edge cases identified during the review (/relay-review)

6. REGRESSION CHECKS:
   - Verify that related functionality still works after the change
   - If the change touches extraction: verify search still works
   - If the change touches storage: verify ingest/search/export still work
   - If the change touches the API: verify public API equivalents still work

7. SUMMARY CELL:
   ```python
   print(f'RESULTS: {len(passed)} passed, {len(failed)} failed')
   if failed:
       print('\nFailed:')
       for name in failed:
           print(f'  FAIL: {name}')
   else:
       print('All tests passed!')
   ```

8. CLEANUP CELL:
   - Delete any test fixtures created in the SETUP CELL (users, projects,
     namespaces, test data) to prevent test artifacts from accumulating
   - Close connections and release resources
   - Read the "### Cleanup Pattern" section of .relay/relay-config.md and
     use it as the basis for this cell's teardown code.

## Part B — Run, Validate, and Iterate

After creating the notebook, execute every cell sequentially. For each cell
that errors or produces a FAIL:

1. DIAGNOSE the failure. Classify it as one of three types:

   a) NOTEBOOK CODE ISSUE — the cell code itself is wrong (typo, wrong API
      call, incorrect assertion, missing await, etc.)
      → Fix the cell code directly and re-run. No item file update needed.

   b) PROJECT CODE ISSUE — RELATED to the current change. The implementation
      is incomplete, has a bug, or introduced a regression in the code path
      that was just changed.
      → Append a Post-Implementation Fix to the item file (see below).
      → Implement the fix in the project code.
      → Re-run the failing cell.

   c) PROJECT CODE ISSUE — UNRELATED to the current change. A pre-existing
      bug that the notebook happened to expose.
      → Flag this to the user. Do NOT fix it inline.
      → Invoke **/relay-new-issue** to create a new issue file. Provide
        the notebook context:
        - Currently working on: [item file being verified]
        - Notebook: [notebook file]
        - Failing cell: [cell description]
        - Error: [the actual error]
      → In the notebook, mark the cell with a comment:
        ```python
        # KNOWN ISSUE: see .relay/issues/[new_issue_name].md
        # (after resolution: .relay/archive/issues/[new_issue_name].md)
        ```
      → Continue to the next cell.

2. For type (b) — RELATED project code issues:

   Append to the item file in .relay/issues/ or .relay/features/ (do NOT
   replace existing sections — these are additive):

   ---

   ## Post-Implementation Fix #[N]

   *Date: [YYYY-MM-DD]*
   *Found during: notebook cell [cell description]*

   ### Problem
   - What failed and what the error/unexpected behavior was
   - Root cause: what the original implementation missed or got wrong

   ### Plan
   - Specific code changes to fix this
   - Files to modify

   ### Rollback
   - How to revert just this post-implementation fix
   - Whether reverting this also requires reverting the original change

   After documenting, implement the fix, then re-run the failing cell.
   If the fix introduces further failures, repeat this process as
   Post-Implementation Fix #[N+1].

3. Keep iterating until:
   - All cells pass, OR
   - All remaining failures are type (c) unrelated issues with
     `# KNOWN ISSUE` comments

4. Once stable, update the notebook's summary cell output to reflect
   the final pass/fail state.

Output: Validated notebook(s) in .relay/notebooks/, updated item file(s)
if post-implementation fixes were needed

## Navigation
When finished, tell the user:
- "Next: run **/relay-resolve** to close out and archive."

## Guidelines

- Read the "### Async Pattern" section of .relay/relay-config.md for
  project-specific async, logging flush, and timing requirements
- Use `uuid.uuid4().hex[:8]` tags for test isolation
- Prefer `record()` assertions over bare `assert` — record() shows all
  results at the end, while assert stops at the first failure
- Keep test content short but realistic — enough to trigger the code path
- Do NOT test implementation details (internal method calls, log messages) —
  test observable behavior (return values, stored data, search results)
- Every cell must produce visible output (print statements, record() calls)
  so the user can review what happened during execution
- Build high-quality verification cells that thoroughly test the fix/feature
  in the context of the actual project — not toy examples
- Log intermediate state (e.g., print entity counts, show query results)
  so failures are diagnosable from the notebook output alone
- Run every cell and verify the output before considering the notebook done —
  a notebook with untested cells is incomplete

## Notes

- Notebooks live in `.relay/notebooks/`, NOT in the project root `notebooks/` directory
- The notebook filename matches the issue/feature filename (`.md` → `.ipynb`) for traceability
- The header cell includes both the active and archived path so the link works before and after resolution
- When /relay-resolve archives the item, it also archives the notebook to `.relay/archive/notebooks/`
- Every notebook should be self-contained: it creates its own fixtures and doesn't depend on other notebooks
- If a phase has multiple item files, create one notebook per item file (not one giant notebook)
- Post-Implementation Fixes are numbered sequentially (#1, #2, #3...) and never replace each other — this preserves the full history of what went wrong and how it was addressed
- If a post-implementation fix is large or complex enough to warrant full analysis, escalate: tell the user to re-run /relay-analyze → /relay-plan → /relay-review for it instead of handling it inline
- For type (c) unrelated issues: /relay-new-issue handles the issue filing — provide the notebook context so it can investigate from a cold start
@@ -0,0 +1,6 @@ package/.claude/skills/relay-order/SKILL.md

---
name: relay-order
description: 'Generate a prioritized work ordering in relay-ordering.md. Analyzes dependencies, severity, and complexity to produce a phased implementation plan. Use after relay-scan updates status.'
---

Follow the instructions in ./workflow.md.
|
|
|
1
|
+
# Relay: Generate Solution Ordering
|
|
2
|
+
|
|
3
|
+
**Sequence**: `/relay-scan` → **`/relay-order`** → `/relay-analyze` → ...
|
|
4
|
+
|
|
5
|
+
Generate a solution ordering for all outstanding work:
|
|
6
|
+
|
|
7
|
+
0. Read the "*Last generated:*" date in .relay/relay-status.md. If it
|
|
8
|
+
is more than 1 day old, WARN the user: "relay-status.md was generated
|
|
9
|
+
on [date] — consider running **/relay-scan** first to refresh before
|
|
10
|
+
ordering." Wait for the user to confirm before proceeding.
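The staleness check in step 0 amounts to parsing one date header and comparing it to today. A stdlib sketch — the helper name and the treat-missing-header-as-stale choice are assumptions:

```python
import re
from datetime import date, datetime

def status_is_stale(status_text, today, max_age_days=1):
    """Find the '*Last generated:*' header in relay-status.md text and
    return (stale?, generated_date). A missing header counts as stale."""
    m = re.search(r'\*Last generated:\*\s*(\d{4}-\d{2}-\d{2})', status_text)
    if not m:
        return True, None
    generated = datetime.strptime(m.group(1), '%Y-%m-%d').date()
    return (today - generated).days > max_age_days, generated

stale, generated = status_is_stale('*Last generated:* 2025-01-10', today=date(2025, 1, 15))
print(stale, generated)  # → True 2025-01-10
```

A `True` result corresponds to the WARN-and-wait branch above.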

1. Read .relay/relay-status.md for current item status
2. Read the full content of every OUTSTANDING and PARTIAL item
   (files in .relay/issues/ and .relay/features/)
   - Exclude brainstorm files (*_brainstorm.md) — these are managed by
     the feature workflow and are not actionable work items. Only
     individual feature files (created by /relay-design) are ordered here.
3. Check for intra-feature dependencies:
   - Read the Development Order and Dependencies sections in individual
     feature files — these record which features depend on others and
     link back to their brainstorm for context
   - Respect these intra-feature orderings: keep related features grouped
     and sequenced as specified unless a cross-cutting dependency overrides
4. Analyze cross-item dependencies (does resolving X require Y first?)
5. Consider: severity/priority, complexity, blast radius, quick wins
6. Produce an ordered implementation plan in .relay/relay-ordering.md,
   updating its `*Last generated:*` date header to today's date
   (YYYY-MM-DD), with:
   - Phases (groups of items that can be done together)
   - For each item: ID, title, file link, estimated complexity, dependencies
   - Rationale for the ordering
   - If a phase contains related features (sharing the same brainstorm
     link), note their intended build order from the feature files
7. Keep RESOLVED items in the phases where they were originally placed,
   struck through with a link to their implementation doc. This preserves
   phase history and context. Only create new ordering entries for
   OUTSTANDING and PARTIAL items. Fully completed phases should be marked
   with "— COMPLETE" in their heading.

Output: Updated .relay/relay-ordering.md

## Navigation
When finished, tell the user:
- "Next: run **/relay-analyze** and specify which phase/item to work on from relay-ordering.md."

## Notes

- relay-ordering.md is a generated artifact — regenerate it when the backlog changes
- The ordering should consider both issues (bugs/gaps) and features (new capabilities)
- Dependencies matter: e.g., a feature may require an issue to be resolved first
- Feature files from /relay-design carry explicit Development Order metadata — use it
- Brainstorm files are excluded — feature files carry their own Development Order and Dependencies metadata from /relay-design
@@ -0,0 +1,133 @@ package/.claude/skills/relay-plan/workflow.md

# Relay: Code — Implementation Plan

**Sequence**: `/relay-analyze` → **`/relay-plan`** → `/relay-review` → *implement* → `/relay-verify` → `/relay-notebook` → `/relay-resolve`

Based on the analysis, create a detailed implementation plan.

0. Read the target item file(s) and verify an ## Analysis section exists
   (from /relay-analyze). If no analysis exists, STOP and tell the user:
   "No analysis found in the item file. Run **/relay-analyze** first."

Freshness check: read the *Analyzed:* date in the Analysis section.
If the analysis is more than 7 days old, WARN the user:
"Analysis was done on [date] — the codebase may have changed since then.
Consider re-running **/relay-analyze** to revalidate before planning."
Wait for the user to confirm before proceeding.

If an Adversarial Review section exists in the issue/feature file with
verdict REJECTED (from a previous /relay-review), read it first and
incorporate the rejection feedback into this revised plan. Address every
issue raised in the review.

All dates in this workflow use YYYY-MM-DD format.

Requirements for the plan:

1. Break the change into atomic, independently verifiable steps. Each step should:
   - Change as few files as possible (ideally one)
   - Be testable in isolation (you could run tests after just this step)
   - Not leave the codebase in a broken state if you stop here
   Order steps so that each builds on the last without breaking what came before.

2. For each step, specify:
   - WHAT: exact file, function, and line range to change
   - HOW: the specific code change (pseudocode or actual code)
   - WHY: what this step accomplishes and how it connects to the root cause / requirement
   - RISK: what could go wrong, what regression this could introduce
   - VERIFY: how to confirm this step worked (test command, manual check, etc.)
   - ROLLBACK: if this step causes problems, how to revert safely

3. Cross-check every step against the blast radius from the analysis:
   - For each caller/consumer identified: does this step change their behavior?
   - If yes: is the behavior change correct? Do their tests still pass?
   - For each related item: does this step interact with it?

4. Identify test changes needed:
   - Existing tests that need updating (mock changes, new assertions, etc.)
   - New tests that should be added to cover the change
   - Integration/regression tests to run after all steps complete

5. Consider the full breadth of this change:
   - Does this change any public API behavior?
   - Does it change any stored data format?
   - Does it affect performance characteristics?
   - Does it interact with configuration options?
   - Could it affect existing deployments during upgrade?

6. Stress-test every assumption:
   - Re-read each affected function body. Don't rely on memory or the
     issue/feature description — verify the actual code matches your mental model.
   - For each "this is safe because X" claim, verify X is actually true.
   - For each "callers do Y" claim, grep to confirm all callers actually do Y.
   - Run multiple passes over the plan: does step 3 invalidate step 1?
     Does the final state match what you intended?

7. Persist the plan in each relevant issue/feature file in .relay/issues/ or
   .relay/features/. If an Implementation Plan section already exists (from a
   previous rejected plan), REPLACE it with the revised plan — do not append
   a second copy. If no plan exists yet, APPEND it after the Analysis section.
   Add a horizontal rule separator, then these sections:

   ---

   ## Implementation Plan

   *Generated: [YYYY-MM-DD]*

   ### Step 1: [title]
   **File**: path/to/file.py
   **Change**: [description]
   **Code**: [specific change]
   **Why**: [what this step accomplishes and how it connects to the root cause]
   **Risk**: [what could break]
   **Verify**: [how to check]
   **Rollback**: [how to revert]

   ### Step 2: [title]
   ...

   ## Test Changes
   - [list of test file changes]

   ## Post-Implementation Checks
   - [ordered list of verification commands]

   ## Risks & Mitigations
   - [consolidated risk register]

   ## Rollback Plan
   - If the change is purely code (no DB migrations, no config changes,
     no stored data format changes), the rollback plan is a single line:
     `git revert <actual-commit-hash>` — fill in the real commit hash
     after implementation, not a placeholder.
   - If the change involves DB migrations, config changes, or stored
     data format changes: include ordered revert steps for each
     (migration reversal commands, config restoration, data cleanup).

If the phase spans multiple item files, append the relevant steps to
each file (each item file gets only the steps that apply to it, plus
the shared rollback plan). Each file must cross-reference the other
item files in the phase with links and a note about execution order
(e.g., "This plan depends on steps in [other_item.md](../issues/other_item.md)
being completed first").

If the plan spans multiple item files and was REJECTED, the Adversarial
Review will be in each affected file. Read the review from ALL affected
files and coordinate the revised plan across them — ensure cross-file
dependencies are addressed together.

Output: Updated issue/feature file(s) in .relay/issues/ or .relay/features/ with the plan persisted

## Navigation
When finished, tell the user:
- "Next: run **/relay-review** for adversarial review of the plan."

## Notes

- The plan is persisted in the issue/feature file so it survives across conversations and is archived with the item when resolved
- The step-by-step decomposition is key: it allows incremental implementation with verification at each stage
- "Stress-test every assumption" means re-reading the actual code, not relying on the issue/feature description
- The plan should be detailed enough that someone unfamiliar with the codebase could execute it
- If the analysis revealed related items that should be addressed together, the plan should include steps for all of them
- On a revision cycle (after REJECTED), the plan replaces the previous version in place — never two Implementation Plan sections in one file
- If the plan is later revised (e.g., after /relay-review returns APPROVED WITH CHANGES), update the plan in the issue/feature file — don't append a second copy