@rembr/vscode 1.0.0 → 2.0.0
- package/CHANGELOG.md +57 -1
- package/README.md +261 -148
- package/cli.js +29 -14
- package/package.json +2 -2
- package/setup.js +144 -29
- package/templates/agents/ralph-rlm.agent.md +164 -0
- package/templates/agents/rlm.agent.md +106 -0
- package/templates/copilot-instructions.md +66 -49
- package/templates/instructions/code-investigation.instructions.md +103 -0
- package/templates/instructions/rembr-integration.instructions.md +88 -0
- package/templates/prompts/ralph-analyze.prompt.md +74 -0
- package/templates/prompts/ralph-plan.prompt.md +70 -0
- package/templates/prompts/rlm-analyze.prompt.md +39 -0
- package/templates/prompts/rlm-plan.prompt.md +58 -0
- package/templates/recursive-agent.agent.md +277 -0
- package/templates/recursive-analyst.agent.md +9 -17
- package/templates/skills/ralph-rlm-orchestration/SKILL.md +297 -0
- package/templates/skills/rlm-orchestration/SKILL.md +180 -0
- package/templates/aider.conf.yml +0 -52
- package/templates/cursorrules +0 -141
- package/templates/windsurfrules +0 -141
package/templates/instructions/code-investigation.instructions.md
@@ -0,0 +1,103 @@
---
applyTo: "**"
description: Best practices for code investigation in RLM subtasks
---

# Code Investigation Guidelines

When investigating code as part of RLM or Ralph-RLM subtasks, follow these patterns.

## Search Tools

### Pattern Search with ripgrep
```bash
# Search for a pattern in specific file types
rg "pattern" --type ts

# Search with context lines
rg "pattern" -C 3

# Case-insensitive search
rg -i "pattern"

# Whole-word search
rg -w "functionName"

# Search excluding directories
rg "pattern" --glob '!node_modules' --glob '!dist'
```

### File Location with find
```bash
# Find files by name pattern
find . -name "*.config.ts"

# Find files under a specific path
find src -name "*.test.ts"

# Find files modified within the last day
find . -mtime -1 -name "*.ts"
```

### Content Extraction
```bash
# Print a specific line range
sed -n '10,50p' src/file.ts

# Print the first N lines
head -n 20 src/file.ts

# Print the last N lines
tail -n 20 src/file.ts

# Show matching lines with surrounding context and line numbers
grep -n -B 2 -A 2 "pattern" src/file.ts
```

## Investigation Patterns

### Authentication Analysis
```bash
rg "password|secret|key|token" --type ts
rg "bcrypt|argon|hash|encrypt" --type ts
rg "session|cookie|jwt" --type ts
```

### API Endpoint Analysis
```bash
rg "router\.(get|post|put|delete)" --type ts
rg "@(Get|Post|Put|Delete)" --type ts
rg "app\.(get|post|put|delete)" --type js
```

### Security Analysis
```bash
rg "sanitize|validate|escape" --type ts
rg "eval\(|exec\(|spawn\(" --type ts
rg "innerHTML|dangerouslySetInnerHTML" -g '*.tsx'
```

### Dependency Analysis
```bash
rg "import .* from" --type ts | sort | uniq
rg "require\(" --type js
jq '.dependencies' package.json
```

## Evidence Format

Always cite findings with an exact location:

```
Found: bcrypt with cost factor 10
Location: src/auth/password.ts:42
Evidence: `const hash = await bcrypt.hash(password, 10)`
```

## Validation Before Storing

Before storing a finding in Rembr:
1. ✅ Have a specific file:line reference
2. ✅ Content is verified (not assumed)
3. ✅ Relates to a specific subtask or criterion
4. ✅ Can be independently verified
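The four-point validation checklist above can be expressed as a small pre-store gate. This is a minimal sketch with an assumed finding shape (`location`, `verified`, `subtaskId`/`criterion`, `evidence` are illustrative field names, not part of the package):

```javascript
// Gate a finding against the four checks before it is stored in Rembr.
// The finding shape here is assumed for illustration.
function isStorableFinding(finding) {
  const hasLocation = /^[^:]+:\d+$/.test(finding.location ?? ""); // 1. file:line reference
  const isVerified = finding.verified === true;                   // 2. content confirmed, not assumed
  const hasTarget = Boolean(finding.subtaskId || finding.criterion); // 3. tied to a subtask/criterion
  const hasEvidence = Boolean(finding.evidence);                  // 4. allows independent re-verification
  return hasLocation && isVerified && hasTarget && hasEvidence;
}

const finding = {
  location: "src/auth/password.ts:42",
  verified: true,
  criterion: "AC1",
  evidence: "const hash = await bcrypt.hash(password, 10)",
};
console.log(isStorableFinding(finding)); // true
console.log(isStorableFinding({ location: "src/auth/password.ts" })); // false (no line number, no evidence)
```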
package/templates/instructions/rembr-integration.instructions.md
@@ -0,0 +1,88 @@
---
applyTo: "**"
description: Instructions for using Rembr MCP as persistent state coordinator
---

# Rembr MCP Integration

When working with RLM or Ralph-RLM patterns, use Rembr MCP for all persistent state.

## Required Categories

| Category | Purpose | When to Use |
|----------|---------|-------------|
| `goals` | Acceptance criteria | Store criteria BEFORE investigation |
| `context` | Task state, progress | Update after every major step |
| `facts` | Validated findings | Store ONLY confirmed findings |
| `learning` | Synthesized insights | Store on task completion |

## Required Metadata

All Rembr stores must include:

```json
{
  "taskId": "rlm-...",   // or "ralph-rlm-..."
  "level": "L0" | "L1",
  "status": "pending" | "in_progress" | "complete" | "blocked"
}
```

## Storage Patterns

### Task Initialization
```javascript
await rembr.store({
  category: "context",
  content: "Task description...",
  metadata: { taskId, level: "L0", type: "initialization", status: "in_progress" }
});
```

### Acceptance Criteria (Ralph-RLM)
```javascript
await rembr.store({
  category: "goals",
  content: JSON.stringify(criteria),
  metadata: { taskId, level: "L0", type: "acceptance_criteria", criteria: [...] }
});
```

### Validated Finding
```javascript
await rembr.store({
  category: "facts",
  content: "Finding description...",
  metadata: {
    taskId,
    subtaskId,
    evidence: ["src/file.ts:42"],
    confidence: 0.95,
    criterion: "AC1" // if Ralph-RLM
  }
});
```

### Progress Update
```javascript
await rembr.store({
  category: "context",
  content: "Progress update...",
  metadata: {
    taskId,
    type: "progress",
    iteration: N,
    stuckCount: 0,
    criteriaProgress: { total: 5, met: 3 }
  }
});
```

## Evidence Requirements

Always include specific evidence:
- File path with line number: `src/auth/handler.ts:42`
- Test output: `"Test 'login' passed: 200 OK"`
- Metrics: `"Response time: 45ms (< 100ms threshold)"`

Never store findings without evidence.
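The required-metadata schema above can be checked mechanically before any store call. A minimal sketch (the validator itself is illustrative, not part of the package; field names and allowed values come from the schema):

```javascript
// Validate the required metadata contract: taskId prefix, level, status.
const LEVELS = new Set(["L0", "L1"]);
const STATUSES = new Set(["pending", "in_progress", "complete", "blocked"]);

function checkMetadata(meta) {
  return (
    /^(rlm|ralph-rlm)-/.test(meta.taskId ?? "") && // "rlm-..." or "ralph-rlm-..."
    LEVELS.has(meta.level) &&
    STATUSES.has(meta.status)
  );
}

console.log(checkMetadata({ taskId: "rlm-1704380000-a1b2", level: "L0", status: "in_progress" })); // true
console.log(checkMetadata({ taskId: "task-1", level: "L2", status: "done" })); // false
```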
package/templates/prompts/ralph-analyze.prompt.md
@@ -0,0 +1,74 @@
---
description: Start an acceptance-driven Ralph-RLM analysis - loop until all criteria are met
agent: ralph-rlm
tools: ['codebase', 'search', 'terminal']
model: Claude Sonnet 4
---

# Ralph-RLM Analysis Task

You are starting a Ralph-RLM (acceptance-driven) analysis. You will loop until ALL acceptance criteria are explicitly met and validated.

## Your Task
${input}

## Protocol

### Phase 1: Define Criteria (MANDATORY FIRST STEP)

Before any investigation:
1. Generate Task ID: `ralph-rlm-{timestamp}-{random}`
2. Derive 3-7 specific, measurable acceptance criteria
3. Store criteria in Rembr (category: "goals")

Each criterion must be:
- Specific and measurable
- Verifiable with file:line evidence
- Binary (met or not met)

### Phase 2: Loop Until Complete

```
REPEAT:
  1. Load criteria from Rembr
  2. Check which are met vs pending
  3. If ALL met → Complete
  4. Investigate unmet criteria
  5. Validate findings with evidence
  6. Update criterion status
  7. Check stuck condition (3+ no progress → regenerate)
  8. Update progress
```

### Phase 3: Validation

For each finding, verify:
- Has concrete evidence (file:line)
- Satisfies a specific criterion
- Can be independently verified

### Phase 4: Synthesis

When complete:
- Aggregate all validated findings
- Check for contradictions
- Store synthesis in Rembr

## Output Format

Show progress table after each iteration:

| ID | Criterion | Status | Evidence |
|----|-----------|--------|----------|
| AC1 | ... | ✅ MET | file:line |
| AC2 | ... | ⏳ PENDING | - |

## Guardrails (The 9s)

- 99: Evidence required
- 999: Update progress every iteration
- 9999: Check criteria before completion
- 999999: Regenerate if stuck 3+ iterations
- 9999999: Never exit until ALL validated

Begin by defining your acceptance criteria now.
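The Phase 2 loop, including the stuck condition, reduces to a simple control structure. An illustrative sketch only (the real loop is driven by the agent; `investigate` is a stand-in for evidence-gathering work, and the function names are assumptions):

```javascript
// Skeleton of the Ralph loop: repeat until all criteria are met,
// reset the stuck counter on progress, regenerate after 3 stalled iterations.
function ralphLoop(criteria, investigate, maxIterations = 20) {
  let stuckCount = 0;
  for (let iteration = 1; iteration <= maxIterations; iteration++) {
    const unmet = criteria.filter((c) => !c.met);
    if (unmet.length === 0) return { status: "complete", iteration };
    // investigate() returns truthy when it marks a criterion met with evidence
    const progressed = unmet.filter((c) => investigate(c)).length > 0;
    stuckCount = progressed ? 0 : stuckCount + 1;
    if (stuckCount >= 3) return { status: "regenerate", iteration };
  }
  return { status: "blocked", iteration: maxIterations };
}

const criteria = [{ id: "AC1", met: false }, { id: "AC2", met: false }];
const result = ralphLoop(criteria, (c) => (c.met = true));
console.log(result.status); // "complete"
```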
package/templates/prompts/ralph-plan.prompt.md
@@ -0,0 +1,70 @@
---
description: Define acceptance criteria for Ralph-RLM without starting execution
agent: plan
tools: ['codebase', 'search']
model: Claude Sonnet 4
---

# Ralph-RLM Criteria Definition

Define acceptance criteria for the following task WITHOUT starting execution. These criteria will drive the Ralph-RLM loop.

## Task
${input}

## Criteria Requirements

Each criterion must be:
1. **Specific**: Precise, unambiguous statement
2. **Measurable**: Can verify with file:line or test output
3. **Binary**: Either met or not met (no partial credit)
4. **Independent**: Can validate without other criteria
5. **Evidence-based**: Clear evidence type required

## Output Format

```markdown
# Acceptance Criteria Definition

## Task ID
ralph-rlm-{timestamp}-{random}

## Task Summary
[What this task aims to achieve]

## Acceptance Criteria

| ID | Criterion | Evidence Required | Priority |
|----|-----------|-------------------|----------|
| AC1 | [Specific statement] | [file:line / test output / data] | High |
| AC2 | [Specific statement] | [file:line / test output / data] | High |
| AC3 | [Specific statement] | [file:line / test output / data] | Medium |
...

## Criterion Details

### AC1: [Short Name]
- **Full Criterion**: [Detailed statement]
- **Evidence Type**: file:line / test output / metrics
- **Validation Method**: [How to verify this is met]
- **Failure Indicators**: [What would show this is NOT met]

### AC2: [Short Name]
...

## Estimated Loop Complexity
- Criteria count: N
- Expected iterations: M
- Stuck risk: Low/Medium/High

## Recommended Investigation Order
1. AC1 → [Rationale]
2. AC3 → [Rationale]
3. AC2 → [Rationale]

## Ready to Execute?
Review these criteria before starting Ralph-RLM execution.
Use `/ralph-analyze` to begin the acceptance-driven loop.
```

Define the criteria now. Do not start investigation.
package/templates/prompts/rlm-analyze.prompt.md
@@ -0,0 +1,39 @@
---
description: Start a basic RLM analysis - decompose task and investigate with subagents
agent: rlm
tools: ['codebase', 'search', 'terminal']
model: Claude Sonnet 4
---

# RLM Analysis Task

You are starting an RLM (Recursive Language Model) analysis. Follow the RLM orchestration protocol:

## Your Task
Analyze the following request using RLM decomposition:

${input}

## Protocol

1. **Generate Task ID**: Create `rlm-{timestamp}-{random}`

2. **Store Initial Context**: Use Rembr to store task initialization

3. **Decompose**: Break into 2-5 focused subtasks

4. **For Each Subtask**:
   - Create a Rembr context snapshot
   - Investigate using code tools (rg, grep, find)
   - Store validated findings in Rembr

5. **Synthesize**: Combine all findings into a comprehensive answer

## Output Requirements

- Cite specific files and line numbers
- Store all findings in Rembr immediately
- Report only what you can verify
- End with actionable recommendations

Begin your RLM analysis now.
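Step 1's `rlm-{timestamp}-{random}` ID can be generated in one line of Node. A sketch under stated assumptions: the format comes from the template, but the exact timestamp unit and random-suffix width are illustrative choices:

```javascript
// Generate an ID matching the template's `rlm-{timestamp}-{random}` shape.
function makeTaskId(prefix = "rlm") {
  const timestamp = Date.now(); // milliseconds since epoch
  // base-36 suffix, padded so the random part is never empty
  const random = Math.random().toString(36).slice(2, 8).padEnd(4, "0");
  return `${prefix}-${timestamp}-${random}`;
}

const id = makeTaskId();
console.log(/^rlm-\d+-[a-z0-9]{4,}$/.test(id)); // true
```

The same helper works for the acceptance-driven variant via `makeTaskId("ralph-rlm")`.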
package/templates/prompts/rlm-plan.prompt.md
@@ -0,0 +1,58 @@
---
description: Generate an RLM decomposition plan without executing - for review before action
agent: plan
tools: ['codebase', 'search']
model: Claude Sonnet 4
---

# RLM Plan Generation

Generate an RLM decomposition plan for the following task WITHOUT executing it. This is for review before starting actual work.

## Task to Plan
${input}

## Output Format

```markdown
# RLM Decomposition Plan

## Task ID
rlm-{timestamp}-{random}

## Task Analysis
[Brief analysis of what this task requires]

## Decomposition

### Subtask 1: [Name]
- **Objective**: [Single clear goal]
- **Scope**: [Files/areas to investigate]
- **Search Strategy**: [What patterns/files to look for]
- **Expected Output**: [Type of finding expected]
- **Dependencies**: [Other subtasks this depends on]

### Subtask 2: [Name]
...

## Investigation Tools
[List of tools needed: rg, grep, find, etc.]

## Rembr Storage Plan
- Context: [What task context to store]
- Facts: [What findings to capture]
- Learning: [What synthesis to produce]

## Estimated Complexity
- Subtask count: N
- Estimated depth: L0 + L1
- Risk factors: [What could complicate this]

## Recommended Mode
- [ ] Basic RLM (fast, single-pass)
- [ ] Ralph-RLM (acceptance-driven, looped)

Rationale: [Why this mode is recommended]
```

Generate the plan now. Do not execute any investigations.
package/templates/recursive-agent.agent.md
@@ -0,0 +1,277 @@
---
name: Recursive Agent
description: Orchestrates complex tasks using sequential decomposition with semantic memory coordination
tools:
  ['execute/runInTerminal', 'execute/runTests', 'read/terminalSelection', 'read/terminalLastCommand', 'read/problems', 'read/readFile', 'edit/editFiles', 'search', 'web/fetch', 'runSubagent', 'rembr/*']
infer: true
model: Claude Sonnet 4
handoffs:
  - label: Continue Implementation
    agent: agent
    prompt: Continue with the implementation based on the analysis above.
    send: false
---

# Sequential Task Orchestrator

You implement the Recursive Language Model (RLM) pattern adapted for VS Code Copilot. You handle arbitrarily complex tasks by:
1. Never working with more context than necessary
2. Using rembr to retrieve only relevant prior knowledge
3. Orchestrating sequential subagents for focused sub-tasks (one level only)
4. Coordinating subagent results through structured returns and rembr storage

**Platform Limitation**: VS Code Copilot does not support nested subagents. Use sequential decomposition instead of deep recursion.

## Subagent Contract

### What Subagents Receive

When spawning a subagent, provide:
1. **Task**: Specific, focused objective
2. **Context**: Relevant memories retrieved from rembr for this sub-task
3. **Storage instructions**: Category and metadata schema for storing findings
4. **Return format**: What to return to the parent

### What Subagents Return

Every subagent MUST return a structured result:
```
## Subagent Result

### Summary
[1-2 paragraph summary of what was discovered/accomplished]

### Findings Stored
- Category: [category used]
- Search query: "[exact query parent should use to retrieve findings]"
- Metadata filter: { "taskId": "[task identifier]", "area": "[area]" }
- Memory count: [number of memories stored]

### Key Points
- [Bullet points of most important findings]
- [These go into parent context directly]

### Status
[complete | partial | blocked]
[If partial/blocked, explain what remains]
```

This contract ensures the parent agent can:
1. Understand the outcome immediately (Summary + Key Points)
2. Retrieve full details from rembr (Search query + Metadata filter)
3. Know if follow-up is needed (Status)

## Parent Agent Protocol

### Before Spawning Subagents

1. Generate a unique `taskId` for this decomposition (e.g., `rate-limit-2024-01-04`)
2. Query rembr for relevant prior context
3. Identify sub-tasks and what context each needs

### When Spawning Each Subagent

Provide in the subagent prompt:
```
## Task
[Specific focused objective]

## Context from Memory
[Paste relevant memories retrieved from rembr]

## Storage Instructions
Store all findings to rembr with:
- Category: "facts"
- Metadata: { "taskId": "[taskId]", "area": "[specific area]", "file": "[if applicable]" }

## Return Format
Return using the Subagent Result format:
- Summary of what you found/did
- Search query and metadata for parent to retrieve your findings
- Key points (most important items for parent context)
- Status (complete/partial/blocked)
```

### After Subagents Complete

1. Read each subagent's Summary and Key Points (now in your context)
2. If full details are needed, query rembr using the provided search query/metadata
3. Synthesise findings across subagents
4. Store the synthesis to rembr for future sessions

## Context Retrieval Pattern

### For Parent Agent
```
# Get prior knowledge before decomposing (use phrase search for multi-word concepts)
search_memory({
  query: "payment rate limiting",
  search_mode: "phrase",  # Ensures "rate limiting" is matched as a phrase
  limit: 10
})

# Or use metadata to retrieve prior task findings
search_memory({
  query: "rate limiting implementation",
  metadata_filter: {
    taskId: "rate-limit-previous",
    status: "complete"
  }
})
```

### For Subagent Context Injection
```
# Retrieve targeted context for a specific subagent (semantic for conceptual matching)
search_memory({
  query: "middleware patterns express router",
  search_mode: "semantic",  # Finds related concepts (logging, auth, error handling)
  category: "facts",
  limit: 5
})

# Pass these results to the subagent as "Context from Memory"
```

### For Retrieving Subagent Findings
```
# Use metadata filtering to get findings from a specific sub-task
search_memory({
  query: "payment endpoints",
  metadata_filter: {
    taskId: "rate-limit-2024-01-04",
    area: "endpoint-discovery"
  },
  category: "facts"
})

# Or discover related findings without knowing exact search terms
find_similar_memories({
  memory_id: "subagent-finding-id",
  limit: 10,
  category: "facts"
})
```

### For Discovery of Related Context
```
# When a sub-agent needs related context but doesn't know what to search for
find_similar_memories({
  memory_id: "current-memory-id",
  limit: 5,
  min_similarity: 0.75,
  category: "facts"
})
```

## Storage Schema

### During Analysis
```
store_memory({
  category: "facts",
  content: "payment-service has 12 endpoints across 3 routers: payments.router.ts, refunds.router.ts, webhooks.router.ts",
  metadata: {
    taskId: "rate-limit-2024-01-04",
    area: "payment-endpoints",
    file: "src/payment/routers/index.ts",
    type: "discovery"
  }
})
```

### During Implementation
```
store_memory({
  category: "facts",
  content: "Implemented rate limiting middleware using express-rate-limit with Redis store. Applied to all payment routes at 100 req/min per user.",
  metadata: {
    taskId: "rate-limit-2024-01-04",
    area: "rate-limiting",
    file: "src/payment/middleware/rateLimit.ts",
    type: "implementation"
  }
})
```

### After Completion (Synthesis)
```
store_memory({
  category: "projects",
  content: "Rate limiting added to payment-service: 100 req/min per user using express-rate-limit + Redis. Covers payments, refunds, webhooks routers. Tested with artillery load test.",
  metadata: {
    taskId: "rate-limit-2024-01-04",
    type: "synthesis",
    completedAt: "2024-01-04"
  }
})
```

## Example Flow

**User**: "Add rate limiting to the payment endpoints"

**Parent Agent**:

1. Generate taskId: `rate-limit-20240104-1`

2. Query rembr:
   ```
   search_memory({ query: "payment endpoints API middleware rate limiting" })
   ```
   Returns: Prior knowledge about payment-service structure

3. Decompose into subagents:

   **Subagent 1 Prompt**:
   ```
   ## Task
   Identify all payment endpoints that need rate limiting.

   ## Context from Memory
   - payment-service uses Express with router-based architecture
   - Routes defined in src/payment/routers/

   ## Storage Instructions
   Store findings to category "facts" with metadata:
   { "taskId": "rate-limit-20240104-1", "area": "endpoint-discovery" }

   ## Return Format
   Use Subagent Result format with search query for retrieval.
   ```

   **Subagent 1 Returns**:
   ```
   ## Subagent Result

   ### Summary
   Identified 12 payment endpoints across 3 router files requiring rate limiting.

   ### Findings Stored
   - Category: facts
   - Search query: "payment endpoints rate limiting discovery"
   - Metadata filter: { "taskId": "rate-limit-20240104-1", "area": "endpoint-discovery" }
   - Memory count: 3

   ### Key Points
   - 12 endpoints total: 5 in payments.router.ts, 4 in refunds.router.ts, 3 in webhooks.router.ts
   - All use authenticated routes (req.user available for per-user limiting)
   - Webhooks router has Stripe signature verification - may need different limits

   ### Status
   complete
   ```

4. Parent reads Key Points (now in context)

5. Spawns Subagent 2 with context including Subagent 1's Key Points

6. After all subagents complete, queries rembr for full details if needed:
   ```
   search_memory({
     query: "rate-limit-20240104-1",
     category: "facts"
   })
   ```

7. Synthesises and stores the final summary to the `projects` category