@rembr/vscode 1.0.0 → 2.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,10 +1,66 @@
  # Changelog
 
- All notable changes to @rembr/client will be documented in this file.
+ All notable changes to @rembr/vscode will be documented in this file.
 
  The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
  and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+ ## [2.0.0] - 2026-01-10
+
+ ### Added
+
+ #### Complete RLM Agent System
+ - **Custom Agents**: `@rlm` (basic decomposition) and `@ralph-rlm` (acceptance-driven quality)
+ - **Skills**: RLM orchestration patterns with quality validation
+   - `rlm-orchestration/SKILL.md` - Basic RLM decomposition patterns
+   - `ralph-rlm-orchestration/SKILL.md` - Acceptance-driven loop patterns
+ - **Prompts**: Quick-start commands for the Chat interface
+   - `/rlm-analyze` - Start basic RLM analysis
+   - `/ralph-analyze` - Start acceptance-driven analysis
+   - `/rlm-plan` - Generate decomposition plan only
+   - `/ralph-plan` - Define acceptance criteria only
+ - **Instructions**: Specialized guidance for agents
+   - `rembr-integration.instructions.md` - Memory storage patterns
+   - `code-investigation.instructions.md` - Code search best practices
+
+ #### Agent Architecture
+ - **Basic RLM**: Fast task decomposition with single-pass completion
+ - **Ralph-RLM**: Iterative quality loops with acceptance criteria validation
+ - **Handoffs**: Seamless transitions between agent modes
+ - **Stuck Detection**: Auto-regenerates plans when blocked (Ralph-RLM)
+ - **Memory Categories**: Specialized storage (goals, context, facts, learning)
+
+ #### Enhanced Setup
+ - Updated installer copies the complete `.github/` structure
+ - VS Code settings requirements check
+ - Migration guide for v1.x users
+ - Backward compatibility with the existing recursive-analyst agent
+
+ ### Changed
+
+ #### Breaking Changes
+ - **Package Scope**: Renamed from `@rembr/client` to `@rembr/vscode`
+ - **Architecture**: Agent-based system replaces simple auto-detection
+ - **File Structure**: Complete `.github/` directory with agents, skills, prompts, and instructions
+ - **Usage**: Chat agents (`@rlm`, `@ralph-rlm`) replace automatic pattern recognition
+
+ #### Improved
+ - **Token Efficiency**: Up to 52% reduction with quality improvements
+ - **Quality Assurance**: Ralph-RLM ensures acceptance criteria are met
+ - **Developer Experience**: Explicit agent selection instead of automatic detection
+ - **Documentation**: Comprehensive examples and workflow guides
+
+ ### Migration from v1.x
+
+ 1. Back up existing `.github/` files
+ 2. Run `rembr-vscode-setup` to install the v2.0 structure
+ 3. Copy custom configurations to the new format
+ 4. Enable the required VS Code settings:
+    - `github.copilot.chat.codeGeneration.useInstructionFiles: true`
+    - `chat.agent.enabled: true`
+    - `chat.useAgentSkills: true`
+ 5. Test with `/rlm-analyze` to verify the setup
+
  ## [1.0.0] - 2026-01-06
 
  ### Added
package/README.md CHANGED
@@ -1,8 +1,8 @@
  # @rembr/vscode
 
- **Recursive Language Model (RLM) patterns for VS Code and GitHub Copilot**
+ **Recursive Language Model (RLM) Integration for VS Code and GitHub Copilot**
 
- Transform your AI-assisted development with automatic task decomposition, achieving **51% token efficiency** improvements for complex coding tasks.
+ Complete RLM setup with custom agents, skills, prompts, and semantic memory integration. Choose between **basic RLM** for fast decomposition and **Ralph-RLM** for acceptance-driven quality assurance.
 
  ## Quick Start
 
@@ -13,227 +13,339 @@ rembr-vscode-setup
 
  **What happens next?**
 
- GitHub Copilot automatically detects complex tasks
- Recursive decomposition with focused subagents
- Semantic memory integration via rembr MCP
- 51% reduction in token usage for complex requests
- Complete documentation and workflow helpers
+ **Custom Agents** - `@rlm` and `@ralph-rlm` agents for task orchestration
+ **Skill System** - RLM orchestration skills teach decomposition patterns
+ **Chat Prompts** - `/rlm-analyze`, `/ralph-analyze` for quick starts
+ **Memory Integration** - Persistent context via rembr MCP
+ **Token Efficiency** - 51% reduction for complex tasks
 
- ## How It Works
+ ## Agent Modes
+
+ | Agent | Description | Best For | Exit Condition |
+ |-------|-------------|----------|----------------|
+ | `@rlm` | Basic RLM - fast decomposition | Quick analysis, bug investigation | Task complete |
+ | `@ralph-rlm` | Acceptance-driven loops | Security audits, quality-critical work | All criteria met |
+
+ ### Basic RLM Flow
+ ```
+ User Task → Decompose → Investigate Subtasks → Synthesize Results
+                ↓
+     Store in Rembr (context, facts, learning)
+ ```
+
+ ### Ralph-RLM Flow
+ ```
+ User Task → Define Criteria → LOOP until ALL met:
+       ↓                       ├── Load criteria
+  Store in Rembr               ├── Validate findings
+  (goals category)             ├── Update status
+                               ├── Check stuck
+                               └── Regenerate if needed
+ ```
+
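The acceptance-driven loop above can be sketched as plain control flow. This is a hypothetical illustration only — `ralphRlmLoop`, `investigate`, and the criterion shape are made-up names, not the package's actual agent implementation:

```javascript
// Hypothetical sketch of the Ralph-RLM acceptance-criteria loop.
// Names (ralphRlmLoop, investigate, maxStuck) are illustrative only.
function ralphRlmLoop(criteria, investigate, maxStuck = 2) {
  let stuckRounds = 0;
  let planVersion = 1;
  while (!criteria.every(c => c.met)) {
    const metBefore = criteria.filter(c => c.met).length;
    for (const c of criteria) {
      if (!c.met) c.met = investigate(c); // validate findings
    }
    if (criteria.filter(c => c.met).length === metBefore) {
      stuckRounds += 1;                   // stuck detection: no progress
      if (stuckRounds >= maxStuck) {
        planVersion += 1;                 // regenerate the plan
        stuckRounds = 0;
      }
    } else {
      stuckRounds = 0;
    }
  }
  return planVersion;
}

// Toy run: each criterion passes on its second investigation round
const attempts = new Map();
const investigate = c => {
  attempts.set(c.id, (attempts.get(c.id) || 0) + 1);
  return attempts.get(c.id) >= 2;
};
const criteria = [
  { id: "input-validation", met: false },
  { id: "xss", met: false },
];
console.log(ralphRlmLoop(criteria, investigate)); // → 1 (no regeneration needed)
```

The loop exits only when every criterion is met, mirroring the "LOOP until ALL met" exit condition in the diagram.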
+ ## Usage Examples
+
+ ### Using Custom Agents
+
+ Select the agent from the agent picker in the Chat view:
+
+ ```
+ @rlm Analyze the authentication system and identify all password handling
+ ```
+
+ ```
+ @ralph-rlm Audit the API endpoints for OWASP Top 10 vulnerabilities
+ ```
+
+ ### Using Chat Prompts
+
+ Type `/` followed by the prompt name:
 
- ### Before: Traditional Approach
  ```
- User: "Implement rate limiting for payment service with Redis, auth, monitoring, and tests"
+ /rlm-analyze Investigate how user sessions are managed across services
+ ```
 
- Copilot: [Loads entire codebase context → 12,847 tokens → Single massive response]
- Result: Incomplete implementation, missing edge cases, 4 revisions needed
+ ```
+ /ralph-analyze Perform security audit of payment processing flow
  ```
 
- ### After: RLM Patterns
  ```
- User: "Implement rate limiting for payment service with Redis, auth, monitoring, and tests"
+ /rlm-plan Generate decomposition plan for rate limiting implementation
+ ```
 
- Copilot: [Auto-detects complexity → Spawns focused subagents]
- ├── L1-Analysis: Analyze payment endpoints (280 tokens)
- ├── L1-Design: Design Redis strategy (180 tokens)
- ├── L1-Implementation: Build middleware (350 tokens)
- └── L1-Monitoring: Add metrics/tests (120 tokens)
+ ### Example Workflows
 
- Result: Complete implementation, 6,241 tokens (51% reduction), 1 revision
+ #### Quick Codebase Analysis
+ ```
+ @rlm Analyze the authentication system and find all places where passwords are validated
  ```
 
- The installer automatically configures:
+ **Output:**
+ - L1-Auth: Authentication middleware analysis
+ - L1-Validation: Password validation logic
+ - L1-Security: Hash verification patterns
+ - L1-Session: Session management review
 
- ### 🧠 GitHub Copilot Integration
- **`.github/copilot-instructions.md`** - Auto-detection triggers for complex tasks
- **`.github/agents/recursive-analyst.agent.md`** - Specialized RLM agent
- **Pattern Recognition** - Automatically identifies when to decompose tasks
+ #### Security Audit with Validation
+ ```
+ @ralph-rlm Audit the API endpoints for OWASP Top 10 vulnerabilities
+
+ Acceptance Criteria:
+ ✓ Input validation checked
+ ✓ Authentication flaws identified
+ ✓ Sensitive data exposure reviewed
+ ✓ XML/XXE injection tested
+ ✓ Access control verified
+ ✓ Security misconfiguration found
+ ✓ XSS vulnerabilities checked
+ ✓ Deserialization flaws tested
+ ✓ Component vulnerabilities identified
+ ✓ Logging/monitoring gaps found
+ ```
 
- ### 📡 MCP Memory Integration
- **VS Code settings** - REMBR MCP server configuration
- **Semantic Memory** - Persistent context across sessions
- **Cross-project Learning** - Reuse patterns between projects
+ #### Planning Before Execution
+ ```
+ /rlm-plan Investigate how rate limiting should be implemented across microservices
 
- ### 🛠 Workflow Tools
- **`rlm-helper.js`** - Task coordination and analysis script
- **`docs/rlm-patterns.md`** - Complete usage guide
- **`docs/rlm-benchmarks.md`** - Performance analysis
+ Generated Plan:
+ 1. L1-Analysis: Current rate limiting state
+ 2. L1-Architecture: Distributed rate limit design
+ 3. L1-Implementation: Redis-based implementation
+ 4. L1-Monitoring: Metrics and alerting
+ ```
+
+ Then execute:
+ ```
+ @rlm [paste the generated plan above]
+ ```
 
  ## Auto-Detection Examples
 
- ### Complex Tasks (Auto-RLM)
- These requests automatically trigger recursive decomposition:
+ The system automatically detects when to use RLM patterns:
 
+ ### ✅ Complex Tasks (Auto-RLM)
  ```javascript
- // Multi-component implementations
+ // Multi-component implementations
  "Implement OAuth2 with JWT refresh tokens, rate limiting, and admin dashboard"
 
- // Cross-service integrations
+ // Cross-service integrations
  "Migrate user service to microservices with message queues and monitoring"
 
- // Architecture changes
- "Refactor monolith payment system for scalability and add caching"
+ // Security audits
+ "Audit the authentication system for OWASP Top 10 vulnerabilities"
 
- // Analysis + implementation
- "Analyze current auth flow and rebuild with SSO integration"
+ // Architecture analysis
+ "Analyze the caching layer and identify performance bottlenecks"
  ```
 
  ### 🎯 Simple Tasks (Standard)
- These use traditional single-shot responses:
-
  ```javascript
  // Single file changes
  "Fix this TypeScript type error"
- "Add logging to this function"
- "Update README with new installation steps"
- "Rename variable from userID to userId"
+ "Add logging to this function"
+ "Update README with installation steps"
  ```
 
- ## Performance Benefits
-
- | Complexity | Traditional Tokens | RLM Tokens | Reduction | Quality Gain |
- |------------|-------------------|------------|-----------|--------------|
- | **High** | 18,400 | 8,800 | **52%** | +2.5 points |
- | **Medium** | 8,200 | 4,900 | **40%** | +1.8 points |
- | **Low** | 2,100 | 2,100 | **0%** | No change |
-
- ### Why RLM Works
+ ## File Structure Installed
 
- 1. **Focused Context** - Each subagent receives only relevant code/memories
- 2. **Parallel Processing** - Multiple specialized agents work simultaneously
- 3. **Persistent Learning** - Solutions stored in semantic memory for reuse
- 4. **Incremental Validation** - Each level validates before proceeding
+ ```
+ your-project/
+ ├── .github/
+ │   ├── copilot-instructions.md                  # Repository-wide RLM instructions
+ │   ├── agents/
+ │   │   ├── rlm.agent.md                         # Basic RLM agent
+ │   │   └── ralph-rlm.agent.md                   # Acceptance-driven agent
+ │   ├── skills/
+ │   │   ├── rlm-orchestration/
+ │   │   │   └── SKILL.md                         # RLM skill definition
+ │   │   └── ralph-rlm-orchestration/
+ │   │       └── SKILL.md                         # Ralph-RLM skill definition
+ │   ├── prompts/
+ │   │   ├── rlm-analyze.prompt.md                # Quick RLM analysis start
+ │   │   ├── ralph-analyze.prompt.md              # Quick Ralph-RLM start
+ │   │   ├── rlm-plan.prompt.md                   # Generate plan only
+ │   │   └── ralph-plan.prompt.md                 # Define criteria only
+ │   └── instructions/
+ │       ├── rembr-integration.instructions.md    # Memory patterns
+ │       └── code-investigation.instructions.md   # Code search patterns
+ └── .vscode/
+     └── settings.json                            # MCP configuration
+ ```
 
  ## Memory Categories
 
- RLM automatically organizes knowledge:
+ RLM automatically organizes findings in rembr:
 
- **`facts`** - Code patterns, implementation details, technical specifications
- **`context`** - Session coordination, task decomposition metadata
- **`projects`** - High-level summaries, architectural decisions
+ | Category | Purpose | Used By |
+ |----------|---------|---------|
+ | `goals` | Acceptance criteria and validation status | Ralph-RLM |
+ | `context` | Task state, decomposition progress | Both |
+ | `facts` | Validated findings and discoveries | Both |
+ | `learning` | Synthesized insights and patterns | Both |
 
- ## Usage Examples
+ ## Performance Benefits
 
- ### Automatic Pattern Recognition
+ | Complexity | Traditional | RLM Tokens | Reduction | Quality |
+ |------------|-------------|------------|-----------|---------|
+ | **High** | 18,400 | 8,800 | **52%** | +2.5 pts |
+ | **Medium** | 8,200 | 4,900 | **40%** | +1.8 pts |
+ | **Low** | 2,100 | 2,100 | **0%** | No change |
 
- ```bash
- # Analyze any task for RLM suitability
- node rlm-helper.js "implement microservice with auth and monitoring"
+ ### Why RLM Works
 
- # Output:
- # 🧠 RLM PATTERN DETECTED - Complex task requiring decomposition
- # 📋 Generated TaskId: microservice-20260107-abc12
- # 🔍 Key Concepts: microservice, auth, monitoring, api, service
- # 🏗️ Suggested Decomposition:
- # 1. L1-Analysis: Analyze current architecture and requirements
- # 2. L1-Design: Design service interfaces and auth strategy
- # 3. L1-Implementation: Build core service and endpoints
- # 4. L1-Monitoring: Add logging, metrics, and health checks
- ```
+ 1. **Focused Context** - Each subagent gets only relevant code/memories
+ 2. **Iterative Validation** - Ralph-RLM ensures quality criteria are met
+ 3. **Persistent Learning** - Solutions stored for future reference
+ 4. **Stuck Detection** - Automatically regenerates plans if blocked
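The reduction figures in the Performance Benefits table follow directly from the token counts and can be checked in a few lines:

```javascript
// Check the Reduction column of the Performance Benefits table.
const rows = [
  ["High", 18400, 8800],
  ["Medium", 8200, 4900],
  ["Low", 2100, 2100],
];
const reduction = (traditional, rlm) =>
  Math.round((1 - rlm / traditional) * 100);
for (const [name, traditional, rlm] of rows) {
  console.log(`${name}: ${reduction(traditional, rlm)}%`);
}
// → High: 52%, Medium: 40%, Low: 0%
```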
 
- ### Memory Integration
+ ## Agent Handoffs
 
- ```javascript
- // RLM automatically stores and retrieves context
- search_memory({
-   query: "rate limiting patterns express",
-   category: "facts",
-   limit: 5
- });
+ Agents support workflow transitions:
 
- // Find related implementations
- find_similar_memories({
-   memory_id: "auth-implementation-xyz",
-   min_similarity: 0.8,
-   limit: 3
- });
+ ```
+ @rlm Can you switch to Ralph-RLM for higher quality validation?
+ → Hands off to @ralph-rlm with current context
+
+ @ralph-rlm This looks good, switch to basic RLM for faster completion
+ → Hands off to @rlm with findings so far
  ```
 
  ## Configuration
 
- ### Set API Key
+ ### 1. Set API Key
+
+ Configure your rembr API key in VS Code settings:
 
- After installation, configure your rembr API key:
+ 1. Get key: [rembr.ai/dashboard/settings](https://rembr.ai/dashboard/settings)
+ 2. VS Code: `Cmd+,` → Extensions → MCP → `rembr.env.REMBR_API_KEY`
+ 3. Reload: `Cmd+Shift+P` → "Developer: Reload Window"
 
- 1. **Get API Key**: Visit [rembr.ai/dashboard/settings](https://rembr.ai/dashboard/settings)
- 2. **VS Code Settings**: `Cmd+,` → Extensions → MCP → `rembr.env.REMBR_API_KEY`
- 3. **Reload VS Code**: `Cmd+Shift+P` → "Developer: Reload Window"
+ ### 2. Enable Required Settings
 
- ### Customize Patterns
+ Ensure these VS Code settings are enabled:
+
+ ```json
+ {
+   "github.copilot.chat.codeGeneration.useInstructionFiles": true,
+   "chat.agent.enabled": true,
+   "chat.useAgentSkills": true
+ }
+ ```
 
- Edit `.github/copilot-instructions.md` to adjust:
- Auto-detection triggers
- Decomposition strategies
- Memory categories
- Subagent specializations
+ ### 3. MCP Server Configuration
+
+ The installer adds to `.vscode/settings.json`:
+
+ ```json
+ {
+   "mcp": {
+     "mcpServers": {
+       "rembr": {
+         "command": "npx",
+         "args": ["@rembr/mcp-client"],
+         "env": {
+           "REMBR_API_KEY": "${REMBR_API_KEY}",
+           "REMBR_PROJECT_ID": "${REMBR_PROJECT_ID}"
+         }
+       }
+     }
+   }
+ }
+ ```
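A quick way to sanity-check merged settings is to parse them and confirm the required keys are present and enabled. A hypothetical check script — the JSON fragment below mirrors the configuration shown above and is not produced by the installer:

```javascript
// Hypothetical check: parse a settings fragment and confirm the keys
// the RLM agents need are present and enabled.
const settingsText = `{
  "github.copilot.chat.codeGeneration.useInstructionFiles": true,
  "chat.agent.enabled": true,
  "chat.useAgentSkills": true,
  "mcp": { "mcpServers": { "rembr": { "command": "npx" } } }
}`;

const settings = JSON.parse(settingsText);
const required = [
  "github.copilot.chat.codeGeneration.useInstructionFiles",
  "chat.agent.enabled",
  "chat.useAgentSkills",
];
const missing = required.filter(key => settings[key] !== true);
if (missing.length > 0) {
  throw new Error(`enable these settings: ${missing.join(", ")}`);
}
console.log("settings OK:", "rembr" in settings.mcp.mcpServers); // → settings OK: true
```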
 
  ## Advanced Usage
 
- ### Custom Agents
+ ### Custom Agent Creation
 
  Create domain-specific agents in `.github/agents/`:
 
  ```markdown
  ---
- name: Payment Service Expert
- description: Specialized in payment processing and financial integrations
- tools:
-   - rembr/*
-   - codebase
- model: Claude Sonnet 4
+ name: Security Auditor
+ description: Specialized in security analysis with OWASP Top 10 focus
+ instructions: |
+   Use Ralph-RLM patterns for security audits.
+   Always define comprehensive acceptance criteria.
+   Check for: injection, auth, exposure, XXE, access control,
+   misconfiguration, XSS, deserialization, components, logging.
  ---
-
- You handle payment system tasks using RLM patterns.
- Auto-decompose into: security, processing, webhooks, compliance.
  ```
 
- ### Manual Subagent Spawning
-
- For fine-grained control:
+ ### Memory Search Patterns
 
  ```javascript
- // Spawn specialized subagent
- runSubagent({
-   description: "Implement Redis rate limiting middleware",
-   prompt: `
-     ## Task
-     Create express-rate-limit middleware with Redis store
-
-     ## Context from Memory
-     ${await search_memory({query: "rate limiting redis patterns"})}
-
-     ## Storage Instructions
-     Store findings with metadata: {"taskId": "rate-limit-20260107", "area": "middleware"}
-   `
+ // Find related implementations
+ search_memory({
+   query: "rate limiting middleware express redis",
+   category: "facts",
+   limit: 5
+ });
+
+ // Get similar solutions
+ find_similar_memories({
+   memory_id: "auth-jwt-implementation-abc",
+   min_similarity: 0.8
  });
  ```
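`find_similar_memories` filters by `min_similarity`, which is semantically a cosine-similarity cutoff over embeddings. A sketch of the idea only — the embedding vectors and memory ids below are made up, and rembr's actual storage and embedding model are not shown:

```javascript
// Illustrative only: min_similarity as a cosine-similarity cutoff.
// Embeddings and memory ids are hypothetical, not rembr's data.
const cosine = (a, b) => {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = v => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return dot / (norm(a) * norm(b));
};

const memories = {
  "auth-jwt-implementation-abc": [0.9, 0.1, 0.0],
  "rate-limit-middleware": [0.1, 0.9, 0.2],
  "auth-session-refresh": [0.8, 0.2, 0.1],
};

function findSimilar(memoryId, minSimilarity = 0.8) {
  const query = memories[memoryId];
  return Object.entries(memories)
    .filter(([id, vec]) => id !== memoryId && cosine(query, vec) >= minSimilarity)
    .map(([id]) => id);
}

console.log(findSimilar("auth-jwt-implementation-abc")); // → [ 'auth-session-refresh' ]
```

The auth-related memory clears the 0.8 cutoff while the rate-limiting one does not, which is the behavior the `min_similarity: 0.8` parameter above requests.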
 
- ## Troubleshooting
+ ### Skill Development
 
- ### Memory Not Persisting
- ```bash
- # Check MCP connection
- curl -H "X-API-Key: YOUR_KEY" https://rembr.ai/health
+ Skills teach agents how to orchestrate tasks:
 
- # Verify VS Code settings
- code ~/.vscode/settings.json
+ ```markdown
+ # RLM Orchestration Skill
+
+ ## When to use
+ - Complex tasks requiring decomposition
+ - Multi-component implementations
+ - Cross-system analysis
+
+ ## How to decompose
+ 1. Analyze task complexity
+ 2. Identify major components
+ 3. Create focused subtasks
+ 4. Store context in rembr
+ 5. Synthesize findings
  ```
 
- ### Auto-Detection Not Working
+ ## Troubleshooting
+
+ ### Agent not appearing
+ - Check that `.github/agents/*.agent.md` exists
+ - Verify VS Code version ≥ 1.106
+ - Run "Chat: Configure Custom Agents" from the Command Palette
+
+ ### Skills not loading
+ - Ensure `chat.useAgentSkills` is enabled
+ - Check the `.github/skills/*/SKILL.md` structure
+ - Skills load automatically based on prompt match
+
+ ### Memory connection issues
+ - Verify the MCP configuration in settings
+ - Check environment variables
+ - Test the rembr connection: `curl -H "X-API-Key: YOUR_KEY" https://rembr.ai/health`
+
+ ### Auto-detection not working
  1. Ensure `.github/copilot-instructions.md` exists in project root
  2. Restart GitHub Copilot: `Cmd+Shift+P` → "GitHub Copilot: Restart Extension"
- 3. Use more explicit complexity indicators in your requests
+ 3. Use explicit complexity indicators in requests
+
+ ## Migration from v1.x
+
+ If upgrading from v1.x:
+
+ 1. **Backup**: Save existing `.github/` files
+ 2. **Install**: Run `rembr-vscode-setup` again
+ 3. **Migrate**: Copy custom configurations to the new structure
+ 4. **Test**: Try `/rlm-analyze` to verify the setup
 
- ### Subagent Failures
- - Check task complexity (max 3 decomposition levels)
- - Ensure focused, actionable subtasks
- - Verify proper metadata in memory storage
+ The new agent-based system replaces the simpler auto-detection patterns.
 
  ## Getting Support
 
  **Documentation**: [docs.rembr.ai/rlm-patterns](https://docs.rembr.ai/rlm-patterns)
- **Examples**: Browse with `search_memory({query: "implementation example"})`
- **Community**: [GitHub Discussions](https://github.com/rembr-ai/community/discussions)
+ **Examples**: Try `search_memory({query: "implementation example"})`
+ **Community**: [GitHub Discussions](https://github.com/rembr-ai/community/discussions)
  **Issues**: [GitHub Issues](https://github.com/rembr-ai/vscode-extension/issues)
 
  ## License
@@ -242,7 +354,8 @@ MIT - see [LICENSE](LICENSE) for details.
 
  ---
 
- **Version**: 1.0.0
- **Compatibility**: GitHub Copilot, VS Code, Claude Desktop
- **Memory Backend**: [rembr.ai](https://rembr.ai) (hosted) or self-hosted
- **Token Efficiency**: Up to 55% reduction for complex development tasks
+ **Version**: 2.0.0
+ **Agents**: Basic RLM (`@rlm`) and Ralph-RLM (`@ralph-rlm`)
+ **Skills**: RLM orchestration patterns with quality validation
+ **Memory Backend**: [rembr.ai](https://rembr.ai) semantic memory service
+ **Token Efficiency**: Up to 52% reduction with quality improvements
package/cli.js CHANGED
@@ -1,7 +1,7 @@
  #!/usr/bin/env node
  const { setup } = require('./setup');
 
- console.log('🫐 REMBR VS Code RLM Pattern Setup');
+ console.log('🫐 REMBR VS Code RLM Agent Setup v2.0');
 
  const args = process.argv.slice(2);
  const command = args[0];
@@ -9,7 +9,7 @@ const command = args[0];
  switch (command) {
    case 'setup':
    case 'init':
-     console.log('Setting up GitHub Copilot with recursive language model patterns...\n');
+     console.log('Setting up GitHub Copilot with RLM agents, skills, and prompts...\n');
      setup(true);
      break;
 
@@ -17,31 +17,46 @@ switch (command) {
    case '--help':
    case '-h':
      console.log(`
- REMBR VS Code RLM Patterns - Recursive Language Model setup for GitHub Copilot
+ 🫐 REMBR VS Code RLM Agent System v2.0
 
  Usage:
-   npx @rembr/vscode setup    Configure RLM patterns in your project
+   npx @rembr/vscode setup    Configure RLM agent system in your project
    npx @rembr/vscode help     Show this help message
 
- What it does:
- • Adds GitHub Copilot instructions for automatic task decomposition
- • Creates Recursive Analyst agent for complex tasks
- • Configures REMBR MCP server for semantic memory
- • Adds RLM helper scripts and documentation
+ What it installs:
+ • Custom Agents: @rlm (fast) and @ralph-rlm (quality-focused)
+ • Skills: RLM orchestration patterns with validation
+ • Prompts: /rlm-analyze, /ralph-analyze, /rlm-plan, /ralph-plan
+ • Instructions: Memory patterns and code investigation guides
+ • MCP Server: Semantic memory integration
+
+ Agent Modes:
+ • @rlm: Fast decomposition for quick analysis
+ • @ralph-rlm: Acceptance-driven loops with quality validation
+
+ Usage Examples:
+   @rlm Analyze authentication system for security issues
+   @ralph-rlm Audit API endpoints for OWASP Top 10 vulnerabilities
+   /rlm-analyze "investigate rate limiting implementation"
 
  Benefits:
- • 51% token efficiency improvement for complex tasks
- • Automatic detection of decomposition-worthy tasks
+ • 52% token efficiency improvement for complex tasks
+ • Quality assurance through acceptance criteria
  • Persistent semantic memory across sessions
- • Focused subagent coordination
+ • Handoffs between agent modes
+
+ Requirements:
+ • VS Code 1.106+ with GitHub Copilot
+ • Settings: chat.agent.enabled, chat.useAgentSkills enabled
 
  Get started:
  1. Run: npx @rembr/vscode setup
  2. Get API key: https://rembr.ai/dashboard/settings
  3. Configure in VS Code: Settings → MCP → rembr.env.REMBR_API_KEY
- 4. Reload VS Code and try a complex task
+ 4. Enable required VS Code settings (see output)
+ 5. Try: @rlm or /rlm-analyze
 
- Learn more: https://docs.rembr.ai/rlm-patterns
+ Learn more: https://docs.rembr.ai/rlm-agents
  `);
      break;
 
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
    "name": "@rembr/vscode",
-   "version": "1.0.0",
-   "description": "VS Code setup for REMBR RLM patterns - semantic memory for AI agents with GitHub Copilot recursive workflows",
+   "version": "2.0.0",
+   "description": "VS Code RLM integration - Recursive Language Model patterns with GitHub Copilot agents, skills, and semantic memory",
    "main": "setup.js",
    "bin": {
      "rembr-vscode": "./cli.js"