@rembr/vscode 1.0.0

package/CHANGELOG.md ADDED
@@ -0,0 +1,91 @@
1
+ # Changelog
2
+
3
+ All notable changes to @rembr/vscode will be documented in this file.
4
+
5
+ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
6
+ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
7
+
8
+ ## [1.0.0] - 2026-01-06
9
+
10
+ ### Added
11
+
12
+ - Initial release of @rembr/vscode
13
+ - Automatic MCP server configuration for:
14
+ - VS Code + GitHub Copilot (`.vscode/mcp.json`)
15
+ - Cursor (`.cursor/mcp.json`)
16
+ - Windsurf (`.windsurf/mcp.json`)
17
+ - Claude Desktop (`~/Library/Application Support/Claude/claude_desktop_config.json`)
18
+ - Recursive Analyst agent template for GitHub Copilot (`.github/agents/recursive-analyst.agent.md`)
19
+ - Cursor integration rules (`.cursorrules`)
20
+ - Windsurf cascade instructions (`.windsurfrules`)
21
+ - Aider configuration (`.aider.conf.yml`)
22
+ - CLI tool: `npx @rembr/vscode setup`
23
+ - Postinstall setup (optional, can be disabled with `SKIP_REMBR_SETUP=1`)
24
+
25
+ ### REMBR Server Features (v1.0.0)
26
+
27
+ **Core Memory Tools (6)**:
28
+ - `store_memory` - Store new memories with categories and metadata
29
+ - `search_memory` - Hybrid, semantic, text, or phrase search with metadata filtering
30
+ - `update_memory` - Update existing memories (auto-regenerates embeddings)
31
+ - `delete_memory` - Remove memories
32
+ - `list_memories` - List recent memories by category
33
+ - `get_memory` - Retrieve specific memory by ID
34
+
35
+ **Discovery Tools (2)**:
36
+ - `find_similar_memories` - Discover related memories via semantic similarity
37
+ - `get_embedding_stats` - Monitor embedding coverage and semantic search status
38
+
39
+ **Statistics (1)**:
40
+ - `get_stats` - Usage statistics, plan limits, memory counts
41
+
42
+ **Context Management for RLMs (4)**:
43
+ - `create_context` - Create logical groupings for related memories
44
+ - `list_contexts` - List all contexts in project
45
+ - `search_context` - Scoped search within a specific context
46
+ - `add_memory_to_context` - Link memories to contexts
47
+
48
+ **Snapshots for Sub-Agent Handoff (3)**:
49
+ - `create_snapshot` - Immutable memory snapshots with TTL
50
+ - `get_snapshot` - Retrieve snapshot with memories
51
+ - `list_snapshots` - List available snapshots
52
+
53
+ **Compilation & Analysis (3)**:
54
+ - `get_memory_graph` - Relationship graph for context memories
55
+ - `detect_contradictions` - Find contradicting memories
56
+ - `get_context_insights` - Pre-compiled insights (categories, temporal patterns, entities)
57
+
58
+ ### Search Capabilities
59
+
60
+ **4 Search Modes**:
61
+ 1. **hybrid** (default) - 0.7 semantic + 0.3 text (best for general use)
62
+ 2. **semantic** - Conceptual similarity via pgvector embeddings
63
+ 3. **text** - Fast fuzzy matching via pg_trgm
64
+ 4. **phrase** - Multi-word exact matching via PostgreSQL full-text search
65
+
66
+ **Metadata Filtering**:
67
+ - Filter by any metadata field: `{taskId: "...", area: "...", file: "..."}`
68
+ - Essential for RLM task isolation and sub-agent context management
69
+
70
+ **Performance** (10K memories):
71
+ - Phrase search: ~30ms
72
+ - Text search: ~15ms
73
+ - Semantic search: ~25ms
74
+ - Hybrid search: ~45ms
75
+ - +10ms per metadata filter
76
+
77
+ ### Pricing Tiers
78
+
79
+ - **Free**: 1,000 memories, 100 searches/day
80
+ - **Starter** (£9/mo): 10,000 memories, 1,000 searches/day
81
+ - **Pro** (£29/mo): 100,000 memories, 10,000 searches/day
82
+ - **Enterprise** (£99/mo): 1M memories, unlimited searches
83
+
84
+ ### Documentation
85
+
86
+ - Comprehensive README with setup instructions
87
+ - Recursive Analyst agent template with RLM patterns
88
+ - Tool-specific configuration for each supported editor
89
+ - Examples for all search modes and metadata filtering
90
+
91
+ [1.0.0]: https://github.com/radicalgeek/rembr-client/releases/tag/v1.0.0
package/LICENSE ADDED
@@ -0,0 +1,21 @@
1
+ MIT License
2
+
3
+ Copyright (c) 2026 REMBR
4
+
5
+ Permission is hereby granted, free of charge, to any person obtaining a copy
6
+ of this software and associated documentation files (the "Software"), to deal
7
+ in the Software without restriction, including without limitation the rights
8
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9
+ copies of the Software, and to permit persons to whom the Software is
10
+ furnished to do so, subject to the following conditions:
11
+
12
+ The above copyright notice and this permission notice shall be included in all
13
+ copies or substantial portions of the Software.
14
+
15
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21
+ SOFTWARE.
package/README.md ADDED
@@ -0,0 +1,248 @@
1
+ # @rembr/vscode
2
+
3
+ **Recursive Language Model (RLM) patterns for VS Code and GitHub Copilot**
4
+
5
+ Transform your AI-assisted development with automatic task decomposition, cutting token usage by roughly **51%** on complex coding tasks.
6
+
7
+ ## Quick Start
8
+
9
+ ```bash
10
+ npm install -g @rembr/vscode
11
+ rembr-vscode setup
12
+ ```
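+
+ If you'd rather not install globally, `npx @rembr/vscode setup` (the same command shown by the CLI's built-in help) configures the current project in place.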
13
+
14
+ **What happens next?**
15
+
16
+ - ✅ GitHub Copilot automatically detects complex tasks
17
+ - ✅ Recursive decomposition with focused subagents
18
+ - ✅ Semantic memory integration via rembr MCP
19
+ - ✅ 51% reduction in token usage for complex requests
20
+ - ✅ Complete documentation and workflow helpers
21
+
22
+ ## How It Works
23
+
24
+ ### Before: Traditional Approach
25
+ ```
26
+ User: "Implement rate limiting for payment service with Redis, auth, monitoring, and tests"
27
+
28
+ Copilot: [Loads entire codebase context → 12,847 tokens → Single massive response]
29
+ Result: Incomplete implementation, missing edge cases, 4 revisions needed
30
+ ```
31
+
32
+ ### After: RLM Patterns
33
+ ```
34
+ User: "Implement rate limiting for payment service with Redis, auth, monitoring, and tests"
35
+
36
+ Copilot: [Auto-detects complexity → Spawns focused subagents]
37
+ ├── L1-Analysis: Analyze payment endpoints (280 tokens)
38
+ ├── L1-Design: Design Redis strategy (180 tokens)
39
+ ├── L1-Implementation: Build middleware (350 tokens)
40
+ └── L1-Monitoring: Add metrics/tests (120 tokens)
41
+
42
+ Result: Complete implementation, 6,241 tokens (51% reduction), 1 revision
43
+ ```
44
+
45
+ The installer automatically configures:
46
+
47
+ ### 🧠 GitHub Copilot Integration
48
+ - **`.github/copilot-instructions.md`** - Auto-detection triggers for complex tasks
49
+ - **`.github/agents/recursive-analyst.agent.md`** - Specialized RLM agent
50
+ - **Pattern Recognition** - Automatically identifies when to decompose tasks
51
+
52
+ ### 📡 MCP Memory Integration
53
+ - **VS Code settings** - REMBR MCP server configuration
54
+ - **Semantic Memory** - Persistent context across sessions
55
+ - **Cross-project Learning** - Reuse patterns between projects
56
+
57
+ ### 🛠 Workflow Tools
58
+ - **`rlm-helper.js`** - Task coordination and analysis script
59
+ - **`docs/rlm-patterns.md`** - Complete usage guide
60
+ - **`docs/rlm-benchmarks.md`** - Performance analysis
61
+
62
+ ## Auto-Detection Examples
63
+
64
+ ### ✅ Complex Tasks (Auto-RLM)
65
+ These requests automatically trigger recursive decomposition:
66
+
67
+ ```javascript
68
+ // Multi-component implementations
69
+ "Implement OAuth2 with JWT refresh tokens, rate limiting, and admin dashboard"
70
+
71
+ // Cross-service integrations
72
+ "Migrate user service to microservices with message queues and monitoring"
73
+
74
+ // Architecture changes
75
+ "Refactor monolith payment system for scalability and add caching"
76
+
77
+ // Analysis + implementation
78
+ "Analyze current auth flow and rebuild with SSO integration"
79
+ ```
80
+
81
+ ### 🎯 Simple Tasks (Standard)
82
+ These use traditional single-shot responses:
83
+
84
+ ```javascript
85
+ // Single file changes
86
+ "Fix this TypeScript type error"
87
+ "Add logging to this function"
88
+ "Update README with new installation steps"
89
+ "Rename variable from userID to userId"
90
+ ```
91
+
92
+ ## Performance Benefits
93
+
94
+ | Complexity | Traditional Tokens | RLM Tokens | Reduction | Quality Gain |
95
+ |------------|-------------------|------------|-----------|--------------|
96
+ | **High** | 18,400 | 8,800 | **52%** | +2.5 points |
97
+ | **Medium** | 8,200 | 4,900 | **40%** | +1.8 points |
98
+ | **Low** | 2,100 | 2,100 | **0%** | No change |
99
+
100
+ ### Why RLM Works
101
+
102
+ 1. **Focused Context** - Each subagent receives only relevant code/memories
103
+ 2. **Parallel Processing** - Multiple specialized agents work simultaneously
104
+ 3. **Persistent Learning** - Solutions stored in semantic memory for reuse
105
+ 4. **Incremental Validation** - Each level validates before proceeding
106
+
107
+ ## Memory Categories
108
+
109
+ RLM automatically organizes knowledge:
110
+
111
+ - **`facts`** - Code patterns, implementation details, technical specifications
112
+ - **`context`** - Session coordination, task decomposition metadata
113
+ - **`projects`** - High-level summaries, architectural decisions
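+
+ A minimal sketch of how a memory might be stored against these categories — the `store_memory` tool and the `taskId`/`area`/`file` metadata fields come from the changelog, but the exact parameter names (`content`, `category`, `metadata`) are assumptions rather than a documented signature:
+
+ ```javascript
+ // Hypothetical sketch — parameter names are assumed, not confirmed by this package.
+ store_memory({
+   content: "payments-api rate limiting uses express-rate-limit with a Redis store",
+   category: "facts",   // code patterns, implementation details
+   metadata: { taskId: "rate-limit-20260107", area: "middleware", file: "src/middleware/rateLimit.js" }
+ });
+
+ // Session coordination goes in `context`; high-level outcomes in `projects`.
+ store_memory({
+   content: "Task rate-limit-20260107 decomposed into analysis, design, implementation, monitoring",
+   category: "context",
+   metadata: { taskId: "rate-limit-20260107" }
+ });
+ ```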
114
+
115
+ ## Usage Examples
116
+
117
+ ### Automatic Pattern Recognition
118
+
119
+ ```bash
120
+ # Analyze any task for RLM suitability
121
+ node rlm-helper.js "implement microservice with auth and monitoring"
122
+
123
+ # Output:
124
+ # 🧠 RLM PATTERN DETECTED - Complex task requiring decomposition
125
+ # 📋 Generated TaskId: microservice-20260107-abc12
126
+ # 🔍 Key Concepts: microservice, auth, monitoring, api, service
127
+ # 🏗️ Suggested Decomposition:
128
+ # 1. L1-Analysis: Analyze current architecture and requirements
129
+ # 2. L1-Design: Design service interfaces and auth strategy
130
+ # 3. L1-Implementation: Build core service and endpoints
131
+ # 4. L1-Monitoring: Add logging, metrics, and health checks
132
+ ```
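+
+ The packaged `rlm-helper.js` itself isn't reproduced in this README; purely as an illustration of the kind of heuristic that could produce output in the shape above (trigger keywords, a slug-date-suffix task ID), a sketch might look like this — every trigger word, threshold, and format detail here is an assumption:
+
+ ```javascript
+ // Illustrative only — not the shipped rlm-helper.js. Triggers, the >= 3 threshold,
+ // and the taskId format are guesses based on the sample output above.
+ const TRIGGERS = ["implement", "migrate", "refactor", "microservice", "auth", "monitoring", "integration"];
+
+ function analyzeTask(task) {
+   const words = task.toLowerCase().match(/[a-z0-9-]+/g) || [];
+   const concepts = [...new Set(words.filter((w) => TRIGGERS.includes(w)))];
+   const complex = concepts.length >= 3;                         // several concerns → decompose
+   const stamp = new Date().toISOString().slice(0, 10).replace(/-/g, "");
+   const taskId = `${words[1] || "task"}-${stamp}-${Math.random().toString(36).slice(2, 7)}`;
+   return { complex, taskId, concepts };
+ }
+
+ console.log(analyzeTask("implement microservice with auth and monitoring"));
+ // → { complex: true, taskId: "microservice-<YYYYMMDD>-xxxxx", concepts: ["implement", "microservice", "auth", "monitoring"] }
+ ```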
133
+
134
+ ### Memory Integration
135
+
136
+ ```javascript
137
+ // RLM automatically stores and retrieves context
138
+ search_memory({
139
+ query: "rate limiting patterns express",
140
+ category: "facts",
141
+ limit: 5
142
+ });
143
+
144
+ // Find related implementations
145
+ find_similar_memories({
146
+ memory_id: "auth-implementation-xyz",
147
+ min_similarity: 0.8,
148
+ limit: 3
149
+ });
150
+ ```
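+
+ The changelog also lists four search modes (hybrid, semantic, text, phrase) and metadata filtering; how they're selected isn't spelled out here, so the `mode` and `metadata` fields in this sketch are assumptions layered on the call shape above:
+
+ ```javascript
+ // Assumed fields: `mode` and `metadata` mirror the changelog's feature list,
+ // not a documented signature. Default mode is hybrid (0.7 semantic + 0.3 text).
+ search_memory({
+   query: "redis rate limiting middleware",
+   mode: "semantic",
+   category: "facts",
+   metadata: { taskId: "rate-limit-20260107", area: "middleware" },
+   limit: 5
+ });
+ ```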
151
+
152
+ ## Configuration
153
+
154
+ ### Set API Key
155
+
156
+ After installation, configure your rembr API key:
157
+
158
+ 1. **Get API Key**: Visit [rembr.ai/dashboard/settings](https://rembr.ai/dashboard/settings)
159
+ 2. **VS Code Settings**: `Cmd+,` → Extensions → MCP → `rembr.env.REMBR_API_KEY`
160
+ 3. **Reload VS Code**: `Cmd+Shift+P` → "Developer: Reload Window"
161
+
162
+ ### Customize Patterns
163
+
164
+ Edit `.github/copilot-instructions.md` to adjust:
165
+ - Auto-detection triggers
166
+ - Decomposition strategies
167
+ - Memory categories
168
+ - Subagent specializations
169
+
170
+ ## Advanced Usage
171
+
172
+ ### Custom Agents
173
+
174
+ Create domain-specific agents in `.github/agents/`:
175
+
176
+ ```markdown
177
+ ---
178
+ name: Payment Service Expert
179
+ description: Specialized in payment processing and financial integrations
180
+ tools:
181
+ - rembr/*
182
+ - codebase
183
+ model: Claude Sonnet 4
184
+ ---
185
+
186
+ You handle payment system tasks using RLM patterns.
187
+ Auto-decompose into: security, processing, webhooks, compliance.
188
+ ```
189
+
190
+ ### Manual Subagent Spawning
191
+
192
+ For fine-grained control:
193
+
194
+ ```javascript
195
+ // Spawn specialized subagent
196
+ runSubagent({
197
+ description: "Implement Redis rate limiting middleware",
198
+ prompt: `
199
+ ## Task
200
+ Create express-rate-limit middleware with Redis store
201
+
202
+ ## Context from Memory
203
+ ${await search_memory({query: "rate limiting redis patterns"})}
204
+
205
+ ## Storage Instructions
206
+ Store findings with metadata: {"taskId": "rate-limit-20260107", "area": "middleware"}
207
+ `
208
+ });
209
+ ```
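+
+ Once subagents have stored their findings per the storage instructions above, the coordinating session can collect them by `taskId` — a minimal sketch reusing the assumed `metadata` filter from the earlier examples:
+
+ ```javascript
+ // Gather everything the subagents stored for this task (metadata filter assumed).
+ const findings = await search_memory({
+   query: "rate limiting",
+   category: "facts",
+   metadata: { taskId: "rate-limit-20260107" },
+   limit: 10
+ });
+ ```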
210
+
211
+ ## Troubleshooting
212
+
213
+ ### Memory Not Persisting
214
+ ```bash
215
+ # Check MCP connection
216
+ curl -H "X-API-Key: YOUR_KEY" https://rembr.ai/health
217
+
218
+ # Verify the workspace MCP configuration written by the installer
219
+ code .vscode/mcp.json
220
+ ```
221
+
222
+ ### Auto-Detection Not Working
223
+ 1. Ensure `.github/copilot-instructions.md` exists in project root
224
+ 2. Restart GitHub Copilot: `Cmd+Shift+P` → "GitHub Copilot: Restart Extension"
225
+ 3. Use more explicit complexity indicators in your requests
226
+
227
+ ### Subagent Failures
228
+ - Check task complexity (max 3 decomposition levels)
229
+ - Ensure focused, actionable subtasks
230
+ - Verify proper metadata in memory storage
231
+
232
+ ## Getting Support
233
+
234
+ - **Documentation**: [docs.rembr.ai/rlm-patterns](https://docs.rembr.ai/rlm-patterns)
235
+ - **Examples**: Browse with `search_memory({query: "implementation example"})`
236
+ - **Community**: [GitHub Discussions](https://github.com/rembr-ai/community/discussions)
237
+ - **Issues**: [GitHub Issues](https://github.com/rembr-ai/vscode-extension/issues)
238
+
239
+ ## License
240
+
241
+ MIT - see [LICENSE](LICENSE) for details.
242
+
243
+ ---
244
+
245
+ **Version**: 1.0.0
246
+ **Compatibility**: GitHub Copilot, VS Code, Claude Desktop
247
+ **Memory Backend**: [rembr.ai](https://rembr.ai) (hosted) or self-hosted
248
+ **Token Efficiency**: Up to 52% reduction for complex development tasks
package/cli.js ADDED
@@ -0,0 +1,56 @@
1
+ #!/usr/bin/env node
2
+ const { setup } = require('./setup');
3
+
4
+ console.log('🫐 REMBR VS Code RLM Pattern Setup');
5
+
6
+ const args = process.argv.slice(2);
7
+ const command = args[0];
8
+
9
+ switch (command) {
10
+ case 'setup':
11
+ case 'init':
12
+ console.log('Setting up GitHub Copilot with recursive language model patterns...\n');
13
+ setup(true);
14
+ break;
15
+
16
+ case 'help':
17
+ case '--help':
18
+ case '-h':
19
+ console.log(`
20
+ 🫐 REMBR VS Code RLM Patterns - Recursive Language Model setup for GitHub Copilot
21
+
22
+ Usage:
23
+ npx @rembr/vscode setup Configure RLM patterns in your project
24
+ npx @rembr/vscode help Show this help message
25
+
26
+ What it does:
27
+ • Adds GitHub Copilot instructions for automatic task decomposition
28
+ • Creates Recursive Analyst agent for complex tasks
29
+ • Configures REMBR MCP server for semantic memory
30
+ • Adds RLM helper scripts and documentation
31
+
32
+ Benefits:
33
+ • 51% token efficiency improvement for complex tasks
34
+ • Automatic detection of decomposition-worthy tasks
35
+ • Persistent semantic memory across sessions
36
+ • Focused subagent coordination
37
+
38
+ Get started:
39
+ 1. Run: npx @rembr/vscode setup
40
+ 2. Get API key: https://rembr.ai/dashboard/settings
41
+ 3. Configure in VS Code: Settings → MCP → rembr.env.REMBR_API_KEY
42
+ 4. Reload VS Code and try a complex task
43
+
44
+ Learn more: https://docs.rembr.ai/rlm-patterns
45
+ `);
46
+ break;
47
+
48
+ default:
49
+ if (!command) {
50
+ setup(true);
51
+ } else {
52
+ console.error(`Unknown command: ${command}`);
53
+ console.log('Run `npx @rembr/vscode help` for usage information');
54
+ process.exit(1);
55
+ }
56
+ }
package/package.json ADDED
@@ -0,0 +1,62 @@
1
+ {
2
+ "name": "@rembr/vscode",
3
+ "version": "1.0.0",
4
+ "description": "VS Code setup for REMBR RLM patterns - semantic memory for AI agents with GitHub Copilot recursive workflows",
5
+ "main": "setup.js",
6
+ "bin": {
7
+ "rembr-vscode": "./cli.js"
8
+ },
9
+ "scripts": {
10
+ "postinstall": "node postinstall.js"
11
+ },
12
+ "files": [
13
+ "cli.js",
14
+ "postinstall.js",
15
+ "setup.js",
16
+ "templates/",
17
+ "README.md",
18
+ "CHANGELOG.md",
19
+ "LICENSE"
20
+ ],
21
+ "keywords": [
22
+ "vscode",
23
+ "github-copilot",
24
+ "mcp",
25
+ "model-context-protocol",
26
+ "ai",
27
+ "memory",
28
+ "rlm",
29
+ "recursive-language-model",
30
+ "agents",
31
+ "semantic",
32
+ "rembr",
33
+ "recursive-learning-machine",
34
+ "agent",
35
+ "copilot",
36
+ "cursor",
37
+ "windsurf",
38
+ "claude",
39
+ "aider",
40
+ "semantic-memory",
41
+ "ai-memory",
42
+ "vector-database",
43
+ "embeddings",
44
+ "ai-agent",
45
+ "multi-agent",
46
+ "pgvector",
47
+ "context-management"
48
+ ],
49
+ "author": "REMBR <hello@rembr.ai>",
50
+ "license": "MIT",
51
+ "homepage": "https://rembr.ai",
52
+ "repository": {
53
+ "type": "git",
54
+ "url": "https://github.com/radicalgeek/rembr-client.git"
55
+ },
56
+ "bugs": {
57
+ "url": "https://github.com/radicalgeek/rembr-client/issues"
58
+ },
59
+ "engines": {
60
+ "node": ">=18.0.0"
61
+ }
62
+ }
package/postinstall.js ADDED
@@ -0,0 +1,30 @@
1
+ #!/usr/bin/env node
2
+ const { setup } = require('./setup');
3
+ const path = require('path');
4
+ const fs = require('fs');
5
+
6
+ // Run setup on postinstall unless this is a global install or the user opted out via SKIP_REMBR_SETUP=1
7
+ const isGlobalInstall = process.env.npm_config_global === 'true';
+ const skipSetup = process.env.SKIP_REMBR_SETUP === '1';
8
+
9
+ if (!isGlobalInstall && !skipSetup) {
10
+ console.log('\n📦 Running REMBR postinstall setup...');
11
+ console.log('Current working directory:', process.cwd());
12
+ console.log('Package directory:', __dirname);
13
+
14
+ // Find the project root: when installed as a dependency, step up out of node_modules to the consuming project
15
+ let projectRoot = process.cwd();
16
+ const packageDir = path.resolve(__dirname, '../../../'); // Go up from node_modules/@rembr/vscode
17
+
18
+ // Check if we're in a node_modules directory and use parent project
19
+ if (process.cwd().includes('node_modules')) {
20
+ projectRoot = packageDir;
21
+ }
22
+
23
+ console.log('Project root:', projectRoot);
24
+
25
+ // Change to project directory before running setup
26
+ process.chdir(projectRoot);
27
+ setup(false);
28
+ } else {
29
+ console.log('\n📦 @rembr/vscode installed globally. Run `rembr-vscode setup` in your project to configure it.\n');
30
+ }