remdb 0.3.114-py3-none-any.whl → 0.3.127-py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Potentially problematic release: this version of remdb has been flagged as possibly problematic.
- rem/agentic/agents/sse_simulator.py +2 -0
- rem/agentic/context.py +23 -3
- rem/agentic/mcp/tool_wrapper.py +29 -3
- rem/agentic/otel/setup.py +1 -0
- rem/agentic/providers/pydantic_ai.py +26 -2
- rem/api/main.py +4 -1
- rem/api/mcp_router/server.py +9 -3
- rem/api/mcp_router/tools.py +324 -2
- rem/api/routers/admin.py +218 -1
- rem/api/routers/chat/completions.py +250 -4
- rem/api/routers/chat/models.py +81 -7
- rem/api/routers/chat/otel_utils.py +33 -0
- rem/api/routers/chat/sse_events.py +17 -1
- rem/api/routers/chat/streaming.py +35 -1
- rem/api/routers/feedback.py +134 -14
- rem/api/routers/query.py +6 -3
- rem/cli/commands/README.md +42 -0
- rem/cli/commands/cluster.py +617 -168
- rem/cli/commands/configure.py +1 -3
- rem/cli/commands/db.py +66 -22
- rem/cli/commands/experiments.py +242 -26
- rem/cli/commands/schema.py +6 -5
- rem/config.py +8 -1
- rem/services/phoenix/client.py +59 -18
- rem/services/postgres/diff_service.py +108 -3
- rem/services/postgres/schema_generator.py +205 -4
- rem/services/session/compression.py +7 -0
- rem/settings.py +150 -18
- rem/sql/migrations/001_install.sql +156 -0
- rem/sql/migrations/002_install_models.sql +1864 -1
- rem/sql/migrations/004_cache_system.sql +548 -0
- rem/utils/__init__.py +18 -0
- rem/utils/schema_loader.py +94 -3
- rem/utils/sql_paths.py +146 -0
- rem/workers/__init__.py +3 -1
- rem/workers/db_listener.py +579 -0
- rem/workers/unlogged_maintainer.py +463 -0
- {remdb-0.3.114.dist-info → remdb-0.3.127.dist-info}/METADATA +213 -177
- {remdb-0.3.114.dist-info → remdb-0.3.127.dist-info}/RECORD +41 -36
- {remdb-0.3.114.dist-info → remdb-0.3.127.dist-info}/WHEEL +0 -0
- {remdb-0.3.114.dist-info → remdb-0.3.127.dist-info}/entry_points.txt +0 -0
````diff
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: remdb
-Version: 0.3.114
+Version: 0.3.127
 Summary: Resources Entities Moments - Bio-inspired memory system for agentic AI workloads
 Project-URL: Homepage, https://github.com/Percolation-Labs/reminiscent
 Project-URL: Documentation, https://github.com/Percolation-Labs/reminiscent/blob/main/README.md
````
````diff
@@ -123,9 +123,8 @@ Cloud-native unified memory infrastructure for agentic AI systems built with Pyd
 
 Choose your path:
 
-- **Option 1: Package Users with Example Data** (Recommended
-- **Option 2:
-- **Option 3: Developers** - Clone repo, local development with uv
+- **Option 1: Package Users with Example Data** (Recommended) - PyPI + example datasets
+- **Option 2: Developers** - Clone repo, local development with uv
 
 ---
 
````
````diff
@@ -144,10 +143,6 @@ pip install "remdb[all]"
 git clone https://github.com/Percolation-Labs/remstack-lab.git
 cd remstack-lab
 
-# Optional: Set default LLM provider via environment variable
-# export LLM__DEFAULT_MODEL="openai:gpt-4.1-nano"  # Fast and cheap
-# export LLM__DEFAULT_MODEL="anthropic:claude-sonnet-4-5-20250929"  # High quality (default)
-
 # Start PostgreSQL with docker-compose
 curl -O https://gist.githubusercontent.com/percolating-sirsh/d117b673bc0edfdef1a5068ccd3cf3e5/raw/docker-compose.prebuilt.yml
 docker compose -f docker-compose.prebuilt.yml up -d postgres
````
````diff
@@ -156,22 +151,18 @@ docker compose -f docker-compose.prebuilt.yml up -d postgres
 # Add --claude-desktop to register with Claude Desktop app
 rem configure --install --claude-desktop
 
-# Load quickstart dataset
+# Load quickstart dataset
 rem db load datasets/quickstart/sample_data.yaml
 
 # Ask questions
 rem ask "What documents exist in the system?"
 rem ask "Show me meetings about API design"
 
-# Ingest files (PDF, DOCX, images, etc.)
+# Ingest files (PDF, DOCX, images, etc.)
 rem process ingest datasets/formats/files/bitcoin_whitepaper.pdf --category research --tags bitcoin,whitepaper
 
 # Query ingested content
 rem ask "What is the Bitcoin whitepaper about?"
-
-# Try other datasets (use --user-id for multi-tenant scenarios)
-rem db load datasets/domains/recruitment/scenarios/candidate_pipeline/data.yaml --user-id acme-corp
-rem ask --user-id acme-corp "Show me candidates with Python experience"
 ```
 
 **What you get:**
````
````diff
@@ -181,130 +172,39 @@ rem ask --user-id acme-corp "Show me candidates with Python experience"
 
 **Learn more**: [remstack-lab repository](https://github.com/Percolation-Labs/remstack-lab)
 
-
-
-## Option 2: Package Users (No Example Data)
+### Using the API
 
-
-
-### Step 1: Start Database and API with Docker Compose
+Once configured, you can also use the OpenAI-compatible chat completions API:
 
 ```bash
-#
-mkdir my-rem-project && cd my-rem-project
-
-# Download docker-compose file from public gist
-curl -O https://gist.githubusercontent.com/percolating-sirsh/d117b673bc0edfdef1a5068ccd3cf3e5/raw/docker-compose.prebuilt.yml
-
-# IMPORTANT: Export API keys BEFORE running docker compose
-# Docker Compose reads env vars at startup - exporting them after won't work!
-
-# Required: OpenAI for embeddings (text-embedding-3-small)
-export OPENAI_API_KEY="sk-..."
-
-# Recommended: At least one chat completion provider
-export ANTHROPIC_API_KEY="sk-ant-..."  # Claude Sonnet 4.5 (high quality)
-export CEREBRAS_API_KEY="csk-..."      # Cerebras (fast, cheap inference)
-
-# Start PostgreSQL + API
+# Start the API server (if not using docker-compose for API)
 docker compose -f docker-compose.prebuilt.yml up -d
 
-#
-curl http://localhost:8000/
-
-
-
-
-
-
-
-### Step 2: Install and Configure CLI (REQUIRED)
-
-**This step is required** before you can use REM - it installs the database schema and configures your LLM API keys.
-
-```bash
-# Install remdb package from PyPI
-pip install remdb[all]
-
-# Configure REM (defaults to port 5051 for package users)
-rem configure --install --claude-desktop
+# Test the API
+curl -X POST http://localhost:8000/api/v1/chat/completions \
+  -H "Content-Type: application/json" \
+  -H "X-Session-Id: a1b2c3d4-e5f6-7890-abcd-ef1234567890" \
+  -d '{
+    "model": "anthropic:claude-sonnet-4-5-20250929",
+    "messages": [{"role": "user", "content": "What documents did Sarah Chen author?"}],
+    "stream": false
+  }'
 ```
 
-The interactive wizard will:
-1. **Configure PostgreSQL**: Defaults to `postgresql://rem:rem@localhost:5051/rem` (prebuilt docker-compose)
-   - Just press Enter to accept defaults
-   - Custom database: Enter your own host/port/credentials
-2. **Configure LLM providers**: Enter your OpenAI/Anthropic API keys
-3. **Install database tables**: Creates schema, functions, indexes (**required for CLI/API to work**)
-4. **Register with Claude Desktop**: Adds REM MCP server to Claude
-
-Configuration saved to `~/.rem/config.yaml` (can edit with `rem configure --edit`)
-
 **Port Guide:**
 - **5051**: Package users with `docker-compose.prebuilt.yml` (pre-built image)
 - **5050**: Developers with `docker-compose.yml` (local build)
-- **Custom**: Your own PostgreSQL database
 
 **Next Steps:**
 - See [CLI Reference](#cli-reference) for all available commands
 - See [REM Query Dialect](#rem-query-dialect) for query examples
 - See [API Endpoints](#api-endpoints) for OpenAI-compatible API usage
 
-
-
-**Option A: Clone example datasets** (Recommended - works with all README examples)
-
-```bash
-# Clone datasets repository
-git clone https://github.com/Percolation-Labs/remstack-lab.git
-
-# Load quickstart dataset (uses default user)
-rem db load --file remstack-lab/datasets/quickstart/sample_data.yaml
-
-# Test with sample queries
-rem ask "What documents exist in the system?"
-rem ask "Show me meetings about API design"
-rem ask "Who is Sarah Chen?"
-
-# Try domain-specific datasets (use --user-id for multi-tenant scenarios)
-rem db load --file remstack-lab/datasets/domains/recruitment/scenarios/candidate_pipeline/data.yaml --user-id acme-corp
-rem ask --user-id acme-corp "Show me candidates with Python experience"
-```
-
-**Option B: Bring your own data**
-
-```bash
-# Ingest your own files (uses default user)
-echo "REM is a bio-inspired memory system for agentic AI workloads." > test-doc.txt
-rem process ingest test-doc.txt --category documentation --tags rem,ai
-
-# Query your ingested data
-rem ask "What do you know about REM from my knowledge base?"
-```
-
-### Step 4: Test the API
-
-```bash
-# Test the OpenAI-compatible chat completions API
-curl -X POST http://localhost:8000/api/v1/chat/completions \
-  -H "Content-Type: application/json" \
-  -H "X-User-Id: demo-user" \
-  -d '{
-    "model": "anthropic:claude-sonnet-4-5-20250929",
-    "messages": [{"role": "user", "content": "What documents did Sarah Chen author?"}],
-    "stream": false
-  }'
-```
-
-**Available Commands:**
-- `rem ask` - Natural language queries to REM
-- `rem process ingest <file>` - Full ingestion pipeline (storage + parsing + embedding + database)
-- `rem process uri <file>` - READ-ONLY parsing (no database storage, useful for testing parsers)
-- `rem db load --file <yaml>` - Load structured datasets directly
+---
 
 ## Example Datasets
 
-
+Clone [remstack-lab](https://github.com/Percolation-Labs/remstack-lab) for curated datasets organized by domain and format.
 
 **What's included:**
 - **Quickstart**: Minimal dataset (3 users, 3 resources, 3 moments) - perfect for first-time users
````
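The `curl` call to `/api/v1/chat/completions` in the hunk above uses an OpenAI-style request body. As a sanity check of that request shape, here is a minimal Python sketch that assembles the same headers and JSON body (values are the illustrative ones from the README; nothing is sent over the wire, and the helper name is not part of remdb):

```python
import json

def build_chat_request(prompt: str, session_id: str,
                       model: str = "anthropic:claude-sonnet-4-5-20250929",
                       stream: bool = False) -> tuple[dict, str]:
    """Assemble headers and JSON body for REM's OpenAI-compatible endpoint."""
    headers = {
        "Content-Type": "application/json",
        "X-Session-Id": session_id,  # session/conversation identifier
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    })
    return headers, body

headers, body = build_chat_request(
    "What documents did Sarah Chen author?",
    session_id="a1b2c3d4-e5f6-7890-abcd-ef1234567890",
)
# POST these to http://localhost:8000/api/v1/chat/completions with any HTTP client
```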
````diff
@@ -316,14 +216,11 @@ curl -X POST http://localhost:8000/api/v1/chat/completions \
 ```bash
 cd remstack-lab
 
-# Load any dataset
+# Load any dataset
 rem db load --file datasets/quickstart/sample_data.yaml
 
 # Explore formats
 rem db load --file datasets/formats/engrams/scenarios/team_meeting/team_standup_meeting.yaml
-
-# Try domain-specific examples (use --user-id for multi-tenant scenarios)
-rem db load --file datasets/domains/recruitment/scenarios/candidate_pipeline/data.yaml --user-id acme-corp
 ```
 
 ## See Also
````
````diff
@@ -434,7 +331,7 @@ rem ask research-assistant "Find documents about machine learning architecture"
 rem ask research-assistant "Summarize recent API design documents" --stream
 
 # With session continuity
-rem ask research-assistant "What did we discuss about ML?" --session-id
+rem ask research-assistant "What did we discuss about ML?" --session-id c3d4e5f6-a7b8-9012-cdef-345678901234
 ```
 
 ### Agent Schema Structure
````
````diff
@@ -477,29 +374,16 @@ REM provides **4 built-in MCP tools** your agents can use:
 
 ### Multi-User Isolation
 
-
+For multi-tenant deployments, custom agents are **scoped by `user_id`**, ensuring complete data isolation. Use `--user-id` flag when you need tenant separation:
 
 ```bash
-#
-rem process ingest my-agent.yaml --user-id
+# Create agent for specific tenant
+rem process ingest my-agent.yaml --user-id tenant-a --category agents
 
-#
-rem ask my-agent "test" --user-id
-# ❌ Error: Schema not found (LOOKUP returns no results for user-b)
-
-# User A can use their agent
-rem ask my-agent "test" --user-id user-a
-# ✅ Works - LOOKUP finds schema for user-a
+# Query with tenant context
+rem ask my-agent "test" --user-id tenant-a
 ```
 
-### Advanced: Ontology Extractors
-
-Custom agents can also be used as **ontology extractors** to extract structured knowledge from files. See [CLAUDE.md](../CLAUDE.md#ontology-extraction-pattern) for details on:
-- Multi-provider testing (`provider_configs`)
-- Semantic search configuration (`embedding_fields`)
-- File matching rules (`OntologyConfig`)
-- Dreaming workflow integration
-
 ### Troubleshooting
 
 **Schema not found error:**
````
````diff
@@ -717,8 +601,8 @@ POST /api/v1/chat/completions
 ```
 
 **Headers**:
-- `X-
-- `X-
+- `X-User-Id`: User identifier (required for data isolation, uses default if not provided)
+- `X-Tenant-Id`: Deprecated - use `X-User-Id` instead (kept for backwards compatibility)
 - `X-Session-Id`: Session/conversation identifier
 - `X-Agent-Schema`: Agent schema URI to use
 
````
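The header change in this hunk (`X-Tenant-Id` deprecated in favour of `X-User-Id`, with a default applied when neither is present) implies a simple resolution order. A sketch of that logic; the function name and default value are illustrative assumptions, not remdb's actual code:

```python
def resolve_user_id(headers: dict[str, str], default: str = "default") -> str:
    """Prefer X-User-Id; fall back to deprecated X-Tenant-Id; else use the default."""
    if "X-User-Id" in headers:
        return headers["X-User-Id"]
    if "X-Tenant-Id" in headers:  # deprecated, kept for backwards compatibility
        return headers["X-Tenant-Id"]
    return default

assert resolve_user_id({"X-User-Id": "alice"}) == "alice"
assert resolve_user_id({"X-Tenant-Id": "acme-corp"}) == "acme-corp"   # legacy clients still work
assert resolve_user_id({"X-User-Id": "alice", "X-Tenant-Id": "acme-corp"}) == "alice"
assert resolve_user_id({}) == "default"
```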
````diff
@@ -889,9 +773,15 @@ This generates:
 Compare Pydantic models against the live database using Alembic autogenerate.
 
 ```bash
-# Show
+# Show additive changes only (default, safe for production)
 rem db diff
 
+# Show all changes including drops
+rem db diff --strategy full
+
+# Show additive + safe type widenings
+rem db diff --strategy safe
+
 # CI mode: exit 1 if drift detected
 rem db diff --check
 
````
````diff
@@ -899,9 +789,16 @@ rem db diff --check
 rem db diff --generate
 ```
 
+**Migration Strategies:**
+| Strategy | Description |
+|----------|-------------|
+| `additive` | Only ADD columns/tables/indexes (safe, no data loss) - **default** |
+| `full` | All changes including DROPs (use with caution) |
+| `safe` | Additive + safe column type widenings (e.g., VARCHAR(50) → VARCHAR(256)) |
+
 **Output shows:**
 - `+ ADD COLUMN` - Column in model but not in DB
-- `- DROP COLUMN` - Column in DB but not in model
+- `- DROP COLUMN` - Column in DB but not in model (only with `--strategy full`)
 - `~ ALTER COLUMN` - Column type or constraints differ
 - `+ CREATE TABLE` / `- DROP TABLE` - Table additions/removals
 
````
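The three `rem db diff` strategies in the table above amount to filtering a list of schema-change operations. This sketch shows that filtering logic under stated assumptions: the operation names and the VARCHAR-only widening check are simplified illustrations, not remdb's diff engine:

```python
import re

ADDITIVE_OPS = {"add_column", "add_table", "add_index"}

def is_safe_widening(op: dict) -> bool:
    """VARCHAR(n) -> VARCHAR(m) with m >= n counts as a safe type widening."""
    if op["op"] != "alter_column_type":
        return False
    old = re.fullmatch(r"VARCHAR\((\d+)\)", op["from"])
    new = re.fullmatch(r"VARCHAR\((\d+)\)", op["to"])
    return bool(old and new) and int(new.group(1)) >= int(old.group(1))

def filter_ops(ops: list[dict], strategy: str = "additive") -> list[dict]:
    """Keep only the operations the chosen migration strategy allows."""
    if strategy == "full":
        return ops  # everything, including DROPs
    if strategy == "safe":
        return [o for o in ops if o["op"] in ADDITIVE_OPS or is_safe_widening(o)]
    return [o for o in ops if o["op"] in ADDITIVE_OPS]  # default: additive only

ops = [
    {"op": "add_column"},
    {"op": "drop_column"},
    {"op": "alter_column_type", "from": "VARCHAR(50)", "to": "VARCHAR(256)"},
]
assert [o["op"] for o in filter_ops(ops)] == ["add_column"]          # additive drops the rest
assert len(filter_ops(ops, "safe")) == 2                             # add + safe widening
assert len(filter_ops(ops, "full")) == 3                             # everything
```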
````diff
@@ -1187,14 +1084,11 @@ Test Pydantic AI agent with natural language queries.
 # Ask a question
 rem ask "What documents did Sarah Chen author?"
 
-# With context headers
-rem ask "Find all resources about API design" \
-  --user-id user-123 \
-  --tenant-id acme-corp
-
 # Use specific agent schema
-rem ask "Analyze this contract"
-
+rem ask contract-analyzer "Analyze this contract"
+
+# Stream response
+rem ask "Find all resources about API design" --stream
 ```
 
 ### Global Options
````
````diff
@@ -1242,7 +1136,7 @@ export API__RELOAD=true
 rem serve
 ```
 
-## Development (For Contributors)
+## Option 2: Development (For Contributors)
 
 **Best for**: Contributing to REM or customizing the codebase.
 
````
````diff
@@ -1538,45 +1432,156 @@ Successfully installed ... kreuzberg-4.0.0rc1 ... remdb-0.3.10
 
 REM wraps FastAPI - extend it exactly as you would any FastAPI app.
 
+### Recommended Project Structure
+
+REM auto-detects `./agents/` and `./models/` folders - no configuration needed:
+
+```
+my-rem-app/
+├── agents/                 # Auto-detected for agent schemas
+│   ├── my-agent.yaml       # Custom agent (rem ask my-agent "query")
+│   └── another-agent.yaml
+├── models/                 # Auto-detected if __init__.py exists
+│   └── __init__.py         # Register models with @rem.register_model
+├── routers/                # Custom FastAPI routers
+│   └── custom.py
+├── main.py                 # Entry point
+└── pyproject.toml
+```
+
+### Quick Start
+
 ```python
-
+# main.py
 from rem import create_app
-from
+from fastapi import APIRouter
 
-#
-
+# Create REM app (auto-detects ./agents/ and ./models/)
+app = create_app()
 
-#
-
+# Add custom router
+router = APIRouter(prefix="/custom", tags=["custom"])
 
-
-
+@router.get("/hello")
+async def hello():
+    return {"message": "Hello from custom router!"}
 
-
-app.include_router(my_router)
+app.include_router(router)
 
+# Add custom MCP tool
 @app.mcp_server.tool()
 async def my_tool(query: str) -> dict:
-    """Custom MCP tool."""
+    """Custom MCP tool available to agents."""
     return {"result": query}
 ```
 
-###
+### Custom Models (Auto-Detected)
+
+```python
+# models/__init__.py
+import rem
+from rem.models.core import CoreModel
+from pydantic import Field
 
+@rem.register_model
+class MyEntity(CoreModel):
+    """Custom entity - auto-registered for schema generation."""
+    name: str = Field(description="Entity name")
+    status: str = Field(default="active")
 ```
-
-
-
-
-
-
-
-
-
+
+Run `rem db schema generate` to include your models in the database schema.
+
+### Custom Agents (Auto-Detected)
+
+```yaml
+# agents/my-agent.yaml
+type: object
+description: |
+  You are a helpful assistant that...
+
+properties:
+  answer:
+    type: string
+    description: Your response
+
+required:
+  - answer
+
+json_schema_extra:
+  kind: agent
+  name: my-agent
+  version: "1.0.0"
+  tools:
+    - search_rem
+```
+
+Test with: `rem ask my-agent "Hello!"`
+
+### Example Custom Router
+
+```python
+# routers/analytics.py
+from fastapi import APIRouter, Depends
+from rem.services.postgres import get_postgres_service
+
+router = APIRouter(prefix="/analytics", tags=["analytics"])
+
+@router.get("/stats")
+async def get_stats():
+    """Get database statistics."""
+    db = get_postgres_service()
+    if not db:
+        return {"error": "Database not available"}
+
+    await db.connect()
+    try:
+        result = await db.execute(
+            "SELECT COUNT(*) as count FROM resources"
+        )
+        return {"resource_count": result[0]["count"]}
+    finally:
+        await db.disconnect()
+
+@router.get("/recent")
+async def get_recent(limit: int = 10):
+    """Get recent resources."""
+    db = get_postgres_service()
+    if not db:
+        return {"error": "Database not available"}
+
+    await db.connect()
+    try:
+        result = await db.execute(
+            f"SELECT label, category, created_at FROM resources ORDER BY created_at DESC LIMIT {limit}"
+        )
+        return {"resources": result}
+    finally:
+        await db.disconnect()
+```
+
+Include in main.py:
+
+```python
+from routers.analytics import router as analytics_router
+app.include_router(analytics_router)
 ```
 
-
+### Running the App
+
+```bash
+# Development (auto-reload)
+uv run uvicorn main:app --reload --port 8000
+
+# Or use rem serve
+uv run rem serve --reload
+
+# Test agent
+uv run rem ask my-agent "What can you help me with?"
+
+# Test custom endpoint
+curl http://localhost:8000/analytics/stats
+```
 
 ### Extension Points
 
````
````diff
@@ -1588,6 +1593,37 @@ Generate this structure with: `rem scaffold my-app`
 | **MCP Prompts** | `@app.mcp_server.prompt()` or `app.mcp_server.add_prompt(fn)` |
 | **Models** | `rem.register_models(Model)` then `rem db schema generate` |
 | **Agent Schemas** | `rem.register_schema_path("./schemas")` or `SCHEMA__PATHS` env var |
+| **SQL Migrations** | Place in `sql/migrations/` (auto-detected) |
+
+### Custom Migrations
+
+REM automatically discovers migrations from two sources:
+
+1. **Package migrations** (001-099): Built-in migrations from the `remdb` package
+2. **User migrations** (100+): Your custom migrations in `./sql/migrations/`
+
+**Convention**: Place custom SQL files in `sql/migrations/` relative to your project root:
+
+```
+my-rem-app/
+├── sql/
+│   └── migrations/
+│       ├── 100_custom_table.sql      # Runs after package migrations
+│       ├── 101_add_indexes.sql
+│       └── 102_custom_functions.sql
+└── ...
+```
+
+**Numbering**: Use 100+ for user migrations to ensure they run after package migrations (001-099). All migrations are sorted by filename, so proper numbering ensures correct execution order.
+
+**Running migrations**:
+```bash
+# Apply all migrations (package + user)
+rem db migrate
+
+# Apply with background indexes (for production)
+rem db migrate --background-indexes
+```
 
 ## License
 
````
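The migration-numbering convention added in the last hunk (package migrations 001-099 run first, user migrations 100+ after, everything sorted by filename) can be checked with a short sketch; the discovery function here is an illustrative assumption, not remdb's implementation, while the package filenames come from this release's file list:

```python
def order_migrations(package: list[str], user: list[str]) -> list[str]:
    """Merge package and user migration filenames and sort by name -
    the lexicographic sort is what makes the 001-099 / 100+ convention work."""
    return sorted(package + user)

package = ["001_install.sql", "002_install_models.sql", "004_cache_system.sql"]
user = ["101_add_indexes.sql", "100_custom_table.sql"]

plan = order_migrations(package, user)
# Package migrations come first, user migrations after, each group in numeric order.
assert plan[0] == "001_install.sql"
assert plan.index("100_custom_table.sql") > plan.index("004_cache_system.sql")
```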