remdb 0.3.114__py3-none-any.whl → 0.3.172__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release.

This version of remdb might be problematic.
Files changed (83)
  1. rem/agentic/agents/__init__.py +16 -0
  2. rem/agentic/agents/agent_manager.py +311 -0
  3. rem/agentic/agents/sse_simulator.py +2 -0
  4. rem/agentic/context.py +103 -5
  5. rem/agentic/context_builder.py +36 -9
  6. rem/agentic/mcp/tool_wrapper.py +161 -18
  7. rem/agentic/otel/setup.py +1 -0
  8. rem/agentic/providers/phoenix.py +371 -108
  9. rem/agentic/providers/pydantic_ai.py +172 -30
  10. rem/agentic/schema.py +8 -4
  11. rem/api/deps.py +3 -5
  12. rem/api/main.py +26 -4
  13. rem/api/mcp_router/resources.py +15 -10
  14. rem/api/mcp_router/server.py +11 -3
  15. rem/api/mcp_router/tools.py +418 -4
  16. rem/api/middleware/tracking.py +5 -5
  17. rem/api/routers/admin.py +218 -1
  18. rem/api/routers/auth.py +349 -6
  19. rem/api/routers/chat/completions.py +255 -7
  20. rem/api/routers/chat/models.py +81 -7
  21. rem/api/routers/chat/otel_utils.py +33 -0
  22. rem/api/routers/chat/sse_events.py +17 -1
  23. rem/api/routers/chat/streaming.py +126 -19
  24. rem/api/routers/feedback.py +134 -14
  25. rem/api/routers/messages.py +24 -15
  26. rem/api/routers/query.py +6 -3
  27. rem/auth/__init__.py +13 -3
  28. rem/auth/jwt.py +352 -0
  29. rem/auth/middleware.py +115 -10
  30. rem/auth/providers/__init__.py +4 -1
  31. rem/auth/providers/email.py +215 -0
  32. rem/cli/commands/README.md +42 -0
  33. rem/cli/commands/cluster.py +617 -168
  34. rem/cli/commands/configure.py +4 -7
  35. rem/cli/commands/db.py +66 -22
  36. rem/cli/commands/experiments.py +468 -76
  37. rem/cli/commands/schema.py +6 -5
  38. rem/cli/commands/session.py +336 -0
  39. rem/cli/dreaming.py +2 -2
  40. rem/cli/main.py +2 -0
  41. rem/config.py +8 -1
  42. rem/models/core/experiment.py +58 -14
  43. rem/models/entities/__init__.py +4 -0
  44. rem/models/entities/ontology.py +1 -1
  45. rem/models/entities/ontology_config.py +1 -1
  46. rem/models/entities/subscriber.py +175 -0
  47. rem/models/entities/user.py +1 -0
  48. rem/schemas/agents/core/agent-builder.yaml +235 -0
  49. rem/schemas/agents/examples/contract-analyzer.yaml +1 -1
  50. rem/schemas/agents/examples/contract-extractor.yaml +1 -1
  51. rem/schemas/agents/examples/cv-parser.yaml +1 -1
  52. rem/services/__init__.py +3 -1
  53. rem/services/content/service.py +4 -3
  54. rem/services/email/__init__.py +10 -0
  55. rem/services/email/service.py +513 -0
  56. rem/services/email/templates.py +360 -0
  57. rem/services/phoenix/client.py +59 -18
  58. rem/services/postgres/README.md +38 -0
  59. rem/services/postgres/diff_service.py +127 -6
  60. rem/services/postgres/pydantic_to_sqlalchemy.py +45 -13
  61. rem/services/postgres/repository.py +5 -4
  62. rem/services/postgres/schema_generator.py +205 -4
  63. rem/services/session/compression.py +120 -50
  64. rem/services/session/reload.py +14 -7
  65. rem/services/user_service.py +41 -9
  66. rem/settings.py +442 -23
  67. rem/sql/migrations/001_install.sql +156 -0
  68. rem/sql/migrations/002_install_models.sql +1951 -88
  69. rem/sql/migrations/004_cache_system.sql +548 -0
  70. rem/sql/migrations/005_schema_update.sql +145 -0
  71. rem/utils/README.md +45 -0
  72. rem/utils/__init__.py +18 -0
  73. rem/utils/files.py +157 -1
  74. rem/utils/schema_loader.py +139 -10
  75. rem/utils/sql_paths.py +146 -0
  76. rem/utils/vision.py +1 -1
  77. rem/workers/__init__.py +3 -1
  78. rem/workers/db_listener.py +579 -0
  79. rem/workers/unlogged_maintainer.py +463 -0
  80. {remdb-0.3.114.dist-info → remdb-0.3.172.dist-info}/METADATA +218 -180
  81. {remdb-0.3.114.dist-info → remdb-0.3.172.dist-info}/RECORD +83 -68
  82. {remdb-0.3.114.dist-info → remdb-0.3.172.dist-info}/WHEEL +0 -0
  83. {remdb-0.3.114.dist-info → remdb-0.3.172.dist-info}/entry_points.txt +0 -0
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: remdb
- Version: 0.3.114
+ Version: 0.3.172
  Summary: Resources Entities Moments - Bio-inspired memory system for agentic AI workloads
  Project-URL: Homepage, https://github.com/Percolation-Labs/reminiscent
  Project-URL: Documentation, https://github.com/Percolation-Labs/reminiscent/blob/main/README.md
@@ -12,9 +12,11 @@ Keywords: agents,ai,mcp,memory,postgresql,vector-search
  Classifier: Development Status :: 3 - Alpha
  Classifier: Intended Audience :: Developers
  Classifier: License :: OSI Approved :: MIT License
+ Classifier: Programming Language :: Python :: 3.11
  Classifier: Programming Language :: Python :: 3.12
+ Classifier: Programming Language :: Python :: 3.13
  Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
- Requires-Python: <3.13,>=3.12
+ Requires-Python: <3.14,>=3.11
  Requires-Dist: aioboto3>=13.0.0
  Requires-Dist: arize-phoenix>=5.0.0
  Requires-Dist: asyncpg>=0.30.0
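The `Requires-Python` bump above widens support from 3.12-only to 3.11 through 3.13. A stdlib-only sketch (not part of the package) of checking an interpreter against the new range:

```python
import sys

# New wheel constraint: >=3.11,<3.14 (the old wheel required >=3.12,<3.13)
def supported(version: tuple[int, int]) -> bool:
    """Return True if a (major, minor) Python version satisfies the new range."""
    return (3, 11) <= version < (3, 14)

print(supported(sys.version_info[:2]))
```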
@@ -123,9 +125,8 @@ Cloud-native unified memory infrastructure for agentic AI systems built with Pyd

  Choose your path:

- - **Option 1: Package Users with Example Data** (Recommended for first-time users) - PyPI + example datasets
- - **Option 2: Package Users** (Recommended for non-developers) - PyPI package + dockerized database
- - **Option 3: Developers** - Clone repo, local development with uv
+ - **Option 1: Package Users with Example Data** (Recommended) - PyPI + example datasets
+ - **Option 2: Developers** - Clone repo, local development with uv

  ---

@@ -144,34 +145,26 @@ pip install "remdb[all]"
  git clone https://github.com/Percolation-Labs/remstack-lab.git
  cd remstack-lab

- # Optional: Set default LLM provider via environment variable
- # export LLM__DEFAULT_MODEL="openai:gpt-4.1-nano"                   # Fast and cheap
- # export LLM__DEFAULT_MODEL="anthropic:claude-sonnet-4-5-20250929"  # High quality (default)
-
- # Start PostgreSQL with docker-compose
+ # Start services (PostgreSQL, Phoenix observability)
  curl -O https://gist.githubusercontent.com/percolating-sirsh/d117b673bc0edfdef1a5068ccd3cf3e5/raw/docker-compose.prebuilt.yml
- docker compose -f docker-compose.prebuilt.yml up -d postgres
+ docker compose -f docker-compose.prebuilt.yml up -d

  # Configure REM (creates ~/.rem/config.yaml and installs database schema)
  # Add --claude-desktop to register with Claude Desktop app
  rem configure --install --claude-desktop

- # Load quickstart dataset (uses default user)
+ # Load quickstart dataset
  rem db load datasets/quickstart/sample_data.yaml

  # Ask questions
  rem ask "What documents exist in the system?"
  rem ask "Show me meetings about API design"

- # Ingest files (PDF, DOCX, images, etc.) - note: requires remstack-lab
+ # Ingest files (PDF, DOCX, images, etc.)
  rem process ingest datasets/formats/files/bitcoin_whitepaper.pdf --category research --tags bitcoin,whitepaper

  # Query ingested content
  rem ask "What is the Bitcoin whitepaper about?"
-
- # Try other datasets (use --user-id for multi-tenant scenarios)
- rem db load datasets/domains/recruitment/scenarios/candidate_pipeline/data.yaml --user-id acme-corp
- rem ask --user-id acme-corp "Show me candidates with Python experience"
  ```

  **What you get:**
@@ -181,130 +174,39 @@ rem ask --user-id acme-corp "Show me candidates with Python experience"

  **Learn more**: [remstack-lab repository](https://github.com/Percolation-Labs/remstack-lab)

- ---
-
- ## Option 2: Package Users (No Example Data)
+ ### Using the API

- **Best for**: Using REM as a service (API + CLI) without modifying code, bringing your own data.
-
- ### Step 1: Start Database and API with Docker Compose
+ Once configured, you can also use the OpenAI-compatible chat completions API:

  ```bash
- # Create a project directory
- mkdir my-rem-project && cd my-rem-project
-
- # Download docker-compose file from public gist
- curl -O https://gist.githubusercontent.com/percolating-sirsh/d117b673bc0edfdef1a5068ccd3cf3e5/raw/docker-compose.prebuilt.yml
-
- # IMPORTANT: Export API keys BEFORE running docker compose
- # Docker Compose reads env vars at startup - exporting them after won't work!
-
- # Required: OpenAI for embeddings (text-embedding-3-small)
- export OPENAI_API_KEY="sk-..."
-
- # Recommended: At least one chat completion provider
- export ANTHROPIC_API_KEY="sk-ant-..."   # Claude Sonnet 4.5 (high quality)
- export CEREBRAS_API_KEY="csk-..."       # Cerebras (fast, cheap inference)
-
- # Start PostgreSQL + API
+ # Start all services (PostgreSQL, Phoenix, API)
  docker compose -f docker-compose.prebuilt.yml up -d

- # Verify services are running
- curl http://localhost:8000/health
- ```
-
- This starts:
- - **PostgreSQL** with pgvector on port **5051** (connection: `postgresql://rem:rem@localhost:5051/rem`)
- - **REM API** on port **8000** with OpenAI-compatible chat completions + MCP server
- - Uses pre-built Docker image from Docker Hub (no local build required)
-
- ### Step 2: Install and Configure CLI (REQUIRED)
-
- **This step is required** before you can use REM - it installs the database schema and configures your LLM API keys.
-
- ```bash
- # Install remdb package from PyPI
- pip install remdb[all]
-
- # Configure REM (defaults to port 5051 for package users)
- rem configure --install --claude-desktop
+ # Test the API
+ curl -X POST http://localhost:8000/api/v1/chat/completions \
+   -H "Content-Type: application/json" \
+   -H "X-Session-Id: a1b2c3d4-e5f6-7890-abcd-ef1234567890" \
+   -d '{
+     "model": "anthropic:claude-sonnet-4-5-20250929",
+     "messages": [{"role": "user", "content": "What documents did Sarah Chen author?"}],
+     "stream": false
+   }'
  ```

- The interactive wizard will:
- 1. **Configure PostgreSQL**: Defaults to `postgresql://rem:rem@localhost:5051/rem` (prebuilt docker-compose)
-    - Just press Enter to accept defaults
-    - Custom database: Enter your own host/port/credentials
- 2. **Configure LLM providers**: Enter your OpenAI/Anthropic API keys
- 3. **Install database tables**: Creates schema, functions, indexes (**required for CLI/API to work**)
- 4. **Register with Claude Desktop**: Adds REM MCP server to Claude
-
- Configuration saved to `~/.rem/config.yaml` (can edit with `rem configure --edit`)
-
  **Port Guide:**
  - **5051**: Package users with `docker-compose.prebuilt.yml` (pre-built image)
  - **5050**: Developers with `docker-compose.yml` (local build)
- - **Custom**: Your own PostgreSQL database

  **Next Steps:**
  - See [CLI Reference](#cli-reference) for all available commands
  - See [REM Query Dialect](#rem-query-dialect) for query examples
  - See [API Endpoints](#api-endpoints) for OpenAI-compatible API usage

- ### Step 3: Load Sample Data (Optional but Recommended)
-
- **Option A: Clone example datasets** (Recommended - works with all README examples)
-
- ```bash
- # Clone datasets repository
- git clone https://github.com/Percolation-Labs/remstack-lab.git
-
- # Load quickstart dataset (uses default user)
- rem db load --file remstack-lab/datasets/quickstart/sample_data.yaml
-
- # Test with sample queries
- rem ask "What documents exist in the system?"
- rem ask "Show me meetings about API design"
- rem ask "Who is Sarah Chen?"
-
- # Try domain-specific datasets (use --user-id for multi-tenant scenarios)
- rem db load --file remstack-lab/datasets/domains/recruitment/scenarios/candidate_pipeline/data.yaml --user-id acme-corp
- rem ask --user-id acme-corp "Show me candidates with Python experience"
- ```
-
- **Option B: Bring your own data**
-
- ```bash
- # Ingest your own files (uses default user)
- echo "REM is a bio-inspired memory system for agentic AI workloads." > test-doc.txt
- rem process ingest test-doc.txt --category documentation --tags rem,ai
-
- # Query your ingested data
- rem ask "What do you know about REM from my knowledge base?"
- ```
-
- ### Step 4: Test the API
-
- ```bash
- # Test the OpenAI-compatible chat completions API
- curl -X POST http://localhost:8000/api/v1/chat/completions \
-   -H "Content-Type: application/json" \
-   -H "X-User-Id: demo-user" \
-   -d '{
-     "model": "anthropic:claude-sonnet-4-5-20250929",
-     "messages": [{"role": "user", "content": "What documents did Sarah Chen author?"}],
-     "stream": false
-   }'
- ```
-
- **Available Commands:**
- - `rem ask` - Natural language queries to REM
- - `rem process ingest <file>` - Full ingestion pipeline (storage + parsing + embedding + database)
- - `rem process uri <file>` - READ-ONLY parsing (no database storage, useful for testing parsers)
- - `rem db load --file <yaml>` - Load structured datasets directly
+ ---

  ## Example Datasets

- 🎯 **Recommended**: Clone [remstack-lab](https://github.com/Percolation-Labs/remstack-lab) for curated datasets organized by domain and format.
+ Clone [remstack-lab](https://github.com/Percolation-Labs/remstack-lab) for curated datasets organized by domain and format.

  **What's included:**
  - **Quickstart**: Minimal dataset (3 users, 3 resources, 3 moments) - perfect for first-time users
@@ -316,14 +218,11 @@ curl -X POST http://localhost:8000/api/v1/chat/completions \
  ```bash
  cd remstack-lab

- # Load any dataset (uses default user)
+ # Load any dataset
  rem db load --file datasets/quickstart/sample_data.yaml

  # Explore formats
  rem db load --file datasets/formats/engrams/scenarios/team_meeting/team_standup_meeting.yaml
-
- # Try domain-specific examples (use --user-id for multi-tenant scenarios)
- rem db load --file datasets/domains/recruitment/scenarios/candidate_pipeline/data.yaml --user-id acme-corp
  ```

  ## See Also
@@ -434,7 +333,7 @@ rem ask research-assistant "Find documents about machine learning architecture"
  rem ask research-assistant "Summarize recent API design documents" --stream

  # With session continuity
- rem ask research-assistant "What did we discuss about ML?" --session-id abc-123
+ rem ask research-assistant "What did we discuss about ML?" --session-id c3d4e5f6-a7b8-9012-cdef-345678901234
  ```

  ### Agent Schema Structure
@@ -477,29 +376,16 @@ REM provides **4 built-in MCP tools** your agents can use:

  ### Multi-User Isolation

- Custom agents are **scoped by `user_id`**, ensuring complete data isolation:
+ For multi-tenant deployments, custom agents are **scoped by `user_id`**, ensuring complete data isolation. Use `--user-id` flag when you need tenant separation:

  ```bash
- # User A creates a custom agent
- rem process ingest my-agent.yaml --user-id user-a --category agents
+ # Create agent for specific tenant
+ rem process ingest my-agent.yaml --user-id tenant-a --category agents

- # User B cannot see User A's agent
- rem ask my-agent "test" --user-id user-b
- # ❌ Error: Schema not found (LOOKUP returns no results for user-b)
-
- # User A can use their agent
- rem ask my-agent "test" --user-id user-a
- # ✅ Works - LOOKUP finds schema for user-a
+ # Query with tenant context
+ rem ask my-agent "test" --user-id tenant-a
  ```

- ### Advanced: Ontology Extractors
-
- Custom agents can also be used as **ontology extractors** to extract structured knowledge from files. See [CLAUDE.md](../CLAUDE.md#ontology-extraction-pattern) for details on:
- - Multi-provider testing (`provider_configs`)
- - Semantic search configuration (`embedding_fields`)
- - File matching rules (`OntologyConfig`)
- - Dreaming workflow integration
-
  ### Troubleshooting

  **Schema not found error:**
@@ -717,8 +603,8 @@ POST /api/v1/chat/completions
  ```

  **Headers**:
- - `X-Tenant-Id`: Tenant identifier (required for REM)
- - `X-User-Id`: User identifier
+ - `X-User-Id`: User identifier (required for data isolation, uses default if not provided)
+ - `X-Tenant-Id`: Deprecated - use `X-User-Id` instead (kept for backwards compatibility)
  - `X-Session-Id`: Session/conversation identifier
  - `X-Agent-Schema`: Agent schema URI to use
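The headers above drive the OpenAI-compatible endpoint shown throughout this README. As a hedged stdlib sketch (base URL and header semantics taken from this document; this is not REM client code), the same request the curl examples make can be assembled in Python:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000/api/v1"  # local deployment from the quickstart

def build_chat_request(prompt: str, user_id: str, session_id: str) -> urllib.request.Request:
    """Assemble an OpenAI-compatible chat completions request with REM's headers."""
    payload = {
        "model": "anthropic:claude-sonnet-4-5-20250929",
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-User-Id": user_id,        # data isolation
            "X-Session-Id": session_id,  # conversation continuity
        },
        method="POST",
    )

req = build_chat_request(
    "What documents exist?", "demo-user", "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
)
# urllib.request.urlopen(req) sends it once the stack is running
```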
@@ -889,9 +775,15 @@ This generates:
  Compare Pydantic models against the live database using Alembic autogenerate.

  ```bash
- # Show differences
+ # Show additive changes only (default, safe for production)
  rem db diff

+ # Show all changes including drops
+ rem db diff --strategy full
+
+ # Show additive + safe type widenings
+ rem db diff --strategy safe
+
  # CI mode: exit 1 if drift detected
  rem db diff --check

@@ -899,9 +791,16 @@ rem db diff --check
  rem db diff --generate
  ```

+ **Migration Strategies:**
+ | Strategy | Description |
+ |----------|-------------|
+ | `additive` | Only ADD columns/tables/indexes (safe, no data loss) - **default** |
+ | `full` | All changes including DROPs (use with caution) |
+ | `safe` | Additive + safe column type widenings (e.g., VARCHAR(50) → VARCHAR(256)) |
+
  **Output shows:**
  - `+ ADD COLUMN` - Column in model but not in DB
- - `- DROP COLUMN` - Column in DB but not in model
+ - `- DROP COLUMN` - Column in DB but not in model (only with `--strategy full`)
  - `~ ALTER COLUMN` - Column type or constraints differ
  - `+ CREATE TABLE` / `- DROP TABLE` - Table additions/removals
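The strategy table added in this release can be read as a filter over schema-diff operations. A toy sketch of that rule (illustrative only; REM's actual Alembic-based implementation is not shown in this diff, and the op representation here is invented):

```python
# Operation kinds a purely additive migration may emit
ADDITIVE_OPS = {"add_column", "add_table", "add_index"}

def filter_ops(ops: list[dict], strategy: str = "additive") -> list[dict]:
    """Keep only the operations the given strategy would apply."""
    if strategy == "full":
        return ops  # everything, including drops
    kept = [op for op in ops if op["kind"] in ADDITIVE_OPS]
    if strategy == "safe":
        # additionally allow type changes that only widen a column,
        # e.g. VARCHAR(50) -> VARCHAR(256)
        kept += [
            op for op in ops
            if op["kind"] == "alter_column" and op.get("widens", False)
        ]
    return kept

ops = [
    {"kind": "add_column", "name": "email"},
    {"kind": "drop_column", "name": "legacy"},
    {"kind": "alter_column", "name": "label", "widens": True},
]
print([op["name"] for op in filter_ops(ops, "additive")])  # ['email']
print([op["name"] for op in filter_ops(ops, "safe")])      # ['email', 'label']
```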
 
@@ -1187,14 +1086,11 @@ Test Pydantic AI agent with natural language queries.
  # Ask a question
  rem ask "What documents did Sarah Chen author?"

- # With context headers
- rem ask "Find all resources about API design" \
-   --user-id user-123 \
-   --tenant-id acme-corp
-
  # Use specific agent schema
- rem ask "Analyze this contract" \
-   --agent-schema contract-analyzer-v1
+ rem ask contract-analyzer "Analyze this contract"
+
+ # Stream response
+ rem ask "Find all resources about API design" --stream
  ```

  ### Global Options
@@ -1242,7 +1138,7 @@ export API__RELOAD=true
  rem serve
  ```

- ## Development (For Contributors)
+ ## Option 2: Development (For Contributors)

  **Best for**: Contributing to REM or customizing the codebase.

@@ -1538,45 +1434,156 @@ Successfully installed ... kreuzberg-4.0.0rc1 ... remdb-0.3.10

  REM wraps FastAPI - extend it exactly as you would any FastAPI app.

+ ### Recommended Project Structure
+
+ REM auto-detects `./agents/` and `./models/` folders - no configuration needed:
+
+ ```
+ my-rem-app/
+ ├── agents/                # Auto-detected for agent schemas
+ │   ├── my-agent.yaml      # Custom agent (rem ask my-agent "query")
+ │   └── another-agent.yaml
+ ├── models/                # Auto-detected if __init__.py exists
+ │   └── __init__.py        # Register models with @rem.register_model
+ ├── routers/               # Custom FastAPI routers
+ │   └── custom.py
+ ├── main.py                # Entry point
+ └── pyproject.toml
+ ```
+
+ ### Quick Start
+
  ```python
- import rem
+ # main.py
  from rem import create_app
- from rem.models.core import CoreModel
+ from fastapi import APIRouter

- # 1. Register models (for schema generation)
- rem.register_models(MyModel, AnotherModel)
+ # Create REM app (auto-detects ./agents/ and ./models/)
+ app = create_app()

- # 2. Register schema paths (for custom agents/evaluators)
- rem.register_schema_path("./schemas")
+ # Add custom router
+ router = APIRouter(prefix="/custom", tags=["custom"])

- # 3. Create app
- app = create_app()
+ @router.get("/hello")
+ async def hello():
+     return {"message": "Hello from custom router!"}

- # 4. Extend like normal FastAPI
- app.include_router(my_router)
+ app.include_router(router)

+ # Add custom MCP tool
  @app.mcp_server.tool()
  async def my_tool(query: str) -> dict:
-     """Custom MCP tool."""
+     """Custom MCP tool available to agents."""
      return {"result": query}
  ```

- ### Project Structure
+ ### Custom Models (Auto-Detected)
+
+ ```python
+ # models/__init__.py
+ import rem
+ from rem.models.core import CoreModel
+ from pydantic import Field

+ @rem.register_model
+ class MyEntity(CoreModel):
+     """Custom entity - auto-registered for schema generation."""
+     name: str = Field(description="Entity name")
+     status: str = Field(default="active")
  ```
- my-rem-app/
- ├── my_app/
- │   ├── main.py      # Entry point (create_app + extensions)
- │   ├── models.py    # Custom models (inherit CoreModel)
- │   └── routers/     # Custom FastAPI routers
- ├── schemas/
- │   ├── agents/      # Custom agent YAML schemas
- │   └── evaluators/  # Custom evaluator schemas
- ├── sql/migrations/  # Custom SQL migrations
- └── pyproject.toml
+
+ Run `rem db schema generate` to include your models in the database schema.
+
+ ### Custom Agents (Auto-Detected)
+
+ ```yaml
+ # agents/my-agent.yaml
+ type: object
+ description: |
+   You are a helpful assistant that...
+
+ properties:
+   answer:
+     type: string
+     description: Your response
+
+ required:
+   - answer
+
+ json_schema_extra:
+   kind: agent
+   name: my-agent
+   version: "1.0.0"
+   tools:
+     - search_rem
+ ```
+
+ Test with: `rem ask my-agent "Hello!"`
+
+ ### Example Custom Router
+
+ ```python
+ # routers/analytics.py
+ from fastapi import APIRouter, Depends
+ from rem.services.postgres import get_postgres_service
+
+ router = APIRouter(prefix="/analytics", tags=["analytics"])
+
+ @router.get("/stats")
+ async def get_stats():
+     """Get database statistics."""
+     db = get_postgres_service()
+     if not db:
+         return {"error": "Database not available"}
+
+     await db.connect()
+     try:
+         result = await db.execute(
+             "SELECT COUNT(*) as count FROM resources"
+         )
+         return {"resource_count": result[0]["count"]}
+     finally:
+         await db.disconnect()
+
+ @router.get("/recent")
+ async def get_recent(limit: int = 10):
+     """Get recent resources."""
+     db = get_postgres_service()
+     if not db:
+         return {"error": "Database not available"}
+
+     await db.connect()
+     try:
+         result = await db.execute(
+             f"SELECT label, category, created_at FROM resources ORDER BY created_at DESC LIMIT {limit}"
+         )
+         return {"resources": result}
+     finally:
+         await db.disconnect()
+ ```
+
+ Include in main.py:
+
+ ```python
+ from routers.analytics import router as analytics_router
+ app.include_router(analytics_router)
  ```

- Generate this structure with: `rem scaffold my-app`
+ ### Running the App
+
+ ```bash
+ # Development (auto-reload)
+ uv run uvicorn main:app --reload --port 8000
+
+ # Or use rem serve
+ uv run rem serve --reload
+
+ # Test agent
+ uv run rem ask my-agent "What can you help me with?"
+
+ # Test custom endpoint
+ curl http://localhost:8000/analytics/stats
+ ```

  ### Extension Points

@@ -1588,6 +1595,37 @@ Generate this structure with: `rem scaffold my-app`
  | **MCP Prompts** | `@app.mcp_server.prompt()` or `app.mcp_server.add_prompt(fn)` |
  | **Models** | `rem.register_models(Model)` then `rem db schema generate` |
  | **Agent Schemas** | `rem.register_schema_path("./schemas")` or `SCHEMA__PATHS` env var |
+ | **SQL Migrations** | Place in `sql/migrations/` (auto-detected) |
+
+ ### Custom Migrations
+
+ REM automatically discovers migrations from two sources:
+
+ 1. **Package migrations** (001-099): Built-in migrations from the `remdb` package
+ 2. **User migrations** (100+): Your custom migrations in `./sql/migrations/`
+
+ **Convention**: Place custom SQL files in `sql/migrations/` relative to your project root:
+
+ ```
+ my-rem-app/
+ ├── sql/
+ │   └── migrations/
+ │       ├── 100_custom_table.sql      # Runs after package migrations
+ │       ├── 101_add_indexes.sql
+ │       └── 102_custom_functions.sql
+ └── ...
+ ```
+
+ **Numbering**: Use 100+ for user migrations to ensure they run after package migrations (001-099). All migrations are sorted by filename, so proper numbering ensures correct execution order.
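The numbering convention above works because execution order is a plain filename sort over both sources. A tiny sketch (hypothetical `plan_migrations` helper, not a remdb API) using migration names that appear in this release:

```python
# Merge package and user migrations and sort lexicographically; zero-padded
# numbering alone then guarantees 001-099 run before 100+.
def plan_migrations(package_files: list[str], user_files: list[str]) -> list[str]:
    """Return the execution order implied by the filename-sort convention."""
    return sorted(package_files + user_files)

package = [
    "001_install.sql",
    "002_install_models.sql",
    "004_cache_system.sql",
    "005_schema_update.sql",
]
user = ["101_add_indexes.sql", "100_custom_table.sql", "102_custom_functions.sql"]

order = plan_migrations(package, user)
print(order)
```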
+
+ **Running migrations**:
+ ```bash
+ # Apply all migrations (package + user)
+ rem db migrate
+
+ # Apply with background indexes (for production)
+ rem db migrate --background-indexes
+ ```

  ## License