@cortexmemory/cli 0.27.3 → 0.28.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/dist/commands/db.d.ts.map +1 -1
- package/dist/commands/db.js +18 -6
- package/dist/commands/db.js.map +1 -1
- package/dist/commands/deploy.d.ts.map +1 -1
- package/dist/commands/deploy.js +191 -80
- package/dist/commands/deploy.js.map +1 -1
- package/dist/commands/dev.js +3 -2
- package/dist/commands/dev.js.map +1 -1
- package/dist/commands/init.d.ts.map +1 -1
- package/dist/commands/init.js +12 -0
- package/dist/commands/init.js.map +1 -1
- package/dist/types.d.ts +1 -1
- package/dist/types.d.ts.map +1 -1
- package/dist/utils/app-template-sync.d.ts.map +1 -1
- package/dist/utils/app-template-sync.js +35 -13
- package/dist/utils/app-template-sync.js.map +1 -1
- package/dist/utils/init/quickstart-setup.d.ts.map +1 -1
- package/dist/utils/init/quickstart-setup.js.map +1 -1
- package/package.json +4 -4
- package/templates/basic/.env.local.example +23 -0
- package/templates/basic/README.md +181 -56
- package/templates/basic/package-lock.json +2180 -406
- package/templates/basic/package.json +23 -5
- package/templates/basic/src/__tests__/chat.test.ts +340 -0
- package/templates/basic/src/__tests__/cortex.test.ts +260 -0
- package/templates/basic/src/__tests__/display.test.ts +455 -0
- package/templates/basic/src/__tests__/e2e/fact-extraction.test.ts +498 -0
- package/templates/basic/src/__tests__/e2e/memory-flow.test.ts +355 -0
- package/templates/basic/src/__tests__/e2e/server-e2e.test.ts +414 -0
- package/templates/basic/src/__tests__/helpers/test-utils.ts +345 -0
- package/templates/basic/src/__tests__/integration/chat-flow.test.ts +422 -0
- package/templates/basic/src/__tests__/integration/server.test.ts +441 -0
- package/templates/basic/src/__tests__/llm.test.ts +344 -0
- package/templates/basic/src/chat.ts +300 -0
- package/templates/basic/src/cortex.ts +203 -0
- package/templates/basic/src/display.ts +425 -0
- package/templates/basic/src/index.ts +194 -64
- package/templates/basic/src/llm.ts +214 -0
- package/templates/basic/src/server.ts +280 -0
- package/templates/basic/vitest.config.ts +33 -0
- package/templates/basic/vitest.e2e.config.ts +28 -0
- package/templates/basic/vitest.integration.config.ts +25 -0
- package/templates/vercel-ai-quickstart/app/api/auth/check/route.ts +1 -1
- package/templates/vercel-ai-quickstart/app/api/auth/login/route.ts +61 -19
- package/templates/vercel-ai-quickstart/app/api/auth/register/route.ts +14 -18
- package/templates/vercel-ai-quickstart/app/api/auth/setup/route.ts +4 -7
- package/templates/vercel-ai-quickstart/app/api/chat/route.ts +95 -23
- package/templates/vercel-ai-quickstart/app/api/chat-v6/route.ts +339 -0
- package/templates/vercel-ai-quickstart/app/api/conversations/route.ts +16 -16
- package/templates/vercel-ai-quickstart/app/globals.css +24 -9
- package/templates/vercel-ai-quickstart/app/page.tsx +41 -15
- package/templates/vercel-ai-quickstart/components/AdminSetup.tsx +3 -1
- package/templates/vercel-ai-quickstart/components/AuthProvider.tsx +6 -6
- package/templates/vercel-ai-quickstart/components/ChatHistorySidebar.tsx +19 -8
- package/templates/vercel-ai-quickstart/components/ChatInterface.tsx +46 -16
- package/templates/vercel-ai-quickstart/components/LoginScreen.tsx +10 -5
- package/templates/vercel-ai-quickstart/jest.config.js +8 -1
- package/templates/vercel-ai-quickstart/lib/agents/memory-agent.ts +165 -0
- package/templates/vercel-ai-quickstart/lib/password.ts +5 -5
- package/templates/vercel-ai-quickstart/lib/versions.ts +60 -0
- package/templates/vercel-ai-quickstart/next.config.js +10 -2
- package/templates/vercel-ai-quickstart/package.json +23 -12
- package/templates/vercel-ai-quickstart/test-api.mjs +303 -0
- package/templates/vercel-ai-quickstart/tests/e2e/chat-memory-flow.test.ts +483 -0
- package/templates/vercel-ai-quickstart/tests/helpers/mock-cortex.ts +40 -40
- package/templates/vercel-ai-quickstart/tests/integration/auth.test.ts +8 -8
- package/templates/vercel-ai-quickstart/tests/integration/conversations.test.ts +12 -8
- package/templates/vercel-ai-quickstart/tests/unit/password.test.ts +4 -1
package/templates/basic/README.md

@@ -1,33 +1,129 @@
 # {{PROJECT_NAME}}
 
-AI agent with persistent memory powered by [Cortex Memory SDK](https://
+AI agent with persistent memory powered by [Cortex Memory SDK](https://cortexmemory.dev).
 
-
+This demo shows how Cortex orchestrates data through memory layers - the same logic used in the Vercel AI quickstart, but without any UI dependencies.
 
-
+## Quick Start
 
-###
-
-**Terminal 1** - Start Convex:
+### 1. Start Convex Backend
 
 ```bash
 npm run dev
 ```
 
-Leave this running. It watches for changes and keeps Convex server active.
+Leave this running. It watches for changes and keeps the Convex server active.
 
-
+### 2. Chat via CLI
 
 ```bash
 npm start
 ```
 
-
+This starts an interactive CLI where you can chat and see memory orchestration in real-time.
 
-
+### 3. Or use the HTTP API
 
 ```bash
-npm
+npm run server
+```
+
+Then send requests:
+
+```bash
+curl -X POST http://localhost:3001/chat \
+  -H "Content-Type: application/json" \
+  -d '{"message": "My name is Alex and I work at Acme Corp"}'
+```
+
+## Features
+
+### Rich Console Output
+
+Watch data flow through all memory layers in real-time:
+
+```
+You: My name is Alex and I work at Acme Corp
+
+┌────────────────────────────────────────────────────────────────────┐
+│ MEMORY ORCHESTRATION │
+├────────────────────────────────────────────────────────────────────┤
+│ 📦 Memory Space ✓ complete (2ms) │
+│    → ID: basic-demo │
+│ │
+│ 👤 User ✓ complete (5ms) │
+│    → ID: demo-user │
+│    → Name: Demo User │
+│ │
+│ 🤖 Agent ✓ complete (3ms) │
+│    → ID: basic-assistant │
+│    → Name: Cortex CLI Assistant │
+│ │
+│ 💬 Conversation ✓ complete (8ms) │
+│    → ID: conv-abc123 │
+│    → Messages: 2 │
+│ │
+│ 🎯 Vector Store ✓ complete (45ms) │
+│    → Embedded with 1536 dimensions │
+│    → Importance: 75 │
+│ │
+│ 💡 Facts ✓ complete [NEW] (120ms) │
+│    → Extracted 2 facts: │
+│      • "User's name is Alex" (identity, 95%) │
+│      • "User works at Acme Corp" (employment, 90%) │
+│ │
+│ 🕸️ Graph ○ skipped (not configured) │
+├────────────────────────────────────────────────────────────────────┤
+│ Total: 183ms │
+└────────────────────────────────────────────────────────────────────┘
+```
+
+### CLI Commands
+
+| Command | Description |
+|---------|-------------|
+| `/recall <query>` | Search memories without storing |
+| `/facts` | List all stored facts |
+| `/history` | Show conversation history |
+| `/new` | Start a new conversation |
+| `/config` | Show current configuration |
+| `/clear` | Clear the screen |
+| `/exit` | Exit the demo |
+
+### HTTP API Endpoints
+
+| Endpoint | Method | Description |
+|----------|--------|-------------|
+| `/chat` | POST | Chat and store memory |
+| `/recall` | GET | Search memories (`?query=...`) |
+| `/facts` | GET | List stored facts |
+| `/history/:id` | GET | Get conversation history |
+| `/health` | GET | Health check |
+
+## Configuration
+
+All configuration is via environment variables in `.env.local`:
+
+```env
+# Required
+CONVEX_URL=https://your-project.convex.cloud
+
+# Optional: Enable AI responses (otherwise runs in echo mode)
+OPENAI_API_KEY=sk-...
+
+# Optional: Customize identifiers
+MEMORY_SPACE_ID=basic-demo
+USER_ID=demo-user
+USER_NAME=Demo User
+AGENT_ID=basic-assistant
+AGENT_NAME=Cortex CLI Assistant
+
+# Optional: Feature flags
+CORTEX_FACT_EXTRACTION=true   # Enable automatic fact extraction
+CORTEX_GRAPH_SYNC=false       # Enable graph database sync
+
+# Optional: Debug mode
+DEBUG=true
 ```
 
 ## Project Structure

@@ -35,65 +131,94 @@ npm start
 ```
 .
 ├── src/
-│
-├──
-│   ├──
-│   ├──
-│   ├──
-│   └──
-├──
+│   ├── index.ts         # CLI entry point
+│   ├── server.ts        # HTTP server entry point
+│   ├── chat.ts          # Core chat/memory logic
+│   ├── cortex.ts        # SDK client configuration
+│   ├── llm.ts           # Optional OpenAI integration
+│   └── display.ts       # Rich console output
+├── convex/              # Cortex backend functions
+├── .env.local           # Environment configuration
+├── dev-runner.mjs       # Development helper
 └── package.json
 ```
 
-##
+## How It Works
 
-
-
-
-- **
-- **User
-- **
+1. **Recall** - Before responding, Cortex searches for relevant memories and facts
+2. **Generate** - Uses OpenAI (if configured) or echo mode to generate a response
+3. **Remember** - Stores the exchange through all memory layers:
+   - **Memory Space** - Isolated namespace
+   - **User** - User profile tracking
+   - **Agent** - Agent registration
+   - **Conversation** - Message storage
+   - **Vector** - Semantic embeddings for search
+   - **Facts** - Extracted structured information
+   - **Graph** - Entity relationships (optional)
 
-##
+## Testing
+
+The project includes comprehensive tests:
+
+```bash
+# Run all unit tests
+npm test
+
+# Run with coverage
+npm run test:coverage
+
+# Run integration tests (mocked SDK)
+npm run test:integration
+
+# Run e2e tests (requires real backend)
+npm run test:e2e
 
-
-
-For semantic search, add an embedding provider:
-
-```typescript
-import OpenAI from "openai";
-
-const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
-
-await cortex.memory.remember({
-  memorySpaceId: "my-agent",
-  conversationId: "conv-1",
-  userMessage: "message",
-  agentResponse: "response",
-  userId: "user-1",
-  userName: "User",
-  generateEmbedding: async (text) => {
-    const result = await openai.embeddings.create({
-      model: "text-embedding-3-small",
-      input: text,
-    });
-    return result.data[0].embedding;
-  },
-});
+# Run all tests
+npm run test:all
 ```
 
+### E2E Test Requirements
+
+E2E tests require additional setup:
+
+1. **Memory flow tests** - Need `CONVEX_URL` pointing to a deployed Cortex backend
+2. **Fact extraction tests** - Also need `OPENAI_API_KEY` for LLM-powered extraction
+3. **Server tests** - Need the HTTP server running (`npm run server`)
+
+```bash
+# Run memory flow e2e tests
+CONVEX_URL=https://your-project.convex.cloud npm run test:e2e
+
+# Run fact extraction tests (requires OpenAI)
+CONVEX_URL=https://your-project.convex.cloud \
+OPENAI_API_KEY=sk-... \
+npm run test:e2e
+
+# Run server e2e tests
+# Terminal 1: Start server
+CONVEX_URL=https://your-project.convex.cloud npm run server
+# Terminal 2: Run tests
+npm run test:e2e
+```
+
+## Next Steps
+
+### Enable AI Responses
+
+Set `OPENAI_API_KEY` in `.env.local` for real AI-powered responses instead of echo mode.
+
 ### Enable Graph Database
 
-For
+For entity relationship queries:
 
 1. Start Neo4j: `docker-compose -f docker-compose.graph.yml up -d`
-2.
+2. Set `CORTEX_GRAPH_SYNC=true` in `.env.local`
 
-###
+### Explore the API
 
-- [Documentation](https://
-- [API Reference](https://
-- [
+- [Cortex Documentation](https://cortexmemory.dev/docs)
+- [API Reference](https://cortexmemory.dev/docs/api-reference)
+- [GitHub Repository](https://github.com/SaintNick1214/Project-Cortex)
 
 ## Support
 