agentic-codememory 0.1.5__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- agentic_codememory-0.1.5/.claude/handoffs/2026-03-20-new-project-init.md +190 -0
- agentic_codememory-0.1.5/.claude/settings.local.json +11 -0
- agentic_codememory-0.1.5/.env.example +51 -0
- agentic_codememory-0.1.5/.github/workflows/ci.yml +110 -0
- agentic_codememory-0.1.5/.gitignore +12 -0
- agentic_codememory-0.1.5/.planning/PROJECT.md +118 -0
- agentic_codememory-0.1.5/.planning/ROADMAP.md +188 -0
- agentic_codememory-0.1.5/.planning/STATE.md +60 -0
- agentic_codememory-0.1.5/.planning/codebase/ARCHITECTURE.md +261 -0
- agentic_codememory-0.1.5/.planning/codebase/CONCERNS.md +245 -0
- agentic_codememory-0.1.5/.planning/codebase/CONVENTIONS.md +225 -0
- agentic_codememory-0.1.5/.planning/codebase/INTEGRATIONS.md +160 -0
- agentic_codememory-0.1.5/.planning/codebase/STACK.md +135 -0
- agentic_codememory-0.1.5/.planning/codebase/STRUCTURE.md +200 -0
- agentic_codememory-0.1.5/.planning/codebase/TESTING.md +360 -0
- agentic_codememory-0.1.5/.planning/config.json +13 -0
- agentic_codememory-0.1.5/.planning/research/ARCHITECTURE.md +328 -0
- agentic_codememory-0.1.5/.planning/research/FEATURES.md +207 -0
- agentic_codememory-0.1.5/.planning/research/PITFALLS.md +438 -0
- agentic_codememory-0.1.5/.planning/research/STACK.md +276 -0
- agentic_codememory-0.1.5/.planning/research/SUMMARY.md +190 -0
- agentic_codememory-0.1.5/.pre-commit-config.yaml +29 -0
- agentic_codememory-0.1.5/4-stage-ingestion-with-prep.md +18 -0
- agentic_codememory-0.1.5/4_pass_ingestion_with_prep_hybridgraphRAG.py +749 -0
- agentic_codememory-0.1.5/5_continuous_ingestion.py +776 -0
- agentic_codememory-0.1.5/5_continuous_ingestion_jina.py +779 -0
- agentic_codememory-0.1.5/AGENTS.md +375 -0
- agentic_codememory-0.1.5/CONTRIBUTING.md +772 -0
- agentic_codememory-0.1.5/DOCUMENTATION_SUMMARY.md +206 -0
- agentic_codememory-0.1.5/Dockerfile +29 -0
- agentic_codememory-0.1.5/GIT-INTEGRATION-SPEC.md +231 -0
- agentic_codememory-0.1.5/GRAPHRAG_README.md +78 -0
- agentic_codememory-0.1.5/LICENSE +21 -0
- agentic_codememory-0.1.5/PKG-INFO +343 -0
- agentic_codememory-0.1.5/README.md +316 -0
- agentic_codememory-0.1.5/SPEC.md +115 -0
- agentic_codememory-0.1.5/TODO.md +400 -0
- agentic_codememory-0.1.5/debug_extraction.py +95 -0
- agentic_codememory-0.1.5/docker-compose.yml +90 -0
- agentic_codememory-0.1.5/docs/API.md +1150 -0
- agentic_codememory-0.1.5/docs/ARCHITECTURE.md +1060 -0
- agentic_codememory-0.1.5/docs/FIELD_TEST_RESULTS_2026-02-24.md +161 -0
- agentic_codememory-0.1.5/docs/FIELD_TEST_TEMPLATE.md +99 -0
- agentic_codememory-0.1.5/docs/GIT_GRAPH.md +213 -0
- agentic_codememory-0.1.5/docs/INSTALLATION.md +668 -0
- agentic_codememory-0.1.5/docs/MCP_INTEGRATION.md +859 -0
- agentic_codememory-0.1.5/docs/NEO4J_BROWSER_VISUALIZATION.md +84 -0
- agentic_codememory-0.1.5/docs/TOOL_USE_ANNOTATION.md +95 -0
- agentic_codememory-0.1.5/docs/TROUBLESHOOTING.md +432 -0
- agentic_codememory-0.1.5/docs/evaluation-decision.md +41 -0
- agentic_codememory-0.1.5/docs/skill-adapter-security.md +127 -0
- agentic_codememory-0.1.5/docs/skill-adapter-workflows.md +132 -0
- agentic_codememory-0.1.5/evaluation/README.md +41 -0
- agentic_codememory-0.1.5/evaluation/__init__.py +2 -0
- agentic_codememory-0.1.5/evaluation/results/.gitkeep +1 -0
- agentic_codememory-0.1.5/evaluation/schemas/benchmark_results.schema.json +270 -0
- agentic_codememory-0.1.5/evaluation/scripts/create_run_scaffold.py +214 -0
- agentic_codememory-0.1.5/evaluation/scripts/summarize_results.py +194 -0
- agentic_codememory-0.1.5/evaluation/skills/skill-adapter-workflow.md +35 -0
- agentic_codememory-0.1.5/evaluation/tasks/benchmark_tasks.json +191 -0
- agentic_codememory-0.1.5/evaluation/templates/decision_memo_template.md +46 -0
- agentic_codememory-0.1.5/examples/README.md +127 -0
- agentic_codememory-0.1.5/examples/basic_usage.md +527 -0
- agentic_codememory-0.1.5/examples/docker_setup.md +786 -0
- agentic_codememory-0.1.5/examples/mcp_prompt_examples.md +588 -0
- agentic_codememory-0.1.5/graphrag_requirements.txt +21 -0
- agentic_codememory-0.1.5/pyproject.toml +138 -0
- agentic_codememory-0.1.5/requirements.txt +8 -0
- agentic_codememory-0.1.5/skills/agentic-memory-adapter/SKILL.md +87 -0
- agentic_codememory-0.1.5/skills/agentic-memory-adapter/scripts/health_check.sh +185 -0
- agentic_codememory-0.1.5/skills/agentic-memory-adapter/scripts/run_codememory.sh +201 -0
- agentic_codememory-0.1.5/src/codememory/__init__.py +0 -0
- agentic_codememory-0.1.5/src/codememory/cli.py +1242 -0
- agentic_codememory-0.1.5/src/codememory/config.py +237 -0
- agentic_codememory-0.1.5/src/codememory/docker/docker-compose.yml +48 -0
- agentic_codememory-0.1.5/src/codememory/ingestion/__init__.py +13 -0
- agentic_codememory-0.1.5/src/codememory/ingestion/git_graph.py +488 -0
- agentic_codememory-0.1.5/src/codememory/ingestion/graph.py +1423 -0
- agentic_codememory-0.1.5/src/codememory/ingestion/parser.py +254 -0
- agentic_codememory-0.1.5/src/codememory/ingestion/watcher.py +424 -0
- agentic_codememory-0.1.5/src/codememory/server/__init__.py +1 -0
- agentic_codememory-0.1.5/src/codememory/server/app.py +750 -0
- agentic_codememory-0.1.5/src/codememory/server/tools.py +104 -0
- agentic_codememory-0.1.5/src/codememory/telemetry.py +307 -0
- agentic_codememory-0.1.5/tests/__init__.py +1 -0
- agentic_codememory-0.1.5/tests/conftest.py +16 -0
- agentic_codememory-0.1.5/tests/test_cli.py +694 -0
- agentic_codememory-0.1.5/tests/test_git_graph.py +122 -0
- agentic_codememory-0.1.5/tests/test_graph.py +199 -0
- agentic_codememory-0.1.5/tests/test_parser.py +107 -0
- agentic_codememory-0.1.5/tests/test_server.py +352 -0
- agentic_codememory-0.1.5/upload_checkpoint.py +65 -0
@@ -0,0 +1,190 @@
# Session Handoff: New Project Initialization

**Created:** 2026-03-20
**Project:** D:\code\agentic-memory
**Branch:** main
**Session:** fa780870-3e1d-4373-add9-6a0e936326d8

---

## Current State Summary

The `/gsd:new-project` workflow is ~60% complete. Research phase just finished (4 research files written). The next step is to spawn the **research synthesizer** to create `SUMMARY.md`, then proceed to requirements definition.

**What's done:**
- Codebase mapped (7 docs in `.planning/codebase/`)
- Deep questioning completed — project scope fully defined
- `PROJECT.md` created and committed (3b3e332)
- `config.json` created and committed (646a0b8)
- 4 research files created in `.planning/research/` (STACK.md, FEATURES.md, ARCHITECTURE.md, PITFALLS.md)

**What's next immediately:**
1. Spawn `gsd-research-synthesizer` agent to create `.planning/research/SUMMARY.md`
2. Define v1 requirements (per-category scoping questions)
3. Create roadmap with phases
4. Initialize STATE.md

---

## Important Context

### What This Project Is

Expanding the existing `codememory` CLI/MCP tool (code-only knowledge graph) into a **modular multi-type knowledge graph** with two new v1 modules:

1. **Web Research Memory** — crawl4ai + Brave Search + Playwright/agent-browser, scheduled research pipelines, PDF ingestion, Gemini multimodal embeddings
2. **Agent Conversation Memory** — auto-capture or manual import, session tracking, context retrieval for AI agents

**Key architectural decisions already made:**
- **Separate Neo4j databases per module** to prevent embedding model conflicts (code uses OpenAI, web/chat use Gemini)
- Code: port 7687, Web: 7688, Chat: 7689
- Modular architecture — each module standalone, unified via MCP routing
- Gemini embeddings (gemini-embedding-2-preview) for non-code modules
- Crawl4AI for web extraction, Brave Search API for automated research
- Vercel agent-browser for dynamic content
- CLI + MCP interface (extends existing pattern)
- Long-term vision: universal adapter layer for any AI workflow
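The per-module port assignment above can be sketched as a small routing helper. This is illustrative only — function and mapping names are assumptions, not the package's actual API; a real connection manager would wrap each URI in a `neo4j.GraphDatabase.driver(...)` instance.

```python
# Hypothetical sketch: route each memory module to its own Neo4j instance.
# Port assignments come from the decisions above.
MODULE_URIS = {
    "code": "bolt://localhost:7687",
    "web": "bolt://localhost:7688",
    "chat": "bolt://localhost:7689",
}

def uri_for(module: str) -> str:
    """Return the bolt URI for a memory module, failing loudly on typos."""
    try:
        return MODULE_URIS[module]
    except KeyError:
        raise ValueError(f"unknown memory module: {module!r}") from None
```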
### What the User Wants

The user is building this for personal use (research & analysis pipelines) but wants it adaptable for anyone. Key UX goals:
- "One click install to whatever AI system they use"
- Automated capture by default (no friction)
- Deep research automation with scheduled variations
- "Seamless integration with any AI" = the magic

### User Profile
- Technical, building production-quality systems
- YOLO mode preference (auto-approve tools)
- Wants research/plan-check/verifier agents enabled
- Balanced model profile
- Parallel execution enabled
- Git tracking enabled

### GSD Workflow Config

```json
{
  "mode": "yolo",
  "granularity": "standard",
  "parallelization": true,
  "commit_docs": true,
  "model_profile": "balanced",
  "workflow": {
    "research": true,
    "plan_check": true,
    "verifier": true,
    "nyquist_validation": true
  }
}
```

---

## Research Files Created

All 4 files are in `.planning/research/`:

| File | Contents |
|------|----------|
| `STACK.md` | Technology recommendations: Gemini embeddings, Crawl4AI, Playwright, Brave API, Neo4j multi-db |
| `FEATURES.md` | Table stakes vs differentiators vs anti-features for both modules + MVP recommendations |
| `ARCHITECTURE.md` | Hub-and-spoke pattern, component boundaries, 4-pass ingestion, anti-patterns to avoid |
| `PITFALLS.md` | 18 pitfalls categorized by severity (critical/moderate/minor) with prevention strategies + phase mapping |

**SUMMARY.md does NOT exist yet** — synthesizer hasn't run.

---

## Immediate Next Steps

1. **Spawn gsd-research-synthesizer** to create `.planning/research/SUMMARY.md`
   - Prompt: "Synthesize the research outputs from the 4 files in D:\code\agentic-memory\.planning\research\ (STACK.md, FEATURES.md, ARCHITECTURE.md, PITFALLS.md) into a SUMMARY.md. This is for a project adding Web Research Memory and Agent Conversation Memory modules to an existing code-only knowledge graph tool."
2. **Display research complete banner** with key findings summary to user
3. **Requirements definition** — present feature categories and use AskUserQuestion to scope each for v1:
   - Web Research Memory features
   - Conversation Memory features
   - Shared infrastructure features
4. **Create roadmap** — phases mapping requirements to implementation
5. **Initialize STATE.md**

---

## Key Patterns From Existing Codebase

From `.planning/codebase/` analysis:
- Uses FastMCP for MCP server
- 4-pass ingestion pipeline already exists for code
- OpenAI embeddings (text-embedding-3-large, 3072d) for code
- Neo4j with vector indexes
- Tree-sitter for code parsing
- Config via `.codememory/config.json`
- Existing concerns: silent embedding failures, text truncation, single-threaded embedding

**New modules should fix these patterns**, not replicate them.
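The "silent embedding failures" concern above is the kind of pattern the new modules would invert: fail loudly and retry instead of returning nothing. A hedged sketch — the function and exception names are illustrative, not the package's API:

```python
import time

class EmbeddingError(RuntimeError):
    """Raised when an embedding request ultimately fails."""

def embed_with_retry(embed_fn, text: str, retries: int = 3, backoff: float = 1.0):
    """Call an embedding function, retrying transient failures and raising
    instead of silently returning None (the old anti-pattern)."""
    last_exc = None
    for attempt in range(retries):
        try:
            vector = embed_fn(text)
            if not vector:
                raise EmbeddingError("provider returned an empty vector")
            return vector
        except Exception as exc:  # narrow to provider-specific errors in real code
            last_exc = exc
            time.sleep(backoff * (2 ** attempt))
    raise EmbeddingError(f"embedding failed after {retries} attempts") from last_exc
```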
---

## Critical Files

| File | Purpose |
|------|---------|
| `.planning/PROJECT.md` | Full project scope, requirements, constraints, decisions |
| `.planning/config.json` | GSD workflow configuration |
| `.planning/codebase/ARCHITECTURE.md` | Existing codebase architecture |
| `.planning/codebase/CONCERNS.md` | Known issues to avoid repeating |
| `.planning/research/FEATURES.md` | MVP feature recommendations |
| `.planning/research/PITFALLS.md` | 18 pitfalls with phase-specific warnings |
| `src/codememory/` | Existing code module to extend |

---

## Potential Gotchas

1. **Research agent file permissions** — In the previous session, subagents had the Write tool auto-denied. Files were manually created in the main conversation. If spawning agents again, confirm Write permissions are available.
2. **Embedding dimension conflict** — Do NOT allow OpenAI + Gemini embeddings in the same Neo4j database. This is Pitfall #1 in PITFALLS.md. Separate databases are the validated approach.
3. **STACK.md content** — Created from an agent summary, not full output. May be less detailed than FEATURES.md/ARCHITECTURE.md/PITFALLS.md, which were written from more complete agent outputs.
4. **Uncommitted changes** — `.planning/research/` is untracked (`??` in git status). Commit after SUMMARY.md is created.
5. **Main branch vs master** — Working on `main`, but `master` is listed as the "main branch" for PRs. Use `main` for development.
---

## Decisions Made (With Rationale)

| Decision | Rationale |
|----------|-----------|
| Separate Neo4j databases per module | Prevent embedding model conflicts (OpenAI 3072d vs Gemini 768d incompatible) |
| Gemini for web/chat, OpenAI for code | Code module already validated with OpenAI; Gemini multimodal needed for non-code |
| Both modules in v1 | User wants full web+chat scope from the start |
| Crawl4AI primary, agent-browser for dynamic | Crawl4AI handles most cases; JS-heavy sites need Playwright/agent-browser |
| Brave Search API (configurable) | User's preference; other options possible via config |
| CLI + MCP (existing pattern) | Extends what works; universal adapter layer is future vision |
| YOLO + parallel + balanced model | User's explicit selections during workflow setup |

---

## GSD Workflow State

**Phase:** New Project Initialization
**Step:** Research Synthesis (post-research, pre-requirements)

The workflow context when the session ended: research agents had completed but couldn't write files. Files were manually created. Next is synthesizer → requirements → roadmap.

**Workflow: `/gsd:new-project`**
- [x] Deep questioning
- [x] PROJECT.md created
- [x] config.json created
- [x] 4 research agents spawned
- [x] Research files written (manually, due to permission issue)
- [ ] SUMMARY.md synthesized
- [ ] Requirements defined
- [ ] Roadmap created
- [ ] STATE.md initialized
@@ -0,0 +1,51 @@
# Agentic Memory - Environment Configuration
# Copy this file to .env and fill in your values

# ============================================================================
# NEO4J DATABASE
# ============================================================================

# Neo4j connection URI
# - Local Neo4j: bolt://localhost:7687
# - Neo4j Aura: neo4j+s://your-instance.databases.neo4j.io
NEO4J_URI=bolt://localhost:7687

# Neo4j authentication
NEO4J_USER=neo4j
NEO4J_PASSWORD=your_neo4j_password

# Note: Neo4j 5.18+ is required for vector index support

# ============================================================================
# OPENAI API
# ============================================================================

# OpenAI API key for embeddings (text-embedding-3-large)
# Get yours at: https://platform.openai.com/api-keys
OPENAI_API_KEY=sk-your-openai-api-key-here

# Optional: Override embedding model (default: text-embedding-3-large)
# EMBEDDING_MODEL=text-embedding-3-large

# ============================================================================
# INGESTION SETTINGS
# ============================================================================

# Optional: Repository path to index (can also be passed as CLI argument)
# REPO_PATH=/path/to/your/codebase

# Optional: Supported file extensions (comma-separated)
# SUPPORTED_EXTENSIONS=.py,.js,.ts,.tsx,.jsx

# Optional: Directories to ignore during indexing (comma-separated)
# IGNORE_DIRS=node_modules,__pycache__,.git,dist,build,.venv,venv

# Optional: Logging level (DEBUG, INFO, WARNING, ERROR)
# LOG_LEVEL=INFO

# ============================================================================
# MCP SERVER
# ============================================================================

# Optional: Port for MCP server (default: varies by client)
# MCP_PORT=8000
@@ -0,0 +1,110 @@
name: CI

on:
  push:
    branches: [main, master]
  pull_request:
    branches: [main, master]

jobs:
  lint-and-format:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -e ".[dev]"

      - name: Run Ruff linter
        run: ruff check src/

      - name: Run Black format check
        run: black --check src/ tests/

      - name: Run MyPy type check
        run: mypy src/

  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ['3.10', '3.11', '3.12']

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -e ".[dev]"

      - name: Run unit tests with coverage
        run: pytest --cov=codememory --cov-report=xml --cov-report=term-missing -m "not integration" tests/

      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3
        with:
          files: ./coverage.xml
          fail_ci_if_error: false
          verbose: true

  integration-test:
    runs-on: ubuntu-latest
    services:
      neo4j:
        image: neo4j:5.18-community
        env:
          NEO4J_AUTH: neo4j/testpassword
          NEO4J_PLUGINS: '["apoc"]'
        ports:
          - 7687:7687
          - 7474:7474
        options: >-
          --health-cmd "cypher-shell -u neo4j -p testpassword 'RETURN 1'"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -e ".[dev]"

      - name: Wait for Neo4j
        run: |
          # Neo4j 5 removed the legacy /db/data/ endpoint; poll the
          # HTTP discovery endpoint at the root instead.
          for i in {1..30}; do
            if curl -s http://localhost:7474/ > /dev/null 2>&1; then
              echo "Neo4j is ready"
              break
            fi
            echo "Waiting for Neo4j... ($i)"
            sleep 2
          done

      - name: Run integration tests
        env:
          NEO4J_URI: bolt://localhost:7687
          NEO4J_USER: neo4j
          NEO4J_PASSWORD: testpassword
          OPENAI_API_KEY: sk-test-key
        run: pytest -m integration tests/
@@ -0,0 +1,118 @@
# Agentic Memory - Universal Knowledge Graph

## What This Is

A modular knowledge graph system that gives AI agents long-term memory across any content type. Currently handles code repositories via tree-sitter parsing and Neo4j graph storage. Expanding with two new modules: Web Research Memory for automated research pipelines (web search, crawling, PDFs) and Agent Conversation Memory for persistent chat/conversation context. Each module operates independently with its own database or optionally shares a unified graph. Agents access memory via MCP tools.

## Core Value

AI agents get seamless, persistent memory that works regardless of content type or AI system - making workflows feel magical and enabling deep, cumulative research over time.

## Requirements

### Validated

<!-- Existing code memory capabilities - proven and working -->

- ✓ Code repository indexing with tree-sitter (Python, JavaScript/TypeScript) — existing
- ✓ Multi-pass ingestion pipeline (structure scan → entities → relationships → embeddings) — existing
- ✓ Neo4j graph database with vector search — existing
- ✓ MCP server exposing search, dependency, and impact analysis tools — existing
- ✓ CLI interface (init, index, watch, serve, search, deps, impact) — existing
- ✓ Incremental file watching for code changes — existing
- ✓ Git history graph ingestion (commits, provenance tracking) — existing
- ✓ OpenAI text embeddings for semantic code search — existing
- ✓ Per-repository configuration with environment variable fallbacks — existing

### Active

<!-- v1 scope - building these now -->

**Web Research Memory Module:**
- [ ] Ingest web pages via URL (manual input)
- [ ] Auto-crawl from web search results (Brave Search API)
- [ ] Parse and index PDF documents
- [ ] Semantic search across all ingested web content
- [ ] Crawl4AI integration for robust web content extraction (primary)
- [ ] Vercel agent-browser fallback for JS-rendered/dynamic content (Playwright abstraction optimized for agent workflows — more efficient than raw Playwright)
- [ ] Smart scheduled research: prompt templates with variables; LLM fills variables each run based on past research graph + conversation history; avoids repeating covered topics
- [ ] Google Gemini multimodal embeddings (gemini-embedding-2-preview)
- [ ] Separate Neo4j database for web research content (port 7688)
- [ ] MCP tools: search_web_memory, ingest_url, schedule_research, run_research_session

**Agent Conversation Memory Module:**
- [ ] Ingest conversation logs and chat transcripts (manual import: JSON/JSONL)
- [ ] Fully automated set-and-forget capture: once configured, conversations are captured without user or agent intervention
- [ ] Provider-specific automatic integration: Claude Code stop-session hook; survey and implement equivalent zero-friction hooks for other major providers (ChatGPT, Cursor, Windsurf, etc.)
- [ ] MCP tool (add_message) as universal fallback for providers without native hook support
- [ ] Query conversational context (retrieve relevant past exchanges)
- [ ] Incremental message updates (add new messages without full re-index)
- [ ] User/session tracking (who said what, conversation boundaries, provider attribution)
- [ ] Google Gemini multimodal embeddings (gemini-embedding-2-preview)
- [ ] Separate Neo4j database for conversation content (port 7689)
- [ ] MCP tools: search_conversations, add_message, get_conversation_context

**Shared Infrastructure:**
- [ ] Modular architecture supporting independent or unified databases
- [ ] Configurable embedding model selection: Gemini, OpenAI, Nvidia Nemotron
- [ ] Config validation: warn if mixing embedding models in unified database
- [ ] CLI commands: web-init, web-ingest, web-search, chat-init, chat-ingest
- [ ] Documentation for module setup and configuration
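The `add_message`/`get_conversation_context` pair above can be sketched in a few lines. In the real server this function would be registered with `@mcp.tool()` on the existing FastMCP instance; the store here is an in-memory stand-in for the Neo4j conversation database, and all names are illustrative:

```python
# In-memory stand-in for the Neo4j-backed conversation store (illustrative).
_store: list[dict] = []

def add_message(session_id: str, role: str, content: str) -> str:
    """Append one message to conversation memory (universal fallback for
    providers without a native capture hook)."""
    if role not in ("user", "assistant", "system"):
        raise ValueError(f"unknown role: {role!r}")
    _store.append({"session_id": session_id, "role": role, "content": content})
    return f"stored message {len(_store)} in session {session_id}"

def get_conversation_context(session_id: str, limit: int = 10) -> list[dict]:
    """Return the most recent messages for a session."""
    msgs = [m for m in _store if m["session_id"] == session_id]
    return msgs[-limit:]
```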
### Out of Scope

- Web UI dashboard — Nice-to-have, not v1 priority
- IDE extensions (VS Code, Cursor) — Future, after proven via MCP
- Desktop Electron app — Future, CLI + MCP proven first
- Real-time collaboration features — Single-user focus for v1
- Advanced conversation analytics (sentiment, topic modeling) — Basic retrieval first
- Video/audio transcription — Rely on external tools, ingest transcripts only
- OpenClaw/Codex-specific adapters — Universal adapter layer is post-v1
- Simple cron scheduling (repeat same query) — Replaced by smart scheduled research with LLM-driven variable substitution

## Context

**Existing system:**
- Proven architecture with Neo4j + MCP + CLI for code memory
- Multi-pass ingestion pipeline adaptable to new content types
- Production telemetry system tracking tool usage for research

**User's immediate use case:**
- Research pipeline for deep topic exploration
- Daily automated research on evolving questions
- Build cumulative knowledge graph on specific domains

**Long-term vision:**
- One-click install for any AI workflow
- Universal adapter layer for OpenClaw, Claude Code, Codex, etc.
- Seamless integration regardless of which AI system users choose

**Technical foundation:**
- Tree-sitter works for code; Crawl4AI + agent-browser handle web/documents
- OpenAI embeddings proven for code; Google Gemini for multimodal content
- Separate databases by default prevents embedding model conflicts

## Constraints

- **Embedding consistency**: If unified database, all modules must use same embedding model
- **Existing code memory**: Must maintain full functionality of current code ingestion
- **Modular independence**: Each module works standalone (no hard cross-dependencies)
- **Tech stack**: Python 3.10+, Neo4j 5.18+, existing CLI/MCP patterns
- **API availability**: Requires Google Vertex AI access, Brave Search API key
- **One-click install**: Must be pip/CLI installable without complex setup

## Key Decisions

| Decision | Rationale | Outcome |
|----------|-----------|---------|
| Separate databases by default | Prevents embedding model conflicts (OpenAI 3072d vs Gemini 768d incompatible in same vector index) | ✓ Confirmed |
| Google Gemini embeddings for web/chat | Multimodal support (text, images, future video/audio); OpenAI stays for code module | ✓ Confirmed |
| Nvidia Nemotron in v1 | NIM API is OpenAI-compatible — ~20 line addition once abstraction layer exists; near-zero cost | ✓ Confirmed |
| Crawl4AI primary + agent-browser fallback | Crawl4AI handles static pages; Vercel agent-browser for JS-rendered dynamic content (more efficient than raw Playwright for agent workflows) | ✓ Confirmed |
| Brave Search API as default | Free tier available, good results, configurable for alternatives | ✓ Confirmed |
| Smart scheduled research (not simple cron) | Prompt templates with LLM-driven variable substitution; context-aware (no topic repetition); steered by past research + conversation history | ✓ Confirmed |
| Set-and-forget automated capture | UX goal: configure once, captures forever with zero friction; provider-native hooks where available (Claude Code confirmed); MCP tool as fallback for unsupported providers | ✓ Confirmed |
| Modular architecture | Each module independently usable, scales to future content types | ✓ Confirmed |

---

*Last updated: 2026-03-20 after requirements definition*
@@ -0,0 +1,188 @@
# Agentic Memory — v1 Roadmap

**Project:** Modular Knowledge Graph (Code + Web Research + Conversation Memory)
**Created:** 2026-03-20
**Status:** Planning

---

## Milestone: v1.0 — Full Multi-Module Memory System

**Goal:** Extend the existing code memory tool into a universal agent memory system with Web Research Memory and Agent Conversation Memory modules, accessible via CLI and MCP.

---

## Phase 1: Foundation

**Goal:** Establish the shared infrastructure all modules build on. Must be done first — retrofitting these patterns later is costly.

**Deliverables:**
- Abstract ingestion base classes (`BaseIngestor`, `BaseEmbeddingService`, `BaseGraphWriter`)
- Embedding service abstraction layer supporting Gemini, OpenAI, and Nvidia Nemotron (NIM-compatible, OpenAI SDK with `base_url` override)
- Config validation system — detects embedding model mismatches across databases, warns loudly
- Multi-database connection manager (routes to :7687 code, :7688 web, :7689 chat)
- Docker Compose updated with web and chat Neo4j instances (ports 7688, 7689)
- CLI scaffolding for new commands (`web-init`, `web-ingest`, `web-search`, `chat-init`, `chat-ingest`) — structure only, not yet implemented
- Unit tests for embedding service abstraction and config validation
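A minimal sketch of what `BaseEmbeddingService` plus an OpenAI-compatible backend could look like, assuming the `base_url` override mentioned in the deliverables. The class names match the deliverable, but the constructor shape and default endpoint are assumptions, and the network call is deliberately stubbed out:

```python
from abc import ABC, abstractmethod

class BaseEmbeddingService(ABC):
    """Common interface every provider backend implements."""

    model: str
    dimensions: int

    @abstractmethod
    def embed(self, texts: list[str]) -> list[list[float]]:
        ...

class OpenAICompatibleEmbedding(BaseEmbeddingService):
    """Covers both OpenAI and NIM-style OpenAI-compatible endpoints,
    differing only in base_url and model name (values illustrative)."""

    def __init__(self, model: str, dimensions: int, api_key: str,
                 base_url: str = "https://api.openai.com/v1"):
        self.model = model
        self.dimensions = dimensions
        self._api_key = api_key
        self._base_url = base_url

    def embed(self, texts: list[str]) -> list[list[float]]:
        # Real implementation would call the provider, roughly:
        #   openai.OpenAI(api_key=..., base_url=...)
        #       .embeddings.create(model=self.model, input=texts)
        raise NotImplementedError("network call omitted in this sketch")
```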
**Success Criteria:**
- All three Neo4j instances start cleanly via `docker-compose up`
- Embedding service abstraction passes correct model/dimensions to each database
- Config validation catches and rejects mixed embedding model configurations
- Existing code module continues to work unchanged

**Key Risks:**
- Gemini embedding API specifics (model name, dimensionality, auth method) — verify early
- Neo4j Community Edition multi-database support — confirm before designing connection manager

---

## Phase 2: Web Research Core

**Goal:** Functional web research ingestion — URLs, PDFs, and web search results land in the knowledge graph and are semantically searchable.

**Deliverables:**
- Crawl4AI integration: URL ingestion, content filtering (boilerplate removal), metadata extraction (title, author, date, source URL)
- PDF parsing via Crawl4AI built-in support
- Vercel agent-browser integration as fallback for JS-rendered/dynamic content (more efficient than raw Playwright for agent workflows)
- Brave Search API integration: web search → auto-ingest top results
- Gemini multimodal embedding service (gemini-embedding-2-preview) for web content
- Neo4j web database schema + vector indexes
- Content deduplication (hash-based, update vs create logic)
- MCP tools: `ingest_url`, `search_web_memory`
- CLI commands: `web-init`, `web-ingest`, `web-search` (fully functional)
|
|
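The hash-based deduplication deliverable reduces to a three-way decision per URL. A minimal sketch of that logic, with the graph lookup abstracted to a plain dict:

```python
import hashlib


def decide_ingest_action(seen: dict[str, str], url: str, text: str) -> str:
    """Return 'create' for a new URL, 'skip' when content is unchanged,
    and 'update' when the same URL now serves different content."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    if url not in seen:
        seen[url] = digest
        return "create"
    if seen[url] == digest:
        return "skip"
    seen[url] = digest
    return "update"
```

In the real pipeline `seen` would be a hash property on the page node in Neo4j, but the create/skip/update branching stays the same.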

**Success Criteria:**
- `codememory web-ingest <url>` ingests a static page and makes it searchable
- PDF documents ingested and retrievable via semantic search
- JS-rendered pages fall back to agent-browser automatically, transparently
- `codememory web-search "query"` runs Brave Search and auto-ingests results
- No duplicate entries for the same URL on re-ingest (updates instead)
- Semantic search returns relevant results across all ingested web content

**Key Risks:**
- Crawl4AI version stability and JS rendering reliability
- Vercel agent-browser API surface — verify current documentation
- Brave Search API rate limits and response schema
- Gemini embedding API access (Vertex AI vs AI Studio auth)

---

## Phase 3: Web Research Scheduling

**Goal:** Smart automated research pipeline — set a research template, system runs it on a schedule with LLM-driven variation, building cumulative knowledge over time.

**Deliverables:**
- Prompt template system with variable placeholders (e.g. `{topic}`, `{angle}`, `{timeframe}`)
- LLM-driven variable substitution each run: reads the existing research graph and conversation history to select variable values that explore new angles and avoid repeating covered topics
- Topic coverage tracker: graph-based record of what has been researched, used to steer future runs
- Schedule management: cron-based execution, configurable frequency (daily, weekly, custom)
- Research session orchestrator: template → variable fill → search → ingest → update coverage
- Circuit breakers: rate limit handling, cost caps, graceful degradation on API failures
- MCP tools: `schedule_research`, `run_research_session`, `list_research_schedules`
- CLI commands: `web-schedule`, `web-run-research`
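The template deliverable above can be sketched with standard `str.format` placeholders; the example template text and variable names are illustrative, not the shipped defaults:

```python
import string


def fill_template(template: str, values: dict[str, str]) -> str:
    """Substitute LLM-chosen values into a research template, failing
    loudly if any placeholder was left unfilled."""
    fields = {f for _, f, _, _ in string.Formatter().parse(template) if f}
    missing = sorted(fields - values.keys())
    if missing:
        raise KeyError(f"unfilled template variables: {missing}")
    return template.format(**values)


# Hypothetical research template with the three placeholders from the plan.
TEMPLATE = "New findings on {topic} from a {angle} angle, {timeframe}"
```

Failing on missing variables matters here because the values come from an LLM each run: a silently half-filled query would poison the coverage tracker.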

**Success Criteria:**
- User defines a research template once; system runs autonomously on schedule
- Each run produces meaningfully different queries based on what's already in the graph
- Coverage tracker correctly identifies and avoids already-researched topics
- Failed runs (API errors, rate limits) are logged and retried gracefully
- Research output is cumulative — graph grows richer over time without duplication

**Key Risks:**
- LLM variable substitution quality — prompt engineering for consistent, useful variation
- Cost management for automated LLM calls on schedule
- Scheduler library choice (APScheduler vs system cron vs custom)

---

## Phase 4: Conversation Memory

**Goal:** Set-and-forget conversation capture — configure once, all conversations are automatically stored and semantically searchable across providers.

**Deliverables:**
- Neo4j conversation database schema: conversations, messages, participants, sessions (port 7689)
- Gemini embeddings for conversation content
- Claude Code integration: stop-session hook auto-exports and ingests conversation on session end
- Provider survey: research hook/integration mechanisms for ChatGPT, Cursor, Windsurf, and other major agent platforms
- Provider-specific integrations for surveyed platforms (wherever native hooks exist)
- Manual import fallback: JSON/JSONL conversation log ingestion
- MCP tool fallback: `add_message()` for providers with no native hook support
- Incremental message updates (append-only, no full re-index on new messages)
- User/session tracking: provider attribution, conversation boundaries, role tagging (user/assistant/system)
- MCP tools: `search_conversations`, `add_message`, `get_conversation_context`
- CLI commands: `chat-init`, `chat-ingest`, `chat-search`
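The manual JSONL import fallback might parse one message object per line. The `role`/`content` field names below are assumptions — real export formats differ per provider, which is exactly what the provider survey is for:

```python
import json
from pathlib import Path


def load_jsonl_messages(path: str) -> list[dict]:
    """Read a JSONL conversation export, one message object per line.
    Blank lines are skipped; malformed lines fail with their line number."""
    messages = []
    for lineno, raw in enumerate(
        Path(path).read_text(encoding="utf-8").splitlines(), 1
    ):
        if not raw.strip():
            continue
        msg = json.loads(raw)
        if "role" not in msg or "content" not in msg:
            raise ValueError(f"line {lineno}: expected 'role' and 'content' keys")
        messages.append(msg)
    return messages
```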

**Success Criteria:**
- Claude Code sessions captured automatically with zero user action after initial setup
- At least two additional providers integrated with native hooks
- Manual import handles real-world conversation export formats
- Semantic search retrieves relevant past exchanges across all captured conversations
- `get_conversation_context` returns ranked relevant history for a given query
- Provider attribution is correct (no mixing conversations across providers)

**Key Risks:**
- Provider hook availability varies significantly — some may have no hook mechanism
- Conversation data privacy — clear scoping of what gets captured vs excluded
- Schema must be locked before first ingest (hard to migrate conversation graph later)

---

## Phase 5: Cross-Module Integration & Hardening

**Goal:** Unified agent interface across all three modules, Nvidia Nemotron embedding support, production hardening.

**Deliverables:**
- Unified MCP router: single server aggregates code + web + conversation results
- Cross-module search: `search_all_memory` queries all databases, merges and ranks results
- Nvidia Nemotron embedding service (NIM API, OpenAI-compatible — ~20 lines via existing abstraction)
- Structured logging and observability across all modules
- Error recovery and retry logic standardized across modules
- Documentation: setup guides, MCP tool reference, provider integration guides
- End-to-end integration tests across all three modules
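The `search_all_memory` merge step could start as a naive score sort. Note the ranking risk this phase calls out: scores from different embedding spaces are not directly comparable, so a real implementation needs normalization first. This sketch assumes comparable scores purely to show the shape:

```python
def merge_results(
    per_module: dict[str, list[tuple[str, float]]], k: int = 10
) -> list[dict]:
    """Flatten (result_id, score) lists from each module and return the
    top-k by score, tagged with the module they came from."""
    merged = [
        {"module": module, "id": result_id, "score": score}
        for module, results in per_module.items()
        for result_id, score in results
    ]
    return sorted(merged, key=lambda r: r["score"], reverse=True)[:k]
```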

**Success Criteria:**
- Single MCP server exposes all tools from all three modules
- `search_all_memory` returns coherent ranked results across code, web, and conversation content
- Nvidia Nemotron can be selected as embedding model via config
- All three modules pass integration tests end-to-end
- Setup guide enables a new user to have all three modules running in under 30 minutes

**Key Risks:**
- Cross-module result ranking/merging quality
- MCP server routing complexity with many tools
- Neo4j Community Edition limits on concurrent connections across 3 databases

---

## Phase Dependencies

```
Phase 1 (Foundation)
  └── Phase 2 (Web Research Core)
        └── Phase 3 (Web Research Scheduling)
  └── Phase 4 (Conversation Memory)
Phase 2 + Phase 4
  └── Phase 5 (Cross-Module Integration)
```

Phases 2 and 4 can run in parallel after Phase 1 completes.
Phase 3 depends on Phase 2 (requires working ingestion pipeline).
Phase 5 depends on all prior phases.

---

## Open Research Questions (Pre-Implementation)

| Question | Blocks | Priority |
|----------|--------|----------|
| Gemini embedding API: model name, dimensionality, auth (Vertex AI vs AI Studio) | Phase 1, 2 | Critical |
| Neo4j Community Edition: multi-database support on single instance | Phase 1 | Critical |
| Vercel agent-browser: current API surface, install method, JS rendering reliability | Phase 2 | High |
| Crawl4AI: current stable version, PDF support status | Phase 2 | High |
| Brave Search: rate limits, response schema, free tier constraints | Phase 2, 3 | High |
| Cursor/Windsurf/ChatGPT: available hooks or integration points for conversation capture | Phase 4 | Medium |
| APScheduler vs system cron vs custom: best fit for research scheduling | Phase 3 | Medium |

---

*Last updated: 2026-03-20 after requirements definition*