@graph-tl/graph 0.1.5 → 0.1.8
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +74 -94
- package/dist/chunk-ILTJI4ZN.js +122 -0
- package/dist/chunk-ILTJI4ZN.js.map +1 -0
- package/dist/{chunk-NWCUIW6D.js → chunk-TWT5GUXW.js} +19 -3
- package/dist/chunk-TWT5GUXW.js.map +1 -0
- package/dist/index.js +2 -2
- package/dist/init-PPICS5PC.js +61 -0
- package/dist/init-PPICS5PC.js.map +1 -0
- package/dist/{nodes-7UZATPPU.js → nodes-4OJBNDHG.js} +2 -2
- package/dist/{server-4T5WYSAE.js → server-6TNHZCUW.js} +188 -114
- package/dist/server-6TNHZCUW.js.map +1 -0
- package/package.json +1 -1
- package/dist/chunk-NWCUIW6D.js.map +0 -1
- package/dist/init-JYP72OZQ.js +0 -39
- package/dist/init-JYP72OZQ.js.map +0 -1
- package/dist/server-4T5WYSAE.js.map +0 -1
- package/dist/{nodes-7UZATPPU.js.map → nodes-4OJBNDHG.js.map} +0 -0
package/README.md
CHANGED
@@ -2,9 +2,28 @@
 
 A task tracker built for AI agents, not humans.
 
-Graph is an MCP server that
+Graph is an MCP server that gives agents persistent memory across sessions. They decompose work into dependency trees, claim tasks, record evidence of what they did, and hand off to the next agent automatically.
 
-
+## Install
+
+```bash
+npx -y @graph-tl/graph init
+```
+
+Restart Claude Code. That's it.
+
+## See it work
+
+Tell your agent: "Use graph to plan building a REST API with auth and tests."
+
+The agent will:
+1. Create a project (`graph_open`)
+2. Interview you about scope (`discovery`)
+3. Decompose into a dependency tree (`graph_plan`)
+4. Claim and work on tasks one by one (`graph_next` → work → `graph_update`)
+5. When you start a new session, the next agent picks up exactly where the last one left off (`graph_onboard`)
+
+No copy-pasting context. No re-explaining what was done. The graph carries it forward.
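
A rough sketch of what steps 1 and 3 could look like as tool calls. The call style loosely follows the agent prompt shipped later in this diff; the exact `graph_open` parameters and the `depends_on` field are assumptions, not documented parameters:

```
graph_open({ project: "rest-api" })
graph_plan({ nodes: [
  { ref: "spec", parent_ref: "<root-id>", summary: "Write API spec" },
  { ref: "auth", parent_ref: "<root-id>", summary: "Auth module", depends_on: ["spec"] }
] })
```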
 
 ## Why
 
@@ -15,22 +34,10 @@ Graph gives agents what they actually need:
 - **Arbitrary nesting** — decompose work as deep as needed
 - **Dependencies with cycle detection** — the engine knows what's blocked and what's ready
 - **Server-side ranking** — one call to get the highest-priority actionable task
-- **Evidence trail** — agents record decisions, commits, and test results
+- **Evidence trail** — agents record decisions, commits, and test results so the next agent inherits that knowledge
 - **~450 tokens** for a full claim-work-resolve cycle (vs ~5000+ with Linear MCP)
 
-##
-
-**Multi-session projects.** You tell Claude Code to build a feature. It plans the work into a graph, finishes 3 of 5 tasks, and hits the context limit. You start a new session — the agent calls `graph_onboard`, sees what was done, what's left, and picks up the next task with full context. No copy-pasting, no re-explaining.
-
-**Agent handoff.** Agent 1 builds the backend. Agent 2 starts a fresh session to work on the frontend. It calls `graph_next` and gets the highest-priority unblocked task, along with evidence from the tasks it depends on — what was implemented, what decisions were made, what files were touched.
-
-**Complex decomposition.** You say "build me a CLI tool with auth, a database layer, and tests." The agent breaks this into a task tree with dependencies — tests depend on implementation, implementation depends on design. The engine tracks what's blocked and what's ready so the agent always works on the right thing.
-
-**Replanning mid-flight.** Halfway through a project, requirements change. The agent uses `graph_restructure` to drop irrelevant tasks, add new ones, and reparent subtrees. Dependencies recalculate automatically.
-
-## How It Works
-
-An agent's workflow with Graph looks like this:
+## How it works
 
 ```
 1. graph_onboard → "What's the state of this project?"
@@ -41,13 +48,11 @@ An agent's workflow with Graph looks like this:
 6. graph_next → "What's next?"
 ```
 
-When a new agent joins, `graph_onboard` returns everything it needs in one call: project
+When a new agent joins, `graph_onboard` returns everything it needs in one call: project goal, task tree, recent evidence, knowledge entries, what was recently resolved, and what's actionable now.
 
-###
+### Planning
 
-
-
-The agent calls `graph_plan` to create this structure:
+The agent calls `graph_plan` to create a dependency tree:
 
 ```
 Build REST API
@@ -62,9 +67,9 @@ Build REST API
 └── Integration tests (depends on: Unit tests)
 ```
 
-`graph_next` immediately knows: "Write API spec" and "Database layer" are actionable. Everything else is blocked. When
+`graph_next` immediately knows: "Write API spec" and "Database layer" are actionable. Everything else is blocked. When a task resolves, dependents unblock automatically.
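
A minimal sketch of the claim call behind this step (the argument shape is copied from the agent prompt in `chunk-ILTJI4ZN.js` below; the response fields shown are assumptions):

```
graph_next({ project: "rest-api", claim: true })
← { summary: "Write API spec", priority: 8, ... }
```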
 
-###
+### Agent handoff
 
 Session 1 ends after completing 3 tasks. Session 2 starts:
 
@@ -72,13 +77,15 @@ Session 1 ends after completing 3 tasks. Session 2 starts:
 → graph_onboard("my-project")
 
 ← {
-
-
-
-
-  ...
+  goal: "Build REST API",
+  hint: "2 actionable task(s) ready. 3 resolved recently.",
+  summary: { total: 8, resolved: 3, actionable: 2 },
+  recently_resolved: [
+    { summary: "Auth module", agent: "claude-code", resolved_at: "..." },
+  ],
+  knowledge: [
+    { key: "auth-decisions", content: "JWT with RS256, keys in /config" },
   ],
-  context_links: ["src/auth.ts", "src/db.ts", "config/keys.json"],
   actionable: [
     { summary: "Routes", priority: 8 },
     { summary: "Database layer", priority: 7 },
@@ -86,13 +93,11 @@ Session 1 ends after completing 3 tasks. Session 2 starts:
 }
 ```
 
-The new agent knows what was built,
-
-### Code annotations: from static comments to traceable history
+The new agent knows what was built, what decisions were made, and what to do next.
 
-Code
+### Code annotations
 
-
+Agents annotate key changes with `// [sl:nodeId]`:
 
 ```typescript
 // [sl:OZ0or-q5TserCEfWUeMVv] Require evidence when resolving
@@ -105,28 +110,30 @@ if (input.resolved === true && !node.resolved) {
 }
 ```
 
-That node ID links to a task in the graph.
-
-- **What the task was** — "Enforce evidence on resolve"
-- **Why it was done** — the evidence trail: design decisions, alternatives considered
-- **What else changed** — context links to every file modified for that task
-- **Who did it and when** — the full audit log
-
-Comments are a snapshot. Graph turns your codebase into a traceable history of decisions.
-
-## Install
-
-Run this in your project directory:
-
-```bash
-npx -y @graph-tl/graph init
-```
-
-This adds Graph to your `.mcp.json`. Restart Claude Code and you're done.
+That node ID links to a task in the graph. Call `graph_context` or `graph_history` on it to see what the task was, why it was done, what files were touched, and who did it.
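
For example, tracing the annotation above back to its task might look like this (tool names are from the table below; the `node_id` argument shape follows `graph_update`'s and is an assumption here):

```
graph_context({ node_id: "OZ0or-q5TserCEfWUeMVv" })
graph_history({ node_id: "OZ0or-q5TserCEfWUeMVv" })
```

The first returns ancestors, children, and the dependency graph; the second returns the audit trail.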
 
-
+## Tools
 
-
+| Tool | Purpose |
+|---|---|
+| **graph_onboard** | Single-call orientation: project summary, tree, evidence, knowledge, actionable tasks. Omit project to auto-select |
+| **graph_open** | Open or create a project. No args = list all projects |
+| **graph_plan** | Batch create tasks with dependencies. Atomic |
+| **graph_next** | Get next actionable task, ranked by priority/depth/recency. Optional claim |
+| **graph_tree** | Full project tree visualization with resolve status |
+| **graph_context** | Deep-read a task: ancestors, children, dependency graph |
+| **graph_update** | Resolve tasks, add evidence. Reports newly unblocked tasks. Auto-resolves parents when all children complete |
+| **graph_connect** | Add/remove dependency edges with cycle detection |
+| **graph_query** | Search and filter by state, properties, text, ancestry |
+| **graph_restructure** | Move, merge, or drop tasks for replanning |
+| **graph_history** | Audit trail: who changed what, when |
+| **graph_knowledge_write** | Store persistent project knowledge (architecture decisions, conventions) |
+| **graph_knowledge_read** | Read knowledge entries or list all |
+| **graph_knowledge_search** | Search knowledge by substring |
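
The knowledge tools round-trip like this. The `graph_knowledge_write` call is copied from the agent prompt later in this diff; the read and search argument shapes are assumptions:

```
graph_knowledge_write({ project, key: "discovery-auth", content: "JWT with RS256, keys in /config" })
graph_knowledge_read({ project, key: "discovery-auth" })
graph_knowledge_search({ project, query: "RS256" })
```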
+
+## Configuration
+
+Add to `.mcp.json` (or run `npx -y @graph-tl/graph init`):
 
 ```json
 {
@@ -142,62 +149,35 @@ If you prefer to configure it yourself, add this to `.mcp.json` in your project
 }
 ```
 
-### Configuration
-
 Environment variables (all optional):
 
 | Variable | Default | Description |
 |---|---|---|
-| `GRAPH_AGENT` | `default-agent` | Agent identity
-| `GRAPH_DB` | `~/.graph/db/<hash>/graph.db` |
+| `GRAPH_AGENT` | `default-agent` | Agent identity for audit trail |
+| `GRAPH_DB` | `~/.graph/db/<hash>/graph.db` | Database path (per-project, outside your repo) |
 | `GRAPH_CLAIM_TTL` | `60` | Soft claim expiry in minutes |
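
These land in the server's `env` block in `.mcp.json`, the same place the init code (at the end of this diff) writes `GRAPH_AGENT`. A sketch with every default overridden; the values are illustrative:

```json
"env": {
  "GRAPH_AGENT": "backend-agent",
  "GRAPH_DB": "/tmp/demo/graph.db",
  "GRAPH_CLAIM_TTL": "30"
}
```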
 
-##
-
-| Tool | Purpose |
-|---|---|
-| **graph_onboard** | Single-call orientation for a new agent joining a project. Returns project summary, task tree, recent evidence, context links, and actionable tasks. |
-| **graph_open** | Open or create a project. No args = list all projects. |
-| **graph_plan** | Batch create tasks with parent-child and dependency relationships. Atomic. |
-| **graph_next** | Get the next actionable task, ranked by priority/depth/recency. Optional scope to a subtree. Optional soft claim. |
-| **graph_context** | Deep-read a task: ancestors, children tree, dependency graph. |
-| **graph_update** | Resolve tasks, add evidence/context links. Reports newly unblocked tasks. |
-| **graph_connect** | Add or remove dependency edges. Cycle detection on `depends_on`. |
-| **graph_query** | Search and filter tasks by state, properties, text, ancestry, actionability. |
-| **graph_restructure** | Move, merge, or drop tasks. For replanning. |
-| **graph_history** | Audit trail for a task — who changed what, when. |
-
-See [TOOLS.md](TOOLS.md) for full schemas and response shapes.
-
-## Token Efficiency
-
-Every response is compact JSON — no UI chrome, no avatar URLs, no pagination boilerplate. Measured against real Claude Code sessions:
+## Token efficiency
 
 | Operation | Tokens | Round trips |
 |---|---|---|
-| Onboard
+| Onboard to a 30-task project | ~500 | 1 |
 | Plan 4 tasks with dependencies | ~220 | 1 |
-| Get next actionable task
-| Resolve
+| Get next actionable task | ~300 | 1 |
+| Resolve + see what unblocked | ~120 | 1 |
 | **Full claim-work-resolve cycle** | **~450** | **3** |
 
-
+~90% fewer tokens and ~50% fewer round trips vs traditional tracker MCP integrations.
 
-## Data &
+## Data & security
 
 Graph is fully local. Your data never leaves your machine.
 
-- **Single SQLite file**
-- **No network calls** —
-- **No secrets
-- **You
-
-## Design
-
-- **`resolved` boolean** is the only field the engine interprets. Drives dependency computation. `state` is freeform for agent semantics.
-- **Evidence model** — hints, notes, commits, test results are all evidence entries with a `type` field. One mechanism.
-- **Linked context** — nodes store pointers to files/commits/docs, not content blobs.
+- **Single SQLite file** in `~/.graph/db/` — outside your repo, nothing to gitignore
+- **No network calls** — stdio MCP server, no telemetry, no cloud sync
+- **No secrets stored** — task summaries, evidence notes, and file path references only
+- **You own your data** — back it up, delete it, move it between machines
 
 ## License
 
-MIT
+MIT — free and open source.

package/dist/chunk-ILTJI4ZN.js
ADDED

@@ -0,0 +1,122 @@
+#!/usr/bin/env node
+
+// src/tools/agent-config.ts
+var AGENT_PROMPT = `---
+name: graph
+description: Use this agent for tasks tracked in Graph. Enforces the claim-work-resolve workflow \u2014 always checks graph_next before working, adds new work to the graph before executing, and resolves with evidence.
+tools: Read, Edit, Write, Bash, Glob, Grep, Task(Explore), AskUserQuestion
+model: sonnet
+---
+
+You are a graph-optimized agent. You execute tasks tracked in a Graph project. Follow this workflow strictly. The human directs, you execute through the graph.
+
+# Workflow
+
+## 1. ORIENT
+On your first call, orient yourself:
+\`\`\`
+graph_onboard({ project: "<project-name>" })
+\`\`\`
+Read the \`hint\` field first \u2014 it tells you exactly what to do next. Then read the summary, evidence, knowledge, and actionable tasks.
+
+**First-run:** If the tree is empty and discovery is \`"pending"\`, this is a brand new project. Jump directly to DISCOVER below. Do not call graph_next on an empty project.
+
+## 2. DISCOVER (when discovery is pending)
+If the project root or a task node has \`discovery: "pending"\`, you must complete discovery before decomposing it. Discovery is an interview with the user to understand what needs to happen.
+
+Use AskUserQuestion to cover these areas (adapt to what's relevant \u2014 skip what's obvious):
+- **Scope** \u2014 What exactly needs to happen? What's explicitly out of scope?
+- **Existing patterns** \u2014 How does the codebase currently handle similar things? (explore first, then confirm)
+- **Technical approach** \u2014 What libraries, APIs, or patterns should we use?
+- **Acceptance criteria** \u2014 How will we know it's done? What does success look like?
+
+After the interview:
+1. Write findings as knowledge: \`graph_knowledge_write({ project, key: "discovery-<topic>", content: "..." })\`
+2. Flip discovery to done: \`graph_update({ updates: [{ node_id: "<id>", discovery: "done" }] })\`
+3. NOW decompose with graph_plan
+
+Do NOT skip discovery. If you try to add children to a node with \`discovery: "pending"\`, graph_plan will reject it.
+
+## 3. CLAIM
+Get your next task:
+\`\`\`
+graph_next({ project: "<project-name>", claim: true })
+\`\`\`
+Read the task summary, ancestor chain (for scope), resolved dependencies (for context on what was done before you), and context links (for files to look at).
+
+## 4. PLAN
+If you discover work that isn't in the graph, add it BEFORE executing:
+\`\`\`
+graph_plan({ nodes: [{ ref: "new-work", parent_ref: "<parent-id>", summary: "..." }] })
+\`\`\`
+Never execute ad-hoc work. The graph is the source of truth.
+
+When decomposing work:
+- Set dependencies on LEAF nodes, not parent nodes. If "Page A" depends on "Layout", the dependency is from "Page A" to "Layout", not from the "Pages" parent to "Layout".
+- Keep tasks small and specific. A task should be completable in one session.
+- Parent nodes are organizational \u2014 they resolve when all children resolve. Don't put work in parent nodes.
+
+## 5. WORK
+Execute the claimed task. While working:
+- Annotate key code changes with \`// [sl:nodeId]\` where nodeId is the task you're working on
+- This creates a traceable link from code back to the task, its evidence, and its history
+- Build and run tests before considering a task done
+
+## 6. RESOLVE
+When done, resolve the task with evidence:
+\`\`\`
+graph_update({ updates: [{
+  node_id: "<task-id>",
+  resolved: true,
+  add_evidence: [
+    { type: "note", ref: "What you did and why" },
+    { type: "git", ref: "<commit-hash> \u2014 <summary>" },
+    { type: "test", ref: "Test results" }
+  ],
+  add_context_links: ["path/to/files/you/touched"]
+}] })
+\`\`\`
+Evidence is mandatory. At minimum, include one note explaining what you did.
+
+## 7. PAUSE
+After resolving a task, STOP. Tell the user:
+- What you just completed
+- What the next actionable task is
+- Wait for the user to say "continue" before claiming the next task
+
+The user controls the pace. Do not auto-claim the next task.
+
+# Rules
+
+- NEVER start work without a claimed task
+- NEVER resolve without evidence
+- NEVER execute ad-hoc work \u2014 add it to the graph first via graph_plan
+- NEVER auto-continue to the next task \u2014 pause and let the user decide
+- ALWAYS build and test before resolving
+- ALWAYS include context_links for files you modified when resolving
+- Parent nodes auto-resolve when all their children are resolved \u2014 you don't need to manually resolve them
+- NEVER skip discovery on nodes with discovery:pending \u2014 the system will block you from decomposing
+- If you're approaching context limits, ensure your current task's state is captured (update with evidence even if not fully resolved) so the next agent can pick up where you left off
+
+# Common mistakes to avoid
+
+- Setting dependencies on parent nodes instead of leaf nodes
+- Running project scaffolding tools (create-next-app, etc.) before planning in the graph
+- Resolving tasks without running tests
+- Doing work that isn't tracked in the graph
+- Continuing to the next task without pausing for user review
+- Trying to decompose a node without completing discovery first
+- Not writing knowledge entries during discovery \u2014 future agents need this context
+`;
+function handleAgentConfig() {
+  return {
+    agent_file: AGENT_PROMPT,
+    install_path: ".claude/agents/graph.md",
+    instructions: "Save the agent_file content to .claude/agents/graph.md in your project root. Claude Code will automatically discover it and use it when tasks match the agent description."
+  };
+}
+
+export {
+  handleAgentConfig
+};
+//# sourceMappingURL=chunk-ILTJI4ZN.js.map
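`handleAgentConfig` only returns strings; a caller has to persist them. A minimal sketch of the consuming side using Node built-ins (the actual wiring is in the init code further down, which imports this function):

```typescript
import { mkdirSync, writeFileSync } from "fs";
import { dirname } from "path";
import { handleAgentConfig } from "./chunk-ILTJI4ZN.js";

// Persist the generated subagent definition where Claude Code discovers it.
const { agent_file, install_path } = handleAgentConfig();
mkdirSync(dirname(install_path), { recursive: true }); // .claude/agents/
writeFileSync(install_path, agent_file);
```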

package/dist/chunk-ILTJI4ZN.js.map
ADDED

@@ -0,0 +1 @@
+{"version":3,"sources":["../src/tools/agent-config.ts"],"sourcesContent":["…"],"mappings":"…","names":[]}
package/dist/{chunk-NWCUIW6D.js → chunk-TWT5GUXW.js}
CHANGED

@@ -95,6 +95,12 @@ function migrate(db2) {
       UPDATE nodes SET depth = (SELECT depth FROM tree WHERE tree.id = nodes.id)
     `);
   }
+  const hasDiscovery = db2.prepare(
+    "SELECT COUNT(*) as cnt FROM pragma_table_info('nodes') WHERE name = 'discovery'"
+  ).get();
+  if (hasDiscovery.cnt === 0) {
+    db2.exec("ALTER TABLE nodes ADD COLUMN discovery TEXT DEFAULT NULL");
+  }
 }
 function checkpointDb() {
   if (db) {
@@ -238,6 +244,7 @@ function rowToNode(row) {
     summary: row.summary,
     resolved: row.resolved === 1,
     depth: row.depth,
+    discovery: row.discovery ?? null,
     state: row.state ? JSON.parse(row.state) : null,
     properties: JSON.parse(row.properties),
     context_links: JSON.parse(row.context_links),
@@ -264,6 +271,7 @@ function createNode(input) {
     summary: input.summary,
     resolved: false,
     depth,
+    discovery: input.discovery ?? null,
     state: input.state ?? null,
     properties: input.properties ?? {},
     context_links: input.context_links ?? [],
@@ -273,8 +281,8 @@ function createNode(input) {
     updated_at: now
   };
   db2.prepare(`
-    INSERT INTO nodes (id, rev, parent, project, summary, resolved, depth, state, properties, context_links, evidence, created_by, created_at, updated_at)
-    VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
+    INSERT INTO nodes (id, rev, parent, project, summary, resolved, depth, discovery, state, properties, context_links, evidence, created_by, created_at, updated_at)
+    VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
   `).run(
     node.id,
     node.rev,
@@ -283,6 +291,7 @@ function createNode(input) {
     node.summary,
     0,
     node.depth,
+    node.discovery,
     node.state !== null ? JSON.stringify(node.state) : null,
     JSON.stringify(node.properties),
     JSON.stringify(node.context_links),
@@ -356,6 +365,7 @@ function updateNode(input) {
   const changes = [];
   const now = (/* @__PURE__ */ new Date()).toISOString();
   let newResolved = node.resolved;
+  let newDiscovery = node.discovery;
   let newState = node.state;
   let newSummary = node.summary;
   let newProperties = { ...node.properties };
@@ -375,6 +385,10 @@ function updateNode(input) {
     changes.push({ field: "resolved", before: node.resolved, after: input.resolved });
     newResolved = input.resolved;
   }
+  if (input.discovery !== void 0 && input.discovery !== node.discovery) {
+    changes.push({ field: "discovery", before: node.discovery, after: input.discovery });
+    newDiscovery = input.discovery;
+  }
   if (input.state !== void 0) {
     changes.push({ field: "state", before: node.state, after: input.state });
     newState = input.state;
@@ -433,6 +447,7 @@ function updateNode(input) {
     UPDATE nodes SET
       rev = ?,
      resolved = ?,
+      discovery = ?,
      state = ?,
      summary = ?,
      properties = ?,
@@ -443,6 +458,7 @@ function updateNode(input) {
   `).run(
     newRev,
     newResolved ? 1 : 0,
+    newDiscovery,
     newState !== null ? JSON.stringify(newState) : null,
     newSummary,
     JSON.stringify(newProperties),
@@ -515,4 +531,4 @@ export {
   updateNode,
   getProjectSummary
 };
-//# sourceMappingURL=chunk-
+//# sourceMappingURL=chunk-TWT5GUXW.js.map
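The new column feeds the discovery gate described in the agent prompt. A sketch of flipping it through `updateNode`, which this chunk exports; the field values follow the prompt's `discovery: "done"` convention:

```typescript
import { updateNode } from "./chunk-TWT5GUXW.js";

// Mark discovery complete so graph_plan will accept child nodes.
const node = updateNode({
  node_id: "<task-id>",
  agent: "claude-code",
  discovery: "done",
});
console.log(node.discovery); // "done"
```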
@@ -0,0 +1 @@
|
|
|
1
|
+
{"version":3,"sources":["../src/nodes.ts","../src/db.ts","../src/events.ts","../src/validate.ts"],"sourcesContent":["import { nanoid } from \"nanoid\";\nimport { getDb } from \"./db.js\";\nimport { logEvent } from \"./events.js\";\nimport { EngineError } from \"./validate.js\";\nimport type { Node, NodeRow, Evidence, FieldChange } from \"./types.js\";\n\n// --- Row <-> Node conversion ---\n\nfunction rowToNode(row: NodeRow): Node {\n return {\n id: row.id,\n rev: row.rev,\n parent: row.parent,\n project: row.project,\n summary: row.summary,\n resolved: row.resolved === 1,\n depth: row.depth,\n discovery: row.discovery ?? null,\n state: row.state ? JSON.parse(row.state) : null,\n properties: JSON.parse(row.properties),\n context_links: JSON.parse(row.context_links),\n evidence: JSON.parse(row.evidence),\n created_by: row.created_by,\n created_at: row.created_at,\n updated_at: row.updated_at,\n };\n}\n\n// --- Create ---\n\nexport interface CreateNodeInput {\n parent?: string;\n project: string;\n summary: string;\n discovery?: string | null;\n state?: unknown;\n properties?: Record<string, unknown>;\n context_links?: string[];\n agent: string;\n}\n\nexport function createNode(input: CreateNodeInput): Node {\n const db = getDb();\n const now = new Date().toISOString();\n const id = nanoid();\n\n // [sl:yBBVr4wcgVfWA_w8U8hQo] Compute depth from parent\n let depth = 0;\n if (input.parent) {\n const parentRow = db.prepare(\"SELECT depth FROM nodes WHERE id = ?\").get(input.parent) as { depth: number } | undefined;\n if (parentRow) depth = parentRow.depth + 1;\n }\n\n const node: Node = {\n id,\n rev: 1,\n parent: input.parent ?? null,\n project: input.project,\n summary: input.summary,\n resolved: false,\n depth,\n discovery: input.discovery ?? null,\n state: input.state ?? null,\n properties: input.properties ?? {},\n context_links: input.context_links ?? [],\n evidence: [],\n created_by: input.agent,\n created_at: now,\n updated_at: now,\n };\n\n db.prepare(`\n INSERT INTO nodes (id, rev, parent, project, summary, resolved, depth, discovery, state, properties, context_links, evidence, created_by, created_at, updated_at)\n VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)\n `).run(\n node.id,\n node.rev,\n node.parent,\n node.project,\n node.summary,\n 0,\n node.depth,\n node.discovery,\n node.state !== null ? JSON.stringify(node.state) : null,\n JSON.stringify(node.properties),\n JSON.stringify(node.context_links),\n JSON.stringify(node.evidence),\n node.created_by,\n node.created_at,\n node.updated_at\n );\n\n logEvent(node.id, input.agent, \"created\", [\n { field: \"summary\", before: null, after: node.summary },\n ]);\n\n return node;\n}\n\n// --- Read ---\n\nexport function getNode(id: string): Node | null {\n const db = getDb();\n const row = db.prepare(\"SELECT * FROM nodes WHERE id = ?\").get(id) as\n | NodeRow\n | undefined;\n return row ? rowToNode(row) : null;\n}\n\nexport function getNodeOrThrow(id: string): Node {\n const node = getNode(id);\n if (!node) {\n throw new EngineError(\"node_not_found\", `Node not found: ${id}. 
Verify the ID is correct and the node hasn't been deleted.`);\n }\n return node;\n}\n\nexport function getChildren(parentId: string): Node[] {\n const db = getDb();\n const rows = db\n .prepare(\"SELECT * FROM nodes WHERE parent = ?\")\n .all(parentId) as NodeRow[];\n return rows.map(rowToNode);\n}\n\nexport function getAncestors(nodeId: string): Array<{ id: string; summary: string; resolved: boolean }> {\n const ancestors: Array<{ id: string; summary: string; resolved: boolean }> = [];\n let current = getNode(nodeId);\n\n while (current?.parent) {\n const parent = getNode(current.parent);\n if (!parent) break;\n ancestors.unshift({ id: parent.id, summary: parent.summary, resolved: parent.resolved });\n current = parent;\n }\n\n return ancestors;\n}\n\nexport function getProjectRoot(project: string): Node | null {\n const db = getDb();\n const row = db\n .prepare(\"SELECT * FROM nodes WHERE project = ? AND parent IS NULL\")\n .get(project) as NodeRow | undefined;\n return row ? rowToNode(row) : null;\n}\n\nexport function listProjects(): Array<{\n project: string;\n id: string;\n summary: string;\n total: number;\n resolved: number;\n unresolved: number;\n updated_at: string;\n}> {\n const db = getDb();\n\n const roots = db\n .prepare(\"SELECT * FROM nodes WHERE parent IS NULL\")\n .all() as NodeRow[];\n\n return roots.map((root) => {\n const counts = db\n .prepare(\n `SELECT\n COUNT(*) as total,\n SUM(CASE WHEN resolved = 1 THEN 1 ELSE 0 END) as resolved\n FROM nodes WHERE project = ?`\n )\n .get(root.project) as { total: number; resolved: number };\n\n return {\n project: root.project,\n id: root.id,\n summary: root.summary,\n total: counts.total,\n resolved: counts.resolved,\n unresolved: counts.total - counts.resolved,\n updated_at: root.updated_at,\n };\n });\n}\n\n// --- Update ---\n\nexport interface UpdateNodeInput {\n node_id: string;\n agent: string;\n resolved?: boolean;\n discovery?: string | null;\n state?: unknown;\n summary?: string;\n properties?: Record<string, unknown>;\n add_context_links?: string[];\n remove_context_links?: string[];\n add_evidence?: Array<{ type: string; ref: string }>;\n}\n\nexport function updateNode(input: UpdateNodeInput): Node {\n const db = getDb();\n const node = getNodeOrThrow(input.node_id);\n const changes: FieldChange[] = [];\n const now = new Date().toISOString();\n\n let newResolved = node.resolved;\n let newDiscovery = node.discovery;\n let newState = node.state;\n let newSummary = node.summary;\n let newProperties = { ...node.properties };\n let newContextLinks = [...node.context_links];\n let newEvidence = [...node.evidence];\n\n // [sl:OZ0or-q5TserCEfWUeMVv] Require evidence when resolving\n if (input.resolved === true && !node.resolved) {\n const hasExistingEvidence = node.evidence.length > 0;\n const hasNewEvidence = input.add_evidence && input.add_evidence.length > 0;\n if (!hasExistingEvidence && !hasNewEvidence) {\n throw new EngineError(\n \"evidence_required\",\n `Cannot resolve node ${input.node_id} without evidence. Add at least one add_evidence entry (type: 'git', 'note', 'test', etc.) 
explaining what was done.`\n );\n }\n }\n\n if (input.resolved !== undefined && input.resolved !== node.resolved) {\n changes.push({ field: \"resolved\", before: node.resolved, after: input.resolved });\n newResolved = input.resolved;\n }\n\n if (input.discovery !== undefined && input.discovery !== node.discovery) {\n changes.push({ field: \"discovery\", before: node.discovery, after: input.discovery });\n newDiscovery = input.discovery;\n }\n\n if (input.state !== undefined) {\n changes.push({ field: \"state\", before: node.state, after: input.state });\n newState = input.state;\n }\n\n if (input.summary !== undefined && input.summary !== node.summary) {\n changes.push({ field: \"summary\", before: node.summary, after: input.summary });\n newSummary = input.summary;\n }\n\n if (input.properties) {\n for (const [key, value] of Object.entries(input.properties)) {\n if (value === null) {\n if (key in newProperties) {\n changes.push({ field: `properties.${key}`, before: newProperties[key], after: null });\n delete newProperties[key];\n }\n } else {\n changes.push({ field: `properties.${key}`, before: newProperties[key] ?? null, after: value });\n newProperties[key] = value;\n }\n }\n }\n\n if (input.add_context_links) {\n for (const link of input.add_context_links) {\n if (!newContextLinks.includes(link)) {\n newContextLinks.push(link);\n changes.push({ field: \"context_links\", before: null, after: link });\n }\n }\n }\n\n if (input.remove_context_links) {\n for (const link of input.remove_context_links) {\n const idx = newContextLinks.indexOf(link);\n if (idx !== -1) {\n newContextLinks.splice(idx, 1);\n changes.push({ field: \"context_links\", before: link, after: null });\n }\n }\n }\n\n if (input.add_evidence) {\n for (const ev of input.add_evidence) {\n const evidence: Evidence = {\n type: ev.type,\n ref: ev.ref,\n agent: input.agent,\n timestamp: now,\n };\n newEvidence.push(evidence);\n changes.push({ field: \"evidence\", before: null, after: evidence });\n }\n }\n\n if (changes.length === 0) {\n return node;\n }\n\n const newRev = node.rev + 1;\n\n db.prepare(`\n UPDATE nodes SET\n rev = ?,\n resolved = ?,\n discovery = ?,\n state = ?,\n summary = ?,\n properties = ?,\n context_links = ?,\n evidence = ?,\n updated_at = ?\n WHERE id = ?\n `).run(\n newRev,\n newResolved ? 1 : 0,\n newDiscovery,\n newState !== null ? JSON.stringify(newState) : null,\n newSummary,\n JSON.stringify(newProperties),\n JSON.stringify(newContextLinks),\n JSON.stringify(newEvidence),\n now,\n input.node_id\n );\n\n const action = input.resolved === true ? \"resolved\" : \"updated\";\n logEvent(input.node_id, input.agent, action, changes);\n\n return getNodeOrThrow(input.node_id);\n}\n\n// --- Query helpers ---\n\nexport function getProjectSummary(project: string): {\n total: number;\n resolved: number;\n unresolved: number;\n blocked: number;\n actionable: number;\n} {\n const db = getDb();\n\n const counts = db\n .prepare(\n `SELECT\n COUNT(*) as total,\n SUM(CASE WHEN resolved = 1 THEN 1 ELSE 0 END) as resolved\n FROM nodes WHERE project = ?`\n )\n .get(project) as { total: number; resolved: number };\n\n // Blocked: unresolved nodes that have at least one unresolved dependency\n const blocked = db\n .prepare(\n `SELECT COUNT(DISTINCT n.id) as count\n FROM nodes n\n JOIN edges e ON e.from_node = n.id AND e.type = 'depends_on'\n JOIN nodes dep ON dep.id = e.to_node AND dep.resolved = 0\n WHERE n.project = ? 
AND n.resolved = 0`\n )\n .get(project) as { count: number };\n\n // Actionable: unresolved leaves (no unresolved children) with all deps resolved\n const actionable = db\n .prepare(\n `SELECT COUNT(*) as count FROM nodes n\n WHERE n.project = ? AND n.resolved = 0\n AND NOT EXISTS (\n SELECT 1 FROM nodes child WHERE child.parent = n.id AND child.resolved = 0\n )\n AND NOT EXISTS (\n SELECT 1 FROM edges e\n JOIN nodes dep ON dep.id = e.to_node AND dep.resolved = 0\n WHERE e.from_node = n.id AND e.type = 'depends_on'\n )`\n )\n .get(project) as { count: number };\n\n return {\n total: counts.total,\n resolved: counts.resolved,\n unresolved: counts.total - counts.resolved,\n blocked: blocked.count,\n actionable: actionable.count,\n };\n}\n","import Database from \"better-sqlite3\";\nimport path from \"path\";\n\nlet db: Database.Database;\nlet dbPath: string;\n\nexport function setDbPath(p: string): void {\n dbPath = p;\n}\n\nexport function getDb(): Database.Database {\n if (!db) {\n const resolvedPath = dbPath ?? path.resolve(\"graph.db\");\n db = new Database(resolvedPath);\n db.pragma(\"journal_mode = WAL\");\n db.pragma(\"synchronous = FULL\");\n db.pragma(\"foreign_keys = ON\");\n migrate(db);\n }\n return db;\n}\n\nexport function initDb(p?: string): Database.Database {\n // Close existing db if any (used by tests to reset state)\n if (db) {\n db.close();\n db = undefined!;\n }\n if (p) dbPath = p;\n return getDb();\n}\n\nfunction migrate(db: Database.Database): void {\n db.exec(`\n CREATE TABLE IF NOT EXISTS nodes (\n id TEXT PRIMARY KEY,\n rev INTEGER NOT NULL DEFAULT 1,\n parent TEXT REFERENCES nodes(id),\n project TEXT NOT NULL,\n summary TEXT NOT NULL,\n resolved INTEGER NOT NULL DEFAULT 0,\n depth INTEGER NOT NULL DEFAULT 0,\n state TEXT,\n properties TEXT NOT NULL DEFAULT '{}',\n context_links TEXT NOT NULL DEFAULT '[]',\n evidence TEXT NOT NULL DEFAULT '[]',\n created_by TEXT NOT NULL,\n created_at TEXT NOT NULL,\n updated_at TEXT NOT NULL\n );\n\n CREATE TABLE IF NOT EXISTS edges (\n id TEXT PRIMARY KEY,\n from_node TEXT NOT NULL REFERENCES nodes(id),\n to_node TEXT NOT NULL REFERENCES nodes(id),\n type TEXT NOT NULL,\n created_at TEXT NOT NULL,\n UNIQUE(from_node, to_node, type)\n );\n\n CREATE TABLE IF NOT EXISTS events (\n id TEXT PRIMARY KEY,\n node_id TEXT NOT NULL REFERENCES nodes(id),\n agent TEXT NOT NULL,\n action TEXT NOT NULL,\n changes TEXT NOT NULL,\n timestamp TEXT NOT NULL\n );\n\n CREATE INDEX IF NOT EXISTS idx_nodes_project ON nodes(project);\n CREATE INDEX IF NOT EXISTS idx_nodes_parent ON nodes(parent);\n CREATE INDEX IF NOT EXISTS idx_nodes_resolved ON nodes(project, resolved);\n CREATE INDEX IF NOT EXISTS idx_edges_from ON edges(from_node);\n CREATE INDEX IF NOT EXISTS idx_edges_to ON edges(to_node);\n CREATE INDEX IF NOT EXISTS idx_edges_type ON edges(from_node, type);\n CREATE INDEX IF NOT EXISTS idx_events_node ON events(node_id);\n\n CREATE TABLE IF NOT EXISTS knowledge (\n id TEXT PRIMARY KEY,\n project TEXT NOT NULL,\n key TEXT NOT NULL,\n content TEXT NOT NULL,\n created_by TEXT NOT NULL,\n created_at TEXT NOT NULL,\n updated_at TEXT NOT NULL,\n UNIQUE(project, key)\n );\n\n CREATE INDEX IF NOT EXISTS idx_knowledge_project ON knowledge(project);\n `);\n\n // [sl:yBBVr4wcgVfWA_w8U8hQo] Migration: add depth column if it doesn't exist\n const hasDepth = db.prepare(\n \"SELECT COUNT(*) as cnt FROM pragma_table_info('nodes') WHERE name = 'depth'\"\n ).get() as { cnt: number };\n\n if (hasDepth.cnt === 0) {\n db.exec(\"ALTER TABLE nodes ADD COLUMN depth 
INTEGER NOT NULL DEFAULT 0\");\n // Backfill depths using recursive CTE\n db.exec(`\n WITH RECURSIVE tree(id, depth) AS (\n SELECT id, 0 FROM nodes WHERE parent IS NULL\n UNION ALL\n SELECT n.id, t.depth + 1\n FROM nodes n JOIN tree t ON n.parent = t.id\n )\n UPDATE nodes SET depth = (SELECT depth FROM tree WHERE tree.id = nodes.id)\n `);\n }\n\n // [sl:AOXqUIhpW2-gdMqWATf66] Migration: add discovery column if it doesn't exist\n const hasDiscovery = db.prepare(\n \"SELECT COUNT(*) as cnt FROM pragma_table_info('nodes') WHERE name = 'discovery'\"\n ).get() as { cnt: number };\n\n if (hasDiscovery.cnt === 0) {\n db.exec(\"ALTER TABLE nodes ADD COLUMN discovery TEXT DEFAULT NULL\");\n }\n}\n\nexport function checkpointDb(): void {\n if (db) {\n db.pragma(\"wal_checkpoint(TRUNCATE)\");\n }\n}\n\nexport function closeDb(): void {\n if (db) {\n checkpointDb();\n db.close();\n }\n}\n","import { nanoid } from \"nanoid\";\nimport { getDb } from \"./db.js\";\nimport type { FieldChange, Event } from \"./types.js\";\n\nconst INSERT_EVENT = `\n INSERT INTO events (id, node_id, agent, action, changes, timestamp)\n VALUES (?, ?, ?, ?, ?, ?)\n`;\n\nexport function logEvent(\n nodeId: string,\n agent: string,\n action: string,\n changes: FieldChange[]\n): Event {\n const db = getDb();\n const event: Event = {\n id: nanoid(),\n node_id: nodeId,\n agent,\n action,\n changes,\n timestamp: new Date().toISOString(),\n };\n\n db.prepare(INSERT_EVENT).run(\n event.id,\n event.node_id,\n event.agent,\n event.action,\n JSON.stringify(event.changes),\n event.timestamp\n );\n\n return event;\n}\n\nexport function getEvents(\n nodeId: string,\n limit: number = 20,\n cursor?: string\n): { events: Event[]; next_cursor: string | null } {\n const db = getDb();\n\n let query: string;\n let params: unknown[];\n\n if (cursor) {\n query = `\n SELECT * FROM events\n WHERE node_id = ? AND timestamp < ?\n ORDER BY timestamp DESC\n LIMIT ?\n `;\n params = [nodeId, cursor, limit + 1];\n } else {\n query = `\n SELECT * FROM events\n WHERE node_id = ?\n ORDER BY timestamp DESC\n LIMIT ?\n `;\n params = [nodeId, limit + 1];\n }\n\n const rows = db.prepare(query).all(...params) as Array<{\n id: string;\n node_id: string;\n agent: string;\n action: string;\n changes: string;\n timestamp: string;\n }>;\n\n const hasMore = rows.length > limit;\n const slice = hasMore ? rows.slice(0, limit) : rows;\n\n const events: Event[] = slice.map((row) => ({\n id: row.id,\n node_id: row.node_id,\n agent: row.agent,\n action: row.action,\n changes: JSON.parse(row.changes),\n timestamp: row.timestamp,\n }));\n\n return {\n events,\n next_cursor: hasMore ? 
slice[slice.length - 1].timestamp : null,\n };\n}\n","export class ValidationError extends Error {\n code = \"validation_error\";\n constructor(message: string) {\n super(message);\n this.name = \"ValidationError\";\n }\n}\n\nexport class EngineError extends Error {\n code: string;\n constructor(code: string, message: string) {\n super(message);\n this.name = \"EngineError\";\n this.code = code;\n }\n}\n\nexport function requireString(value: unknown, field: string): string {\n if (typeof value !== \"string\" || value.trim().length === 0) {\n throw new ValidationError(`${field} is required and must be a non-empty string`);\n }\n return value.trim();\n}\n\nexport function optionalString(value: unknown, field: string): string | undefined {\n if (value === undefined || value === null) return undefined;\n if (typeof value !== \"string\") {\n throw new ValidationError(`${field} must be a string`);\n }\n return value;\n}\n\nexport function requireArray<T>(value: unknown, field: string): T[] {\n if (!Array.isArray(value) || value.length === 0) {\n throw new ValidationError(`${field} is required and must be a non-empty array`);\n }\n return value as T[];\n}\n\nexport function optionalArray<T>(value: unknown, field: string): T[] | undefined {\n if (value === undefined || value === null) return undefined;\n if (!Array.isArray(value)) {\n throw new ValidationError(`${field} must be an array`);\n }\n return value as T[];\n}\n\nexport function optionalNumber(value: unknown, field: string, min?: number, max?: number): number | undefined {\n if (value === undefined || value === null) return undefined;\n if (typeof value !== \"number\" || isNaN(value)) {\n throw new ValidationError(`${field} must be a number`);\n }\n if (min !== undefined && value < min) {\n throw new ValidationError(`${field} must be >= ${min}`);\n }\n if (max !== undefined && value > max) {\n throw new ValidationError(`${field} must be <= ${max}`);\n }\n return value;\n}\n\nexport function optionalBoolean(value: unknown, field: string): boolean | undefined {\n if (value === undefined || value === null) return undefined;\n if (typeof value !== \"boolean\") {\n throw new ValidationError(`${field} must be a boolean`);\n }\n return value;\n}\n\nexport function requireObject(value: unknown, field: string): Record<string, unknown> {\n if (value === null || typeof value !== \"object\" || Array.isArray(value)) {\n throw new ValidationError(`${field} is required and must be an object`);\n }\n return value as Record<string, 
unknown>;\n}\n"],"mappings":";;;AAAA,SAAS,UAAAA,eAAc;;;ACAvB,OAAO,cAAc;AACrB,OAAO,UAAU;AAEjB,IAAI;AACJ,IAAI;AAEG,SAAS,UAAU,GAAiB;AACzC,WAAS;AACX;AAEO,SAAS,QAA2B;AACzC,MAAI,CAAC,IAAI;AACP,UAAM,eAAe,UAAU,KAAK,QAAQ,UAAU;AACtD,SAAK,IAAI,SAAS,YAAY;AAC9B,OAAG,OAAO,oBAAoB;AAC9B,OAAG,OAAO,oBAAoB;AAC9B,OAAG,OAAO,mBAAmB;AAC7B,YAAQ,EAAE;AAAA,EACZ;AACA,SAAO;AACT;AAYA,SAAS,QAAQC,KAA6B;AAC5C,EAAAA,IAAG,KAAK;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,GAwDP;AAGD,QAAM,WAAWA,IAAG;AAAA,IAClB;AAAA,EACF,EAAE,IAAI;AAEN,MAAI,SAAS,QAAQ,GAAG;AACtB,IAAAA,IAAG,KAAK,+DAA+D;AAEvE,IAAAA,IAAG,KAAK;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,KAQP;AAAA,EACH;AAGA,QAAM,eAAeA,IAAG;AAAA,IACtB;AAAA,EACF,EAAE,IAAI;AAEN,MAAI,aAAa,QAAQ,GAAG;AAC1B,IAAAA,IAAG,KAAK,0DAA0D;AAAA,EACpE;AACF;AAEO,SAAS,eAAqB;AACnC,MAAI,IAAI;AACN,OAAG,OAAO,0BAA0B;AAAA,EACtC;AACF;AAEO,SAAS,UAAgB;AAC9B,MAAI,IAAI;AACN,iBAAa;AACb,OAAG,MAAM;AAAA,EACX;AACF;;;ACnIA,SAAS,cAAc;AAIvB,IAAM,eAAe;AAAA;AAAA;AAAA;AAKd,SAAS,SACd,QACA,OACA,QACA,SACO;AACP,QAAMC,MAAK,MAAM;AACjB,QAAM,QAAe;AAAA,IACnB,IAAI,OAAO;AAAA,IACX,SAAS;AAAA,IACT;AAAA,IACA;AAAA,IACA;AAAA,IACA,YAAW,oBAAI,KAAK,GAAE,YAAY;AAAA,EACpC;AAEA,EAAAA,IAAG,QAAQ,YAAY,EAAE;AAAA,IACvB,MAAM;AAAA,IACN,MAAM;AAAA,IACN,MAAM;AAAA,IACN,MAAM;AAAA,IACN,KAAK,UAAU,MAAM,OAAO;AAAA,IAC5B,MAAM;AAAA,EACR;AAEA,SAAO;AACT;AAEO,SAAS,UACd,QACA,QAAgB,IAChB,QACiD;AACjD,QAAMA,MAAK,MAAM;AAEjB,MAAI;AACJ,MAAI;AAEJ,MAAI,QAAQ;AACV,YAAQ;AAAA;AAAA;AAAA;AAAA;AAAA;AAMR,aAAS,CAAC,QAAQ,QAAQ,QAAQ,CAAC;AAAA,EACrC,OAAO;AACL,YAAQ;AAAA;AAAA;AAAA;AAAA;AAAA;AAMR,aAAS,CAAC,QAAQ,QAAQ,CAAC;AAAA,EAC7B;AAEA,QAAM,OAAOA,IAAG,QAAQ,KAAK,EAAE,IAAI,GAAG,MAAM;AAS5C,QAAM,UAAU,KAAK,SAAS;AAC9B,QAAM,QAAQ,UAAU,KAAK,MAAM,GAAG,KAAK,IAAI;AAE/C,QAAM,SAAkB,MAAM,IAAI,CAAC,SAAS;AAAA,IAC1C,IAAI,IAAI;AAAA,IACR,SAAS,IAAI;AAAA,IACb,OAAO,IAAI;AAAA,IACX,QAAQ,IAAI;AAAA,IACZ,SAAS,KAAK,MAAM,IAAI,OAAO;AAAA,IAC/B,WAAW,IAAI;AAAA,EACjB,EAAE;AAEF,SAAO;AAAA,IACL;AAAA,IACA,aAAa,UAAU,MAAM,MAAM,SAAS,CAAC,EAAE,YAAY;AAAA,EAC7D;AACF;;;AC1FO,IAAM,kBAAN,cAA8B,MAAM;AAAA,EACzC,OAAO;AAAA,EACP,YAAY,SAAiB;AAC3B,UAAM,OAAO;AACb,SAAK,OAAO;AAAA,EACd;AACF;AAEO,IAAM,cAAN,cAA0B,MAAM;AAAA,EACrC;AAAA,EACA,YAAY,MAAc,SAAiB;AACzC,UAAM,OAAO;AACb,SAAK,OAAO;AACZ,SAAK,OAAO;AAAA,EACd;AACF;AAEO,SAAS,cAAc,OAAgB,OAAuB;AACnE,MAAI,OAAO,UAAU,YAAY,MAAM,KAAK,EAAE,WAAW,GAAG;AAC1D,UAAM,IAAI,gBAAgB,GAAG,KAAK,6CAA6C;AAAA,EACjF;AACA,SAAO,MAAM,KAAK;AACpB;AAEO,SAAS,eAAe,OAAgB,OAAmC;AAChF,MAAI,UAAU,UAAa,UAAU,KAAM,QAAO;AAClD,MAAI,OAAO,UAAU,UAAU;AAC7B,UAAM,IAAI,gBAAgB,GAAG,KAAK,mBAAmB;AAAA,EACvD;AACA,SAAO;AACT;AAEO,SAAS,aAAgB,OAAgB,OAAoB;AAClE,MAAI,CAAC,MAAM,QAAQ,KAAK,KAAK,MAAM,WAAW,GAAG;AAC/C,UAAM,IAAI,gBAAgB,GAAG,KAAK,4CAA4C;AAAA,EAChF;AACA,SAAO;AACT;AAUO,SAAS,eAAe,OAAgB,OAAe,KAAc,KAAkC;AAC5G,MAAI,UAAU,UAAa,UAAU,KAAM,QAAO;AAClD,MAAI,OAAO,UAAU,YAAY,MAAM,KAAK,GAAG;AAC7C,UAAM,IAAI,gBAAgB,GAAG,KAAK,mBAAmB;AAAA,EACvD;AACA,MAAI,QAAQ,UAAa,QAAQ,KAAK;AACpC,UAAM,IAAI,gBAAgB,GAAG,KAAK,eAAe,GAAG,EAAE;AAAA,EACxD;AACA,MAAI,QAAQ,UAAa,QAAQ,KAAK;AACpC,UAAM,IAAI,gBAAgB,GAAG,KAAK,eAAe,GAAG,EAAE;AAAA,EACxD;AACA,SAAO;AACT;AAEO,SAAS,gBAAgB,OAAgB,OAAoC;AAClF,MAAI,UAAU,UAAa,UAAU,KAAM,QAAO;AAClD,MAAI,OAAO,UAAU,WAAW;AAC9B,UAAM,IAAI,gBAAgB,GAAG,KAAK,oBAAoB;AAAA,EACxD;AACA,SAAO;AACT;;;AH3DA,SAAS,UAAU,KAAoB;AACrC,SAAO;AAAA,IACL,IAAI,IAAI;AAAA,IACR,KAAK,IAAI;AAAA,IACT,QAAQ,IAAI;AAAA,IACZ,SAAS,IAAI;AAAA,IACb,SAAS,IAAI;AAAA,IACb,UAAU,IAAI,aAAa;AAA
A,IAC3B,OAAO,IAAI;AAAA,IACX,WAAW,IAAI,aAAa;AAAA,IAC5B,OAAO,IAAI,QAAQ,KAAK,MAAM,IAAI,KAAK,IAAI;AAAA,IAC3C,YAAY,KAAK,MAAM,IAAI,UAAU;AAAA,IACrC,eAAe,KAAK,MAAM,IAAI,aAAa;AAAA,IAC3C,UAAU,KAAK,MAAM,IAAI,QAAQ;AAAA,IACjC,YAAY,IAAI;AAAA,IAChB,YAAY,IAAI;AAAA,IAChB,YAAY,IAAI;AAAA,EAClB;AACF;AAeO,SAAS,WAAW,OAA8B;AACvD,QAAMC,MAAK,MAAM;AACjB,QAAM,OAAM,oBAAI,KAAK,GAAE,YAAY;AACnC,QAAM,KAAKC,QAAO;AAGlB,MAAI,QAAQ;AACZ,MAAI,MAAM,QAAQ;AAChB,UAAM,YAAYD,IAAG,QAAQ,sCAAsC,EAAE,IAAI,MAAM,MAAM;AACrF,QAAI,UAAW,SAAQ,UAAU,QAAQ;AAAA,EAC3C;AAEA,QAAM,OAAa;AAAA,IACjB;AAAA,IACA,KAAK;AAAA,IACL,QAAQ,MAAM,UAAU;AAAA,IACxB,SAAS,MAAM;AAAA,IACf,SAAS,MAAM;AAAA,IACf,UAAU;AAAA,IACV;AAAA,IACA,WAAW,MAAM,aAAa;AAAA,IAC9B,OAAO,MAAM,SAAS;AAAA,IACtB,YAAY,MAAM,cAAc,CAAC;AAAA,IACjC,eAAe,MAAM,iBAAiB,CAAC;AAAA,IACvC,UAAU,CAAC;AAAA,IACX,YAAY,MAAM;AAAA,IAClB,YAAY;AAAA,IACZ,YAAY;AAAA,EACd;AAEA,EAAAA,IAAG,QAAQ;AAAA;AAAA;AAAA,GAGV,EAAE;AAAA,IACD,KAAK;AAAA,IACL,KAAK;AAAA,IACL,KAAK;AAAA,IACL,KAAK;AAAA,IACL,KAAK;AAAA,IACL;AAAA,IACA,KAAK;AAAA,IACL,KAAK;AAAA,IACL,KAAK,UAAU,OAAO,KAAK,UAAU,KAAK,KAAK,IAAI;AAAA,IACnD,KAAK,UAAU,KAAK,UAAU;AAAA,IAC9B,KAAK,UAAU,KAAK,aAAa;AAAA,IACjC,KAAK,UAAU,KAAK,QAAQ;AAAA,IAC5B,KAAK;AAAA,IACL,KAAK;AAAA,IACL,KAAK;AAAA,EACP;AAEA,WAAS,KAAK,IAAI,MAAM,OAAO,WAAW;AAAA,IACxC,EAAE,OAAO,WAAW,QAAQ,MAAM,OAAO,KAAK,QAAQ;AAAA,EACxD,CAAC;AAED,SAAO;AACT;AAIO,SAAS,QAAQ,IAAyB;AAC/C,QAAMA,MAAK,MAAM;AACjB,QAAM,MAAMA,IAAG,QAAQ,kCAAkC,EAAE,IAAI,EAAE;AAGjE,SAAO,MAAM,UAAU,GAAG,IAAI;AAChC;AAEO,SAAS,eAAe,IAAkB;AAC/C,QAAM,OAAO,QAAQ,EAAE;AACvB,MAAI,CAAC,MAAM;AACT,UAAM,IAAI,YAAY,kBAAkB,mBAAmB,EAAE,8DAA8D;AAAA,EAC7H;AACA,SAAO;AACT;AAEO,SAAS,YAAY,UAA0B;AACpD,QAAMA,MAAK,MAAM;AACjB,QAAM,OAAOA,IACV,QAAQ,sCAAsC,EAC9C,IAAI,QAAQ;AACf,SAAO,KAAK,IAAI,SAAS;AAC3B;AAEO,SAAS,aAAa,QAA2E;AACtG,QAAM,YAAuE,CAAC;AAC9E,MAAI,UAAU,QAAQ,MAAM;AAE5B,SAAO,SAAS,QAAQ;AACtB,UAAM,SAAS,QAAQ,QAAQ,MAAM;AACrC,QAAI,CAAC,OAAQ;AACb,cAAU,QAAQ,EAAE,IAAI,OAAO,IAAI,SAAS,OAAO,SAAS,UAAU,OAAO,SAAS,CAAC;AACvF,cAAU;AAAA,EACZ;AAEA,SAAO;AACT;AAEO,SAAS,eAAe,SAA8B;AAC3D,QAAMA,MAAK,MAAM;AACjB,QAAM,MAAMA,IACT,QAAQ,0DAA0D,EAClE,IAAI,OAAO;AACd,SAAO,MAAM,UAAU,GAAG,IAAI;AAChC;AAEO,SAAS,eAQb;AACD,QAAMA,MAAK,MAAM;AAEjB,QAAM,QAAQA,IACX,QAAQ,0CAA0C,EAClD,IAAI;AAEP,SAAO,MAAM,IAAI,CAAC,SAAS;AACzB,UAAM,SAASA,IACZ;AAAA,MACC;AAAA;AAAA;AAAA;AAAA,IAIF,EACC,IAAI,KAAK,OAAO;AAEnB,WAAO;AAAA,MACL,SAAS,KAAK;AAAA,MACd,IAAI,KAAK;AAAA,MACT,SAAS,KAAK;AAAA,MACd,OAAO,OAAO;AAAA,MACd,UAAU,OAAO;AAAA,MACjB,YAAY,OAAO,QAAQ,OAAO;AAAA,MAClC,YAAY,KAAK;AAAA,IACnB;AAAA,EACF,CAAC;AACH;AAiBO,SAAS,WAAW,OAA8B;AACvD,QAAMA,MAAK,MAAM;AACjB,QAAM,OAAO,eAAe,MAAM,OAAO;AACzC,QAAM,UAAyB,CAAC;AAChC,QAAM,OAAM,oBAAI,KAAK,GAAE,YAAY;AAEnC,MAAI,cAAc,KAAK;AACvB,MAAI,eAAe,KAAK;AACxB,MAAI,WAAW,KAAK;AACpB,MAAI,aAAa,KAAK;AACtB,MAAI,gBAAgB,EAAE,GAAG,KAAK,WAAW;AACzC,MAAI,kBAAkB,CAAC,GAAG,KAAK,aAAa;AAC5C,MAAI,cAAc,CAAC,GAAG,KAAK,QAAQ;AAGnC,MAAI,MAAM,aAAa,QAAQ,CAAC,KAAK,UAAU;AAC7C,UAAM,sBAAsB,KAAK,SAAS,SAAS;AACnD,UAAM,iBAAiB,MAAM,gBAAgB,MAAM,aAAa,SAAS;AACzE,QAAI,CAAC,uBAAuB,CAAC,gBAAgB;AAC3C,YAAM,IAAI;AAAA,QACR;AAAA,QACA,uBAAuB,MAAM,OAAO;AAAA,MACtC;AAAA,IACF;AAAA,EACF;AAEA,MAAI,MAAM,aAAa,UAAa,MAAM,aAAa,KAAK,UAAU;AACpE,YAAQ,KAAK,EAAE,OAAO,YAAY,QAAQ,KAAK,UAAU,OAAO,MAAM,SAAS,CAAC;AAChF,kBAAc,MAAM;AAAA,EACtB;AAEA,MAAI,MAAM,cAAc,UAAa,MAAM,cAAc,KAAK,WAAW;AACvE,YAAQ,KAAK,EAAE,OAAO,aAAa,QAAQ,KAAK,WAAW,OAAO,MAAM,UAAU,CAAC;AACnF,mBAAe,MAAM;AAAA,EACvB;AAEA,MAAI,MAAM,UAAU,QAAW;AAC7B,YAAQ,KAAK,EAAE,OAAO,SAAS,QAAQ,KAAK,OAAO,OAAO,MAAM,MAAM,CAAC;AACvE,eAAW,MAAM;AAAA,EACnB;AAEA,MAAI,MAAM,YAAY,UAAa,MAAM,YAAY,KAAK,SAAS;AACjE,YAAQ,KAAK,EAAE,OAAO,WAAW,QAAQ,KAAK,SAAS,OAAO,MAAM,QAAQ,CAAC;AAC7E,iBAAa,MAAM;AAAA,EACrB;AAEA,MAAI,MAAM,YAAY;AACpB,eAAW,CAAC,KAAK,KAAK,KAAK,OAAO,QAAQ,MAAM,UAAU,GAAG;AAC3D,UAAI,UAAU,MAAM;AAClB,YAAI,OAAO,eAAe;AACxB,kBAAQ,KAAK,EAAE,OAAO,cAAc,GAAG,IAAI,QAAQ,cAAc,GAAG,GAAG,OAAO,KAAK,CAAC;AACpF,iBAAO,cAAc,GAAG;AAAA,QAC1B;AAAA,MACF,OAAO;AACL,gBAAQ,KAAK,EAAE,OAAO,cAAc,GAAG,IAAI,QAAQ,cAAc,GAAG,KAAK,MAAM,OAAO,MAAM,CAAC;AAC7F,sBAAc,GAAG,IAAI;AAAA,MACvB;AAAA,IACF;AAAA,EACF;AAEA,MAAI,MAAM,mBAAmB;AAC3B,eAAW,QAAQ,MAAM,mBAAmB;AAC1C,UAAI,CAAC,gBAAgB,SAAS,IAAI,GAAG;AACnC,wBAAgB,KAAK,IAAI;AACzB,gBAAQ,KAAK,EAAE,OAAO,iBAAiB,QAAQ,MAAM,OAAO,KAAK,CAAC;AAAA,MACpE;AAAA,IACF;AAAA,EACF;AAEA,MAAI,MAAM,sBAAsB;AAC9B,eAAW,QAAQ,MAAM,sBAAsB;AAC7C,YAAM,MAAM,gBAAgB,QAAQ,IAAI;AACxC,UAAI,QAAQ,IAAI;AACd,wBAAgB,OAAO,KAAK,CAAC;AAC7B,gBAAQ,KAAK,EAAE,OAAO,iBAAiB,QAAQ,MAAM,OAAO,KAAK,CAAC;AAAA,MACpE;AAAA,IACF;AAAA,EACF;AAEA,MAAI,MAAM,cAAc;AACtB,eAAW,MAAM,MAAM,cAAc;AACnC,YAAM,WAAqB;AAAA,QACzB,MAAM,GAAG;AAAA,QACT,KAAK,GAAG;AAAA,QACR,OAAO,MAAM;AAAA,QACb,WAAW;AAAA,MACb;AACA,kBAAY,KAAK,QAAQ;AACzB,cAAQ,KAAK,EAAE,OAAO,YAAY,QAAQ,MAAM,OAAO,SAAS,CAAC;AAAA,IACnE;AAAA,EACF;AAEA,MAAI,QAAQ,WAAW,GAAG;AACxB,WAAO;AAAA,EACT;AAEA,QAAM,SAAS,KAAK,MAAM;AAE1B,EAAAA,IAAG,QAAQ;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,GAYV,EAAE;AAAA,IACD;AAAA,IACA,cAAc,IAAI;AAAA,IAClB;AAAA,IACA,aAAa,OAAO,KAAK,UAAU,QAAQ,IAAI;AAAA,IAC/C;AAAA,IACA,KAAK,UAAU,aAAa;AAAA,IAC5B,KAAK,UAAU,eAAe;AAAA,IAC9B,KAAK,UAAU,WAAW;AAAA,IAC1B;AAAA,IACA,MAAM;AAAA,EACR;AAEA,QAAM,SAAS,MAAM,aAAa,OAAO,aAAa;AACtD,WAAS,MAAM,SAAS,MAAM,OAAO,QAAQ,OAAO;AAEpD,SAAO,eAAe,MAAM,OAAO;AACrC;AAIO,SAAS,kBAAkB,SAMhC;AACA,QAAMA,MAAK,MAAM;AAEjB,QAAM,SAASA,IACZ;AAAA,IACC;AAAA;AAAA;AAAA;AAAA,EAIF,EACC,IAAI,OAAO;AAGd,QAAM,UAAUA,IACb;AAAA,IACC;AAAA;AAAA;AAAA;AAAA;AAAA,EAKF,EACC,IAAI,OAAO;AAGd,QAAM,aAAaA,IAChB;AAAA,IACC;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA;AAAA,EAUF,EACC,IAAI,OAAO;AAEd,SAAO;AAAA,IACL,OAAO,OAAO;AAAA,IACd,UAAU,OAAO;AAAA,IACjB,YAAY,OAAO,QAAQ,OAAO;AAAA,IAClC,SAAS,QAAQ;AAAA,IACjB,YAAY,WAAW;AAAA,EACzB;AACF;","names":["nanoid","db","db","db","nanoid"]}
package/dist/index.js
CHANGED
@@ -6,10 +6,10 @@ if (args[0] === "activate") {
   const { activate } = await import("./activate-DSDTR2EJ.js");
   activate(args[1]);
 } else if (args[0] === "init") {
-  const { init } = await import("./init-JYP72OZQ.js");
+  const { init } = await import("./init-PPICS5PC.js");
   init();
 } else {
-  const { startServer } = await import("./server-4T5WYSAE.js");
+  const { startServer } = await import("./server-6TNHZCUW.js");
   startServer().catch((error) => {
     console.error("Failed to start graph:", error);
     process.exit(1);
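
The only change to this entry point is the pair of content hashes in its lazy `import()` calls; the dispatch itself is untouched. For orientation, the three invocations it handles map out as below (the shape of the `activate` argument isn't shown in this diff, so the placeholder is an assumption):

```bash
npx -y @graph-tl/graph activate <arg>   # loads activate-DSDTR2EJ.js, runs activate(args[1])
npx -y @graph-tl/graph init             # loads init-PPICS5PC.js, runs init()
npx -y @graph-tl/graph                  # default: loads server-6TNHZCUW.js, starts the MCP server
```
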
package/dist/init-PPICS5PC.js
ADDED
@@ -0,0 +1,61 @@
+#!/usr/bin/env node
+import {
+  handleAgentConfig
+} from "./chunk-ILTJI4ZN.js";
+
+// src/init.ts
+import { readFileSync, writeFileSync, existsSync, mkdirSync } from "fs";
+import { join, dirname } from "path";
+var MCP_CONFIG = {
+  command: "npx",
+  args: ["-y", "@graph-tl/graph"],
+  env: {
+    GRAPH_AGENT: "claude-code"
+  }
+};
+function init() {
+  const cwd = process.cwd();
+  let wrote = false;
+  const configPath = join(cwd, ".mcp.json");
+  if (existsSync(configPath)) {
+    try {
+      const config = JSON.parse(readFileSync(configPath, "utf8"));
+      if (config.mcpServers?.graph) {
+        console.log("\u2713 .mcp.json \u2014 graph already configured");
+      } else {
+        config.mcpServers = config.mcpServers ?? {};
+        config.mcpServers.graph = MCP_CONFIG;
+        writeFileSync(configPath, JSON.stringify(config, null, 2) + "\n", "utf8");
+        console.log("\u2713 .mcp.json \u2014 added graph server");
+        wrote = true;
+      }
+    } catch {
+      console.error(`\u2717 .mcp.json exists but is not valid JSON \u2014 skipping`);
+    }
+  } else {
+    const config = { mcpServers: { graph: MCP_CONFIG } };
+    writeFileSync(configPath, JSON.stringify(config, null, 2) + "\n", "utf8");
+    console.log("\u2713 .mcp.json \u2014 created with graph server");
+    wrote = true;
+  }
+  const agentPath = join(cwd, ".claude", "agents", "graph.md");
+  if (existsSync(agentPath)) {
+    console.log("\u2713 .claude/agents/graph.md \u2014 already exists");
+  } else {
+    const { agent_file } = handleAgentConfig();
+    mkdirSync(dirname(agentPath), { recursive: true });
+    writeFileSync(agentPath, agent_file, "utf8");
+    console.log("\u2713 .claude/agents/graph.md \u2014 created graph workflow agent");
+    wrote = true;
+  }
+  console.log("");
+  if (wrote) {
+    console.log("Graph is ready. Restart Claude Code to load the MCP server.");
+  } else {
+    console.log("Graph is already set up \u2014 nothing to do.");
+  }
+}
+export {
+  init
+};
+//# sourceMappingURL=init-PPICS5PC.js.map
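
For reference, on a project with no existing `.mcp.json`, the `init()` above writes the file below (derived from `MCP_CONFIG` and the `JSON.stringify(config, null, 2)` call; when a valid config already exists, the same `graph` entry is merged into its `mcpServers` instead):

```json
{
  "mcpServers": {
    "graph": {
      "command": "npx",
      "args": [
        "-y",
        "@graph-tl/graph"
      ],
      "env": {
        "GRAPH_AGENT": "claude-code"
      }
    }
  }
}
```
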
package/dist/init-PPICS5PC.js.map
ADDED
@@ -0,0 +1 @@
{"version":3,"sources":["../src/init.ts"],"sourcesContent":["import { readFileSync, writeFileSync, existsSync, mkdirSync } from \"fs\";\nimport { join, dirname } from \"path\";\nimport { handleAgentConfig } from \"./tools/agent-config.js\";\n\n// [sl:hy8oXisWnrZN1BfkonUqd] npx @graph-tl/graph init — zero friction onboarding\n\nconst MCP_CONFIG = {\n command: \"npx\",\n args: [\"-y\", \"@graph-tl/graph\"],\n env: {\n GRAPH_AGENT: \"claude-code\",\n },\n};\n\nexport function init(): void {\n const cwd = process.cwd();\n let wrote = false;\n\n // 1. Write .mcp.json\n const configPath = join(cwd, \".mcp.json\");\n if (existsSync(configPath)) {\n try {\n const config = JSON.parse(readFileSync(configPath, \"utf8\"));\n if (config.mcpServers?.graph) {\n console.log(\"✓ .mcp.json — graph already configured\");\n } else {\n config.mcpServers = config.mcpServers ?? {};\n config.mcpServers.graph = MCP_CONFIG;\n writeFileSync(configPath, JSON.stringify(config, null, 2) + \"\\n\", \"utf8\");\n console.log(\"✓ .mcp.json — added graph server\");\n wrote = true;\n }\n } catch {\n console.error(`✗ .mcp.json exists but is not valid JSON — skipping`);\n }\n } else {\n const config = { mcpServers: { graph: MCP_CONFIG } };\n writeFileSync(configPath, JSON.stringify(config, null, 2) + \"\\n\", \"utf8\");\n console.log(\"✓ .mcp.json — created with graph server\");\n wrote = true;\n }\n\n // 2. Write .claude/agents/graph.md\n const agentPath = join(cwd, \".claude\", \"agents\", \"graph.md\");\n if (existsSync(agentPath)) {\n console.log(\"✓ .claude/agents/graph.md — already exists\");\n } else {\n const { agent_file } = handleAgentConfig();\n mkdirSync(dirname(agentPath), { recursive: true });\n writeFileSync(agentPath, agent_file, \"utf8\");\n console.log(\"✓ .claude/agents/graph.md — created graph workflow agent\");\n wrote = true;\n }\n\n // 3. Summary\n console.log(\"\");\n if (wrote) {\n console.log(\"Graph is ready. Restart Claude Code to load the MCP server.\");\n } else {\n console.log(\"Graph is already set up — nothing to do.\");\n }\n}\n"],"mappings":";;;;;;AAAA,SAAS,cAAc,eAAe,YAAY,iBAAiB;AACnE,SAAS,MAAM,eAAe;AAK9B,IAAM,aAAa;AAAA,EACjB,SAAS;AAAA,EACT,MAAM,CAAC,MAAM,iBAAiB;AAAA,EAC9B,KAAK;AAAA,IACH,aAAa;AAAA,EACf;AACF;AAEO,SAAS,OAAa;AAC3B,QAAM,MAAM,QAAQ,IAAI;AACxB,MAAI,QAAQ;AAGZ,QAAM,aAAa,KAAK,KAAK,WAAW;AACxC,MAAI,WAAW,UAAU,GAAG;AAC1B,QAAI;AACF,YAAM,SAAS,KAAK,MAAM,aAAa,YAAY,MAAM,CAAC;AAC1D,UAAI,OAAO,YAAY,OAAO;AAC5B,gBAAQ,IAAI,kDAAwC;AAAA,MACtD,OAAO;AACL,eAAO,aAAa,OAAO,cAAc,CAAC;AAC1C,eAAO,WAAW,QAAQ;AAC1B,sBAAc,YAAY,KAAK,UAAU,QAAQ,MAAM,CAAC,IAAI,MAAM,MAAM;AACxE,gBAAQ,IAAI,4CAAkC;AAC9C,gBAAQ;AAAA,MACV;AAAA,IACF,QAAQ;AACN,cAAQ,MAAM,+DAAqD;AAAA,IACrE;AAAA,EACF,OAAO;AACL,UAAM,SAAS,EAAE,YAAY,EAAE,OAAO,WAAW,EAAE;AACnD,kBAAc,YAAY,KAAK,UAAU,QAAQ,MAAM,CAAC,IAAI,MAAM,MAAM;AACxE,YAAQ,IAAI,mDAAyC;AACrD,YAAQ;AAAA,EACV;AAGA,QAAM,YAAY,KAAK,KAAK,WAAW,UAAU,UAAU;AAC3D,MAAI,WAAW,SAAS,GAAG;AACzB,YAAQ,IAAI,sDAA4C;AAAA,EAC1D,OAAO;AACL,UAAM,EAAE,WAAW,IAAI,kBAAkB;AACzC,cAAU,QAAQ,SAAS,GAAG,EAAE,WAAW,KAAK,CAAC;AACjD,kBAAc,WAAW,YAAY,MAAM;AAC3C,YAAQ,IAAI,oEAA0D;AACtE,YAAQ;AAAA,EACV;AAGA,UAAQ,IAAI,EAAE;AACd,MAAI,OAAO;AACT,YAAQ,IAAI,6DAA6D;AAAA,EAC3E,OAAO;AACL,YAAQ,IAAI,+CAA0C;AAAA,EACxD;AACF;","names":[]}
package/dist/{nodes-7UZATPPU.js → nodes-4OJBNDHG.js}
CHANGED
@@ -9,7 +9,7 @@ import {
   getProjectSummary,
   listProjects,
   updateNode
-} from "./chunk-NWCUIW6D.js";
+} from "./chunk-TWT5GUXW.js";
 export {
   createNode,
   getAncestors,
@@ -21,4 +21,4 @@ export {
   listProjects,
   updateNode
 };
-//# sourceMappingURL=nodes-7UZATPPU.js.map
+//# sourceMappingURL=nodes-4OJBNDHG.js.map
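
This module is a thin re-export shim, so its whole diff is the renamed chunk reference plus the matching sourcemap comment. It also shows why one small source change renames several dist files at once: the chunks are content-hashed, and a new hash in a shared chunk rewrites every importer, which changes the importers' hashes in turn. A rough sketch of that cascade (illustrative only, not the bundler's actual hashing scheme):

```js
// Illustrative content-hash cascade; not the real bundler algorithm.
import { createHash } from "node:crypto";

const hashOf = (text) =>
  createHash("sha256").update(text).digest("hex").slice(0, 8).toUpperCase();

// Editing the shared chunk changes its hash...
const chunkName = `chunk-${hashOf("/* revised storage code */")}.js`;

// ...which rewrites this shim, so the shim's own hash changes too.
const shim = `export { createNode, updateNode } from "./${chunkName}";`;
const shimName = `nodes-${hashOf(shim)}.js`;

console.log(chunkName, shimName);
```
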