@kadoa/mcp 0.1.1 → 0.1.3

Files changed (3):

1. package/README.md +124 -16
2. package/dist/index.js +6 -7
3. package/package.json +7 -8
package/README.md CHANGED
@@ -1,20 +1,16 @@
  # Kadoa MCP Server
 
- Use [Kadoa](https://kadoa.com) from Claude Desktop, Cursor, Claude Code, and other MCP clients.
+ Use [Kadoa](https://kadoa.com) from Claude Code, Claude Desktop, Cursor, Codex, Gemini CLI, and other MCP clients.
 
  ## Quick Start
 
  ### Claude Code
 
  ```bash
- claude mcp add --transport stdio kadoa -- npx -y @kadoa/mcp
+ claude mcp add --transport stdio -e KADOA_API_KEY=tk-your_api_key kadoa -- npx -y @kadoa/mcp
  ```
 
- Set your API key:
-
- ```bash
- export KADOA_API_KEY=tk-your_api_key
- ```
+ Add `-s user` to enable for all projects. If you have the [Kadoa CLI](https://www.npmjs.com/package/@kadoa/cli) installed, you can skip the `-e` flag — just run `kadoa login` and the MCP will use your saved key automatically.
 
  ### Claude Desktop
 
@@ -25,7 +21,7 @@ Add to `~/.config/Claude/claude_desktop_config.json`:
  "mcpServers": {
  "kadoa": {
  "command": "npx",
- "args": ["-y", "kadoa-mcp"],
+ "args": ["-y", "@kadoa/mcp"],
  "env": {
  "KADOA_API_KEY": "tk-your_api_key"
  }
@@ -45,7 +41,46 @@ Add to `.cursor/mcp.json`:
  "mcpServers": {
  "kadoa": {
  "command": "npx",
- "args": ["-y", "kadoa-mcp"],
+ "args": ["-y", "@kadoa/mcp"],
+ "env": {
+ "KADOA_API_KEY": "tk-your_api_key"
+ }
+ }
+ }
+ }
+ ```
+
+ ### Codex
+
+ ```bash
+ codex mcp add kadoa -- npx -y @kadoa/mcp
+ ```
+
+ Or add to `~/.codex/config.toml`:
+
+ ```toml
+ [mcp_servers.kadoa]
+ command = "npx"
+ args = ["-y", "@kadoa/mcp"]
+
+ [mcp_servers.kadoa.env]
+ KADOA_API_KEY = "tk-your_api_key"
+ ```
+
+ ### Gemini CLI
+
+ ```bash
+ gemini mcp add -t stdio kadoa npx -- -y @kadoa/mcp
+ ```
+
+ Or add to `~/.gemini/settings.json`:
+
+ ```json
+ {
+ "mcpServers": {
+ "kadoa": {
+ "command": "npx",
+ "args": ["-y", "@kadoa/mcp"],
  "env": {
  "KADOA_API_KEY": "tk-your_api_key"
  }
@@ -73,17 +108,67 @@ Get your API key from [kadoa.com/settings](https://kadoa.com/settings).
 
  ## Usage Examples
 
- Ask your AI assistant:
+ Once the MCP server is configured, you can manage the full workflow lifecycle through natural conversation. Here are a few common operations shown as Claude Code sessions.
+
+ ### Create and run a workflow
+
+ ```
+ > You: Create a workflow to extract product names, prices, and ratings
+ from https://example-shop.com/products
+
+ Claude calls create_workflow and returns the workflow ID, proposed
+ navigation steps, and data schema for your review.
 
- - "List my Kadoa workflows"
- - "Create a workflow to extract product prices from example.com"
- - "Run workflow abc123 and show me the results"
- - "Update the schema for workflow abc123 to include a rating field"
+ > You: The schema looks good. Approve it and kick off a run.
+
+ Claude calls approve_workflow to activate the workflow, then
+ run_workflow to start extraction.
+
+ > You: Is the run done? Show me the results.
+
+ Claude checks the run status with get_workflow, then calls fetch_data
+ to retrieve the extracted records and display them as a table.
+ ```
+
+ ### Update a workflow and re-run
+
+ ```
+ > You: List my workflows.
+
+ Claude calls list_workflows and shows all workflows with their
+ current status (active, paused, draft, etc.).
+
+ > You: Update wf_abc123 — add an "availability" field to the schema
+ and rename "cost" to "price".
+
+ Claude calls update_workflow with the new schema, confirms the
+ changes, and shows the updated field list.
+
+ > You: Run it again with the new schema.
+
+ Claude calls run_workflow and waits for completion, then fetches
+ the latest data with fetch_data so you can verify the changes.
+ ```
+
+ ### Monitor and clean up
+
+ ```
+ > You: Show me all active workflows and their last run results.
+
+ Claude calls list_workflows, filters to active ones, then calls
+ fetch_data for each to summarize the latest extraction results.
+
+ > You: Delete the ones that haven't produced data in the last week.
+
+ Claude identifies stale workflows from the results and calls
+ delete_workflow for each, confirming before proceeding.
+ ```
 
  ## Troubleshooting
 
- **"KADOA_API_KEY environment variable required"**
- - Make sure `KADOA_API_KEY` is set in your MCP config or environment
+ **"No API key found"**
+ - Run `kadoa login` (requires `npm i -g @kadoa/cli`), or
+ - Set `KADOA_API_KEY` in your MCP config or environment
  - API keys start with `tk-`
 
  **Claude says "I don't have access to Kadoa"**
@@ -99,6 +184,29 @@ bun run test # Run tests
  bun run build # Build for distribution
  ```
 
+ ### Connecting to local services
+
+ To develop and test against a local Kadoa backend (instead of the production API), point the MCP at your local `public-api` service using the `KADOA_PUBLIC_API_URI` environment variable.
+
+ **Prerequisites:** the `public-api` service must be running locally (default port `12380`). You also need a local API key — check your backend seed data or API key table.
+
+ **Run the MCP server locally:**
+
+ ```bash
+ KADOA_PUBLIC_API_URI=http://localhost:12380 KADOA_API_KEY=tk-your_local_api_key bun src/index.ts
+ ```
+
+ **Add as a local MCP in Claude Code** (alongside the remote one):
+
+ ```bash
+ claude mcp add --transport stdio \
+ -e KADOA_PUBLIC_API_URI=http://localhost:12380 \
+ -e KADOA_API_KEY=tk-your_local_api_key \
+ kadoa-local -- bun /path/to/kadoa-mcp/src/index.ts
+ ```
+
+ This registers a `kadoa-local` server that coexists with the production `kadoa` server, so you can use both without conflicts (`mcp__kadoa__*` for prod, `mcp__kadoa-local__*` for local).
+
  ## License
 
  MIT
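The new "Connecting to local services" section hinges on the server honoring `KADOA_PUBLIC_API_URI`. As a sketch of how such an override is typically resolved (the default URL, function name, and trailing-slash handling here are assumptions for illustration, not the package's actual code):

```typescript
// Hypothetical sketch of env-based API base-URL resolution: prefer
// KADOA_PUBLIC_API_URI, fall back to an assumed production endpoint,
// and strip trailing slashes so later path joins stay predictable.
function resolveApiBase(env: Record<string, string | undefined>): string {
  const raw = env.KADOA_PUBLIC_API_URI ?? "https://api.kadoa.com"; // hypothetical default
  return raw.replace(/\/+$/, "");
}

console.log(resolveApiBase({ KADOA_PUBLIC_API_URI: "http://localhost:12380/" }));
// → http://localhost:12380
```

With this shape, the README's `kadoa-local` registration works purely through environment variables, with no code changes between local and production runs.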
package/dist/index.js CHANGED
@@ -34250,7 +34250,7 @@ class KadoaMcpServer {
  tools: [
  {
  name: "create_workflow",
- description: "Create a new agentic navigation workflow. If entity and schema are provided, they guide the extraction. Otherwise, the AI agent auto-detects the schema from the page based on the prompt.",
+ description: "Create a new agentic navigation workflow. If entity and schema are provided, they guide the extraction. Otherwise, the AI agent auto-detects the schema from the page based on the prompt. The workflow runs asynchronously and may take several minutes. Do NOT poll or sleep-wait for completion. Instead, return the workflow ID to the user and let them check back later with get_workflow or fetch_data.",
  inputSchema: {
  type: "object",
  properties: {
@@ -34297,7 +34297,7 @@ class KadoaMcpServer {
  ]
  }
  },
- required: ["name"]
+ required: ["name", "example"]
  },
  description: "Extraction schema fields. If omitted, the AI agent auto-detects the schema."
  }
@@ -34340,7 +34340,7 @@ class KadoaMcpServer {
  },
  {
  name: "run_workflow",
- description: "Run a workflow to extract fresh data",
+ description: "Run a workflow to extract fresh data. The run is asynchronous and may take several minutes. Do NOT poll or sleep-wait for completion. Return the workflow ID to the user and let them check status with get_workflow or fetch results later with fetch_data.",
  inputSchema: {
  type: "object",
  properties: {
@@ -34359,7 +34359,7 @@ class KadoaMcpServer {
  },
  {
  name: "fetch_data",
- description: "Get extracted data from a workflow",
+ description: "Get extracted data from a workflow. Data is only available after the workflow run has completed (displayState is no longer RUNNING). Do NOT poll or sleep-wait for completion.",
  inputSchema: {
  type: "object",
  properties: {
@@ -34460,7 +34460,7 @@ class KadoaMcpServer {
  isRequired: { type: "boolean" },
  isUnique: { type: "boolean" }
  },
- required: ["name"]
+ required: ["name", "example"]
  },
  description: "New extraction schema fields"
  },
@@ -34662,8 +34662,7 @@ class KadoaMcpServer {
  console.error("Connected to Kadoa API");
  }
  }
- var isMainModule = typeof Bun !== "undefined" && import.meta.url === Bun.main || import.meta.url === `file://${process.argv[1]}`;
- if (isMainModule) {
+ if (!process.env.VITEST && !process.env.BUN_TEST) {
  new KadoaMcpServer().run().catch((error2) => {
  console.error("Fatal error:", error2);
  process.exit(1);
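The last hunk swaps the `isMainModule` heuristic for a plain environment check: comparing `import.meta.url` against `Bun.main` or `process.argv[1]` gets brittle once the file is bundled and launched through wrapper shims such as `npx`, so the server now always auto-starts unless a test runner's marker variable is present. A minimal sketch of the new pattern (the `main` body below is a hypothetical stand-in for `KadoaMcpServer.run()`):

```typescript
// Auto-start guard matching the hunk above: start unless VITEST or
// BUN_TEST is set, so test files can import the module without the
// server connecting as a side effect.
function shouldAutoStart(env: Record<string, string | undefined>): boolean {
  return !env.VITEST && !env.BUN_TEST;
}

// Hypothetical stand-in for KadoaMcpServer.run().
async function main(): Promise<void> {
  console.error("Connected to Kadoa API");
}

if (shouldAutoStart(process.env)) {
  main().catch((error) => {
    console.error("Fatal error:", error);
    process.exit(1);
  });
}
```

The trade-off is that the guard now depends on the test runner (or, as in this package's scripts, the npm script) exporting the marker variable, rather than detecting the entry module automatically.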
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@kadoa/mcp",
- "version": "0.1.1",
+ "version": "0.1.3",
  "description": "Kadoa MCP Server — manage workflows from Claude Desktop, Cursor, and other MCP clients",
  "type": "module",
  "main": "dist/index.js",
@@ -18,8 +18,10 @@
  "dev": "bun src/index.ts",
  "build": "bun build src/index.ts --outdir=dist --target=node && node -e \"const f='dist/index.js';require('fs').writeFileSync(f,require('fs').readFileSync(f,'utf8').replace('#!/usr/bin/env bun','#!/usr/bin/env node'))\"",
  "check-types": "tsc --noEmit",
- "test": "bun test",
- "test:unit": "bun test tests/unit --timeout=120000",
+ "test": "BUN_TEST=1 bun test",
+ "test:unit": "BUN_TEST=1 bun test tests/unit --timeout=120000",
+ "test:e2e": "BUN_TEST=1 bun test tests/e2e --timeout=600000",
+ "test:eval": "BUN_TEST=1 bun test tests/evals --timeout=300000",
  "prepublishOnly": "bun run check-types && bun run test:unit && bun run build"
  },
  "dependencies": {
@@ -27,14 +29,11 @@
  "@modelcontextprotocol/sdk": "^1.26.0"
  },
  "devDependencies": {
+ "@anthropic-ai/sdk": "^0.78.0",
  "@types/node": "^25.0.3",
  "bun-types": "^1.3.3",
  "typescript": "^5.9.3"
  },
  "keywords": ["kadoa", "mcp", "model-context-protocol", "web-scraping", "data-extraction"],
- "license": "MIT",
- "repository": {
- "type": "git",
- "url": "git+https://github.com/kadoa-org/kadoa-mcp.git"
- }
+ "license": "MIT"
  }
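Two notes on the scripts. The new `BUN_TEST=1` prefix pairs with the entry-guard change in `dist/index.js`: it ensures the marker variable is set whenever the bundle is imported under `bun test`, so the server does not auto-start during tests. Separately, the unchanged `build` script ends in a dense `node -e` one-liner that just rewrites the shebang Bun emits (`#!/usr/bin/env bun`) to one the published, Node-run bin can use. A self-contained sketch of that same rewrite, run against a temp file instead of `dist/index.js`:

```typescript
// Expanded form of the build script's `node -e` step. We recreate a
// fake bundle in a temp dir so the sketch does not depend on
// dist/index.js existing.
import { mkdtempSync, readFileSync, writeFileSync } from "fs";
import { tmpdir } from "os";
import { join } from "path";

const f = join(mkdtempSync(join(tmpdir(), "kadoa-mcp-")), "index.js");
writeFileSync(f, "#!/usr/bin/env bun\nconsole.log('stub');\n");

// Same replace() the package.json build script performs on dist/index.js:
// swap the Bun shebang for a Node one after bundling.
writeFileSync(
  f,
  readFileSync(f, "utf8").replace("#!/usr/bin/env bun", "#!/usr/bin/env node"),
);
```

`String.prototype.replace` with a string pattern touches only the first occurrence, which is exactly what a shebang rewrite needs.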