hyperstack-mcp 1.0.0

package/README.md ADDED
@@ -0,0 +1,150 @@
# šŸƒ HyperStack MCP Server

**Cloud memory for AI agents. One key. Zero dependencies. No LLM costs.**

Give Claude, Cursor, VS Code, or any MCP client persistent memory in 30 seconds.

## Why HyperStack over alternatives?

| | HyperStack | Mem0 | Letta |
|---|---|---|---|
| **Env vars needed** | 1 (`API_KEY`) | 6+ (API key, OpenAI key, DB URL, LLM provider, model, embedding) | 3+ (URL, password, Node.js) |
| **LLM cost per memory op** | $0 | ~$0.002 (embedding call) | ~$0.002 |
| **Docker required** | No | Yes (self-hosted) | Yes |
| **Setup time** | 30 seconds | 5-15 minutes | 10+ minutes |
| **Token savings** | 94% (~350 vs ~6,000) | Up to 80% | Varies |
| **Works offline** | No (hosted cloud API) | Depends on config | No (needs a running server) |

## Install

### Claude Desktop / Claude Code

Add to `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "hyperstack": {
      "command": "npx",
      "args": ["-y", "hyperstack-mcp"],
      "env": {
        "HYPERSTACK_API_KEY": "hs_your_key_here"
      }
    }
  }
}
```

### Cursor

Add to `.cursor/mcp.json`:

```json
{
  "mcpServers": {
    "hyperstack": {
      "command": "npx",
      "args": ["-y", "hyperstack-mcp"],
      "env": {
        "HYPERSTACK_API_KEY": "hs_your_key_here"
      }
    }
  }
}
```

### VS Code (GitHub Copilot)

Add to `.vscode/mcp.json`:

```json
{
  "servers": {
    "hyperstack": {
      "command": "npx",
      "args": ["-y", "hyperstack-mcp"],
      "env": {
        "HYPERSTACK_API_KEY": "hs_your_key_here"
      }
    }
  }
}
```

### Windsurf

Add to `~/.windsurf/mcp.json`:

```json
{
  "mcpServers": {
    "hyperstack": {
      "command": "npx",
      "args": ["-y", "hyperstack-mcp"],
      "env": {
        "HYPERSTACK_API_KEY": "hs_your_key_here"
      }
    }
  }
}
```

**That's it.** Get your free API key at [cascadeai.dev](https://cascadeai.dev).

## Tools

| Tool | Description |
|------|-------------|
| `store_memory` | Save or update a memory card (upserts by slug) |
| `search_memory` | Search cards by keyword, title, or content |
| `list_memories` | List all cards, optionally filtered by stack |
| `delete_memory` | Remove a card by slug |
| `memory_stats` | Usage summary, savings estimate, plan info |

### Stacks (categories)

Organize memories into stacks for better retrieval:

- `projects` — Tech stacks, repos, architecture
- `people` — Teammates, contacts, roles
- `decisions` — Why you chose X over Y
- `preferences` — Editor, tools, coding style
- `workflows` — Deploy steps, review processes
- `general` — Everything else

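As an illustration, a `store_memory` call for the `projects` stack carries arguments like these (the field names match the tool schema; the values are made up):

```json
{
  "slug": "project-webapp",
  "title": "Webapp stack",
  "body": "Next.js frontend on Vercel; Postgres via Prisma. Deploys from main.",
  "stack": "projects",
  "keywords": ["nextjs", "vercel", "prisma"]
}
```
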
## How it works

Your agent stores small knowledge cards (~350 tokens each) instead of stuffing entire conversation histories into the prompt (~6,000 tokens). Before responding, it searches for relevant cards. Result: **94% less token usage**, which means real money saved on API bills.

```
Without HyperStack: $270/mo (6,000 tokens Ɨ every message)
With HyperStack:    $16/mo  (350 tokens Ɨ only relevant cards)
```

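The arithmetic behind the 94% figure can be sketched in a few lines (the ~350 and ~6,000 token counts are the assumptions stated above, not measured values):

```javascript
// Sketch of the savings estimate: compare retrieved-card tokens against
// the assumed ~6,000-token full-history baseline.
function estimateSavings(cardsRetrieved, tokensPerCard = 350, baseline = 6000) {
  const withCards = cardsRetrieved * tokensPerCard;
  return Math.round((1 - withCards / baseline) * 100); // percent saved
}

console.log(estimateSavings(1)); // one relevant card instead of full history → 94
```

Retrieving more cards per message shrinks the savings proportionally, which is why the tools nudge the agent to fetch only relevant cards.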
## Environment Variables

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `HYPERSTACK_API_KEY` | āœ… | — | Your API key (starts with `hs_`) |
| `HYPERSTACK_WORKSPACE` | — | `default` | Workspace to use |
| `HYPERSTACK_API_URL` | — | `https://hyperstack-cloud.vercel.app` | API base URL |

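`HYPERSTACK_API_URL` and `HYPERSTACK_WORKSPACE` are combined into request URLs; this is a sketch of the construction the bundled server performs (no network access involved):

```javascript
// Build an API URL the way the MCP server does: base + /api/<endpoint>,
// with the workspace and any extra params in the query string.
function buildUrl(base, endpoint, workspace, params = {}) {
  const url = new URL(`${base}/api/${endpoint}`);
  url.searchParams.set("workspace", workspace);
  for (const [k, v] of Object.entries(params)) url.searchParams.set(k, v);
  return url.toString();
}

console.log(buildUrl("https://hyperstack-cloud.vercel.app", "search", "default", { q: "auth" }));
// https://hyperstack-cloud.vercel.app/api/search?workspace=default&q=auth
```

Every request also carries the API key in an `X-API-Key` header.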
## Get a free API key

1. Go to [cascadeai.dev](https://cascadeai.dev)
2. Enter your email → create account
3. Copy your `hs_` key from the dashboard
4. Paste it into the config above

**50 cards free forever. No credit card. Upgrade to Pro ($15/mo) for unlimited.**

## Links

- 🌐 [Website](https://cascadeai.dev)
- šŸ“– [API Docs](https://cascadeai.dev)
- šŸ’¬ [Discord](https://discord.gg/tdnXaV6e)
- šŸ› [Issues](https://github.com/cascadeai/hyperstack-mcp/issues)

## License

MIT © [CascadeAI](https://cascadeai.dev)
@@ -0,0 +1,30 @@
#!/usr/bin/env node
/**
 * HyperStack MCP Server
 *
 * Cloud memory for AI agents via Model Context Protocol.
 *
 * SETUP (30 seconds):
 *   1. Get an API key at https://cascadeai.dev
 *   2. Add to your MCP client config:
 *      {
 *        "mcpServers": {
 *          "hyperstack": {
 *            "command": "npx",
 *            "args": ["-y", "hyperstack-mcp"],
 *            "env": { "HYPERSTACK_API_KEY": "hs_your_key" }
 *          }
 *        }
 *      }
 *   3. Done. Your agent now has persistent cloud memory.
 *
 * WHY HYPERSTACK:
 *   - 1 env var (vs Mem0's 6+)
 *   - No LLM costs for memory ops (Mem0 charges per embedding)
 *   - No Docker, no database, no OpenAI key needed
 *   - 94% token savings — ~350 tokens vs ~6,000 per message
 *
 * @see https://cascadeai.dev
 * @see https://hyperstack-cloud.vercel.app
 */
export {};
package/build/index.js ADDED
@@ -0,0 +1,274 @@
#!/usr/bin/env node
/**
 * HyperStack MCP Server
 *
 * Cloud memory for AI agents via Model Context Protocol.
 *
 * SETUP (30 seconds):
 *   1. Get an API key at https://cascadeai.dev
 *   2. Add to your MCP client config:
 *      {
 *        "mcpServers": {
 *          "hyperstack": {
 *            "command": "npx",
 *            "args": ["-y", "hyperstack-mcp"],
 *            "env": { "HYPERSTACK_API_KEY": "hs_your_key" }
 *          }
 *        }
 *      }
 *   3. Done. Your agent now has persistent cloud memory.
 *
 * WHY HYPERSTACK:
 *   - 1 env var (vs Mem0's 6+)
 *   - No LLM costs for memory ops (Mem0 charges per embedding)
 *   - No Docker, no database, no OpenAI key needed
 *   - 94% token savings — ~350 tokens vs ~6,000 per message
 *
 * @see https://cascadeai.dev
 * @see https://hyperstack-cloud.vercel.app
 */
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
// ═══════════════════════════════════════════════════════
// CONFIG
// ═══════════════════════════════════════════════════════
const API_KEY = process.env.HYPERSTACK_API_KEY;
const WORKSPACE = process.env.HYPERSTACK_WORKSPACE || "default";
const API_BASE = process.env.HYPERSTACK_API_URL || "https://hyperstack-cloud.vercel.app";
if (!API_KEY) {
    console.error(`
╔══════════════════════════════════════════════════════╗
ā•‘  HyperStack MCP — Missing API Key                    ā•‘
╠══════════════════════════════════════════════════════╣
ā•‘                                                      ā•‘
ā•‘  Set HYPERSTACK_API_KEY in your MCP config:          ā•‘
ā•‘                                                      ā•‘
ā•‘    "env": {                                          ā•‘
ā•‘      "HYPERSTACK_API_KEY": "hs_your_key_here"        ā•‘
ā•‘    }                                                 ā•‘
ā•‘                                                      ā•‘
ā•‘  Get your free key at: https://cascadeai.dev         ā•‘
ā•‘  50 cards free. No credit card.                      ā•‘
ā•‘                                                      ā•‘
ā•šā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•ā•
`);
    process.exit(1);
}
async function api(endpoint, opts = {}) {
    const url = new URL(`${API_BASE}/api/${endpoint}`);
    url.searchParams.set("workspace", WORKSPACE);
    if (opts.params) {
        for (const [k, v] of Object.entries(opts.params)) {
            url.searchParams.set(k, v);
        }
    }
    const headers = {
        "X-API-Key": API_KEY,
        "Content-Type": "application/json",
    };
    const res = await fetch(url.toString(), {
        method: opts.method || "GET",
        headers,
        body: opts.body ? JSON.stringify(opts.body) : undefined,
    });
    // Parse defensively: error responses are not guaranteed to be JSON.
    const data = await res.json().catch(() => ({}));
    if (!res.ok) {
        throw new Error(data.error || `API error ${res.status}`);
    }
    return data;
}
// ═══════════════════════════════════════════════════════
// MCP SERVER
// ═══════════════════════════════════════════════════════
const server = new McpServer({
    name: "hyperstack",
    version: "1.0.0",
});
// ───────────────────────────────────────────────────────
// TOOL: store_memory
// ───────────────────────────────────────────────────────
server.tool("store_memory", `Store or update a memory card in HyperStack. Use this whenever you learn something worth remembering — a user preference, project detail, decision, person info, or workflow step. If a card with the same slug exists, it will be updated (upsert).

Stacks (categories): projects, people, decisions, preferences, workflows, general.
Cost: This saves money — instead of stuffing 6,000 tokens into every prompt, you store knowledge as ~350-token cards and only retrieve what's relevant.`, {
    slug: z.string().describe("Unique ID for this memory (kebab-case, e.g. 'project-webapp', 'person-alice')"),
    title: z.string().describe("Short human-readable title"),
    body: z.string().describe("The knowledge to remember (markdown OK, keep concise — a few lines)"),
    stack: z.enum(["projects", "people", "decisions", "preferences", "workflows", "general"])
        .default("general")
        .describe("Category for this memory"),
    keywords: z.array(z.string()).default([])
        .describe("Tags for search (e.g. ['react', 'vercel', 'auth'])"),
}, async ({ slug, title, body, stack, keywords }) => {
    try {
        const result = await api("cards", {
            method: "POST",
            body: { slug, title, body, stack, keywords },
        });
        const action = result.created ? "Created" : "Updated";
        return {
            content: [{
                type: "text",
                text: `āœ… ${action} card "${title}" (${slug}) in ${stack} stack. ${result.card?.tokens || 0} tokens stored.`,
            }],
        };
    }
    catch (e) {
        return {
            content: [{ type: "text", text: `āŒ Failed to store: ${e.message}` }],
            isError: true,
        };
    }
});
// ───────────────────────────────────────────────────────
// TOOL: search_memory
// ───────────────────────────────────────────────────────
server.tool("search_memory", `Search your HyperStack memory for relevant cards. Use this BEFORE answering questions — check if you already know something about the topic. Searches titles, keywords, and body content.

Always search memory at the start of a conversation or when a new topic comes up. This is how you stay context-aware without wasting tokens.`, {
    query: z.string().describe("Search term (e.g. 'auth', 'alice', 'deployment')"),
}, async ({ query }) => {
    try {
        const result = await api("search", { params: { q: query } });
        const cards = result.cards || [];
        if (cards.length === 0) {
            return {
                content: [{ type: "text", text: `No memories found for "${query}". You may want to store_memory if you learn something new.` }],
            };
        }
        const formatted = cards.map((c) => `šŸ“„ **${c.title}** (${c.stack})\n` +
            `   slug: ${c.slug} | keywords: ${(c.keywords || []).join(", ")}\n` +
            `   ${c.body}`).join("\n\n");
        return {
            content: [{
                type: "text",
                text: `Found ${cards.length} memor${cards.length === 1 ? "y" : "ies"} for "${query}":\n\n${formatted}`,
            }],
        };
    }
    catch (e) {
        return {
            content: [{ type: "text", text: `āŒ Search failed: ${e.message}` }],
            isError: true,
        };
    }
});
// ───────────────────────────────────────────────────────
// TOOL: list_memories
// ───────────────────────────────────────────────────────
server.tool("list_memories", `List all memory cards in your workspace, optionally filtered by stack. Use this to get an overview of everything stored, find stale cards, or check what categories exist.`, {
    stack: z.enum(["projects", "people", "decisions", "preferences", "workflows", "general", "all"])
        .default("all")
        .describe("Filter by stack category, or 'all' for everything"),
}, async ({ stack }) => {
    try {
        const result = await api("cards");
        let cards = result.cards || [];
        if (stack !== "all") {
            cards = cards.filter((c) => c.stack === stack);
        }
        if (cards.length === 0) {
            return {
                content: [{ type: "text", text: stack === "all"
                    ? "No memories stored yet. Use store_memory to create your first card."
                    : `No memories in the "${stack}" stack.` }],
            };
        }
        // Group by stack
        const grouped = {};
        for (const c of cards) {
            (grouped[c.stack] = grouped[c.stack] || []).push(c);
        }
        let text = `šŸ“š **${cards.length} memor${cards.length === 1 ? "y" : "ies"}** in workspace "${WORKSPACE}"`;
        text += ` (${result.plan || "FREE"} plan, limit: ${result.limit || 50})\n\n`;
        for (const [stackName, stackCards] of Object.entries(grouped)) {
            text += `### ${stackName} (${stackCards.length})\n`;
            for (const c of stackCards) {
                const kw = (c.keywords || []).slice(0, 4).join(", ");
                text += `- **${c.title}** \`${c.slug}\`${kw ? ` [${kw}]` : ""} — ${c.tokens || 0}t\n`;
            }
            text += "\n";
        }
        const totalTokens = cards.reduce((sum, c) => sum + (c.tokens || 0), 0);
        text += `---\nTotal: ${totalTokens} tokens stored. Without HyperStack this would cost ~${cards.length * 6000} tokens per message.`;
        return { content: [{ type: "text", text }] };
    }
    catch (e) {
        return {
            content: [{ type: "text", text: `āŒ Failed to list: ${e.message}` }],
            isError: true,
        };
    }
});
// ───────────────────────────────────────────────────────
// TOOL: delete_memory
// ───────────────────────────────────────────────────────
server.tool("delete_memory", `Delete a memory card by its slug. Use this for outdated, incorrect, or duplicate memories. Keeping your memory clean improves search quality and saves tokens.`, {
    slug: z.string().describe("The slug of the card to delete"),
}, async ({ slug }) => {
    try {
        await api("cards", { method: "DELETE", params: { id: slug } });
        return {
            content: [{ type: "text", text: `šŸ—‘ļø Deleted card "${slug}".` }],
        };
    }
    catch (e) {
        return {
            content: [{ type: "text", text: `āŒ Failed to delete: ${e.message}` }],
            isError: true,
        };
    }
});
// ───────────────────────────────────────────────────────
// TOOL: memory_stats
// ───────────────────────────────────────────────────────
server.tool("memory_stats", `Get a summary of your memory usage — card count, tokens stored, savings estimate, and breakdown by stack. Useful for understanding how much you're saving and whether you need to upgrade.`, {}, async () => {
    try {
        const result = await api("cards");
        const cards = result.cards || [];
        const totalTokens = cards.reduce((sum, c) => sum + (c.tokens || 0), 0);
        const withoutHS = cards.length * 6000;
        const savings = withoutHS > 0 ? Math.round((1 - totalTokens / withoutHS) * 100) : 0;
        // Stack breakdown
        const stacks = {};
        for (const c of cards) {
            stacks[c.stack] = (stacks[c.stack] || 0) + 1;
        }
        let text = `šŸ“Š **HyperStack Memory Stats**\n\n`;
        text += `Cards: **${cards.length}** / ${result.limit || 50}\n`;
        text += `Plan: **${result.plan || "FREE"}**\n`;
        text += `Workspace: ${WORKSPACE}\n`;
        text += `Tokens stored: ${totalTokens.toLocaleString()}\n`;
        text += `Without HyperStack: ~${withoutHS.toLocaleString()} tokens/message\n`;
        text += `**Savings: ${savings}%**\n\n`;
        if (Object.keys(stacks).length > 0) {
            text += `**By stack:**\n`;
            for (const [s, count] of Object.entries(stacks).sort((a, b) => b[1] - a[1])) {
                text += `  ${s}: ${count} card${count !== 1 ? "s" : ""}\n`;
            }
        }
        if (result.plan === "FREE" && cards.length >= (result.limit || 50) * 0.8) {
            text += `\nāš ļø You're at ${cards.length}/${result.limit || 50} cards. Consider upgrading to Pro for unlimited cards at https://cascadeai.dev`;
        }
        return { content: [{ type: "text", text }] };
    }
    catch (e) {
        return {
            content: [{ type: "text", text: `āŒ Failed to get stats: ${e.message}` }],
            isError: true,
        };
    }
});
// ═══════════════════════════════════════════════════════
// START
// ═══════════════════════════════════════════════════════
async function main() {
    const transport = new StdioServerTransport();
    await server.connect(transport);
    console.error("šŸƒ HyperStack MCP server running (workspace: %s)", WORKSPACE);
}
main().catch((err) => {
    console.error("Fatal:", err);
    process.exit(1);
});
package/package.json ADDED
@@ -0,0 +1,41 @@
{
  "name": "hyperstack-mcp",
  "version": "1.0.0",
  "description": "MCP server for HyperStack — cloud memory for AI agents. One API key, zero dependencies, no LLM costs.",
  "type": "module",
  "bin": {
    "hyperstack-mcp": "./build/index.js"
  },
  "files": [
    "build"
  ],
  "scripts": {
    "build": "tsc && node -e \"require('fs').chmodSync('build/index.js', '755')\"",
    "prepare": "npm run build",
    "dev": "tsc --watch"
  },
  "keywords": [
    "mcp",
    "model-context-protocol",
    "ai-memory",
    "hyperstack",
    "claude",
    "cursor",
    "agent-memory",
    "llm"
  ],
  "author": "CascadeAI <deeqyaqub@gmail.com>",
  "license": "MIT",
  "homepage": "https://cascadeai.dev",
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.12.1",
    "zod": "^3.25.0"
  },
  "devDependencies": {
    "@types/node": "^20.11.24",
    "typescript": "^5.3.3"
  },
  "engines": {
    "node": ">=18.0.0"
  }
}