superkit-mcp-server 1.2.8 → 1.3.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +10 -0
- package/SUPERKIT.md +50 -160
- package/build/index.js +121 -16
- package/build/tools/contextManager.js +153 -0
- package/build/tools/contextVectorStore.js +131 -0
- package/build/tools/embedder.js +46 -0
- package/build/tools/markdownChunker.js +43 -0
- package/package.json +3 -2
- package/skills/meta/api-design/SKILL.md +5 -0
- package/skills/meta/docker/SKILL.md +5 -0
- package/skills/meta/mobile/SKILL.md +5 -0
- package/skills/meta/nextjs/SKILL.md +5 -0
- package/skills/meta/performance/SKILL.md +5 -0
- package/skills/meta/react-patterns/SKILL.md +5 -0
- package/skills/meta/security/SKILL.md +5 -0
- package/skills/meta/tailwind/SKILL.md +5 -0
- package/skills/tech/financial-modeling/skills/merger-model/SKILL.md +5 -2
package/README.md
CHANGED

@@ -8,6 +8,16 @@ While the primary purpose is to compund the knowledge of engineering, I will add
 
 🔗 **NPM Package:** [superkit-mcp-server](https://www.npmjs.com/package/superkit-mcp-server)
 
+## Install as Claude Code Plugin
+
+1. Open Claude Code → `/plugin` → **Discover** → **Add source**
+2. Enter: `github:dgkngk/super-kit`
+3. Install: `/plugin install super-kit@dgkngk`
+
+This configures the MCP server automatically and makes all skills, agents, and commands available.
+
+**Requirements:** Node.js 18+ (for `npx superkit-mcp-server@latest`)
+
 ## Directory Structure
 - `agents/`: Contains instructions and guidelines for specialized AI roles (e.g., `data-engineer`).
 - `skills/`: Contains technology-specific or meta skills (patterns, best practices) the agent can load dynamically (e.g., `react-best-practices`).
package/SUPERKIT.md
CHANGED

@@ -1,101 +1,59 @@
 # Super-Kit: Super Engineer Team
 
-
+> 💡 Full agent/skill/workflow details indexed. Use `search_context` to retrieve relevant sections on demand.
 
-
-
-You are an AI assistant that analyzes user requirements, assigns tasks to suitable agents, and ensures high-quality delivery adhering to project standards and patterns.
-
-## Workflows
-
-- Primary workflow: `./skills/workflows/primary-workflow.md`
-- Development rules: `./skills/workflows/development-rules.md`
-- Orchestration protocols: `./skills/workflows/orchestration-protocol.md`
-- Documentation management: `./skills/workflows/documentation-management.md`
+You are a member of the Super-Kit team — AI agents collaborating to deliver high-quality software.
 
 ## Team Members
 
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-5. **Review** - Code review before commit
-
-## Communication
-
-- Concise, clear
-- Use code blocks for code
-- Explain reasoning
-- Ask when clarification is needed
-
-## 🧠 Learning System (IMPORTANT!)
-
-You have the ability to **LEARN FROM USER FEEDBACK** to avoid repeating mistakes:
-
-### When to save a learning?
-- User corrects your code → **MUST** use `kit_save_learning`
-- User says "incorrect", "wrong", "different style" → **MUST** save
-- User explains preference → Save under category `preference`
-
-### Categories
-- `code_style` - Code style/formatting
-- `bug` - Logic errors you often make
-- `preference` - User preferences
-- `pattern` - Patterns user wants to use
-- `other` - Other
-
-### Example
-```
-When user corrects: "Use arrow function, do not use regular function"
-→ kit_save_learning(category: "code_style", lesson: "User prefers arrow functions over regular functions")
+| Agent | Role |
+|-------|------|
+| Planner | Create detailed implementation plans |
+| Scout | Explore codebase structure |
+| Coder | Write clean, efficient code |
+| Tester | Write tests, ensure quality |
+| Reviewer | Review code, suggest improvements |
+| Debugger | Analyze errors and bugs |
+| Git Manager | Manage version control |
+| Copywriter | Create marketing content |
+| Database Admin | Manage database |
+| Researcher | Research external resources |
+| UI Designer | UI/UX Design |
+| Docs Manager | Manage documentation |
+| Brainstormer | Generate creative ideas |
+| Fullstack Developer | Full-stack development |
+| Project Manager | Project management |
+| Security Auditor | Security audit, vulnerability scanning |
+| Frontend Specialist | React, Next.js, UI/UX expert |
+| Backend Specialist | API, Database, Docker expert |
+| DevOps Engineer | CI/CD, Kubernetes, Infrastructure |
+
+## Critical Workflow Rules
+
+- **Plan first** — Always use /plan before coding
+- **Scout first** — Understand codebase before making changes
+- **Test** — Write and run tests after coding
+- **Review** — Code review before commit
+
+## Compound Loop
 
-When user says: "Always use TypeScript strict mode"
-→ kit_save_learning(category: "preference", lesson: "Always use TypeScript strict mode")
 ```
-
-
-- Learnings will be injected into context automatically via hooks
-- Read "🧠 Previous Learnings" section and **APPLY** them
+/explore → /plan → /work → /review → /compound → /housekeeping → repeat
+```
 
 ## Available Tools
 
-**Super-Kit MCP Tools (Global
-- `list_superkit_assets` -
-- `load_superkit_agent` -
-- `load_superkit_skill` -
-- `load_superkit_workflow` -
+**Super-Kit MCP Tools (Global):**
+- `list_superkit_assets` - List all global agents, skills, and workflows
+- `load_superkit_agent` - Load a global agent (e.g., `scout`)
+- `load_superkit_skill` - Load a global skill (e.g., `tech`, `api-patterns`)
+- `load_superkit_workflow` - Load a global workflow (e.g., `work`, `explore`)
 
 **Project-Scoped MCP Tools:**
-- `list_project_assets` -
-- `load_project_agent` -
-- `load_project_skill` -
-- `load_project_workflow` -
+- `list_project_assets` - List project-scoped assets from `.agents/` folder
+- `load_project_agent` - Load a project-scoped agent
+- `load_project_skill` - Load a project-scoped skill
+- `load_project_workflow` - Load a project-scoped workflow
 
 **Core Development Tools:**
 - `kit_create_checkpoint` - Create checkpoint before changes
@@ -106,89 +64,21 @@ When user says: "Always use TypeScript strict mode"
 - `kit_list_checkpoints` - List checkpoints
 
 **Learning:**
-- `kit_save_learning` -
+- `kit_save_learning` - Save lesson from user feedback
 - `kit_get_learnings` - Read saved learnings
 
-##
-
-Any project can define its own agents, skills, and workflows by creating a `.agents/` folder at the project root:
-
-```
-{project-root}/
-└── .agents/
-    ├── agents/    # Custom agent .md files (e.g., my-domain-expert.md)
-    ├── skills/
-    │   ├── tech/  # Tech skill dirs, each containing a SKILL.md
-    │   └── meta/  # Meta skill dirs, each containing a SKILL.md
-    └── workflows/ # Custom workflow .md files (e.g., deploy-staging.md)
-```
-
-**Resolution rules:**
-- Project assets have `"source": "project"` and **complement** (do not replace) global Super-Kit assets.
-- Global Super-Kit assets always have `"source": "global"`.
-- If `.agents/` does not exist, all project-scoped tools return empty results gracefully — no errors.
-
-**When starting work on ANY project, ALWAYS:**
-1. Call `list_project_assets` (or `list_superkit_assets` with `scope: "all"`) to discover project-specific agents, skills, and workflows.
-2. Load project assets with `load_project_agent`, `load_project_skill`, or `load_project_workflow` before falling back to global equivalents.
-3. Use global assets (`load_superkit_agent`, etc.) for anything not covered by the project's `.agents/` folder.
-
-## Documentation Management
-
-- Docs location: `./docs/`
-- Update README.md when adding features
-- Update CHANGELOG.md before release
-- Keep docs in sync with code changes
-
-## 🔄 Compound Behaviors (IMPORTANT!)
-
-Each unit of work must make the next work **easier**, not harder.
+## 🧠 Learning System
 
-
+On user correction or preference feedback → **MUST** call `kit_save_learning`.
 
-
-```bash
-cat skills/session-resume/SKILL.md
-```
-
-### Search Before Solving
-
-**BEFORE** solving a new problem:
-```bash
-Call MCP `call_tool_compound_manager` { action: "search", terms: ["{keywords}"] }
-```
-
-If solution found → Apply it, do not reinvent the wheel!
-
-### Document After Solving
-
-**AFTER** solving a problem successfully:
-- Run `/compound` to document solution
-- Solution will be saved to `docs/solutions/`
-
-### Critical Patterns
-
-**MUST** read before coding:
-- `docs/solutions/patterns/critical-patterns.md` - 23 patterns to prevent repeated errors
+**Categories:** `code_style` | `bug` | `preference` | `pattern` | `other`
 
-
-
-Run daily:
-```bash
-Call MCP `call_tool_compound_manager` { action: "dashboard" }
-```
-**Target**: Grade B or higher
-
-### Compound Loop
-
-```
-/explore → /plan → /work → /review → /compound → /housekeeping → repeat
-```
+Learnings are auto-injected into context. Read "🧠 Previous Learnings" and **APPLY** them.
 
 ## Important Directories
 
 ```
-docs/solutions/ # Knowledge Base
+docs/solutions/ # Knowledge Base — Persistent solutions
 docs/decisions/ # Architecture Decision Records
 docs/architecture/ # System architecture
 docs/specs/ # Multi-session specifications
@@ -196,4 +86,4 @@ docs/explorations/ # Deep research artifacts
 skills/ # Modular capabilities
 plans/ # Implementation plans
 todos/ # Tracked work items
-```
+```
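The reworked SUPERKIT.md points agents at the new `search_context` tool. As a rough sketch in plain JavaScript (the MCP client transport is omitted, and `searchContextCall` is a hypothetical name), a call matching the tool's input schema from `package/build/index.js` (`query` required, `topK` between 1 and 20) would be shaped like:

```javascript
// Hypothetical illustration of a search_context tool call payload.
// Only the argument shape is taken from this diff; the wiring is omitted.
const searchContextCall = {
    name: "search_context",
    arguments: { query: "project-scoped asset resolution rules", topK: 3 },
};

console.log(JSON.stringify(searchContextCall));
```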
package/build/index.js
CHANGED

@@ -3,6 +3,7 @@ import { Server } from "@modelcontextprotocol/sdk/server/index.js";
 import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
 import { CallToolRequestSchema, ListToolsRequestSchema, ListPromptsRequestSchema, GetPromptRequestSchema, } from "@modelcontextprotocol/sdk/types.js";
 import * as path from "path";
+import * as os from "os";
 import * as fs from "fs/promises";
 import * as toml from "@iarna/toml";
 import { fileURLToPath } from "url";
@@ -17,9 +18,18 @@ import { bootstrapFolderDocs, checkDocsFreshness, discoverUndocumentedFolders, v
 import { generateChangelog, validateChangelog, archiveCompleted, prePushHousekeeping, } from "./tools/gitTools.js";
 import { validateSpecConsistency, completePlan, validateArchitecture, syncSpec, updateSpecPhase, } from "./tools/archTools.js";
 import { list_project_agents, list_project_skills, list_project_workflows, load_project_agent_file, load_project_skill_file, load_project_workflow_file, } from "./tools/ProjectAssets.js";
+import { ContextManager } from "./tools/contextManager.js";
 const __filename = fileURLToPath(import.meta.url);
 const __dirname = path.dirname(__filename);
 const superKitRoot = path.resolve(__dirname, "../");
+const contextManager = new ContextManager({
+    assetsDir: superKitRoot,
+    storeDir: path.join(os.homedir(), ".superkit"),
+});
+// Index in background — does not block server startup
+contextManager.indexAll().catch((err) => {
+    console.error("[superkit] Context indexing failed:", err);
+});
 const server = new Server({
     name: "superkit-mcp-server",
     version: "1.0.0",
@@ -120,22 +130,22 @@ const TOOLS = [
     },
     {
         name: "call_tool_todo_manager",
-        description: `Manages todos. Per-action usage:
-
-• nextId — Returns the next available todo ID. No extra params needed.
-
-• create — Creates a new todo file. Required: title (string), description (1-2 sentence problem statement), priority ("p0"|"p1"|"p2"|"p3"), criteria (string array of acceptance criteria). Optional: projectPath.
-Example: { action: "create", title: "Add auth", description: "Implement JWT login.", priority: "p2", criteria: ["User can log in", "Token is stored"], projectPath: "." }
-
-• start — Marks a todo as in-progress. Required: todoId = RELATIVE PATH to the todo file, e.g. "todos/001-pending-p2-my-task.md". The file must exist at that path relative to projectPath.
-Example: { action: "start", todoId: "todos/001-pending-p2-my-task.md", projectPath: "." }
-
-• done — Marks a todo as done. Required: todoId = relative path (same format as start). All acceptance criteria checkboxes must be checked, or pass force: true to bypass.
-Example: { action: "done", todoId: "todos/001-in-progress-p2-my-task.md", force: true, projectPath: "." }
-
-• complete — Marks a todo as complete (final state). Required: todoId = relative path (same format as start/done).
-Example: { action: "complete", todoId: "todos/001-done-p2-my-task.md", force: true, projectPath: "." }
-
+        description: `Manages todos. Per-action usage:
+
+• nextId — Returns the next available todo ID. No extra params needed.
+
+• create — Creates a new todo file. Required: title (string), description (1-2 sentence problem statement), priority ("p0"|"p1"|"p2"|"p3"), criteria (string array of acceptance criteria). Optional: projectPath.
+Example: { action: "create", title: "Add auth", description: "Implement JWT login.", priority: "p2", criteria: ["User can log in", "Token is stored"], projectPath: "." }
+
+• start — Marks a todo as in-progress. Required: todoId = RELATIVE PATH to the todo file, e.g. "todos/001-pending-p2-my-task.md". The file must exist at that path relative to projectPath.
+Example: { action: "start", todoId: "todos/001-pending-p2-my-task.md", projectPath: "." }
+
+• done — Marks a todo as done. Required: todoId = relative path (same format as start). All acceptance criteria checkboxes must be checked, or pass force: true to bypass.
+Example: { action: "done", todoId: "todos/001-in-progress-p2-my-task.md", force: true, projectPath: "." }
+
+• complete — Marks a todo as complete (final state). Required: todoId = relative path (same format as start/done).
+Example: { action: "complete", todoId: "todos/001-done-p2-my-task.md", force: true, projectPath: "." }
+
 ⚠️ IMPORTANT: todoId must be the FULL RELATIVE FILE PATH (e.g. "todos/001-in-progress-p2-my-task.md"), NOT just the numeric ID ("001"). The filename changes with each status transition, so always use the current filename on disk.`,
         inputSchema: {
             type: "object",
@@ -404,6 +414,52 @@ const TOOLS = [
         required: ["workflowName"],
         },
     },
+    {
+        name: "search_context",
+        description: "Semantic search over super-kit agents, skills, workflows, and SUPERKIT.md. Returns the most relevant heading-level chunks. Use BEFORE load_superkit_skill/agent/workflow to find what you need without loading full files.",
+        inputSchema: {
+            type: "object",
+            properties: {
+                query: { type: "string", description: "Natural language search query" },
+                topK: { type: "number", default: 5, description: "Number of results to return (1-20)" },
+            },
+            required: ["query"],
+        },
+    },
+    {
+        name: "index_context",
+        description: "Manually trigger re-indexing of all super-kit context (agents, skills, workflows). Useful after adding new files. Returns indexing stats.",
+        inputSchema: {
+            type: "object",
+            properties: {},
+            required: [],
+        },
+    },
+    {
+        name: "store_session_memory",
+        description: "Persist a cross-session memory to super-kit's vector store. Memories are retrieved by semantic similarity in future sessions via recall_memory.",
+        inputSchema: {
+            type: "object",
+            properties: {
+                text: { type: "string", description: "The memory to store (a fact, decision, or pattern)" },
+                tags: { type: "array", items: { type: "string" }, description: "Optional tags for filtering" },
+                ttl_days: { type: "number", enum: [30, 90], default: 30, description: "How long to keep this memory (30 or 90 days)" },
+            },
+            required: ["text"],
+        },
+    },
+    {
+        name: "recall_memory",
+        description: "Search cross-session memories by semantic similarity. Returns memories stored via store_session_memory, ordered by relevance.",
+        inputSchema: {
+            type: "object",
+            properties: {
+                query: { type: "string", description: "What to search for in past memories" },
+                topK: { type: "number", default: 5, description: "Number of memories to return" },
+            },
+            required: ["query"],
+        },
+    },
 ];
 server.setRequestHandler(ListToolsRequestSchema, async () => {
     return { tools: TOOLS };
@@ -826,6 +882,55 @@ server.setRequestHandler(CallToolRequestSchema, async (request) => {
         const content = await load_project_workflow_file(args.workflowName, args.projectPath);
         return { content: [{ type: "text", text: content }] };
     }
+    if (request.params.name === "search_context") {
+        const args = request.params.arguments;
+        const topK = Math.min(Math.max(1, args.topK ?? 5), 20);
+        const results = await contextManager.searchContext(args.query, topK);
+        const formatted = results
+            .map((r, i) => `[${i + 1}] ${r.sourceFile} › ${r.headingPath} (score: ${r.score.toFixed(3)})\n${r.content}`)
+            .join("\n\n---\n\n");
+        return { content: [{ type: "text", text: formatted || "No results found." }] };
+    }
+    if (request.params.name === "index_context") {
+        const stats = await contextManager.indexAll();
+        return {
+            content: [
+                {
+                    type: "text",
+                    text: `Context indexed. Files re-embedded: ${stats.indexed}, unchanged: ${stats.skipped}.`,
+                },
+            ],
+        };
+    }
+    if (request.params.name === "store_session_memory") {
+        const args = request.params.arguments;
+        const ttl = args.ttl_days ?? 30;
+        await contextManager.storeMemory(args.text, args.tags ?? [], ttl);
+        return {
+            content: [
+                {
+                    type: "text",
+                    text: `Memory stored (TTL: ${ttl} days). It will be retrievable via recall_memory.`,
+                },
+            ],
+        };
+    }
+    if (request.params.name === "recall_memory") {
+        const args = request.params.arguments;
+        const topK = Math.min(Math.max(1, args.topK ?? 5), 20);
+        const results = await contextManager.recallMemory(args.query, topK);
+        if (results.length === 0) {
+            return { content: [{ type: "text", text: "No memories found." }] };
+        }
+        const formatted = results
+            .map((r, i) => {
+                const expires = new Date(r.expiresAt).toISOString().slice(0, 10);
+                const tags = r.tags.length ? ` [${r.tags.join(", ")}]` : "";
+                return `[${i + 1}]${tags} (score: ${r.score.toFixed(3)}, expires: ${expires})\n${r.text}`;
+            })
+            .join("\n\n");
+        return { content: [{ type: "text", text: formatted }] };
+    }
     throw new Error(`Unknown tool: ${request.params.name}`);
 }
 catch (error) {
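The `search_context` and `recall_memory` handlers both clamp `topK` into the 1-20 range with a default of 5, using `Math.min(Math.max(1, args.topK ?? 5), 20)`. A standalone sketch of that guard (the `clampTopK` helper name is hypothetical, not part of the package):

```javascript
// Clamp a possibly-missing topK into [1, 20], defaulting to 5.
// Mirrors the guard used by the search_context and recall_memory handlers.
function clampTopK(topK) {
    return Math.min(Math.max(1, topK ?? 5), 20);
}

console.log(clampTopK(undefined)); // 5  (missing -> default)
console.log(clampTopK(0));         // 1  (floor)
console.log(clampTopK(50));        // 20 (ceiling)
```

Note that `??` (not `||`) is what allows an explicit `topK: 0` to reach the floor instead of being replaced by the default.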
package/build/tools/contextManager.js
ADDED

@@ -0,0 +1,153 @@
+import * as fs from 'fs/promises';
+import * as path from 'path';
+import { createHash } from 'crypto';
+import { chunkMarkdown } from './markdownChunker.js';
+import { embed, embedOne } from './embedder.js';
+import { ContextVectorStore } from './contextVectorStore.js';
+const VALID_TTLS = new Set([30, 90]);
+export class ContextManager {
+    opts;
+    store;
+    ready = false;
+    constructor(opts) {
+        this.opts = opts;
+        this.store = new ContextVectorStore(opts.storeDir);
+    }
+    isReady() {
+        return this.ready;
+    }
+    /** Scan agents/, skills/, and SUPERKIT.md; return all markdown entries with mtimes. */
+    async discoverFiles() {
+        const entries = [];
+        const base = this.opts.assetsDir;
+        const walk = async (dir, sourceType) => {
+            let items;
+            try {
+                items = await fs.readdir(dir);
+            }
+            catch {
+                return;
+            }
+            for (const item of items) {
+                const abs = path.join(dir, item);
+                const stat = await fs.stat(abs).catch(() => null);
+                if (!stat)
+                    continue;
+                if (stat.isDirectory()) {
+                    await walk(abs, sourceType);
+                }
+                else if (item.endsWith('.md')) {
+                    entries.push({
+                        relativePath: path.relative(base, abs).replace(/\\/g, '/'),
+                        absolutePath: abs,
+                        sourceType,
+                        mtime: stat.mtimeMs,
+                    });
+                }
+            }
+        };
+        await walk(path.join(base, 'agents'), 'agent');
+        await walk(path.join(base, 'skills', 'tech'), 'skill');
+        await walk(path.join(base, 'skills', 'meta'), 'skill');
+        await walk(path.join(base, 'skills', 'workflows'), 'workflow');
+        // SUPERKIT.md itself
+        const superkitPath = path.join(base, 'SUPERKIT.md');
+        const superkitStat = await fs.stat(superkitPath).catch(() => null);
+        if (superkitStat) {
+            entries.push({ relativePath: 'SUPERKIT.md', absolutePath: superkitPath, sourceType: 'system', mtime: superkitStat.mtimeMs });
+        }
+        return entries;
+    }
+    /** Re-embed and store the given file entries. */
+    async indexFiles(files) {
+        if (files.length === 0)
+            return;
+        const allChunks = [];
+        const allTexts = [];
+        const chunkMeta = [];
+        for (const file of files) {
+            const content = await fs.readFile(file.absolutePath, 'utf-8').catch(() => '');
+            if (!content.trim())
+                continue;
+            const chunks = chunkMarkdown(content, file.relativePath, file.sourceType);
+            for (const chunk of chunks) {
+                chunkMeta.push({ chunk, mtime: file.mtime });
+                // Embed heading path + content for better semantic matching
+                allTexts.push(`${chunk.headingPath}\n\n${chunk.content}`);
+            }
+        }
+        if (allTexts.length === 0)
+            return;
+        const vectors = await embed(allTexts);
+        for (let i = 0; i < chunkMeta.length; i++) {
+            allChunks.push({ chunk: chunkMeta[i].chunk, vector: vectors[i], sourceMtime: chunkMeta[i].mtime });
+        }
+        const fileNames = files.map(f => f.relativePath);
+        await this.store.replaceChunksForFiles(fileNames, allChunks);
+    }
+    /**
+     * Index all markdown assets. Incremental: only re-indexes files whose mtime changed.
+     * Safe to call multiple times.
+     */
+    async indexAll() {
+        const discovered = await this.discoverFiles();
+        const mtimeMap = {};
+        for (const f of discovered)
+            mtimeMap[f.relativePath] = f.mtime;
+        const staleRelPaths = await this.store.getStaleFiles(mtimeMap);
+        const staleSet = new Set(staleRelPaths);
+        const toIndex = discovered.filter(f => staleSet.has(f.relativePath));
+        await this.indexFiles(toIndex);
+        this.ready = true;
+        return { indexed: toIndex.length, skipped: discovered.length - toIndex.length };
+    }
+    async searchContext(query, topK = 5) {
+        if (!this.ready) {
+            return [{
+                    sourceFile: 'system',
+                    sourceType: 'system',
+                    headingPath: 'Status',
+                    content: 'Context indexing is in progress. Try again in a few seconds.',
+                    score: 0,
+                }];
+        }
+        const vec = await embedOne(query);
+        const hits = await this.store.searchChunks(vec, topK);
+        return hits.map(h => ({
+            sourceFile: h.chunk.sourceFile,
+            sourceType: h.chunk.sourceType,
+            headingPath: h.chunk.headingPath,
+            content: h.chunk.content,
+            score: h.score,
+        }));
+    }
+    async storeMemory(text, tags, ttlDays) {
+        if (!VALID_TTLS.has(ttlDays))
+            throw new Error('TTL must be 30 or 90 days');
+        const id = createHash('sha1').update(`${Date.now()}::${text}`).digest('hex').slice(0, 16);
+        const vec = await embedOne(text);
+        const now = Date.now();
+        await this.store.upsertMemory({
+            id,
+            text,
+            tags,
+            vector: vec,
+            createdAt: now,
+            expiresAt: now + ttlDays * 24 * 60 * 60 * 1000,
+        });
+    }
+    async recallMemory(query, topK = 5) {
+        const vec = await embedOne(query);
+        const hits = await this.store.searchMemory(vec, topK);
+        return hits.map(h => ({
+            text: h.entry.text,
+            tags: h.entry.tags,
+            score: h.score,
+            createdAt: h.entry.createdAt,
+            expiresAt: h.entry.expiresAt,
+        }));
+    }
+    async pruneMemories() {
+        return this.store.pruneExpiredMemories();
+    }
+}
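`storeMemory` above validates the TTL against `VALID_TTLS` and computes `expiresAt` as `now + ttlDays * 24 * 60 * 60 * 1000`. A minimal sketch of that arithmetic (the `expiryFor` helper name is hypothetical; the validation and millisecond math mirror the file):

```javascript
// TTL arithmetic used by storeMemory: only 30- or 90-day TTLs are accepted,
// and expiry is stored as epoch milliseconds.
const VALID_TTLS = new Set([30, 90]);

function expiryFor(ttlDays, now = Date.now()) {
    if (!VALID_TTLS.has(ttlDays)) throw new Error('TTL must be 30 or 90 days');
    return now + ttlDays * 24 * 60 * 60 * 1000;
}

// A 30-day memory created at t=0 expires at 2,592,000,000 ms.
console.log(expiryFor(30, 0)); // 2592000000
```

Because expiry is an absolute timestamp, `searchMemory` and `pruneExpiredMemories` only need the `m.expiresAt > Date.now()` comparison to filter live memories.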
@@ -0,0 +1,131 @@
|
|
|
1
|
+
import * as fs from 'fs/promises';
|
|
2
|
+
import * as path from 'path';
|
|
3
|
+
function cosine(a, b) {
|
|
4
|
+
let dot = 0, na = 0, nb = 0;
|
|
5
|
+
for (let i = 0; i < a.length; i++) {
|
|
6
|
+
dot += a[i] * b[i];
|
|
7
|
+
na += a[i] * a[i];
|
|
8
|
+
nb += b[i] * b[i];
|
|
9
|
+
}
|
|
10
|
+
const denom = Math.sqrt(na) * Math.sqrt(nb);
|
|
11
|
+
return denom === 0 ? 0 : dot / denom;
|
|
12
|
+
}
|
|
13
|
+
export class ContextVectorStore {
|
|
14
|
+
storeDir;
|
|
15
|
+
chunksPath;
|
|
16
|
+
memoriesPath;
|
|
17
|
+
chunksCache = null;
|
|
18
|
+
memoriesCache = null;
|
|
19
|
+
constructor(storeDir) {
|
|
20
|
+
this.storeDir = storeDir;
|
|
21
|
+
this.chunksPath = path.join(storeDir, 'context-index.json');
|
|
22
|
+
this.memoriesPath = path.join(storeDir, 'memory.json');
|
|
23
|
+
}
|
|
24
|
+
async loadChunks() {
|
|
25
|
+
if (this.chunksCache)
|
|
26
|
+
return this.chunksCache;
|
|
27
|
+
try {
|
|
28
|
+
const raw = await fs.readFile(this.chunksPath, 'utf-8');
|
|
29
|
+
this.chunksCache = JSON.parse(raw);
|
|
30
|
+
}
|
|
31
|
+
catch {
|
|
32
|
+
this.chunksCache = [];
|
|
33
|
+
}
|
|
34
|
+
return this.chunksCache;
|
|
35
|
+
}
|
|
36
|
+
async saveChunks(chunks) {
|
|
37
|
+
this.chunksCache = chunks;
|
|
38
|
+
+        await fs.mkdir(this.storeDir, { recursive: true });
+        await fs.writeFile(this.chunksPath, JSON.stringify(chunks), 'utf-8');
+    }
+    async loadMemories() {
+        if (this.memoriesCache)
+            return this.memoriesCache;
+        try {
+            const raw = await fs.readFile(this.memoriesPath, 'utf-8');
+            this.memoriesCache = JSON.parse(raw);
+        }
+        catch {
+            this.memoriesCache = [];
+        }
+        return this.memoriesCache;
+    }
+    async saveMemories(memories) {
+        this.memoriesCache = memories;
+        await fs.mkdir(this.storeDir, { recursive: true });
+        await fs.writeFile(this.memoriesPath, JSON.stringify(memories), 'utf-8');
+    }
+    /**
+     * Upserts chunks. Existing chunks with the same id are replaced.
+     */
+    async upsertChunks(items) {
+        const existing = await this.loadChunks();
+        const byId = new Map(existing.map(c => [c.chunk.id, c]));
+        for (const item of items)
+            byId.set(item.chunk.id, item);
+        await this.saveChunks(Array.from(byId.values()));
+    }
+    /**
+     * Removes all chunks from the given source files and upserts the new ones.
+     * Used during incremental re-indexing.
+     */
+    async replaceChunksForFiles(files, newChunks) {
+        const existing = await this.loadChunks();
+        const fileSet = new Set(files);
+        const kept = existing.filter(c => !fileSet.has(c.chunk.sourceFile));
+        await this.saveChunks([...kept, ...newChunks]);
+    }
+    async searchChunks(queryVector, topK) {
+        const all = await this.loadChunks();
+        return all
+            .map(c => ({ chunk: c.chunk, score: cosine(queryVector, c.vector) }))
+            .sort((a, b) => b.score - a.score)
+            .slice(0, topK);
+    }
+    /**
+     * Returns source files whose recorded mtime differs from the provided map,
+     * plus files in the map that have no chunks recorded.
+     */
+    async getStaleFiles(currentMtimes) {
+        const all = await this.loadChunks();
+        // Latest recorded mtime per source file
+        const recorded = new Map();
+        for (const c of all) {
+            const prev = recorded.get(c.chunk.sourceFile) ?? 0;
+            if (c.sourceMtime > prev)
+                recorded.set(c.chunk.sourceFile, c.sourceMtime);
+        }
+        const stale = [];
+        for (const [file, mtime] of Object.entries(currentMtimes)) {
+            const prev = recorded.get(file);
+            if (prev === undefined || prev !== mtime)
+                stale.push(file);
+        }
+        return stale;
+    }
+    async upsertMemory(entry) {
+        const all = await this.loadMemories();
+        const idx = all.findIndex(m => m.id === entry.id);
+        if (idx >= 0)
+            all[idx] = entry;
+        else
+            all.push(entry);
+        await this.saveMemories(all);
+    }
+    async searchMemory(queryVector, topK) {
+        const all = await this.loadMemories();
+        const now = Date.now();
+        return all
+            .filter(m => m.expiresAt > now)
+            .map(entry => ({ entry, score: cosine(queryVector, entry.vector) }))
+            .sort((a, b) => b.score - a.score)
+            .slice(0, topK);
+    }
+    async pruneExpiredMemories() {
+        const all = await this.loadMemories();
+        const now = Date.now();
+        const live = all.filter(m => m.expiresAt > now);
+        await this.saveMemories(live);
+        return all.length - live.length;
+    }
+}
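The new `searchChunks` ranks every stored chunk by cosine similarity against the query vector and keeps the top K. A minimal standalone sketch of that ranking follows; note the `cosine` helper and the toy data here are illustrative assumptions — the store's actual `cosine` is defined earlier in contextVectorStore.js and is not shown in this diff.

```javascript
// Illustrative sketch of the searchChunks ranking logic (not the package's code).
function cosine(a, b) {
    let dot = 0, na = 0, nb = 0;
    for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        na += a[i] * a[i];
        nb += b[i] * b[i];
    }
    // Guard against zero vectors to avoid dividing by zero
    return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

function rankChunks(queryVector, stored, topK) {
    return stored
        .map(c => ({ chunk: c.chunk, score: cosine(queryVector, c.vector) }))
        .sort((a, b) => b.score - a.score)
        .slice(0, topK);
}

// Toy 2-dim example: the chunk pointing the same way as the query ranks first.
const stored = [
    { chunk: { id: 'a' }, vector: [1, 0] },
    { chunk: { id: 'b' }, vector: [0, 1] },
];
const top = rankChunks([0.9, 0.1], stored, 1);
console.log(top[0].chunk.id); // 'a'
```

Because this is a brute-force scan over all chunks, cost grows linearly with index size — fine for a skills directory, and consistent with the flat JSON storage above.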
package/build/tools/embedder.js
ADDED
@@ -0,0 +1,46 @@
+import * as os from 'os';
+import * as path from 'path';
+// Lazily initialized pipeline — avoids blocking server startup
+let pipelineInstance = null;
+let loadingPromise = null;
+async function getEmbedder() {
+    if (pipelineInstance)
+        return pipelineInstance;
+    if (loadingPromise)
+        return loadingPromise;
+    loadingPromise = (async () => {
+        // Dynamic import so the module loads lazily at runtime, not at parse time.
+        // This prevents the 5-10s model load from blocking MCP server startup.
+        const { pipeline, env } = await import('@huggingface/transformers');
+        // Cache models in ~/.superkit/models to survive package updates
+        env.cacheDir = path.join(os.homedir(), '.superkit', 'models');
+        pipelineInstance = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2', { device: 'cpu' });
+        return pipelineInstance;
+    })();
+    return loadingPromise;
+}
+const BATCH_SIZE = 32;
+/**
+ * Embeds an array of texts using all-MiniLM-L6-v2 (384-dim).
+ * Processes in batches of 32 to avoid OOM on large indexes.
+ * Returns a parallel array of vectors.
+ */
+export async function embed(texts) {
+    const extractor = await getEmbedder();
+    const results = [];
+    for (let i = 0; i < texts.length; i += BATCH_SIZE) {
+        const batch = texts.slice(i, i + BATCH_SIZE);
+        const output = await extractor(batch, { pooling: 'mean', normalize: true });
+        // output.data is a flat Float32Array; reshape into vectors of 384
+        const dims = 384;
+        for (let j = 0; j < batch.length; j++) {
+            results.push(Array.from(output.data.slice(j * dims, (j + 1) * dims)));
+        }
+    }
+    return results;
+}
+/** Embeds a single text. Convenience wrapper. */
+export async function embedOne(text) {
+    const [vec] = await embed([text]);
+    return vec;
+}
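The reshape step in `embed` assumes the extractor returns one flat `Float32Array` of `batch.length * 384` floats. That slicing logic can be sketched in isolation — `reshape`, `dims = 2`, and the sample buffer below are illustrative stand-ins, not the package's API; the real pipeline emits 384 dimensions per text.

```javascript
// Illustrative reshape of a flat embedding buffer into per-text vectors.
// dims is 2 here for readability; the real model output uses 384.
function reshape(flat, batchLength, dims) {
    const vectors = [];
    for (let j = 0; j < batchLength; j++) {
        // Each text owns the contiguous slice [j*dims, (j+1)*dims)
        vectors.push(Array.from(flat.slice(j * dims, (j + 1) * dims)));
    }
    return vectors;
}

const flat = new Float32Array([0.1, 0.2, 0.3, 0.4]);
const vectors = reshape(flat, 2, 2);
console.log(vectors.length);    // 2
console.log(vectors[0].length); // 2
```

Converting each slice with `Array.from` makes the vectors plain arrays, so they serialize cleanly with `JSON.stringify` in the store above.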
package/build/tools/markdownChunker.js
ADDED
@@ -0,0 +1,43 @@
+import { createHash } from 'crypto';
+/**
+ * Splits a markdown document into chunks at heading boundaries.
+ * Each chunk contains the heading path (e.g. "Parent > Child") and the
+ * text content under that heading (exclusive of sub-headings' text).
+ */
+export function chunkMarkdown(markdown, sourceFile, sourceType) {
+    const lines = markdown.split('\n');
+    const chunks = [];
+    // headingStack[i] is the text of the heading at level (i+1)
+    const headingStack = [];
+    let buffer = [];
+    const flushBuffer = (path) => {
+        const text = buffer.join('\n').trim();
+        buffer = [];
+        if (!text)
+            return;
+        const id = createHash('sha1')
+            .update(`${sourceFile}::${path}`)
+            .digest('hex')
+            .slice(0, 16);
+        chunks.push({ id, sourceFile, sourceType, headingPath: path, content: text });
+    };
+    const getPath = () => headingStack.length > 0 ? headingStack.join(' > ') : sourceFile;
+    for (const line of lines) {
+        const headingMatch = line.match(/^(#{1,6})\s+(.+)$/);
+        if (headingMatch) {
+            // Flush whatever was accumulated under the previous heading
+            flushBuffer(getPath());
+            const level = headingMatch[1].length;
+            const title = headingMatch[2].trim();
+            // Trim the stack to the parent level and push new heading
+            headingStack.splice(level - 1);
+            headingStack[level - 1] = title;
+        }
+        else {
+            buffer.push(line);
+        }
+    }
+    // Flush the last section
+    flushBuffer(getPath());
+    return chunks;
+}
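The subtle part of `chunkMarkdown` is the heading-path bookkeeping: `splice(level - 1)` drops the current level and anything deeper before the new title is pushed, so sibling sections never inherit each other's ancestors. A standalone sketch of just that stack logic — `headingPaths` is an illustrative helper, simplified from the chunker:

```javascript
// Illustrative heading-path tracking, mirroring what chunkMarkdown
// does for each heading line it encounters.
function headingPaths(markdown) {
    const headingStack = [];
    const paths = [];
    for (const line of markdown.split('\n')) {
        const m = line.match(/^(#{1,6})\s+(.+)$/);
        if (!m) continue; // only heading lines affect the stack
        const level = m[1].length;
        headingStack.splice(level - 1);      // drop this level and deeper
        headingStack[level - 1] = m[2].trim();
        paths.push(headingStack.join(' > '));
    }
    return paths;
}

const doc = '# A\n## B\n### C\n## D';
console.log(headingPaths(doc));
// [ 'A', 'A > B', 'A > B > C', 'A > D' ]
```

Note how `## D` resets the path back to `A > D` rather than `A > B > C > D`; hashing `sourceFile::path` in the real chunker then gives each section a stable id across re-indexing runs, which is what makes the upsert-by-id logic in the store work.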
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "superkit-mcp-server",
-  "version": "1.2.8",
+  "version": "1.3.1",
   "type": "module",
   "description": "An MCP server for exploring and loading Super-Kit AI agent resources.",
   "main": "build/index.js",
@@ -15,6 +15,7 @@
     "dev": "tsc --watch"
   },
   "dependencies": {
+    "@huggingface/transformers": "^3.8.1",
     "@iarna/toml": "^2.2.5",
     "@modelcontextprotocol/sdk": "^1.4.1",
     "playwright": "^1.58.2",
@@ -36,4 +37,4 @@
     "README.md",
     "ARCHITECTURE.md"
   ]
-}
+}
package/skills/meta/performance/SKILL.md
CHANGED
@@ -1,3 +1,8 @@
+---
+name: performance
+description: Performance profiling, optimization techniques, and caching strategies. Use when diagnosing slow applications, improving Core Web Vitals, or optimizing database queries and bundle size.
+---
+
 # Performance Optimization Skill
 
 ## Overview

package/skills/meta/react-patterns/SKILL.md
CHANGED
@@ -1,3 +1,8 @@
+---
+name: react-patterns
+description: Modern React patterns, hooks, and state management principles. Use when structuring React components, managing state, optimizing re-renders, or choosing state management libraries.
+---
+
 # React Patterns Skill
 
 ## Overview

package/skills/tech/financial-modeling/skills/merger-model/SKILL.md
CHANGED
@@ -1,6 +1,9 @@
-
-
+---
+name: merger-model
 description: Build accretion/dilution analysis for M&A transactions. Models pro forma EPS impact, synergy sensitivities, and purchase price allocation. Use when evaluating a potential acquisition, preparing merger consequences analysis for a pitch, or advising on deal terms. Triggers on "merger model", "accretion dilution", "M&A model", "pro forma EPS", "merger consequences", or "deal impact analysis".
+---
+
+# Merger Model
 
 ## Workflow
 