cognitive-modules-cli 1.1.0 → 1.3.0

package/README.md CHANGED
@@ -1,162 +1,94 @@
1
- # Cognitive Runtime
1
+ # Cognitive Modules CLI (Node.js)
2
2
 
3
- **Structured AI Task Execution**
3
+ [![npm version](https://badge.fury.io/js/cognitive-modules-cli.svg)](https://www.npmjs.com/package/cognitive-modules-cli)
4
4
 
5
- Cognitive Runtime is the next-generation execution engine for Cognitive Modules. It provides a clean, provider-agnostic runtime that treats LLMs as interchangeable backends.
5
+ The Node.js/TypeScript version of the Cognitive Modules CLI.
6
6
 
7
- ## Philosophy
7
+ > This package is part of the [cognitive-modules](../../README.md) monorepo.
8
8
 
9
- Following the **Cognitive Runtime + Provider** architecture:
10
-
11
- ```
12
- ┌─────────────────────────────────────┐
13
- │ Cognitive Runtime │
14
- │ ┌────────────────────────────────┐ │
15
- │ │ Module System │ │
16
- │ │ (load, parse, validate) │ │
17
- │ └────────────────────────────────┘ │
18
- │ ┌────────────────────────────────┐ │
19
- │ │ Execution Engine │ │
20
- │ │ (prompt, schema, contract) │ │
21
- │ └────────────────────────────────┘ │
22
- │ ┌────────────────────────────────┐ │
23
- │ │ Provider Abstraction │ │
24
- │ │ (gemini, openai, anthropic, │ │
25
- │ │ deepseek, minimax, qwen...) │ │
26
- │ └────────────────────────────────┘ │
27
- └─────────────────────────────────────┘
28
- ```
29
-
30
- ## Installation
9
+ ## Installation
31
10
 
32
11
  ```bash
33
- npm install -g cognitive-runtime
34
- ```
35
-
36
- Or run directly:
12
+ # Global install (recommended)
13
+ npm install -g cogn
37
14
 
38
- ```bash
39
- npx cognitive-runtime --help
15
+ # Or run with npx, no install required
16
+ npx cogn --help
40
17
  ```
41
18
 
42
- ## Usage
43
-
44
- ### Run a Module
19
+ ## Quick Start
45
20
 
46
21
  ```bash
47
- cog run code-reviewer --args "def foo(): pass"
48
- ```
22
+ # Configure the LLM
23
+ export LLM_PROVIDER=openai
24
+ export OPENAI_API_KEY=sk-xxx
49
25
 
50
- ### List Modules
26
+ # Run a module
27
+ cog run code-reviewer --args "def login(u,p): return db.query(f'SELECT * FROM users WHERE name={u}')"
51
28
 
52
- ```bash
29
+ # List modules
53
30
  cog list
54
- ```
55
-
56
- ### Pipe Mode (stdin/stdout)
57
31
 
58
- ```bash
32
+ # Pipe mode
59
33
  echo "review this code" | cog pipe --module code-reviewer
60
34
  ```
61
35
 
62
- ### Check Configuration
63
-
64
- ```bash
65
- cog doctor
66
- ```
67
-
68
- ## Module Formats
69
-
70
- ### v2 (Recommended)
71
-
72
- ```
73
- my-module/
74
- ├── module.yaml # Machine-readable manifest
75
- ├── prompt.md # Human-readable prompt
76
- ├── schema.json # IO contract
77
- └── tests/
78
- ├── case1.input.json
79
- └── case1.expected.json
80
- ```
81
-
82
- **module.yaml**:
83
- ```yaml
84
- name: my-module
85
- version: 2.0.0
86
- responsibility: What this module does
87
- constraints:
88
- no_network: true
89
- no_side_effects: true
90
- output:
91
- mode: json_strict
92
- require_confidence: true
93
- require_rationale: true
94
- require_behavior_equivalence: true
95
- tools:
96
- allowed: []
97
- ```
98
-
99
- ### v1 (Legacy, still supported)
36
+ ## Feature Comparison with the Python Version
100
37
 
101
- ```
102
- my-module/
103
- ├── MODULE.md # Frontmatter + prompt combined
104
- └── schema.json
105
- ```
106
-
107
- ## Providers
108
-
109
- | Provider | Environment Variable | Default Model |
110
- |------------|------------------------|----------------------|
111
- | Gemini | `GEMINI_API_KEY` | `gemini-3-flash` |
112
- | OpenAI | `OPENAI_API_KEY` | `gpt-5.2` |
113
- | Anthropic | `ANTHROPIC_API_KEY` | `claude-sonnet-4.5` |
114
- | DeepSeek | `DEEPSEEK_API_KEY` | `deepseek-v3.2` |
115
- | MiniMax | `MINIMAX_API_KEY` | `MiniMax-M2.1` |
116
- | Moonshot | `MOONSHOT_API_KEY` | `kimi-k2.5` |
117
- | Qwen | `DASHSCOPE_API_KEY` | `qwen3-max` |
118
- | Ollama | `OLLAMA_HOST` | `llama4` (local) |
38
+ | Feature | Python (`cogn`) | Node.js (`cog`) |
39
+ |------|----------------|-----------------|
40
+ | Package name | `cognitive-modules` | `cogn` / `cognitive-modules-cli` |
41
+ | Installation | `pip install` | `npm install -g` |
42
+ | Sub-agents | ✅ `@call:module` | ✅ `@call:module` |
43
+ | MCP Server | ✅ | ✅ |
44
+ | HTTP Server | ✅ | ✅ |
45
+ | v2.2 Envelope | ✅ | ✅ |
119
46
 
120
- ### Provider Aliases
47
+ The two versions are functionally identical and share the same module format and v2.2 specification.
121
48
 
122
- `kimi` → Moonshot
123
- - `tongyi` / `dashscope` → Qwen
124
- - `local` → Ollama
49
+ **The Node.js version is recommended**: try it instantly with zero install via `npx cogn run ...`
125
50
 
126
- ## Module Search Paths
51
+ ## Supported Providers
127
52
 
128
- Modules are searched in order:
53
+ | Provider | Environment Variable | Alias |
54
+ |----------|----------|------|
55
+ | OpenAI | `OPENAI_API_KEY` | - |
56
+ | Anthropic | `ANTHROPIC_API_KEY` | - |
57
+ | Gemini | `GEMINI_API_KEY` | - |
58
+ | DeepSeek | `DEEPSEEK_API_KEY` | - |
59
+ | MiniMax | `MINIMAX_API_KEY` | - |
60
+ | Moonshot | `MOONSHOT_API_KEY` | `kimi` |
61
+ | Qwen | `DASHSCOPE_API_KEY` | `tongyi` |
62
+ | Ollama | `OLLAMA_HOST` | `local` |
129
63
 
130
- 1. `./cognitive/modules/` (project-local)
131
- 2. `./.cognitive/modules/` (project-local, hidden)
132
- 3. `~/.cognitive/modules/` (user-global)
64
+ ## Commands
133
65
 
134
- ## Programmatic API
135
-
136
- ```typescript
137
- import { getProvider, findModule, runModule } from 'cognitive-runtime';
138
-
139
- const provider = getProvider('gemini');
140
- const module = await findModule('code-reviewer', ['./cognitive/modules']);
141
-
142
- if (module) {
143
- const result = await runModule(module, provider, {
144
- args: 'def foo(): pass',
145
- });
146
- console.log(result.output);
147
- }
66
+ ```bash
67
+ # Module operations
68
+ cog list # List modules
69
+ cog run <module> --args "..." # Run a module
70
+ cog add <url> -m <module> # Add a module from GitHub
71
+ cog update <module> # Update a module
72
+ cog remove <module> # Remove a module
73
+ cog versions <url> # List available versions
74
+ cog init <name> # Create a new module
75
+ cog pipe --module <name> # Pipe mode
76
+
77
+ # Servers
78
+ cog serve --port 8000 # Start the HTTP API server
79
+ cog mcp # Start the MCP server (Claude Code / Cursor)
148
80
  ```
149
81
 
150
- ## Development
82
+ ## Development
151
83
 
152
84
  ```bash
153
- # Install dependencies
85
+ # Install dependencies
154
86
  npm install
155
87
 
156
- # Build
88
+ # Build
157
89
  npm run build
158
90
 
159
- # Run in development
91
+ # Run in development mode
160
92
  npm run dev -- run code-reviewer --args "..."
161
93
  ```
162
94
 
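The `cog mcp` command introduced in this release is designed to be launched by an MCP client over stdio rather than run interactively. As a hedged sketch (the config file location and the `cognitive-modules` server key are assumptions, not part of this diff; consult your client's documentation), a typical MCP client registration would look like:

```json
{
  "mcpServers": {
    "cognitive-modules": {
      "command": "cog",
      "args": ["mcp"]
    }
  }
}
```

With this entry, the client spawns `cog mcp` and gains access to the `cognitive_run`, `cognitive_list`, and `cognitive_info` tools that the bundled MCP server registers.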
package/dist/cli.js CHANGED
@@ -16,7 +16,7 @@
16
16
  import { parseArgs } from 'node:util';
17
17
  import { getProvider, listProviders } from './providers/index.js';
18
18
  import { run, list, pipe, init, add, update, remove, versions } from './commands/index.js';
19
- const VERSION = '1.0.1';
19
+ const VERSION = '1.3.0';
20
20
  async function main() {
21
21
  const args = process.argv.slice(2);
22
22
  const command = args[0];
@@ -45,6 +45,9 @@ async function main() {
45
45
  tag: { type: 'string', short: 't' },
46
46
  branch: { type: 'string', short: 'b' },
47
47
  limit: { type: 'string', short: 'l' },
48
+ // Server options
49
+ host: { type: 'string', short: 'H' },
50
+ port: { type: 'string', short: 'P' },
48
51
  },
49
52
  allowPositionals: true,
50
53
  });
@@ -249,6 +252,29 @@ async function main() {
249
252
  }
250
253
  break;
251
254
  }
255
+ case 'serve': {
256
+ const { serve } = await import('./server/http.js');
257
+ const port = values.port ? parseInt(values.port, 10) : 8000;
258
+ const host = values.host || '0.0.0.0';
259
+ console.log('Starting Cognitive Modules HTTP Server...');
260
+ await serve({ host, port, cwd: ctx.cwd });
261
+ break;
262
+ }
263
+ case 'mcp': {
264
+ try {
265
+ const { serve: serveMcp } = await import('./mcp/server.js');
266
+ await serveMcp();
267
+ }
268
+ catch (e) {
269
+ if (e instanceof Error && e.message.includes('Cannot find module')) {
270
+ console.error('MCP dependencies not installed.');
271
+ console.error('Install with: npm install @modelcontextprotocol/sdk');
272
+ process.exit(1);
273
+ }
274
+ throw e;
275
+ }
276
+ break;
277
+ }
252
278
  default:
253
279
  console.error(`Unknown command: ${command}`);
254
280
  console.error('Run "cog --help" for usage.');
@@ -280,6 +306,8 @@ COMMANDS:
280
306
  versions <url> List available versions
281
307
  pipe Pipe mode (stdin/stdout)
282
308
  init [name] Initialize project or create module
309
+ serve Start HTTP API server
310
+ mcp Start MCP server (for Claude Code, Cursor)
283
311
  doctor Check configuration
284
312
 
285
313
  OPTIONS:
@@ -294,6 +322,8 @@ OPTIONS:
294
322
  --pretty Pretty-print JSON output
295
323
  -V, --verbose Verbose output
296
324
  --no-validate Skip schema validation
325
+ -H, --host <host> Server host (default: 0.0.0.0)
326
+ -P, --port <port> Server port (default: 8000)
297
327
  -v, --version Show version
298
328
  -h, --help Show this help
299
329
 
@@ -312,6 +342,10 @@ EXAMPLES:
312
342
  cog run code-reviewer --provider openai --model gpt-4o --args "..."
313
343
  cog list
314
344
 
345
+ # Servers
346
+ cog serve --port 8080
347
+ cog mcp
348
+
315
349
  # Other
316
350
  echo "review this code" | cog pipe --module code-reviewer
317
351
  cog init my-module
@@ -0,0 +1,4 @@
1
+ /**
2
+ * MCP Server - Re-export all MCP functionality
3
+ */
4
+ export { serve } from './server.js';
@@ -0,0 +1,4 @@
1
+ /**
2
+ * MCP Server - Re-export all MCP functionality
3
+ */
4
+ export { serve } from './server.js';
@@ -0,0 +1,9 @@
1
+ /**
2
+ * Cognitive Modules MCP Server
3
+ *
4
+ * Provides MCP (Model Context Protocol) interface for Claude Code, Cursor, etc.
5
+ *
6
+ * Start with:
7
+ * cog mcp
8
+ */
9
+ export declare function serve(): Promise<void>;
@@ -0,0 +1,344 @@
1
+ /**
2
+ * Cognitive Modules MCP Server
3
+ *
4
+ * Provides MCP (Model Context Protocol) interface for Claude Code, Cursor, etc.
5
+ *
6
+ * Start with:
7
+ * cog mcp
8
+ */
9
+ import { Server } from '@modelcontextprotocol/sdk/server/index.js';
10
+ import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
11
+ import { CallToolRequestSchema, ListToolsRequestSchema, ListResourcesRequestSchema, ReadResourceRequestSchema, ListPromptsRequestSchema, GetPromptRequestSchema, } from '@modelcontextprotocol/sdk/types.js';
12
+ import { findModule, listModules, getDefaultSearchPaths } from '../modules/loader.js';
13
+ import { runModule } from '../modules/runner.js';
14
+ import { getProvider } from '../providers/index.js';
15
+ // =============================================================================
16
+ // Server Setup
17
+ // =============================================================================
18
+ const server = new Server({
19
+ name: 'cognitive-modules',
20
+ version: '1.2.0',
21
+ }, {
22
+ capabilities: {
23
+ tools: {},
24
+ resources: {},
25
+ prompts: {},
26
+ },
27
+ });
28
+ const cwd = process.cwd();
29
+ const searchPaths = getDefaultSearchPaths(cwd);
30
+ // =============================================================================
31
+ // Tools
32
+ // =============================================================================
33
+ server.setRequestHandler(ListToolsRequestSchema, async () => {
34
+ return {
35
+ tools: [
36
+ {
37
+ name: 'cognitive_run',
38
+ description: 'Run a Cognitive Module to get structured AI analysis results',
39
+ inputSchema: {
40
+ type: 'object',
41
+ properties: {
42
+ module: {
43
+ type: 'string',
44
+ description: 'Module name, e.g. "code-reviewer", "task-prioritizer"',
45
+ },
46
+ args: {
47
+ type: 'string',
48
+ description: 'Input arguments, e.g. code snippet or task list',
49
+ },
50
+ provider: {
51
+ type: 'string',
52
+ description: 'LLM provider (optional), e.g. "openai", "anthropic"',
53
+ },
54
+ model: {
55
+ type: 'string',
56
+ description: 'Model name (optional), e.g. "gpt-4o", "claude-3-5-sonnet"',
57
+ },
58
+ },
59
+ required: ['module', 'args'],
60
+ },
61
+ },
62
+ {
63
+ name: 'cognitive_list',
64
+ description: 'List all installed Cognitive Modules',
65
+ inputSchema: {
66
+ type: 'object',
67
+ properties: {},
68
+ },
69
+ },
70
+ {
71
+ name: 'cognitive_info',
72
+ description: 'Get detailed information about a Cognitive Module',
73
+ inputSchema: {
74
+ type: 'object',
75
+ properties: {
76
+ module: {
77
+ type: 'string',
78
+ description: 'Module name',
79
+ },
80
+ },
81
+ required: ['module'],
82
+ },
83
+ },
84
+ ],
85
+ };
86
+ });
87
+ server.setRequestHandler(CallToolRequestSchema, async (request) => {
88
+ const { name, arguments: args } = request.params;
89
+ try {
90
+ switch (name) {
91
+ case 'cognitive_run': {
92
+ const { module: moduleName, args: inputArgs, provider: providerName, model } = args;
93
+ // Find module
94
+ const moduleData = await findModule(moduleName, searchPaths);
95
+ if (!moduleData) {
96
+ return {
97
+ content: [
98
+ {
99
+ type: 'text',
100
+ text: JSON.stringify({ ok: false, error: `Module '${moduleName}' not found` }),
101
+ },
102
+ ],
103
+ };
104
+ }
105
+ // Create provider
106
+ const provider = getProvider(providerName, model);
107
+ // Run module
108
+ const result = await runModule(moduleData, provider, {
109
+ input: { query: inputArgs, code: inputArgs },
110
+ useV22: true,
111
+ });
112
+ return {
113
+ content: [
114
+ {
115
+ type: 'text',
116
+ text: JSON.stringify(result, null, 2),
117
+ },
118
+ ],
119
+ };
120
+ }
121
+ case 'cognitive_list': {
122
+ const modules = await listModules(searchPaths);
123
+ return {
124
+ content: [
125
+ {
126
+ type: 'text',
127
+ text: JSON.stringify({
128
+ modules: modules.map((m) => ({
129
+ name: m.name,
130
+ location: m.location,
131
+ format: m.format,
132
+ tier: m.tier,
133
+ })),
134
+ count: modules.length,
135
+ }, null, 2),
136
+ },
137
+ ],
138
+ };
139
+ }
140
+ case 'cognitive_info': {
141
+ const { module: moduleName } = args;
142
+ const moduleData = await findModule(moduleName, searchPaths);
143
+ if (!moduleData) {
144
+ return {
145
+ content: [
146
+ {
147
+ type: 'text',
148
+ text: JSON.stringify({ ok: false, error: `Module '${moduleName}' not found` }),
149
+ },
150
+ ],
151
+ };
152
+ }
153
+ return {
154
+ content: [
155
+ {
156
+ type: 'text',
157
+ text: JSON.stringify({
158
+ ok: true,
159
+ name: moduleData.name,
160
+ version: moduleData.version,
161
+ responsibility: moduleData.responsibility,
162
+ tier: moduleData.tier,
163
+ format: moduleData.format,
164
+ inputSchema: moduleData.inputSchema,
165
+ outputSchema: moduleData.outputSchema,
166
+ }, null, 2),
167
+ },
168
+ ],
169
+ };
170
+ }
171
+ default:
172
+ return {
173
+ content: [
174
+ {
175
+ type: 'text',
176
+ text: JSON.stringify({ ok: false, error: `Unknown tool: ${name}` }),
177
+ },
178
+ ],
179
+ };
180
+ }
181
+ }
182
+ catch (error) {
183
+ return {
184
+ content: [
185
+ {
186
+ type: 'text',
187
+ text: JSON.stringify({
188
+ ok: false,
189
+ error: error instanceof Error ? error.message : String(error),
190
+ }),
191
+ },
192
+ ],
193
+ };
194
+ }
195
+ });
196
+ // =============================================================================
197
+ // Resources
198
+ // =============================================================================
199
+ server.setRequestHandler(ListResourcesRequestSchema, async () => {
200
+ const modules = await listModules(searchPaths);
201
+ return {
202
+ resources: [
203
+ {
204
+ uri: 'cognitive://modules',
205
+ name: 'All Modules',
206
+ description: 'List of all installed Cognitive Modules',
207
+ mimeType: 'application/json',
208
+ },
209
+ ...modules.map((m) => ({
210
+ uri: `cognitive://module/${m.name}`,
211
+ name: m.name,
212
+ description: m.responsibility || `Cognitive Module: ${m.name}`,
213
+ mimeType: 'text/markdown',
214
+ })),
215
+ ],
216
+ };
217
+ });
218
+ server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
219
+ const { uri } = request.params;
220
+ if (uri === 'cognitive://modules') {
221
+ const modules = await listModules(searchPaths);
222
+ return {
223
+ contents: [
224
+ {
225
+ uri,
226
+ mimeType: 'application/json',
227
+ text: JSON.stringify(modules.map((m) => m.name), null, 2),
228
+ },
229
+ ],
230
+ };
231
+ }
232
+ const match = uri.match(/^cognitive:\/\/module\/(.+)$/);
233
+ if (match) {
234
+ const moduleName = match[1];
235
+ const moduleData = await findModule(moduleName, searchPaths);
236
+ if (!moduleData) {
237
+ return {
238
+ contents: [
239
+ {
240
+ uri,
241
+ mimeType: 'text/plain',
242
+ text: `Module '${moduleName}' not found`,
243
+ },
244
+ ],
245
+ };
246
+ }
247
+ return {
248
+ contents: [
249
+ {
250
+ uri,
251
+ mimeType: 'text/markdown',
252
+ text: moduleData.prompt,
253
+ },
254
+ ],
255
+ };
256
+ }
257
+ return {
258
+ contents: [
259
+ {
260
+ uri,
261
+ mimeType: 'text/plain',
262
+ text: `Unknown resource: ${uri}`,
263
+ },
264
+ ],
265
+ };
266
+ });
267
+ // =============================================================================
268
+ // Prompts
269
+ // =============================================================================
270
+ server.setRequestHandler(ListPromptsRequestSchema, async () => {
271
+ return {
272
+ prompts: [
273
+ {
274
+ name: 'code_review',
275
+ description: 'Generate a code review prompt',
276
+ arguments: [
277
+ {
278
+ name: 'code',
279
+ description: 'The code to review',
280
+ required: true,
281
+ },
282
+ ],
283
+ },
284
+ {
285
+ name: 'task_prioritize',
286
+ description: 'Generate a task prioritization prompt',
287
+ arguments: [
288
+ {
289
+ name: 'tasks',
290
+ description: 'The tasks to prioritize',
291
+ required: true,
292
+ },
293
+ ],
294
+ },
295
+ ],
296
+ };
297
+ });
298
+ server.setRequestHandler(GetPromptRequestSchema, async (request) => {
299
+ const { name, arguments: args } = request.params;
300
+ switch (name) {
301
+ case 'code_review': {
302
+ const code = args?.code ?? '';
303
+ return {
304
+ messages: [
305
+ {
306
+ role: 'user',
307
+ content: {
308
+ type: 'text',
309
+ text: `Please use the cognitive_run tool to review the following code:\n\n\`\`\`\n${code}\n\`\`\`\n\nCall: cognitive_run("code-reviewer", "${code.slice(0, 100)}...")`,
310
+ },
311
+ },
312
+ ],
313
+ };
314
+ }
315
+ case 'task_prioritize': {
316
+ const tasks = args?.tasks ?? '';
317
+ return {
318
+ messages: [
319
+ {
320
+ role: 'user',
321
+ content: {
322
+ type: 'text',
323
+ text: `Please use the cognitive_run tool to prioritize the following tasks:\n\n${tasks}\n\nCall: cognitive_run("task-prioritizer", "${tasks}")`,
324
+ },
325
+ },
326
+ ],
327
+ };
328
+ }
329
+ default:
330
+ throw new Error(`Unknown prompt: ${name}`);
331
+ }
332
+ });
333
+ // =============================================================================
334
+ // Server Start
335
+ // =============================================================================
336
+ export async function serve() {
337
+ const transport = new StdioServerTransport();
338
+ await server.connect(transport);
339
+ console.error('Cognitive Modules MCP Server started');
340
+ }
341
+ // Allow running directly
342
+ if (import.meta.url === `file://${process.argv[1]}`) {
343
+ serve().catch(console.error);
344
+ }
@@ -3,3 +3,4 @@
3
3
  */
4
4
  export * from './loader.js';
5
5
  export * from './runner.js';
6
+ export * from './subagent.js';
@@ -3,3 +3,4 @@
3
3
  */
4
4
  export * from './loader.js';
5
5
  export * from './runner.js';
6
+ export * from './subagent.js';