cognitive-modules-cli 1.0.1 → 1.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,162 +1,86 @@
1
- # Cognitive Runtime
1
+ # Cognitive Modules CLI (Node.js)
2
2
 
3
- **Structured AI Task Execution**
3
+ [![npm version](https://badge.fury.io/js/cognitive-modules-cli.svg)](https://www.npmjs.com/package/cognitive-modules-cli)
4
4
 
5
- Cognitive Runtime is the next-generation execution engine for Cognitive Modules. It provides a clean, provider-agnostic runtime that treats LLMs as interchangeable backends.
5
+ The Node.js/TypeScript version of the Cognitive Modules CLI.
6
6
 
7
- ## Philosophy
7
+ > This package is part of the [cognitive-modules](../../README.md) monorepo.
8
8
 
9
- Following the **Cognitive Runtime + Provider** architecture:
10
-
11
- ```
12
- ┌─────────────────────────────────────┐
13
- │ Cognitive Runtime │
14
- │ ┌────────────────────────────────┐ │
15
- │ │ Module System │ │
16
- │ │ (load, parse, validate) │ │
17
- │ └────────────────────────────────┘ │
18
- │ ┌────────────────────────────────┐ │
19
- │ │ Execution Engine │ │
20
- │ │ (prompt, schema, contract) │ │
21
- │ └────────────────────────────────┘ │
22
- │ ┌────────────────────────────────┐ │
23
- │ │ Provider Abstraction │ │
24
- │ │ (gemini, openai, anthropic, │ │
25
- │ │ deepseek, minimax, qwen...) │ │
26
- │ └────────────────────────────────┘ │
27
- └─────────────────────────────────────┘
28
- ```
29
-
30
- ## Installation
9
+ ## Installation
31
10
 
32
11
  ```bash
33
- npm install -g cognitive-runtime
34
- ```
12
+ # Global install (recommended)
13
+ npm install -g cognitive-modules-cli
35
14
 
36
- Or run directly:
37
-
38
- ```bash
39
- npx cognitive-runtime --help
15
+ # Or run with npx, no install required
16
+ npx cognitive-modules-cli --help
40
17
  ```
41
18
 
42
- ## Usage
43
-
44
- ### Run a Module
19
+ ## Quick Start
45
20
 
46
21
  ```bash
47
- cog run code-reviewer --args "def foo(): pass"
48
- ```
22
+ # Configure the LLM provider
23
+ export LLM_PROVIDER=openai
24
+ export OPENAI_API_KEY=sk-xxx
49
25
 
50
- ### List Modules
26
+ # Run a module
27
+ cog run code-reviewer --args "def login(u,p): return db.query(f'SELECT * FROM users WHERE name={u}')"
51
28
 
52
- ```bash
29
+ # List modules
53
30
  cog list
54
- ```
55
31
 
56
- ### Pipe Mode (stdin/stdout)
57
-
58
- ```bash
32
+ # Pipe mode
59
33
  echo "review this code" | cog pipe --module code-reviewer
60
34
  ```
61
35
 
62
- ### Check Configuration
36
+ ## Differences from the Python Version
63
37
 
64
- ```bash
65
- cog doctor
66
- ```
38
+ | Feature | Python (`cogn`) | Node.js (`cog`) |
39
+ |---------|-----------------|-----------------|
40
+ | Package name | `cognitive-modules` | `cognitive-modules-cli` |
41
+ | Install | `pip install` | `npm install -g` |
42
+ | Sub-agents | ✅ `@call:module` | ❌ Not yet supported |
43
+ | MCP Server | ✅ | ❌ Not yet supported |
44
+ | HTTP Server | ✅ | ❌ Not yet supported |
67
45
 
68
- ## Module Formats
46
+ Both versions share the same module format and the v2.2 spec.
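A minimal sketch of what a v2.2 `module.yaml` can declare, limited to the fields this package's loader reads (`tier`, `schema_strictness`, `overflow`, `enums`, `compat`, `meta`); the values are illustrative, not spec defaults:

```yaml
name: my-module
version: 2.2.0
responsibility: What this module does
tier: exec                    # 'exec' defaults enums.strategy to 'strict'
schema_strictness: medium     # high | medium | low
overflow:
  enabled: true
  recoverable: true
  max_items: 5
  require_suggested_mapping: true
enums:
  strategy: strict            # or 'extensible'
  unknown_tag: custom
compat:
  accepts_v21_payload: true
  runtime_auto_wrap: true
  schema_output_alias: data
meta:
  risk_rule: max_changes_risk
output:
  mode: json_strict
```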
69
47
 
70
- ### v2 (Recommended)
48
+ ## Supported Providers
71
49
 
72
- ```
73
- my-module/
74
- ├── module.yaml # Machine-readable manifest
75
- ├── prompt.md # Human-readable prompt
76
- ├── schema.json # IO contract
77
- └── tests/
78
- ├── case1.input.json
79
- └── case1.expected.json
80
- ```
50
+ | Provider | Environment Variable | Alias |
51
+ |----------|----------|------|
52
+ | OpenAI | `OPENAI_API_KEY` | - |
53
+ | Anthropic | `ANTHROPIC_API_KEY` | - |
54
+ | Gemini | `GEMINI_API_KEY` | - |
55
+ | DeepSeek | `DEEPSEEK_API_KEY` | - |
56
+ | MiniMax | `MINIMAX_API_KEY` | - |
57
+ | Moonshot | `MOONSHOT_API_KEY` | `kimi` |
58
+ | Qwen | `DASHSCOPE_API_KEY` | `tongyi` |
59
+ | Ollama | `OLLAMA_HOST` | `local` |
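For example, to route requests through Moonshot via its alias (assuming `LLM_PROVIDER` accepts the alias names listed above):

```bash
export LLM_PROVIDER=kimi         # alias for Moonshot
export MOONSHOT_API_KEY=sk-xxx
cog run code-reviewer --args "..."
```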
81
60
 
82
- **module.yaml**:
83
- ```yaml
84
- name: my-module
85
- version: 2.0.0
86
- responsibility: What this module does
87
- constraints:
88
- no_network: true
89
- no_side_effects: true
90
- output:
91
- mode: json_strict
92
- require_confidence: true
93
- require_rationale: true
94
- require_behavior_equivalence: true
95
- tools:
96
- allowed: []
97
- ```
98
-
99
- ### v1 (Legacy, still supported)
100
-
101
- ```
102
- my-module/
103
- ├── MODULE.md # Frontmatter + prompt combined
104
- └── schema.json
105
- ```
106
-
107
- ## Providers
108
-
109
- | Provider | Environment Variable | Default Model |
110
- |------------|------------------------|----------------------|
111
- | Gemini | `GEMINI_API_KEY` | `gemini-3-flash` |
112
- | OpenAI | `OPENAI_API_KEY` | `gpt-5.2` |
113
- | Anthropic | `ANTHROPIC_API_KEY` | `claude-sonnet-4.5` |
114
- | DeepSeek | `DEEPSEEK_API_KEY` | `deepseek-v3.2` |
115
- | MiniMax | `MINIMAX_API_KEY` | `MiniMax-M2.1` |
116
- | Moonshot | `MOONSHOT_API_KEY` | `kimi-k2.5` |
117
- | Qwen | `DASHSCOPE_API_KEY` | `qwen3-max` |
118
- | Ollama | `OLLAMA_HOST` | `llama4` (local) |
119
-
120
- ### Provider Aliases
61
+ ## Commands
121
62
 
122
- - `kimi` → Moonshot
123
- - `tongyi` / `dashscope` → Qwen
124
- - `local` Ollama
125
-
126
- ## Module Search Paths
127
-
128
- Modules are searched in order:
129
-
130
- 1. `./cognitive/modules/` (project-local)
131
- 2. `./.cognitive/modules/` (project-local, hidden)
132
- 3. `~/.cognitive/modules/` (user-global)
133
-
134
- ## Programmatic API
135
-
136
- ```typescript
137
- import { getProvider, findModule, runModule } from 'cognitive-runtime';
138
-
139
- const provider = getProvider('gemini');
140
- const module = await findModule('code-reviewer', ['./cognitive/modules']);
141
-
142
- if (module) {
143
- const result = await runModule(module, provider, {
144
- args: 'def foo(): pass',
145
- });
146
- console.log(result.output);
147
- }
63
+ ```bash
64
+ cog list                      # List modules
65
+ cog run <module> --args "..." # Run a module
66
+ cog add <url> -m <module>     # Add a module from GitHub
67
+ cog update <module>           # Update a module
68
+ cog remove <module>           # Remove a module
69
+ cog versions <url>            # Show available versions
70
+ cog init <name>               # Create a new module
71
+ cog pipe --module <name>      # Pipe mode
148
72
  ```
149
73
 
150
- ## Development
74
+ ## Development
151
75
 
152
76
  ```bash
153
- # Install dependencies
77
+ # Install dependencies
154
78
  npm install
155
79
 
156
- # Build
80
+ # Build
157
81
  npm run build
158
82
 
159
- # Run in development
83
+ # Run in development mode
160
84
  npm run dev -- run code-reviewer --args "..."
161
85
  ```
162
86
 
@@ -19,6 +19,18 @@ async function detectFormat(modulePath) {
19
19
  return 'v1';
20
20
  }
21
21
  }
22
+ /**
23
+ * Detect v2.x sub-version from manifest
24
+ */
25
+ function detectV2Version(manifest) {
26
+ if (manifest.tier || manifest.overflow || manifest.enums) {
27
+ return 'v2.2';
28
+ }
29
+ if (manifest.policies || manifest.failure) {
30
+ return 'v2.1';
31
+ }
32
+ return 'v2.0';
33
+ }
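// Editor's illustration (not part of the published file): how the checks above
// map a manifest to a format version.
//   detectV2Version({ tier: 'exec', overflow: { max_items: 5 } })  -> 'v2.2'
//   detectV2Version({ policies: {}, failure: {} })                 -> 'v2.1'
//   detectV2Version({ name: 'my-module', output: {} })             -> 'v2.0'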
22
34
  /**
23
35
  * Load v2 format module (module.yaml + prompt.md)
24
36
  */
@@ -29,6 +41,8 @@ async function loadModuleV2(modulePath) {
29
41
  // Read module.yaml
30
42
  const manifestContent = await fs.readFile(manifestFile, 'utf-8');
31
43
  const manifest = yaml.load(manifestContent);
44
+ // Detect v2.x version
45
+ const formatVersion = detectV2Version(manifest);
32
46
  // Read prompt.md
33
47
  let prompt = '';
34
48
  try {
@@ -40,17 +54,61 @@ async function loadModuleV2(modulePath) {
40
54
  // Read schema.json
41
55
  let inputSchema;
42
56
  let outputSchema;
57
+ let dataSchema;
58
+ let metaSchema;
43
59
  let errorSchema;
44
60
  try {
45
61
  const schemaContent = await fs.readFile(schemaFile, 'utf-8');
46
62
  const schema = JSON.parse(schemaContent);
47
63
  inputSchema = schema.input;
48
- outputSchema = schema.output;
64
+ // Support both "data" (v2.2) and "output" (v2.1) aliases
65
+ dataSchema = schema.data || schema.output;
66
+ outputSchema = dataSchema; // Backward compat
67
+ metaSchema = schema.meta;
49
68
  errorSchema = schema.error;
50
69
  }
51
70
  catch {
52
71
  // Schema file is optional but recommended
53
72
  }
73
+ // Extract v2.2 fields
74
+ const tier = manifest.tier;
75
+ const schemaStrictness = manifest.schema_strictness || 'medium';
76
+ // Determine default max_items based on strictness (SPEC-v2.2)
77
+ const strictnessMaxItems = {
78
+ high: 0,
79
+ medium: 5,
80
+ low: 20
81
+ };
82
+ const defaultMaxItems = strictnessMaxItems[schemaStrictness] ?? 5;
83
+ const defaultEnabled = schemaStrictness !== 'high';
84
+ // Parse overflow config with strictness-based defaults
85
+ const overflowRaw = manifest.overflow || {};
86
+ const overflow = {
87
+ enabled: overflowRaw.enabled ?? defaultEnabled,
88
+ recoverable: overflowRaw.recoverable ?? true,
89
+ max_items: overflowRaw.max_items ?? defaultMaxItems,
90
+ require_suggested_mapping: overflowRaw.require_suggested_mapping ?? true
91
+ };
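// Editor's illustration (not part of the published file): effective overflow
// defaults when module.yaml omits the `overflow` block.
//   schema_strictness: 'high'   -> { enabled: false, recoverable: true, max_items: 0,  require_suggested_mapping: true }
//   schema_strictness: 'medium' -> { enabled: true,  recoverable: true, max_items: 5,  require_suggested_mapping: true }
//   schema_strictness: 'low'    -> { enabled: true,  recoverable: true, max_items: 20, require_suggested_mapping: true }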
92
+ // Parse enums config
93
+ const enumsRaw = manifest.enums || {};
94
+ const enums = {
95
+ strategy: enumsRaw.strategy ??
96
+ (tier === 'exec' ? 'strict' : 'extensible'),
97
+ unknown_tag: enumsRaw.unknown_tag ?? 'custom'
98
+ };
99
+ // Parse compat config
100
+ const compatRaw = manifest.compat || {};
101
+ const compat = {
102
+ accepts_v21_payload: compatRaw.accepts_v21_payload ?? true,
103
+ runtime_auto_wrap: compatRaw.runtime_auto_wrap ?? true,
104
+ schema_output_alias: compatRaw.schema_output_alias ?? 'data'
105
+ };
106
+ // Parse meta config (including risk_rule)
107
+ const metaRaw = manifest.meta || {};
108
+ const metaConfig = {
109
+ required: metaRaw.required,
110
+ risk_rule: metaRaw.risk_rule,
111
+ };
54
112
  return {
55
113
  name: manifest.name || path.basename(modulePath),
56
114
  version: manifest.version || '1.0.0',
@@ -62,13 +120,26 @@ async function loadModuleV2(modulePath) {
62
120
  output: manifest.output,
63
121
  failure: manifest.failure,
64
122
  runtimeRequirements: manifest.runtime_requirements,
123
+ // v2.2 fields
124
+ tier,
125
+ schemaStrictness,
126
+ overflow,
127
+ enums,
128
+ compat,
129
+ metaConfig,
130
+ // Context and prompt
65
131
  context: manifest.context,
66
132
  prompt,
133
+ // Schemas
67
134
  inputSchema,
68
135
  outputSchema,
136
+ dataSchema,
137
+ metaSchema,
69
138
  errorSchema,
139
+ // Metadata
70
140
  location: modulePath,
71
141
  format: 'v2',
142
+ formatVersion,
72
143
  };
73
144
  }
74
145
  /**
@@ -1,6 +1,6 @@
1
1
  /**
2
2
  * Module Runner - Execute Cognitive Modules
3
- * v2.1: Envelope format support, clean input mapping
3
+ * v2.2: Envelope format with meta/data separation, risk_rule, repair pass
4
4
  */
5
5
  import type { Provider, CognitiveModule, ModuleResult, ModuleInput } from '../types.js';
6
6
  export interface RunOptions {
@@ -8,5 +8,7 @@ export interface RunOptions {
8
8
  args?: string;
9
9
  verbose?: boolean;
10
10
  useEnvelope?: boolean;
11
+ useV22?: boolean;
12
+ enableRepair?: boolean;
11
13
  }
12
14
  export declare function runModule(module: CognitiveModule, provider: Provider, options?: RunOptions): Promise<ModuleResult>;
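// Editor's sketch of calling the runner with the new options; assumes runModule,
// getProvider and findModule are still exported as in the pre-rename
// cognitive-runtime README (unverified for this release):
//   const provider = getProvider('openai');
//   const mod = await findModule('code-reviewer', ['./cognitive/modules']);
//   const result = await runModule(mod, provider, {
//     args: 'def foo(): pass',
//     useV22: true,        // request the v2.2 meta/data envelope
//     enableRepair: true,  // lossless repair pass (default)
//   });
//   if (result.ok) console.log(result.meta, result.data);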
@@ -1,11 +1,116 @@
1
1
  /**
2
2
  * Module Runner - Execute Cognitive Modules
3
- * v2.1: Envelope format support, clean input mapping
3
+ * v2.2: Envelope format with meta/data separation, risk_rule, repair pass
4
4
  */
5
+ import { aggregateRisk, isV22Envelope } from '../types.js';
6
+ // =============================================================================
7
+ // Repair Pass (v2.2)
8
+ // =============================================================================
9
+ /**
10
+ * Attempt to repair envelope format issues without changing semantics.
11
+ *
12
+ * Repairs (lossless only):
13
+ * - Missing meta fields (fill with conservative defaults)
14
+ * - Truncate explain if too long
15
+ * - Trim whitespace from string fields
16
+ *
17
+ * Does NOT repair:
18
+ * - Invalid enum values (treated as validation failure)
19
+ */
20
+ function repairEnvelope(response, riskRule = 'max_changes_risk', maxExplainLength = 280) {
21
+ const repaired = { ...response };
22
+ // Ensure meta exists
23
+ if (!repaired.meta || typeof repaired.meta !== 'object') {
24
+ repaired.meta = {};
25
+ }
26
+ const meta = repaired.meta;
27
+ const data = (repaired.data ?? {});
28
+ // Repair confidence
29
+ if (typeof meta.confidence !== 'number') {
30
+ meta.confidence = data.confidence ?? 0.5;
31
+ }
32
+ meta.confidence = Math.max(0, Math.min(1, meta.confidence));
33
+ // Repair risk using configurable aggregation rule
34
+ if (!meta.risk) {
35
+ meta.risk = aggregateRisk(data, riskRule);
36
+ }
37
+ // Trim whitespace only (lossless), do NOT invent new values
38
+ if (typeof meta.risk === 'string') {
39
+ meta.risk = meta.risk.trim().toLowerCase();
40
+ }
41
+ // Repair explain
42
+ if (typeof meta.explain !== 'string') {
43
+ const rationale = data.rationale;
44
+ meta.explain = rationale ? String(rationale).slice(0, maxExplainLength) : 'No explanation provided';
45
+ }
46
+ // Trim whitespace (lossless)
47
+ const explainStr = meta.explain;
48
+ meta.explain = explainStr.trim();
49
+ if (meta.explain.length > maxExplainLength) {
50
+ meta.explain = meta.explain.slice(0, maxExplainLength - 3) + '...';
51
+ }
52
+ // Build proper v2.2 response
53
+ const builtMeta = {
54
+ confidence: meta.confidence,
55
+ risk: meta.risk,
56
+ explain: meta.explain
57
+ };
58
+ const result = repaired.ok === false ? {
59
+ ok: false,
60
+ meta: builtMeta,
61
+ error: repaired.error ?? { code: 'UNKNOWN', message: 'Unknown error' },
62
+ partial_data: repaired.partial_data
63
+ } : {
64
+ ok: true,
65
+ meta: builtMeta,
66
+ data: repaired.data
67
+ };
68
+ return result;
69
+ }
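// Editor's illustration (not part of the published file): a success response that
// arrives without `meta` is rebuilt losslessly from `data`.
//   repairEnvelope({ ok: true, data: { confidence: 0.8, rationale: 'Looks fine', changes: [] } })
//   -> { ok: true,
//        meta: { confidence: 0.8, risk: aggregateRisk(data, 'max_changes_risk'), explain: 'Looks fine' },
//        data: { confidence: 0.8, rationale: 'Looks fine', changes: [] } }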
70
+ /**
71
+ * Wrap v2.1 response to v2.2 format
72
+ */
73
+ function wrapV21ToV22(response, riskRule = 'max_changes_risk') {
74
+ if (isV22Envelope(response)) {
75
+ return response;
76
+ }
77
+ if (response.ok) {
78
+ const data = (response.data ?? {});
79
+ const confidence = data.confidence ?? 0.5;
80
+ const rationale = data.rationale ?? '';
81
+ return {
82
+ ok: true,
83
+ meta: {
84
+ confidence,
85
+ risk: aggregateRisk(data, riskRule),
86
+ explain: rationale.slice(0, 280) || 'No explanation provided'
87
+ },
88
+ data: data
89
+ };
90
+ }
91
+ else {
92
+ const errorMsg = response.error?.message ?? 'Unknown error';
93
+ return {
94
+ ok: false,
95
+ meta: {
96
+ confidence: 0,
97
+ risk: 'high',
98
+ explain: errorMsg.slice(0, 280)
99
+ },
100
+ error: response.error ?? { code: 'UNKNOWN', message: errorMsg },
101
+ partial_data: response.partial_data
102
+ };
103
+ }
104
+ }
5
105
  export async function runModule(module, provider, options = {}) {
6
- const { args, input, verbose = false, useEnvelope } = options;
106
+ const { args, input, verbose = false, useEnvelope, useV22, enableRepair = true } = options;
7
107
  // Determine if we should use envelope format
8
108
  const shouldUseEnvelope = useEnvelope ?? (module.output?.envelope === true || module.format === 'v2');
109
+ // Determine if we should use v2.2 format
110
+ const isV22Module = module.tier !== undefined || module.formatVersion === 'v2.2';
111
+ const shouldUseV22 = useV22 ?? (isV22Module || module.compat?.runtime_auto_wrap === true);
112
+ // Get risk_rule from module config
113
+ const riskRule = module.metaConfig?.risk_rule ?? 'max_changes_risk';
9
114
  // Build clean input data (v2 style: no $ARGUMENTS pollution)
10
115
  const inputData = input || {};
11
116
  // Map legacy --args to clean input
@@ -61,11 +166,21 @@ export async function runModule(module, provider, options = {}) {
61
166
  }
62
167
  // Add envelope format instructions
63
168
  if (shouldUseEnvelope) {
64
- systemParts.push('', 'RESPONSE FORMAT (Envelope):');
65
- systemParts.push('- Wrap your response in the envelope format');
66
- systemParts.push('- Success: { "ok": true, "data": { ...your output... } }');
67
- systemParts.push('- Error: { "ok": false, "error": { "code": "ERROR_CODE", "message": "..." } }');
68
- systemParts.push('- Include "confidence" (0-1) and "rationale" in data');
169
+ if (shouldUseV22) {
170
+ systemParts.push('', 'RESPONSE FORMAT (Envelope v2.2):');
171
+ systemParts.push('- Wrap your response in the v2.2 envelope format with separate meta and data');
172
+ systemParts.push('- Success: { "ok": true, "meta": { "confidence": 0.9, "risk": "low", "explain": "short summary" }, "data": { ...payload... } }');
173
+ systemParts.push('- Error: { "ok": false, "meta": { "confidence": 0.0, "risk": "high", "explain": "error summary" }, "error": { "code": "ERROR_CODE", "message": "..." } }');
174
+ systemParts.push('- meta.explain must be ≤280 characters. data.rationale can be longer for detailed reasoning.');
175
+ systemParts.push('- meta.risk must be one of: "none", "low", "medium", "high"');
176
+ }
177
+ else {
178
+ systemParts.push('', 'RESPONSE FORMAT (Envelope):');
179
+ systemParts.push('- Wrap your response in the envelope format');
180
+ systemParts.push('- Success: { "ok": true, "data": { ...your output... } }');
181
+ systemParts.push('- Error: { "ok": false, "error": { "code": "ERROR_CODE", "message": "..." } }');
182
+ systemParts.push('- Include "confidence" (0-1) and "rationale" in data');
183
+ }
69
184
  if (module.output?.require_behavior_equivalence) {
70
185
  systemParts.push('- Include "behavior_equivalence" (boolean) in data');
71
186
  }
@@ -105,10 +220,46 @@ export async function runModule(module, provider, options = {}) {
105
220
  }
106
221
  // Handle envelope format
107
222
  if (shouldUseEnvelope && isEnvelopeResponse(parsed)) {
108
- return parseEnvelopeResponse(parsed, result.content);
223
+ let response = parseEnvelopeResponse(parsed, result.content);
224
+ // Upgrade to v2.2 if needed
225
+ if (shouldUseV22 && response.ok && !('meta' in response && response.meta)) {
226
+ const upgraded = wrapV21ToV22(parsed, riskRule);
227
+ response = {
228
+ ok: true,
229
+ meta: upgraded.meta,
230
+ data: upgraded.data,
231
+ raw: result.content
232
+ };
233
+ }
234
+ // Apply repair pass if enabled and response needs it
235
+ if (enableRepair && response.ok && shouldUseV22) {
236
+ const repaired = repairEnvelope(response, riskRule);
237
+ response = {
238
+ ok: true,
239
+ meta: repaired.meta,
240
+ data: repaired.data,
241
+ raw: result.content
242
+ };
243
+ }
244
+ return response;
109
245
  }
110
246
  // Handle legacy format (non-envelope)
111
- return parseLegacyResponse(parsed, result.content);
247
+ const legacyResult = parseLegacyResponse(parsed, result.content);
248
+ // Upgrade to v2.2 if requested
249
+ if (shouldUseV22 && legacyResult.ok) {
250
+ const data = (legacyResult.data ?? {});
251
+ return {
252
+ ok: true,
253
+ meta: {
254
+ confidence: data.confidence ?? 0.5,
255
+ risk: aggregateRisk(data, riskRule),
256
+ explain: (data.rationale ?? '').slice(0, 280) || 'No explanation provided'
257
+ },
258
+ data: legacyResult.data,
259
+ raw: result.content
260
+ };
261
+ }
262
+ return legacyResult;
112
263
  }
113
264
  /**
114
265
  * Check if response is in envelope format
@@ -120,11 +271,32 @@ function isEnvelopeResponse(obj) {
120
271
  return typeof o.ok === 'boolean';
121
272
  }
122
273
  /**
123
- * Parse envelope format response
274
+ * Parse envelope format response (supports both v2.1 and v2.2)
124
275
  */
125
276
  function parseEnvelopeResponse(response, raw) {
277
+ // Check if v2.2 format (has meta)
278
+ if (isV22Envelope(response)) {
279
+ if (response.ok) {
280
+ return {
281
+ ok: true,
282
+ meta: response.meta,
283
+ data: response.data,
284
+ raw,
285
+ };
286
+ }
287
+ else {
288
+ return {
289
+ ok: false,
290
+ meta: response.meta,
291
+ error: response.error,
292
+ partial_data: response.partial_data,
293
+ raw,
294
+ };
295
+ }
296
+ }
297
+ // v2.1 format
126
298
  if (response.ok) {
127
- const data = response.data;
299
+ const data = (response.data ?? {});
128
300
  return {
129
301
  ok: true,
130
302
  data: {
@@ -169,6 +341,7 @@ function parseLegacyResponse(output, raw) {
169
341
  };
170
342
  }
171
343
  }
344
+ // Return as v2.1 format (data includes confidence)
172
345
  return {
173
346
  ok: true,
174
347
  data: {