feature-architect-agent 1.0.10 → 1.0.12

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -50,55 +50,58 @@ A command-line tool that helps developers plan new features systematically. Inst
 
  ## 🚀 Quick Start
 
- ### For Team Members (Published Package)
+ ### Installation
 
  ```bash
- # Install globally from your team's private registry
- npm install -g @your-org/feature-architect
+ # Install globally from npm
+ npm install -g feature-architect-agent
 
- # Verify installation
- feature-architect verify
-
- # That's it! The built-in team API key handles everything.
- # No individual API key setup needed!
+ # Or for local development
+ cd HACKATHON_FEATURE_ARCHITECT_AGENT
+ npm install && npm run build && npm link
  ```
 
- ### For Developers (Local Development)
+ ### API Key / Provider Setup
 
- ```bash
- # 1. Navigate to the agent directory
- cd HACKATHON_FEATURE_ARCHITECT_AGENT
+ **Option 1: Use FREE cloud models (Recommended for hackathons!)**
 
- # 2. Install dependencies
- npm install
+ ```bash
+ # MiniMax M2.5 - FREE tier available!
+ export MINIMAX_API_KEY=xxx
 
- # 3. Build the package
- npm run build
+ # Z AI GLM - FREE tier available!
+ export ZAI_API_KEY=xxx
+ ```
 
- # 4. Link globally (makes command available everywhere)
- npm link
+ **Option 2: Use FREE local models**
 
- # Now you can use `feature-architect` from ANY directory!
+ ```bash
+ # OpenCode / Ollama - No API key needed!
+ export OPENCODE_BASE_URL=http://localhost:11434/v1
+ export OPENCODE_MODEL=llama3.2
  ```
 
- ### Verify Installation
+ **Option 3: Use cloud providers**
 
  ```bash
- feature-architect verify
- ```
+ # Gemini (Google)
+ export GOOGLE_API_KEY=xxx
 
- Checks for:
- - Context file existence
- - API key configuration (built-in or user-provided)
- - Output directory
+ # OpenAI (GPT-4, etc.)
+ export OPENAI_API_KEY=sk-xxx
 
- ### API Key Configuration
+ # Claude (Anthropic)
+ export ANTHROPIC_API_KEY=sk-ant-xxx
+ ```
+
+ **For persistent configuration**, add to `~/.bashrc` or `~/.zshrc`:
 
- **Team members using the published package:**
- - No setup needed! The built-in team API key is included.
- - Works out of the box after installation.
+ ```bash
+ echo 'export MINIMAX_API_KEY=xxx' >> ~/.bashrc
+ source ~/.bashrc
+ ```
 
- **Optional: Use your own API key:**
+ ### Verify Installation
 
  You can override the built-in key with your own:
 
@@ -537,21 +540,33 @@ $ feature-architect plan "Add user comments"
  The agent automatically detects which API key is available and uses the appropriate provider. No manual selection needed!
 
  **Priority order:**
- 1. `ANTHROPIC_API_KEY` → Claude (Anthropic)
- 2. `OPENAI_API_KEY` → OpenAI (GPT-4, etc.)
- 3. `GOOGLE_API_KEY` → Gemini
- 4. `AI_API_KEY` → Generic fallback (uses Claude)
+ 1. `MINIMAX_API_KEY` → MiniMax M2.5 (FREE - no signup required!)
+ 2. `ZAI_API_KEY` → Z AI GLM (FREE tier available!)
+ 3. `OPENCODE_BASE_URL` → OpenCode/Ollama (FREE - local!)
+ 4. `GOOGLE_API_KEY` → Gemini
+ 5. `OPENAI_API_KEY` → OpenAI (GPT-4, etc.)
+ 6. `ANTHROPIC_API_KEY` → Claude (Anthropic)
+ 7. `AI_API_KEY` → Generic fallback
 
  **Force specific provider:**
  ```bash
- # Use Claude explicitly
- feature-architect plan "user profile" --provider claude
+ # Use MiniMax (free M2.5 model)
+ feature-architect plan "user profile" --provider minimax
 
- # Use OpenAI explicitly
- feature-architect plan "user profile" --provider openai
+ # Use Z AI (free GLM models)
+ feature-architect plan "user profile" --provider zai
 
- # Use Gemini explicitly
+ # Use OpenCode (free local models)
+ feature-architect plan "user profile" --provider opencode
+
+ # Use Gemini
  feature-architect plan "user profile" --provider gemini
+
+ # Use OpenAI
+ feature-architect plan "user profile" --provider openai
+
+ # Use Claude
+ feature-architect plan "user profile" --provider claude
  ```
 
  ### Incremental Updates
@@ -590,8 +605,8 @@ feature-architect import-context --input team-context.json
 
  ## 🚦 Getting Started
 
- 1. **Install**: `cd HACKATHON_FEATURE_ARCHITECT_AGENT && npm install && npm run build && npm link`
- 2. **Set API Key**: `export ANTHROPIC_API_KEY=your-key-here`
+ 1. **Install**: `npm install -g feature-architect-agent`
+ 2. **Set Provider**: `export MINIMAX_API_KEY=xxx` (FREE!)
  3. **Setup**: `cd your-project && feature-architect init`
  4. **Plan**: `feature-architect plan "your feature here"`
  5. **Implement**: Use the generated plan as your blueprint
@@ -615,11 +630,10 @@ git commit -m "feat: add shared context for feature planning"
 
  ```bash
  # 1. Install globally (one-time setup)
- cd HACKATHON_FEATURE_ARCHITECT_AGENT
- npm install && npm run build && npm link
+ npm install -g feature-architect-agent
 
- # 2. Set your API key (add to ~/.bashrc or ~/.zshrc)
- export ANTHROPIC_API_KEY=your-key-here
+ # 2. Set your provider (add to ~/.bashrc or ~/.zshrc)
+ export OPENCODE_BASE_URL=http://localhost:11434/v1 # FREE!
 
  # 3. Navigate to your project
  cd your-project
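The README's new priority order can be sketched as a standalone fall-through check. This is an illustrative sketch only: `detectProvider` is a hypothetical helper name, not an export of the package, and the generic `AI_API_KEY` fallback (priority 7) is omitted since the updated README no longer pins it to a specific provider.

```javascript
// Hypothetical sketch of the documented provider auto-detection order.
// Mirrors the README's priority list 1-6; not actual package code.
function detectProvider(env) {
  if (env.MINIMAX_API_KEY) return 'minimax';                        // 1. MiniMax M2.5
  if (env.ZAI_API_KEY || env.ZHIPU_API_KEY) return 'zai';           // 2. Z AI GLM
  if (env.OPENCODE_BASE_URL || env.OPENCODE_URL) return 'opencode'; // 3. OpenCode/Ollama (local)
  if (env.GOOGLE_API_KEY) return 'gemini';                          // 4. Gemini
  if (env.OPENAI_API_KEY) return 'openai';                          // 5. OpenAI
  if (env.ANTHROPIC_API_KEY) return 'claude';                       // 6. Claude
  return null;                                                      // no provider configured
}
```

Note that a `MINIMAX_API_KEY` in the environment now wins even when Anthropic or OpenAI keys are also set, which is the observable behavior change in this release.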
package/dist/cli/index.js CHANGED
@@ -10,7 +10,7 @@ const program = new commander_1.Command();
  program
  .name('feature-architect')
  .description('AI-powered feature planning agent - generate complete technical specifications')
- .version('1.0.0');
+ .version('1.0.11');
  program
  .command('init')
  .description('Initialize in project (analyze codebase)')
@@ -52,11 +52,11 @@ program
  });
  // Default: show help if no command
  program.on('command:*', () => {
- (0, logger_js_1.header)('Feature Architect Agent v1.0.0');
+ (0, logger_js_1.header)('Feature Architect Agent v1.0.11');
  (0, logger_js_1.info)('AI-powered feature planning for development teams\n');
  (0, logger_js_1.dim)('Quick start:');
  (0, logger_js_1.dim)(' 1. feature-architect init # Analyze codebase');
- (0, logger_js_1.dim)(' 2. export ANTHROPIC_API_KEY=xxx # Set API key');
+ (0, logger_js_1.dim)(' 2. export MINIMAX_API_KEY=xxx # Set API key (MiniMax M2.5 - FREE!)');
  (0, logger_js_1.dim)(' 3. feature-architect plan "feature" # Plan feature');
  console.log('');
  });
@@ -1 +1 @@
- {"version":3,"file":"plan.d.ts","sourceRoot":"","sources":["../../src/commands/plan.ts"],"names":[],"mappings":"AAQA,wBAAsB,WAAW,CAC/B,OAAO,EAAE,MAAM,EACf,OAAO,GAAE;IACP,QAAQ,CAAC,EAAE,MAAM,CAAC;IAClB,MAAM,CAAC,EAAE,MAAM,CAAC;IAChB,UAAU,CAAC,EAAE,OAAO,CAAC;CACjB,GACL,OAAO,CAAC,IAAI,CAAC,CAyJf"}
+ {"version":3,"file":"plan.d.ts","sourceRoot":"","sources":["../../src/commands/plan.ts"],"names":[],"mappings":"AAQA,wBAAsB,WAAW,CAC/B,OAAO,EAAE,MAAM,EACf,OAAO,GAAE;IACP,QAAQ,CAAC,EAAE,MAAM,CAAC;IAClB,MAAM,CAAC,EAAE,MAAM,CAAC;IAChB,UAAU,CAAC,EAAE,OAAO,CAAC;CACjB,GACL,OAAO,CAAC,IAAI,CAAC,CA2Kf"}
@@ -39,8 +39,18 @@ async function planCommand(feature, options = {}) {
  // Auto-detect if not specified
  if (!provider) {
  // Check for API keys in environment (no built-in keys for security)
- // Opencode first (free local models)
- if (process.env.OPENCODE_BASE_URL || process.env.OPENCODE_URL) {
+ // MiniMax first (free M2.5 model)
+ if (process.env.MINIMAX_API_KEY) {
+ provider = 'minimax';
+ apiKey = process.env.MINIMAX_API_KEY;
+ // Z AI second (free GLM models)
+ }
+ else if (process.env.ZAI_API_KEY || process.env.ZHIPU_API_KEY) {
+ provider = 'zai';
+ apiKey = process.env.ZAI_API_KEY || process.env.ZHIPU_API_KEY;
+ // Opencode third (free local models)
+ }
+ else if (process.env.OPENCODE_BASE_URL || process.env.OPENCODE_URL) {
  provider = 'opencode';
  apiKey = 'not-needed';
  }
@@ -66,12 +76,16 @@ async function planCommand(feature, options = {}) {
  (0, logger_js_1.dim)('You need to set an API key or local LLM to use this tool.');
  (0, logger_js_1.dim)('');
  (0, logger_js_1.dim)('Choose one of these options:');
- (0, logger_js_1.dim)(' export OPENCODE_BASE_URL=http://localhost:11434/v1 # For OpenCode/Ollama (FREE)');
+ (0, logger_js_1.dim)(' export MINIMAX_API_KEY=xxx # For MiniMax M2.5 (FREE!)');
+ (0, logger_js_1.dim)(' export ZAI_API_KEY=xxx # For Z AI GLM models (FREE!)');
+ (0, logger_js_1.dim)(' export OPENCODE_BASE_URL=http://localhost:11434/v1 # For OpenCode/Ollama');
  (0, logger_js_1.dim)(' export GOOGLE_API_KEY=xxx # For Gemini');
  (0, logger_js_1.dim)(' export OPENAI_API_KEY=sk-xxx # For OpenAI');
  (0, logger_js_1.dim)(' export ANTHROPIC_API_KEY=sk-ant-xxx # For Claude');
  (0, logger_js_1.dim)('');
  (0, logger_js_1.dim)('Get API keys:');
+ (0, logger_js_1.dim)(' MiniMax: https://www.minimaxi.com/user-center/basic-information/interface-key (M2.5-Free - FREE!)');
+ (0, logger_js_1.dim)(' Z AI: https://open.bigmodel.cn/usercenter/apikeys (GLM-4.7-Flash - FREE!)');
  (0, logger_js_1.dim)(' OpenCode/Ollama: Install locally - FREE!');
  (0, logger_js_1.dim)(' Gemini: https://aistudio.google.com/app/apikey');
  (0, logger_js_1.dim)(' OpenAI: https://platform.openai.com/api-keys');
@@ -100,6 +114,14 @@ async function planCommand(feature, options = {}) {
  provider = 'opencode';
  apiKey = 'not-needed'; // Local LLMs don't need API keys
  }
+ else if (providerLower === 'zai' || providerLower === 'zhipu') {
+ provider = 'zai';
+ apiKey = process.env.ZAI_API_KEY || process.env.ZHIPU_API_KEY || '';
+ }
+ else if (providerLower === 'minimax') {
+ provider = 'minimax';
+ apiKey = process.env.MINIMAX_API_KEY || '';
+ }
  if (!apiKey) {
  (0, logger_js_1.error)(`No API key found for ${provider}`);
  (0, logger_js_1.dim)(`Set it with: export ${provider.toUpperCase()}_API_KEY=xxx`);
@@ -0,0 +1,41 @@
+ /**
+ * MiniMax Provider
+ *
+ * For MiniMax AI models with OpenAI-compatible APIs.
+ * Free models available:
+ * - MiniMax-M2.5-Free (FREE!)
+ * - MiniMax-M2.5
+ * - MiniMax-Text-01
+ * - MiniMax-Audio
+ *
+ * Get API key: https://www.minimaxi.com/user-center/basic-information/interface-key
+ */
+ import type { LLMProvider } from './types.js';
+ export interface MiniMaxConfig {
+ baseURL?: string;
+ model?: string;
+ groupId?: string;
+ }
+ export declare class MiniMaxProvider implements LLMProvider {
+ private baseURL;
+ private model;
+ private apiKey;
+ private groupId;
+ constructor(apiKey: string, config?: MiniMaxConfig);
+ generate(prompt: string, options?: {
+ temperature?: number;
+ maxTokens?: number;
+ }): Promise<string>;
+ /**
+ * Update the model being used
+ */
+ setModel(model: string): void;
+ /**
+ * Get current configuration
+ */
+ getConfig(): {
+ baseURL: string;
+ model: string;
+ };
+ }
+ //# sourceMappingURL=MiniMax.d.ts.map
@@ -0,0 +1 @@
+ {"version":3,"file":"MiniMax.d.ts","sourceRoot":"","sources":["../../src/llm/MiniMax.ts"],"names":[],"mappings":"AAAA;;;;;;;;;;;GAWG;AAEH,OAAO,KAAK,EAAE,WAAW,EAAE,MAAM,YAAY,CAAC;AAE9C,MAAM,WAAW,aAAa;IAC5B,OAAO,CAAC,EAAE,MAAM,CAAC;IACjB,KAAK,CAAC,EAAE,MAAM,CAAC;IACf,OAAO,CAAC,EAAE,MAAM,CAAC;CAClB;AAED,qBAAa,eAAgB,YAAW,WAAW;IACjD,OAAO,CAAC,OAAO,CAAS;IACxB,OAAO,CAAC,KAAK,CAAS;IACtB,OAAO,CAAC,MAAM,CAAS;IACvB,OAAO,CAAC,OAAO,CAAS;gBAGtB,MAAM,EAAE,MAAM,EACd,MAAM,GAAE,aAAkB;IAQtB,QAAQ,CACZ,MAAM,EAAE,MAAM,EACd,OAAO,GAAE;QAAE,WAAW,CAAC,EAAE,MAAM,CAAC;QAAC,SAAS,CAAC,EAAE,MAAM,CAAA;KAAO,GACzD,OAAO,CAAC,MAAM,CAAC;IAuDlB;;OAEG;IACH,QAAQ,CAAC,KAAK,EAAE,MAAM,GAAG,IAAI;IAI7B;;OAEG;IACH,SAAS,IAAI;QAAE,OAAO,EAAE,MAAM,CAAC;QAAC,KAAK,EAAE,MAAM,CAAA;KAAE;CAGhD"}
@@ -0,0 +1,86 @@
+ "use strict";
+ /**
+ * MiniMax Provider
+ *
+ * For MiniMax AI models with OpenAI-compatible APIs.
+ * Free models available:
+ * - MiniMax-M2.5-Free (FREE!)
+ * - MiniMax-M2.5
+ * - MiniMax-Text-01
+ * - MiniMax-Audio
+ *
+ * Get API key: https://www.minimaxi.com/user-center/basic-information/interface-key
+ */
+ Object.defineProperty(exports, "__esModule", { value: true });
+ exports.MiniMaxProvider = void 0;
+ class MiniMaxProvider {
+ baseURL;
+ model;
+ apiKey;
+ groupId;
+ constructor(apiKey, config = {}) {
+ this.baseURL = config.baseURL || process.env.MINIMAX_BASE_URL || 'https://api.minimax.chat/v1';
+ this.model = config.model || process.env.MINIMAX_MODEL || 'MiniMax-M2.5-Free';
+ this.apiKey = apiKey || process.env.MINIMAX_API_KEY || '';
+ this.groupId = config.groupId || process.env.MINIMAX_GROUP_ID || '';
+ }
+ async generate(prompt, options = {}) {
+ const temperature = options.temperature ?? 0.7;
+ const maxTokens = options.maxTokens ?? 8000;
+ try {
+ const headers = {
+ 'Content-Type': 'application/json',
+ 'Authorization': `Bearer ${this.apiKey}`
+ };
+ // MiniMax requires Group ID header
+ if (this.groupId) {
+ headers['GroupId'] = this.groupId;
+ }
+ const response = await fetch(`${this.baseURL}/text/chatcompletion_v2`, {
+ method: 'POST',
+ headers,
+ body: JSON.stringify({
+ model: this.model,
+ messages: [
+ {
+ role: 'user',
+ content: prompt
+ }
+ ],
+ temperature,
+ tokens_to_generate: maxTokens,
+ stream: false
+ })
+ });
+ if (!response.ok) {
+ const errorText = await response.text();
+ throw new Error(`MiniMax API error: ${response.status} ${response.statusText}\n${errorText}`);
+ }
+ const data = await response.json();
+ if (data.error) {
+ throw new Error(`MiniMax API error: ${data.error.message || JSON.stringify(data.error)}`);
+ }
+ // MiniMax response format
+ return data.choices?.[0]?.message?.content || data.reply || '';
+ }
+ catch (error) {
+ if (error instanceof Error) {
+ throw new Error(`MiniMax request failed: ${error.message}`);
+ }
+ throw new Error('MiniMax request failed with unknown error');
+ }
+ }
+ /**
+ * Update the model being used
+ */
+ setModel(model) {
+ this.model = model;
+ }
+ /**
+ * Get current configuration
+ */
+ getConfig() {
+ return { baseURL: this.baseURL, model: this.model };
+ }
+ }
+ exports.MiniMaxProvider = MiniMaxProvider;
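One detail worth noting in the new `MiniMaxProvider`: unlike the package's OpenAI-style providers, it posts to a `/text/chatcompletion_v2` path and names the token limit `tokens_to_generate` rather than `max_tokens`. A minimal sketch of the request body it builds, where `buildMiniMaxBody` is a hypothetical helper introduced purely for illustration:

```javascript
// Hypothetical helper mirroring the body MiniMaxProvider sends to
// `${baseURL}/text/chatcompletion_v2`. Note tokens_to_generate, not max_tokens.
function buildMiniMaxBody(model, prompt, { temperature = 0.7, maxTokens = 8000 } = {}) {
  return {
    model,
    messages: [{ role: 'user', content: prompt }],
    temperature,
    tokens_to_generate: maxTokens, // MiniMax-specific field name
    stream: false
  };
}
```

The optional `GroupId` header (sent only when `MINIMAX_GROUP_ID` is configured) is the other MiniMax-specific wrinkle handled in the provider above.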
@@ -3,10 +3,19 @@
  *
  * For local LLM servers with OpenAI-compatible APIs.
  * Commonly used with:
- * - OpenCode
+ * - OpenCode (with Z AI GLM models)
  * - LM Studio
  * - LocalAI
  * - text-generation-webui
+ * - Ollama
+ *
+ * Supported Models:
+ * - GLM-4.7-Flash (Z AI)
+ * - GLM-4-Flash
+ * - GLM-4-Air
+ * - llama3.2
+ * - deepseek-chat
+ * - qwen2.5
  */
  import type { LLMProvider } from './types.js';
  export interface OpenCodeConfig {
@@ -1 +1 @@
- {"version":3,"file":"Opencode.d.ts","sourceRoot":"","sources":["../../src/llm/Opencode.ts"],"names":[],"mappings":"AAAA;;;;;;;;;GASG;AAEH,OAAO,KAAK,EAAE,WAAW,EAAE,MAAM,YAAY,CAAC;AAE9C,MAAM,WAAW,cAAc;IAC7B,OAAO,CAAC,EAAE,MAAM,CAAC;IACjB,KAAK,CAAC,EAAE,MAAM,CAAC;CAChB;AAED,qBAAa,gBAAiB,YAAW,WAAW;IAClD,OAAO,CAAC,OAAO,CAAS;IACxB,OAAO,CAAC,KAAK,CAAS;IACtB,OAAO,CAAC,MAAM,CAAS;gBAGrB,MAAM,GAAE,MAAqB,EAAE,0CAA0C;IACzE,MAAM,GAAE,cAAmB;IAOvB,QAAQ,CACZ,MAAM,EAAE,MAAM,EACd,OAAO,GAAE;QAAE,WAAW,CAAC,EAAE,MAAM,CAAC;QAAC,SAAS,CAAC,EAAE,MAAM,CAAA;KAAO,GACzD,OAAO,CAAC,MAAM,CAAC;IAgDlB;;OAEG;IACH,QAAQ,CAAC,KAAK,EAAE,MAAM,GAAG,IAAI;IAI7B;;OAEG;IACH,SAAS,IAAI;QAAE,OAAO,EAAE,MAAM,CAAC;QAAC,KAAK,EAAE,MAAM,CAAA;KAAE;CAGhD"}
+ {"version":3,"file":"Opencode.d.ts","sourceRoot":"","sources":["../../src/llm/Opencode.ts"],"names":[],"mappings":"AAAA;;;;;;;;;;;;;;;;;;GAkBG;AAEH,OAAO,KAAK,EAAE,WAAW,EAAE,MAAM,YAAY,CAAC;AAE9C,MAAM,WAAW,cAAc;IAC7B,OAAO,CAAC,EAAE,MAAM,CAAC;IACjB,KAAK,CAAC,EAAE,MAAM,CAAC;CAChB;AAUD,qBAAa,gBAAiB,YAAW,WAAW;IAClD,OAAO,CAAC,OAAO,CAAS;IACxB,OAAO,CAAC,KAAK,CAAS;IACtB,OAAO,CAAC,MAAM,CAAS;gBAGrB,MAAM,GAAE,MAAqB,EAAE,0CAA0C;IACzE,MAAM,GAAE,cAAmB;IAOvB,QAAQ,CACZ,MAAM,EAAE,MAAM,EACd,OAAO,GAAE;QAAE,WAAW,CAAC,EAAE,MAAM,CAAC;QAAC,SAAS,CAAC,EAAE,MAAM,CAAA;KAAO,GACzD,OAAO,CAAC,MAAM,CAAC;IAgDlB;;OAEG;IACH,QAAQ,CAAC,KAAK,EAAE,MAAM,GAAG,IAAI;IAI7B;;OAEG;IACH,SAAS,IAAI;QAAE,OAAO,EAAE,MAAM,CAAC;QAAC,KAAK,EAAE,MAAM,CAAA;KAAE;CAGhD"}
@@ -4,22 +4,38 @@
  *
  * For local LLM servers with OpenAI-compatible APIs.
  * Commonly used with:
- * - OpenCode
+ * - OpenCode (with Z AI GLM models)
  * - LM Studio
  * - LocalAI
  * - text-generation-webui
+ * - Ollama
+ *
+ * Supported Models:
+ * - GLM-4.7-Flash (Z AI)
+ * - GLM-4-Flash
+ * - GLM-4-Air
+ * - llama3.2
+ * - deepseek-chat
+ * - qwen2.5
  */
  Object.defineProperty(exports, "__esModule", { value: true });
  exports.OpenCodeProvider = void 0;
+ // Default OpenCode endpoints to try
+ const DEFAULT_ENDPOINTS = [
+ 'http://localhost:8080/v1', // OpenCode default
+ 'http://localhost:11434/v1', // Ollama default
+ 'http://localhost:1234/v1', // LM Studio default
+ 'http://localhost:5000/v1', // LocalAI default
+ ];
  class OpenCodeProvider {
  baseURL;
  model;
  apiKey;
  constructor(apiKey = 'not-needed', // Local servers often don't need API keys
  config = {}) {
- this.baseURL = config.baseURL || process.env.OPENCODE_BASE_URL || 'http://localhost:11434/v1';
- this.model = config.model || process.env.OPENCODE_MODEL || 'llama3.2';
- this.apiKey = apiKey;
+ this.baseURL = config.baseURL || process.env.OPENCODE_BASE_URL || DEFAULT_ENDPOINTS[0];
+ this.model = config.model || process.env.OPENCODE_MODEL || 'GLM-4.7-Flash';
+ this.apiKey = apiKey || process.env.ZAI_API_KEY || process.env.OPENCODE_API_KEY || 'not-needed';
  }
  async generate(prompt, options = {}) {
  const temperature = options.temperature ?? 0.7;
@@ -0,0 +1,41 @@
+ /**
+ * Z AI Provider (Zhipu AI / GLM Models)
+ *
+ * Supports GLM-4.7-Flash and other GLM models from Z AI (Zhipu AI)
+ * API Documentation: https://open.bigmodel.cn/dev/api
+ *
+ * Supported Models:
+ * - GLM-4.7-Flash (Latest, fast)
+ * - GLM-4-Flash
+ * - GLM-4-Air
+ * - GLM-4-Plus
+ * - GLM-4
+ */
+ import type { LLMProvider } from './types.js';
+ export interface ZAIConfig {
+ apiKey?: string;
+ model?: string;
+ baseURL?: string;
+ }
+ export declare class ZAIProvider implements LLMProvider {
+ private apiKey;
+ private model;
+ private baseURL;
+ constructor(apiKey?: string, config?: ZAIConfig);
+ generate(prompt: string, options?: {
+ temperature?: number;
+ maxTokens?: number;
+ }): Promise<string>;
+ /**
+ * Update the model being used
+ */
+ setModel(model: string): void;
+ /**
+ * Get current configuration
+ */
+ getConfig(): {
+ baseURL: string;
+ model: string;
+ };
+ }
+ //# sourceMappingURL=ZAI.d.ts.map
@@ -0,0 +1 @@
+ {"version":3,"file":"ZAI.d.ts","sourceRoot":"","sources":["../../src/llm/ZAI.ts"],"names":[],"mappings":"AAAA;;;;;;;;;;;;GAYG;AAEH,OAAO,KAAK,EAAE,WAAW,EAAE,MAAM,YAAY,CAAC;AAE9C,MAAM,WAAW,SAAS;IACxB,MAAM,CAAC,EAAE,MAAM,CAAC;IAChB,KAAK,CAAC,EAAE,MAAM,CAAC;IACf,OAAO,CAAC,EAAE,MAAM,CAAC;CAClB;AAED,qBAAa,WAAY,YAAW,WAAW;IAC7C,OAAO,CAAC,MAAM,CAAS;IACvB,OAAO,CAAC,KAAK,CAAS;IACtB,OAAO,CAAC,OAAO,CAAS;gBAGtB,MAAM,GAAE,MAAW,EACnB,MAAM,GAAE,SAAc;IAWlB,QAAQ,CACZ,MAAM,EAAE,MAAM,EACd,OAAO,GAAE;QAAE,WAAW,CAAC,EAAE,MAAM,CAAC;QAAC,SAAS,CAAC,EAAE,MAAM,CAAA;KAAO,GACzD,OAAO,CAAC,MAAM,CAAC;IA+ClB;;OAEG;IACH,QAAQ,CAAC,KAAK,EAAE,MAAM,GAAG,IAAI;IAI7B;;OAEG;IACH,SAAS,IAAI;QAAE,OAAO,EAAE,MAAM,CAAC;QAAC,KAAK,EAAE,MAAM,CAAA;KAAE;CAGhD"}
@@ -0,0 +1,82 @@
+ "use strict";
+ /**
+ * Z AI Provider (Zhipu AI / GLM Models)
+ *
+ * Supports GLM-4.7-Flash and other GLM models from Z AI (Zhipu AI)
+ * API Documentation: https://open.bigmodel.cn/dev/api
+ *
+ * Supported Models:
+ * - GLM-4.7-Flash (Latest, fast)
+ * - GLM-4-Flash
+ * - GLM-4-Air
+ * - GLM-4-Plus
+ * - GLM-4
+ */
+ Object.defineProperty(exports, "__esModule", { value: true });
+ exports.ZAIProvider = void 0;
+ class ZAIProvider {
+ apiKey;
+ model;
+ baseURL;
+ constructor(apiKey = '', config = {}) {
+ this.apiKey = apiKey || process.env.ZAI_API_KEY || process.env.ZHIPU_API_KEY || '';
+ this.model = config.model || process.env.ZAI_MODEL || 'GLM-4.7-Flash';
+ this.baseURL = config.baseURL || process.env.ZAI_BASE_URL || 'https://open.bigmodel.cn/api/paas/v4';
+ if (!this.apiKey) {
+ throw new Error('Z AI API key is required. Set ZAI_API_KEY environment variable.');
+ }
+ }
+ async generate(prompt, options = {}) {
+ const temperature = options.temperature ?? 0.7;
+ const maxTokens = options.maxTokens ?? 8000;
+ try {
+ const response = await fetch(`${this.baseURL}/chat/completions`, {
+ method: 'POST',
+ headers: {
+ 'Content-Type': 'application/json',
+ 'Authorization': `Bearer ${this.apiKey}`
+ },
+ body: JSON.stringify({
+ model: this.model,
+ messages: [
+ {
+ role: 'user',
+ content: prompt
+ }
+ ],
+ temperature,
+ max_tokens: maxTokens,
+ stream: false
+ })
+ });
+ if (!response.ok) {
+ const errorText = await response.text();
+ throw new Error(`Z AI API error: ${response.status} ${response.statusText}\n${errorText}`);
+ }
+ const data = await response.json();
+ if (data.error) {
+ throw new Error(`Z AI API error: ${data.error.message || JSON.stringify(data.error)}`);
+ }
+ return data.choices[0]?.message?.content || '';
+ }
+ catch (error) {
+ if (error instanceof Error) {
+ throw new Error(`Z AI request failed: ${error.message}`);
+ }
+ throw new Error('Z AI request failed with unknown error');
+ }
+ }
+ /**
+ * Update the model being used
+ */
+ setModel(model) {
+ this.model = model;
+ }
+ /**
+ * Get current configuration
+ */
+ getConfig() {
+ return { baseURL: this.baseURL, model: this.model };
+ }
+ }
+ exports.ZAIProvider = ZAIProvider;
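Unlike the auto-detection path in `plan.js`, the `ZAIProvider` constructor above throws immediately when no key resolves from its argument or the `ZAI_API_KEY`/`ZHIPU_API_KEY` environment variables. The resolution order can be sketched in isolation; `resolveZAIConfig` is a hypothetical helper, not part of the package:

```javascript
// Hypothetical sketch of the key/model/baseURL resolution the ZAIProvider
// constructor applies: explicit argument, then env vars, then defaults.
function resolveZAIConfig(apiKey = '', config = {}, env = {}) {
  const resolved = {
    apiKey: apiKey || env.ZAI_API_KEY || env.ZHIPU_API_KEY || '',
    model: config.model || env.ZAI_MODEL || 'GLM-4.7-Flash',
    baseURL: config.baseURL || env.ZAI_BASE_URL || 'https://open.bigmodel.cn/api/paas/v4'
  };
  if (!resolved.apiKey) {
    // Mirrors the constructor's fail-fast behavior
    throw new Error('Z AI API key is required. Set ZAI_API_KEY environment variable.');
  }
  return resolved;
}
```

This fail-fast design means a `--provider zai` call with no key errors at construction time rather than on the first network request.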
@@ -1,5 +1,5 @@
  import type { LLMProvider } from './types.js';
- export type ProviderType = 'claude' | 'anthropic' | 'openai' | 'gemini' | 'google' | 'opencode' | 'ollama';
+ export type ProviderType = 'claude' | 'anthropic' | 'openai' | 'gemini' | 'google' | 'opencode' | 'ollama' | 'zai' | 'zhipu' | 'minimax';
  export interface ProviderConfig {
  provider: ProviderType;
  apiKey?: string;
@@ -1 +1 @@
- {"version":3,"file":"factory.d.ts","sourceRoot":"","sources":["../../src/llm/factory.ts"],"names":[],"mappings":"AAAA,OAAO,KAAK,EAAE,WAAW,EAAE,MAAM,YAAY,CAAC;AAM9C,MAAM,MAAM,YAAY,GAAG,QAAQ,GAAG,WAAW,GAAG,QAAQ,GAAG,QAAQ,GAAG,QAAQ,GAAG,UAAU,GAAG,QAAQ,CAAC;AAE3G,MAAM,WAAW,cAAc;IAC7B,QAAQ,EAAE,YAAY,CAAC;IACvB,MAAM,CAAC,EAAE,MAAM,CAAC;IAChB,KAAK,CAAC,EAAE,MAAM,CAAC;CAChB;AAWD,wBAAgB,cAAc,CAAC,MAAM,EAAE,cAAc,GAAG,WAAW,CA4ClE"}
+ {"version":3,"file":"factory.d.ts","sourceRoot":"","sources":["../../src/llm/factory.ts"],"names":[],"mappings":"AAAA,OAAO,KAAK,EAAE,WAAW,EAAE,MAAM,YAAY,CAAC;AAQ9C,MAAM,MAAM,YAAY,GAAG,QAAQ,GAAG,WAAW,GAAG,QAAQ,GAAG,QAAQ,GAAG,QAAQ,GAAG,UAAU,GAAG,QAAQ,GAAG,KAAK,GAAG,OAAO,GAAG,SAAS,CAAC;AAEzI,MAAM,WAAW,cAAc;IAC7B,QAAQ,EAAE,YAAY,CAAC;IACvB,MAAM,CAAC,EAAE,MAAM,CAAC;IAChB,KAAK,CAAC,EAAE,MAAM,CAAC;CAChB;AAYD,wBAAgB,cAAc,CAAC,MAAM,EAAE,cAAc,GAAG,WAAW,CA8DlE"}
@@ -5,6 +5,8 @@ const Claude_js_1 = require("./Claude.js");
  const OpenAI_js_1 = require("./OpenAI.js");
  const Gemini_js_1 = require("./Gemini.js");
  const Opencode_js_1 = require("./Opencode.js");
+ const ZAI_js_1 = require("./ZAI.js");
+ const MiniMax_js_1 = require("./MiniMax.js");
  /**
  * Normalize provider name - accepts both 'claude'/'anthropic' and 'gemini'/'google'
  */
@@ -13,6 +15,8 @@ function normalizeProvider(provider) {
  return 'claude';
  if (provider === 'google')
  return 'gemini';
+ if (provider === 'zhipu')
+ return 'zai';
  return provider;
  }
  function createProvider(config) {
@@ -29,9 +33,19 @@ function createProvider(config) {
  case 'gemini':
  return new Gemini_js_1.GeminiProvider(apiKey, config.model || 'gemini-flash-latest');
  case 'opencode':
- return new Opencode_js_1.OpenCodeProvider(apiKey || 'not-needed', {
- baseURL: process.env.OPENCODE_BASE_URL || 'http://localhost:11434/v1',
- model: config.model || process.env.OPENCODE_MODEL || 'llama3.2'
+ return new Opencode_js_1.OpenCodeProvider(apiKey || process.env.ZAI_API_KEY || 'not-needed', {
+ baseURL: process.env.OPENCODE_BASE_URL || 'http://localhost:8080/v1',
+ model: config.model || process.env.OPENCODE_MODEL || 'GLM-4.7-Flash'
+ });
+ case 'zai':
+ return new ZAI_js_1.ZAIProvider(apiKey || process.env.ZAI_API_KEY || '', {
+ model: config.model || process.env.ZAI_MODEL || 'GLM-4.7-Flash'
+ });
+ case 'minimax':
+ return new MiniMax_js_1.MiniMaxProvider(apiKey || process.env.MINIMAX_API_KEY || '', {
+ baseURL: process.env.MINIMAX_BASE_URL || 'https://api.minimax.chat/v1',
+ groupId: process.env.MINIMAX_GROUP_ID || '',
+ model: config.model || process.env.MINIMAX_MODEL || 'MiniMax-M2.5-Free'
  });
  case 'ollama':
  throw new Error('Ollama provider not yet implemented');
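The factory change also extends alias normalization: `'zhipu'` now maps to `'zai'`, alongside the pre-existing `'anthropic'` → `'claude'` and `'google'` → `'gemini'` mappings. Reproduced here as a standalone sketch of the same logic:

```javascript
// Standalone copy of the alias normalization applied before the provider
// switch in the factory ('zhipu' mapping is new in this release).
function normalizeProvider(provider) {
  if (provider === 'anthropic') return 'claude';
  if (provider === 'google') return 'gemini';
  if (provider === 'zhipu') return 'zai';
  return provider; // unrecognized names pass through and hit the switch's default
}
```

Because normalization runs before the `switch`, `--provider zhipu` and `--provider zai` construct the same `ZAIProvider`.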
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "feature-architect-agent",
- "version": "1.0.10",
+ "version": "1.0.12",
  "description": "AI-powered feature planning agent - generates complete technical specifications",
  "main": "dist/index.js",
  "bin": {