@aigne/cli 1.16.0 → 1.18.0

This diff compares publicly available package versions as released to a supported registry. It is provided for informational purposes only and reflects the changes between those versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,5 +1,51 @@
  # Changelog

+ ## [1.18.0](https://github.com/AIGNE-io/aigne-framework/compare/cli-v1.17.0...cli-v1.18.0) (2025-07-01)
+
+
+ ### Features
+
+ * **example:** use AIGNE cli to run chat-bot example ([#198](https://github.com/AIGNE-io/aigne-framework/issues/198)) ([7085541](https://github.com/AIGNE-io/aigne-framework/commit/708554100692f2a557f7329ea78e46c3c870ce10))
+
+
+ ### Dependencies
+
+ * The following workspace dependencies were updated
+   * dependencies
+     * @aigne/agent-library bumped to 1.17.5
+     * @aigne/anthropic bumped to 0.5.0
+     * @aigne/bedrock bumped to 0.5.0
+     * @aigne/core bumped to 1.28.0
+     * @aigne/deepseek bumped to 0.4.0
+     * @aigne/gemini bumped to 0.4.0
+     * @aigne/ollama bumped to 0.4.0
+     * @aigne/open-router bumped to 0.4.0
+     * @aigne/openai bumped to 0.6.0
+     * @aigne/xai bumped to 0.4.0
+
+ ## [1.17.0](https://github.com/AIGNE-io/aigne-framework/compare/cli-v1.16.0...cli-v1.17.0) (2025-07-01)
+
+
+ ### Features
+
+ * **cli:** support HTTPS_PROXY and named path param ([#196](https://github.com/AIGNE-io/aigne-framework/issues/196)) ([04e684e](https://github.com/AIGNE-io/aigne-framework/commit/04e684ee26bc2d79924b0e3cb541cd07a7191804))
+
+
+ ### Dependencies
+
+ * The following workspace dependencies were updated
+   * dependencies
+     * @aigne/agent-library bumped to 1.17.4
+     * @aigne/anthropic bumped to 0.4.0
+     * @aigne/bedrock bumped to 0.4.0
+     * @aigne/core bumped to 1.27.0
+     * @aigne/deepseek bumped to 0.3.11
+     * @aigne/gemini bumped to 0.3.11
+     * @aigne/ollama bumped to 0.3.11
+     * @aigne/open-router bumped to 0.3.11
+     * @aigne/openai bumped to 0.5.0
+     * @aigne/xai bumped to 0.3.11
+
  ## [1.16.0](https://github.com/AIGNE-io/aigne-framework/compare/cli-v1.15.0...cli-v1.16.0) (2025-06-30)
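Taken together, the two releases above change both invocation and configuration: positional agent paths become a named `--path` parameter, and outbound model traffic can be routed through a proxy. A minimal sketch of the new surface, assuming a hypothetical agent project at `./my-agents`:

```shell
# Named path param (replaces the positional [path] argument)
aigne run --path ./my-agents

# HTTPS_PROXY is now honored for model provider requests
HTTPS_PROXY=http://localhost:7890 aigne run --path ./my-agents
```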
package/README.md CHANGED
@@ -16,12 +16,12 @@ Command-line tool for [AIGNE Framework](https://github.com/AIGNE-io/aigne-framew

  ## Features

- - **Project Creation**: Quickly create new AIGNE projects with predefined file structures and configurations
- - **Agent Running**: Easily run and test AIGNE agents
- - **Testing Support**: Built-in test command for unit testing and integration testing
- - **MCP Services**: Support for launching agents as MCP servers for integration with external systems
- - **Interactive Interface**: Beautiful command-line interface providing an intuitive user experience
- - **Multi-model Support**: Support for OpenAI, Claude, XAI, and other model providers
+ * **Project Creation**: Quickly create new AIGNE projects with predefined file structures and configurations
+ * **Agent Running**: Easily run and test AIGNE agents
+ * **Testing Support**: Built-in test command for unit testing and integration testing
+ * **MCP Services**: Support for launching agents as MCP servers for integration with external systems
+ * **Interactive Interface**: Beautiful command-line interface providing an intuitive user experience
+ * **Multi-model Support**: Support for OpenAI, Claude, XAI, and other model providers

  ## Installation

@@ -55,13 +55,13 @@ aigne --help
  aigne create [path]

  # Run an agent
- aigne run [path]
+ aigne run --path xxx

  # Run tests
- aigne test [path]
+ aigne test --path xxx

  # Start MCP server
- aigne serve [path] --mcp
+ aigne serve --path xxx --mcp

  # Start observability server
  aigne observe [option]
@@ -81,8 +81,8 @@ aigne create my-project

  The interactive creation process will ask for:

- - Project name
- - Project template (currently supports the default template)
+ * Project name
+ * Project template (currently supports the default template)

  ## Run Command

@@ -104,10 +104,10 @@ aigne run --entry-agent myAgent

  Available options:

- - `--entry-agent <entry-agent>` - Specify the agent name to run (defaults to the first agent found)
- - `--cache-dir <dir>` - Specify the directory to download the package to (used in URL mode)
- - `--model <provider[:model]>` - Specify the AI model in format 'provider\[:model]' where model is optional (e.g., 'openai' or 'openai:gpt-4o-mini')
- - `--verbose` - Enable verbose logging
+ * `--entry-agent <entry-agent>` - Specify the agent name to run (defaults to the first agent found)
+ * `--cache-dir <dir>` - Specify the directory to download the package to (used in URL mode)
+ * `--model <provider[:model]>` - Specify the AI model in format 'provider\[:model]' where model is optional (e.g., 'openai' or 'openai:gpt-4o-mini')
+ * `--verbose` - Enable verbose logging

  ## Test Command

@@ -133,7 +133,7 @@ aigne serve --mcp
  aigne serve --mcp --port 3001

  # Start MCP server for agents at specified path
- aigne serve path/to/agents --mcp
+ aigne serve --path path/to/agents --mcp
  ```

  ## Serve Command (observability)
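The run options above compose with the new named path parameter; a hedged example, using the hypothetical agent name `myAgent` and project path `./my-agents`:

```shell
aigne run --path ./my-agents --entry-agent myAgent --model openai:gpt-4o-mini --verbose
```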
package/README.zh.md CHANGED
@@ -16,12 +16,12 @@

  ## 特性

- - **项目创建**:快速创建新的 AIGNE 项目,包含预设的文件结构和配置
- - **代理运行**:轻松运行和测试 AIGNE 代理
- - **测试支持**:内置测试命令,方便进行单元测试和集成测试
- - **MCP 服务**:支持将代理作为 MCP 服务器启动,与外部系统集成
- - **交互式界面**:美观的命令行界面,提供直观的使用体验
- - **多模型支持**:支持 OpenAI、Claude、XAI 等多种模型提供商
+ * **项目创建**:快速创建新的 AIGNE 项目,包含预设的文件结构和配置
+ * **代理运行**:轻松运行和测试 AIGNE 代理
+ * **测试支持**:内置测试命令,方便进行单元测试和集成测试
+ * **MCP 服务**:支持将代理作为 MCP 服务器启动,与外部系统集成
+ * **交互式界面**:美观的命令行界面,提供直观的使用体验
+ * **多模型支持**:支持 OpenAI、Claude、XAI 等多种模型提供商

  ## 安装

@@ -55,13 +55,13 @@ aigne --help
  aigne create [path]

  # 运行代理
- aigne run [path]
+ aigne run --path xxx

  # 运行测试
- aigne test [path]
+ aigne test --path xxx

  # 启动 MCP 服务器
- aigne serve [path] --mcp
+ aigne serve --path xxx --mcp

  # 启动 observability 服务器
  aigne observe [option]
@@ -81,8 +81,8 @@ aigne create my-project

  交互式创建过程会询问:

- - 项目名称
- - 项目模板(目前支持 default 模板)
+ * 项目名称
+ * 项目模板(目前支持 default 模板)

  ## 运行命令 (run)

@@ -104,10 +104,10 @@ aigne run --entry-agent myAgent

  可用选项:

- - `--entry-agent <代理>` - 指定要运行的代理名称(默认为找到的第一个代理)
- - `--cache-dir <目录>` - 指定下载包的目录(URL模式下使用)
- - `--model <提供商[:模型]>` - 指定AI模型,格式为'提供商\[:模型]',其中模型是可选的(如'openai'或'openai:gpt-4o-mini')
- - `--verbose` - 启用详细日志记录
+ * `--entry-agent <代理>` - 指定要运行的代理名称(默认为找到的第一个代理)
+ * `--cache-dir <目录>` - 指定下载包的目录(URL模式下使用)
+ * `--model <提供商[:模型]>` - 指定AI模型,格式为'提供商\[:模型]',其中模型是可选的(如'openai'或'openai:gpt-4o-mini')
+ * `--verbose` - 启用详细日志记录

  ## 测试命令 (test)

@@ -133,7 +133,7 @@ aigne serve --mcp
  aigne serve --mcp --port 3001

  # 为指定路径的代理启动 MCP 服务器
- aigne serve path/to/agents --mcp
+ aigne serve --path path/to/agents --mcp
  ```

  ## 服务命令 (observability)
package/dist/cli.js CHANGED
@@ -2,7 +2,7 @@
  import { config } from "dotenv-flow";
  import PrettyError from "pretty-error";
  import { createAIGNECommand } from "./commands/aigne.js";
- config();
+ config({ silent: true });
  createAIGNECommand()
    .parseAsync()
    .catch((error) => {
@@ -2,7 +2,7 @@ import { Command } from "commander";
  import { AIGNE_CLI_VERSION } from "../constants.js";
  import { asciiLogo } from "../utils/ascii-logo.js";
  import { createCreateCommand } from "./create.js";
- import { createObservabilityCommand } from "./observability.js";
+ import { createObservabilityCommand } from "./observe.js";
  import { createRunCommand } from "./run.js";
  import { createServeCommand } from "./serve.js";
  import { createTestCommand } from "./test.js";
@@ -1,6 +1,6 @@
  import { tryOrThrow } from "@aigne/core/utils/type-utils.js";
- import { startObservabilityCLIServer } from "@aigne/observability/cli";
- import getObservabilityDbPath from "@aigne/observability/db-path";
+ import { startObservabilityCLIServer } from "@aigne/observability-api/cli";
+ import getObservabilityDbPath from "@aigne/observability-api/db-path";
  import { Command } from "commander";
  import detectPort from "detect-port";
  const DEFAULT_PORT = () => tryOrThrow(() => {
@@ -20,7 +20,7 @@ export function createObservabilityCommand() {
    .action(async (options) => {
      const port = await detectPort(options.port || DEFAULT_PORT());
      const dbUrl = getObservabilityDbPath();
-     console.log("DB PATH:", dbUrl);
+     console.log("Observability database path:", dbUrl);
      await startObservabilityCLIServer({ port, dbUrl });
    })
    .showHelpAfterError(true)
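The observability command now logs a clearer label for the database location before starting the server. A usage sketch, assuming the `--port` option defined above (the printed path is illustrative):

```shell
aigne observe --port 7890
# Observability database path: <your local observability db file>
```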
@@ -7,6 +7,7 @@ import { loadModel } from "@aigne/core/loader/index.js";
  import { logger } from "@aigne/core/utils/logger.js";
  import { isNonNullable } from "@aigne/core/utils/type-utils.js";
  import { Listr, PRESET_TIMER } from "@aigne/listr2";
+ import { config } from "dotenv-flow";
  import { availableMemories, availableModels } from "../constants.js";
  import { isV1Package, toAIGNEPackage } from "../utils/agent-v1.js";
  import { downloadAndExtract } from "../utils/download.js";
@@ -43,7 +44,12 @@ export function createRunCommand() {
    {
      title: "Initialize AIGNE",
      task: async (ctx) => {
-       const aigne = await loadAIGNE(dir, options);
+       // Load env files in the aigne directory
+       config({ path: dir, silent: true });
+       const aigne = await loadAIGNE(dir, {
+         ...options,
+         model: options.model || process.env.MODEL,
+       });
        ctx.aigne = aigne;
      },
    },
@@ -91,10 +97,11 @@ export function createRunCommand() {
      .showSuggestionAfterError(true);
  }
  async function loadAIGNE(path, options) {
+   const models = availableModels();
    const model = options.model
-     ? await loadModel(availableModels, parseModelOption(options.model))
+     ? await loadModel(models, parseModelOption(options.model))
      : undefined;
-   return await AIGNE.load(path, { models: availableModels, memories: availableMemories, model });
+   return await AIGNE.load(path, { models, memories: availableMemories, model });
  }
  async function downloadPackage(url, cacheDir) {
    await rm(cacheDir, { recursive: true, force: true });
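Per the hunk above, the `run` command now loads dotenv files from the target agent directory and falls back to the `MODEL` environment variable when `--model` is omitted. A sketch under that assumption, with `./my-agents` as a hypothetical project:

```shell
# ./my-agents/.env.local (hypothetical contents)
#   MODEL="openai:gpt-4o-mini"
#   OPENAI_API_KEY="sk-..."

# No --model flag needed; MODEL is read from the project's env files
aigne run --path ./my-agents
```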
@@ -16,16 +16,17 @@ const DEFAULT_PORT = () => tryOrThrow(() => {
  export function createServeCommand() {
    return new Command("serve")
      .description("Serve the agents in the specified directory as a MCP server")
-     .argument("[path]", "Path to the agents directory", ".")
+     .option("--url, --path <path_or_url>", "Path to the agents directory or URL to aigne project", ".")
      .option("--mcp", "Serve the agents as a MCP server")
      .option("--host <host>", "Host to run the MCP server on, use 0.0.0.0 to publicly expose the server", "localhost")
      .option("--port <port>", "Port to run the MCP server on", (s) => Number.parseInt(s))
      .option("--pathname <pathname>", "Pathname to the service", "/mcp")
-     .action(async (path, options) => {
+     .action(async (options) => {
+       const { path } = options;
        const absolutePath = isAbsolute(path) ? path : resolve(process.cwd(), path);
        const port = options.port || DEFAULT_PORT();
        const aigne = await AIGNE.load(absolutePath, {
-         models: availableModels,
+         models: availableModels(),
          memories: availableMemories,
        });
        if (options.mcp)
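`serve` now takes its target as a `--path`/`--url` option rather than a positional argument, and the other options compose with it. A hedged sketch with a hypothetical local project:

```shell
# Serve local agents as an MCP server on a chosen port
aigne serve --path ./my-agents --mcp --port 3001

# Publicly expose the server under the default /mcp pathname
aigne serve --path ./my-agents --mcp --host 0.0.0.0 --pathname /mcp
```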
@@ -4,8 +4,9 @@ import { Command } from "commander";
  export function createTestCommand() {
    return new Command("test")
      .description("Run tests in the specified agents directory")
-     .argument("[path]", "Path to the agents directory", ".")
-     .action(async (path) => {
+     .option("--url, --path <path_or_url>", "Path to the agents directory or URL to aigne project", ".")
+     .action(async (options) => {
+       const { path } = options;
        const absolutePath = isAbsolute(path) ? path : resolve(process.cwd(), path);
        spawnSync("node", ["--test"], { cwd: absolutePath, stdio: "inherit" });
      })
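`test` follows the same migration, resolving `--path` and running `node --test` in that directory; for example (hypothetical path):

```shell
aigne test --path ./my-agents
```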
@@ -1,7 +1,5 @@
  import { DefaultMemory } from "@aigne/agent-library/default-memory/index.js";
- import { AnthropicChatModel } from "@aigne/anthropic";
- import { BedrockChatModel } from "@aigne/bedrock";
- import { OpenAIChatModel } from "@aigne/openai";
+ import type { LoadableModel } from "@aigne/core/loader/index.js";
  export declare const AIGNE_CLI_VERSION: string;
- export declare const availableModels: (typeof OpenAIChatModel | typeof AnthropicChatModel | typeof BedrockChatModel)[];
+ export declare function availableModels(): LoadableModel[];
  export declare const availableMemories: (typeof DefaultMemory)[];
package/dist/constants.js CHANGED
@@ -7,16 +7,57 @@ import { OllamaChatModel } from "@aigne/ollama";
  import { OpenRouterChatModel } from "@aigne/open-router";
  import { OpenAIChatModel } from "@aigne/openai";
  import { XAIChatModel } from "@aigne/xai";
+ import { NodeHttpHandler, streamCollector } from "@smithy/node-http-handler";
+ import { HttpsProxyAgent } from "https-proxy-agent";
  import pkg from "../package.json" with { type: "json" };
  export const AIGNE_CLI_VERSION = pkg.version;
- export const availableModels = [
-   OpenAIChatModel,
-   AnthropicChatModel,
-   BedrockChatModel,
-   DeepSeekChatModel,
-   GeminiChatModel,
-   OllamaChatModel,
-   OpenRouterChatModel,
-   XAIChatModel,
- ];
+ export function availableModels() {
+   const proxy = ["HTTPS_PROXY", "https_proxy", "HTTP_PROXY", "http_proxy", "ALL_PROXY", "all_proxy"]
+     .map((i) => process.env[i])
+     .filter(Boolean)[0];
+   const httpAgent = proxy ? new HttpsProxyAgent(proxy) : undefined;
+   return [
+     {
+       name: OpenAIChatModel.name,
+       create: (params) => new OpenAIChatModel({ ...params, clientOptions: { httpAgent } }),
+     },
+     {
+       name: AnthropicChatModel.name,
+       create: (params) => new AnthropicChatModel({ ...params, clientOptions: { httpAgent } }),
+     },
+     {
+       name: BedrockChatModel.name,
+       create: (params) => new BedrockChatModel({
+         ...params,
+         clientOptions: {
+           requestHandler: NodeHttpHandler.create({
+             httpAgent,
+             httpsAgent: httpAgent,
+           }),
+           streamCollector,
+         },
+       }),
+     },
+     {
+       name: DeepSeekChatModel.name,
+       create: (params) => new DeepSeekChatModel({ ...params, clientOptions: { httpAgent } }),
+     },
+     {
+       name: GeminiChatModel.name,
+       create: (params) => new GeminiChatModel({ ...params, clientOptions: { httpAgent } }),
+     },
+     {
+       name: OllamaChatModel.name,
+       create: (params) => new OllamaChatModel({ ...params, clientOptions: { httpAgent } }),
+     },
+     {
+       name: OpenRouterChatModel.name,
+       create: (params) => new OpenRouterChatModel({ ...params, clientOptions: { httpAgent } }),
+     },
+     {
+       name: XAIChatModel.name,
+       create: (params) => new XAIChatModel({ ...params, clientOptions: { httpAgent } }),
+     },
+   ];
+ }
  export const availableMemories = [DefaultMemory];
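The proxy lookup above takes the first proxy environment variable that is set, in the listed order, so `HTTPS_PROXY` wins over `ALL_PROXY`. A sketch of exercising it from the shell (addresses and the project path are stand-ins):

```shell
# First match wins: HTTPS_PROXY, https_proxy, HTTP_PROXY, http_proxy, ALL_PROXY, all_proxy
ALL_PROXY=http://localhost:9999 HTTPS_PROXY=http://localhost:7890 aigne run --path ./my-agents
# Model provider requests are routed through http://localhost:7890
```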
@@ -148,7 +148,7 @@ export class TerminalTracer {
  }
  formatResult(agent, context, m = {}) {
    const { isTTY } = process.stdout;
-   const outputKey = agent instanceof AIAgent ? agent.outputKey : undefined;
+   const outputKey = this.outputKey || (agent instanceof AIAgent ? agent.outputKey : undefined);
    const prefix = logger.enabled(LogLevel.INFO)
      ? `${chalk.grey(figures.tick)} 🤖 ${this.formatTokenUsage(context.usage)}`
      : null;
@@ -6,5 +6,6 @@ export interface ChatLoopOptions {
    defaultQuestion?: string;
    inputKey?: string;
    skipLoop?: boolean;
+   outputKey?: string;
  }
  export declare function runChatLoopInTerminal(userAgent: UserAgent<any, any>, options?: ChatLoopOptions): Promise<void>;
@@ -42,7 +42,7 @@ export async function runChatLoopInTerminal(userAgent, options = {}) {
    }
  }
  async function callAgent(userAgent, input, options) {
-   const tracer = new TerminalTracer(userAgent.context);
+   const tracer = new TerminalTracer(userAgent.context, options);
    await tracer.run(userAgent, typeof input === "string" ? { [options.inputKey || DEFAULT_CHAT_INPUT_KEY]: input } : input);
  }
  const COMMANDS = {
@@ -13,6 +13,7 @@ export interface RunAIGNECommandOptions {
    input?: string[];
    format?: "text" | "json" | "yaml";
    output?: string;
+   outputKey?: string;
    logLevel?: LogLevel;
    force?: boolean;
  }
@@ -21,7 +21,9 @@ export const createRunAIGNECommand = (name = "run") => new Command(name)
    .allowExcessArguments(true)
    .description("Run agent with AIGNE in terminal")
    .option("--chat", "Run chat loop in terminal", false)
-   .option("--model <provider[:model]>", `AI model to use in format 'provider[:model]' where model is optional. Examples: 'openai' or 'openai:gpt-4o-mini'. Available providers: ${availableModels.map((i) => i.name.toLowerCase().replace(/ChatModel$/i, "")).join(", ")} (default: openai)`, process.env.MODEL)
+   .option("--model <provider[:model]>", `AI model to use in format 'provider[:model]' where model is optional. Examples: 'openai' or 'openai:gpt-4o-mini'. Available providers: ${availableModels()
+     .map((i) => i.name.toLowerCase().replace(/ChatModel$/i, ""))
+     .join(", ")} (default: openai)`)
    .option("--temperature <temperature>", "Temperature for the model (controls randomness, higher values produce more random outputs). Range: 0.0-2.0", customZodError("--temperature", (s) => z.coerce.number().min(0).max(2).parse(s)))
    .option("--top-p <top-p>", "Top P (nucleus sampling) parameter for the model (controls diversity). Range: 0.0-1.0", customZodError("--top-p", (s) => z.coerce.number().min(0).max(1).parse(s)))
    .option("--presence-penalty <presence-penalty>", "Presence penalty for the model (penalizes repeating the same tokens). Range: -2.0 to 2.0", customZodError("--presence-penalty", (s) => z.coerce.number().min(-2).max(2).parse(s)))
@@ -89,7 +91,7 @@ export async function parseAgentInputByCommander(agent, options = {}) {
    return input;
  }
  export const parseModelOption = (model) => {
-   const { provider, name } = model?.match(/(?<provider>[^:]+)(:(?<name>(\S+)))?/)?.groups ?? {};
+   const { provider, name } = (model || process.env.MODEL)?.match(/(?<provider>[^:]+)(:(?<name>(\S+)))?/)?.groups ?? {};
    return { provider, name };
  };
  export async function runWithAIGNE(agentCreator, { argv = process.argv, chatLoopOptions, modelOptions, outputKey, } = {}) {
@@ -100,7 +102,7 @@ export async function runWithAIGNE(agentCreator, { argv = process.argv, chatLoop
    if (options.logLevel) {
      logger.level = options.logLevel;
    }
-   const model = await loadModel(availableModels, {
+   const model = await loadModel(availableModels(), {
      ...parseModelOption(options.model),
      temperature: options.temperature,
      topP: options.topP,
@@ -122,7 +124,7 @@ export async function runWithAIGNE(agentCreator, { argv = process.argv, chatLoop
    }
    await runAgentWithAIGNE(aigne, agent, {
      ...options,
-     outputKey,
+     outputKey: outputKey || options.outputKey,
      chatLoopOptions,
      modelOptions,
      input,
@@ -166,6 +168,7 @@ export async function runAgentWithAIGNE(aigne, agent, { outputKey, chatLoopOptio
    const userAgent = agent instanceof UserAgent ? agent : aigne.invoke(agent);
    await runChatLoopInTerminal(userAgent, {
      ...chatLoopOptions,
+     outputKey,
    });
    return;
  }
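With `parseModelOption` now falling back to `process.env.MODEL`, the `--model` flag and the environment variable are interchangeable ways to select a provider; a hedged equivalence sketch (model name taken from the env example below):

```shell
aigne run --path ./my-agents --model anthropic:claude-3-7-sonnet-latest
# Equivalent, via the environment:
MODEL=anthropic:claude-3-7-sonnet-latest aigne run --path ./my-agents
```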
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@aigne/cli",
-   "version": "1.16.0",
+   "version": "1.18.0",
    "description": "cli for AIGNE framework",
    "publishConfig": {
      "access": "public"
@@ -38,6 +38,7 @@
    "@aigne/listr2": "^1.0.10",
    "@aigne/marked-terminal": "^7.3.1",
    "@modelcontextprotocol/sdk": "^1.11.0",
+   "@smithy/node-http-handler": "^4.0.6",
    "chalk": "^5.4.1",
    "commander": "^13.1.0",
    "detect-port": "^2.1.0",
@@ -45,6 +46,7 @@
    "express": "^5.1.0",
    "glob": "^11.0.2",
    "gradient-string": "^3.0.0",
+   "https-proxy-agent": "^7.0.6",
    "inquirer": "^12.6.0",
    "marked": "^15.0.11",
    "prettier": "^3.5.3",
@@ -53,17 +55,17 @@
    "wrap-ansi": "^9.0.0",
    "yaml": "^2.7.1",
    "zod": "^3.24.4",
-   "@aigne/agent-library": "^1.17.3",
-   "@aigne/anthropic": "^0.3.10",
-   "@aigne/bedrock": "^0.3.10",
-   "@aigne/core": "^1.26.0",
-   "@aigne/deepseek": "^0.3.10",
-   "@aigne/observability": "^0.3.0",
-   "@aigne/gemini": "^0.3.10",
-   "@aigne/ollama": "^0.3.10",
-   "@aigne/openai": "^0.4.3",
-   "@aigne/xai": "^0.3.10",
-   "@aigne/open-router": "^0.3.10"
+   "@aigne/agent-library": "^1.17.5",
+   "@aigne/anthropic": "^0.5.0",
+   "@aigne/core": "^1.28.0",
+   "@aigne/deepseek": "^0.4.0",
+   "@aigne/gemini": "^0.4.0",
+   "@aigne/bedrock": "^0.5.0",
+   "@aigne/observability-api": "^0.3.0",
+   "@aigne/open-router": "^0.4.0",
+   "@aigne/ollama": "^0.4.0",
+   "@aigne/openai": "^0.6.0",
+   "@aigne/xai": "^0.4.0"
  },
  "devDependencies": {
    "@types/archiver": "^6.0.3",
@@ -0,0 +1,44 @@
+ # Change the name of this file to .env.local and fill in the following values
+
+ # Uncomment the lines below to enable debug logging
+ # DEBUG="aigne:*"
+
+ # Use different Models
+
+ # OpenAI
+ MODEL="openai:gpt-4.1"
+ OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
+
+ # Anthropic claude
+ # MODEL="anthropic:claude-3-7-sonnet-latest"
+ # ANTHROPIC_API_KEY=""
+
+ # Gemini
+ # MODEL="gemini:gemini-2.0-flash"
+ # GEMINI_API_KEY=""
+
+ # Bedrock nova
+ # MODEL=bedrock:us.amazon.nova-premier-v1:0
+ # AWS_ACCESS_KEY_ID=""
+ # AWS_SECRET_ACCESS_KEY=""
+ # AWS_REGION=us-west-2
+
+ # DeepSeek
+ # MODEL="deepseek:deepseek-chat"
+ # DEEPSEEK_API_KEY=""
+
+ # OpenRouter
+ # MODEL="openrouter:openai/gpt-4o"
+ # OPEN_ROUTER_API_KEY=""
+
+ # xAI
+ # MODEL="xai:grok-2-latest"
+ # XAI_API_KEY=""
+
+ # Ollama
+ # MODEL="ollama:llama3.2"
+ # OLLAMA_DEFAULT_BASE_URL="http://localhost:11434/v1";
+
+
+ # Setup proxy if needed
+ # HTTPS_PROXY=http://localhost:7890
@@ -17,6 +17,16 @@ This is the default project template for the AIGNE framework, providing a basic
  npm install -g aigne
  ```

+ ### Setup Environment Variables
+
+ Copy the `.env.local.example` file to `.env.local` and set your OpenAI API key:
+
+ ```shell
+ # OpenAI
+ MODEL="openai:gpt-4.1"
+ OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
+ ```
+
  ### Start the Project

  ```bash