@aigne/example-workflow-orchestrator 1.10.21 → 1.11.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/.env.local.example CHANGED
@@ -1,5 +1,44 @@
  # Change the name of this file to .env.local and fill in the following values
 
- DEBUG=aigne:mcp
+ # Uncomment the lines below to enable debug logging
+ # DEBUG="aigne:*"
 
- OPENAI_API_KEY="" # Your OpenAI API key
+ # Use different Models
+
+ # OpenAI
+ MODEL="openai:gpt-4.1"
+ OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
+
+ # Anthropic claude
+ # MODEL="anthropic:claude-3-7-sonnet-latest"
+ # ANTHROPIC_API_KEY=""
+
+ # Gemini
+ # MODEL="gemini:gemini-2.0-flash"
+ # GEMINI_API_KEY=""
+
+ # Bedrock nova
+ # MODEL=bedrock:us.amazon.nova-premier-v1:0
+ # AWS_ACCESS_KEY_ID=""
+ # AWS_SECRET_ACCESS_KEY=""
+ # AWS_REGION=us-west-2
+
+ # DeepSeek
+ # MODEL="deepseek:deepseek-chat"
+ # DEEPSEEK_API_KEY=""
+
+ # OpenRouter
+ # MODEL="openrouter:openai/gpt-4o"
+ # OPEN_ROUTER_API_KEY=""
+
+ # xAI
+ # MODEL="xai:grok-2-latest"
+ # XAI_API_KEY=""
+
+ # Ollama
+ # MODEL="ollama:llama3.2"
+ # OLLAMA_DEFAULT_BASE_URL="http://localhost:11434/v1";
+
+
+ # Setup proxy if needed
+ # HTTPS_PROXY=http://localhost:7890
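The `MODEL` values above all follow a `provider[:model]` convention, where the model name after the first colon is optional. As a rough, hypothetical sketch of that convention (the framework's actual parsing code is not shown in this diff and may differ), splitting on the first colon looks like:

```typescript
// Hypothetical sketch: split a MODEL value such as "openai:gpt-4.1"
// into a provider and an optional model name. Only the FIRST colon
// separates the two, so Bedrock ids like
// "bedrock:us.amazon.nova-premier-v1:0" keep their trailing ":0".
function parseModel(value: string): { provider: string; model?: string } {
  const sep = value.indexOf(":");
  if (sep === -1) return { provider: value }; // e.g. "openai" alone
  return { provider: value.slice(0, sep), model: value.slice(sep + 1) };
}

console.log(parseModel("openai:gpt-4.1"));
console.log(parseModel("bedrock:us.amazon.nova-premier-v1:0"));
```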
package/README.md CHANGED
@@ -39,11 +39,11 @@ class style_enforcer processing
 
  ## Prerequisites
 
- - [Node.js](https://nodejs.org) and npm installed on your machine
- - An [OpenAI API key](https://platform.openai.com/api-keys) for interacting with OpenAI's services
- - Optional dependencies (if running the example from source code):
-   - [Bun](https://bun.sh) for running unit tests & examples
-   - [Pnpm](https://pnpm.io) for package management
+ * [Node.js](https://nodejs.org) and npm installed on your machine
+ * An [OpenAI API key](https://platform.openai.com/api-keys) for interacting with OpenAI's services
+ * Optional dependencies (if running the example from source code):
+   * [Bun](https://bun.sh) for running unit tests & examples
+   * [Pnpm](https://pnpm.io) for package management
 
  ## Quick Start (No Installation Required)
 
@@ -92,6 +92,21 @@ DOCKER_CONTAINER="true"
 
  This ensures Puppeteer configures itself correctly for a Docker environment, preventing potential compatibility issues.
 
+ #### Using Different Models
+
+ You can use different AI models by setting the `MODEL` environment variable along with the corresponding API key. The framework supports multiple providers:
+
+ * **OpenAI**: `MODEL="openai:gpt-4.1"` with `OPENAI_API_KEY`
+ * **Anthropic**: `MODEL="anthropic:claude-3-7-sonnet-latest"` with `ANTHROPIC_API_KEY`
+ * **Google Gemini**: `MODEL="gemini:gemini-2.0-flash"` with `GEMINI_API_KEY`
+ * **AWS Bedrock**: `MODEL="bedrock:us.amazon.nova-premier-v1:0"` with AWS credentials
+ * **DeepSeek**: `MODEL="deepseek:deepseek-chat"` with `DEEPSEEK_API_KEY`
+ * **OpenRouter**: `MODEL="openrouter:openai/gpt-4o"` with `OPEN_ROUTER_API_KEY`
+ * **xAI**: `MODEL="xai:grok-2-latest"` with `XAI_API_KEY`
+ * **Ollama**: `MODEL="ollama:llama3.2"` with `OLLAMA_DEFAULT_BASE_URL`
+
+ For detailed configuration examples, please refer to the `.env.local.example` file in this directory.
+
  ### Run the Example
 
  ```bash
@@ -111,7 +126,7 @@ The example supports the following command-line parameters:
  | Parameter | Description | Default |
  |-----------|-------------|---------|
  | `--chat` | Run in interactive chat mode | Disabled (one-shot mode) |
- | `--model <provider[:model]>` | AI model to use in format 'provider[:model]' where model is optional. Examples: 'openai' or 'openai:gpt-4o-mini' | openai |
+ | `--model <provider[:model]>` | AI model to use in format 'provider\[:model]' where model is optional. Examples: 'openai' or 'openai:gpt-4o-mini' | openai |
  | `--temperature <value>` | Temperature for model generation | Provider default |
  | `--top-p <value>` | Top-p sampling value | Provider default |
  | `--presence-penalty <value>` | Presence penalty value | Provider default |
@@ -139,13 +154,11 @@ The following example demonstrates how to build a orchestrator workflow:
  Here is the generated report for this example: [arcblock-deep-research.md](./generated-report-arcblock.md)
 
  ```typescript
- import assert from "node:assert";
  import { OrchestratorAgent } from "@aigne/agent-library/orchestrator/index.js";
  import { AIAgent, AIGNE, MCPAgent } from "@aigne/core";
  import { OpenAIChatModel } from "@aigne/core/models/openai-chat-model.js";
 
  const { OPENAI_API_KEY } = process.env;
- assert(OPENAI_API_KEY, "Please set the OPENAI_API_KEY environment variable");
 
  const model = new OpenAIChatModel({
    apiKey: OPENAI_API_KEY,
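This hunk removes the `node:assert` guard from the README snippet, so a missing `OPENAI_API_KEY` no longer fails fast at startup. If you adapt the example yourself and still want that behavior, a plain runtime guard could stand in for the removed assert; `requireEnv` below is a hypothetical helper, not part of the AIGNE API:

```typescript
// Hypothetical helper replicating the removed assert without node:assert:
// read a required environment variable, or throw with a clear message.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Please set the ${name} environment variable`);
  }
  return value;
}

process.env.OPENAI_API_KEY = "sk-demo-key"; // demo value for illustration only
console.log(requireEnv("OPENAI_API_KEY"));
```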
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@aigne/example-workflow-orchestrator",
-   "version": "1.10.21",
+   "version": "1.11.0",
    "description": "A demonstration of using AIGNE Framework to build a orchestrator workflow",
    "author": "Arcblock <blocklet@arcblock.io> https://github.com/blocklet",
    "homepage": "https://github.com/AIGNE-io/aigne-framework/tree/main/examples/workflow-orchestrator",
@@ -16,14 +16,14 @@
    "README.md"
  ],
  "dependencies": {
-   "@aigne/agent-library": "^1.17.4",
-   "@aigne/cli": "^1.17.0",
-   "@aigne/core": "^1.27.0",
-   "@aigne/openai": "^0.5.0"
+   "@aigne/cli": "^1.18.0",
+   "@aigne/agent-library": "^1.17.5",
+   "@aigne/core": "^1.28.0",
+   "@aigne/openai": "^0.6.0"
  },
  "devDependencies": {
    "@types/bun": "^1.2.9",
-   "@aigne/test-utils": "^0.4.11"
+   "@aigne/test-utils": "^0.4.12"
  },
  "scripts": {
    "start": "bun run index.ts",