@aigne/example-workflow-concurrency 1.12.7 → 1.13.1

package/.env.local.example CHANGED
@@ -1,5 +1,44 @@
  # Change the name of this file to .env.local and fill in the following values
 
- DEBUG=aigne:mcp
+ # Uncomment the lines below to enable debug logging
+ # DEBUG="aigne:*"
 
- OPENAI_API_KEY="" # Your OpenAI API key
+ # Use different Models
+
+ # OpenAI
+ MODEL="openai:gpt-4.1"
+ OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
+
+ # Anthropic claude
+ # MODEL="anthropic:claude-3-7-sonnet-latest"
+ # ANTHROPIC_API_KEY=""
+
+ # Gemini
+ # MODEL="gemini:gemini-2.0-flash"
+ # GEMINI_API_KEY=""
+
+ # Bedrock nova
+ # MODEL=bedrock:us.amazon.nova-premier-v1:0
+ # AWS_ACCESS_KEY_ID=""
+ # AWS_SECRET_ACCESS_KEY=""
+ # AWS_REGION=us-west-2
+
+ # DeepSeek
+ # MODEL="deepseek:deepseek-chat"
+ # DEEPSEEK_API_KEY=""
+
+ # OpenRouter
+ # MODEL="openrouter:openai/gpt-4o"
+ # OPEN_ROUTER_API_KEY=""
+
+ # xAI
+ # MODEL="xai:grok-2-latest"
+ # XAI_API_KEY=""
+
+ # Ollama
+ # MODEL="ollama:llama3.2"
+ # OLLAMA_DEFAULT_BASE_URL="http://localhost:11434/v1";
+
+
+ # Setup proxy if needed
+ # HTTPS_PROXY=http://localhost:7890
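For reference, a minimal working `.env.local` under the new layout might look like the following sketch (all values are placeholders, not real credentials):

```
# Minimal .env.local sketch -- placeholder values only
MODEL="openai:gpt-4.1"
OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
# DEBUG="aigne:*"   # uncomment for verbose logging
```

Only one `MODEL`/key pair needs to be active at a time; the commented blocks in the example file cover the other providers.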
package/README.md CHANGED
@@ -26,11 +26,11 @@ class aggregator processing
 
  ## Prerequisites
 
- - [Node.js](https://nodejs.org) and npm installed on your machine
- - An [OpenAI API key](https://platform.openai.com/api-keys) for interacting with OpenAI's services
- - Optional dependencies (if running the example from source code):
-   - [Bun](https://bun.sh) for running unit tests & examples
-   - [Pnpm](https://pnpm.io) for package management
+ * [Node.js](https://nodejs.org) and npm installed on your machine
+ * An [OpenAI API key](https://platform.openai.com/api-keys) for interacting with OpenAI's services
+ * Optional dependencies (if running the example from source code):
+   * [Bun](https://bun.sh) for running unit tests & examples
+   * [Pnpm](https://pnpm.io) for package management
 
  ## Quick Start (No Installation Required)
 
@@ -71,6 +71,21 @@ Setup your OpenAI API key in the `.env.local` file:
  OPENAI_API_KEY="" # Set your OpenAI API key here
  ```
 
+ #### Using Different Models
+
+ You can use different AI models by setting the `MODEL` environment variable along with the corresponding API key. The framework supports multiple providers:
+
+ * **OpenAI**: `MODEL="openai:gpt-4.1"` with `OPENAI_API_KEY`
+ * **Anthropic**: `MODEL="anthropic:claude-3-7-sonnet-latest"` with `ANTHROPIC_API_KEY`
+ * **Google Gemini**: `MODEL="gemini:gemini-2.0-flash"` with `GEMINI_API_KEY`
+ * **AWS Bedrock**: `MODEL="bedrock:us.amazon.nova-premier-v1:0"` with AWS credentials
+ * **DeepSeek**: `MODEL="deepseek:deepseek-chat"` with `DEEPSEEK_API_KEY`
+ * **OpenRouter**: `MODEL="openrouter:openai/gpt-4o"` with `OPEN_ROUTER_API_KEY`
+ * **xAI**: `MODEL="xai:grok-2-latest"` with `XAI_API_KEY`
+ * **Ollama**: `MODEL="ollama:llama3.2"` with `OLLAMA_DEFAULT_BASE_URL`
+
+ For detailed configuration examples, please refer to the `.env.local.example` file in this directory.
+
  ### Run the Example
 
  ```bash
@@ -90,7 +105,7 @@ The example supports the following command-line parameters:
  | Parameter | Description | Default |
  |-----------|-------------|---------|
  | `--chat` | Run in interactive chat mode | Disabled (one-shot mode) |
- | `--model <provider[:model]>` | AI model to use in format 'provider[:model]' where model is optional. Examples: 'openai' or 'openai:gpt-4o-mini' | openai |
+ | `--model <provider[:model]>` | AI model to use in format 'provider\[:model]' where model is optional. Examples: 'openai' or 'openai:gpt-4o-mini' | openai |
  | `--temperature <value>` | Temperature for model generation | Provider default |
  | `--top-p <value>` | Top-p sampling value | Provider default |
  | `--presence-penalty <value>` | Presence penalty value | Provider default |
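The `provider[:model]` syntax in the table above splits on the first colon only, which matters for values like the Bedrock model id that themselves contain colons. A hypothetical helper (a sketch, not the framework's actual parser) illustrates the rule:

```typescript
// Hypothetical helper illustrating the `provider[:model]` format of --model.
// The framework's own parsing may differ; this is only a sketch.
function parseModelOption(value: string): { provider: string; model?: string } {
  // Split on the FIRST colon only, so an id such as
  // "bedrock:us.amazon.nova-premier-v1:0" keeps everything
  // after the provider prefix as the model id.
  const i = value.indexOf(":");
  if (i === -1) {
    // e.g. "openai" -> use the provider's default model
    return { provider: value };
  }
  return { provider: value.slice(0, i), model: value.slice(i + 1) };
}
```

For example, `parseModelOption("openai:gpt-4o-mini")` yields provider `openai` and model `gpt-4o-mini`, while a bare `openai` leaves the model unset.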
@@ -116,12 +131,10 @@ echo "Analyze product: Smart home assistant with voice control and AI learning c
116
131
  The following example demonstrates how to build a concurrency workflow:
117
132
 
118
133
  ```typescript
119
- import assert from "node:assert";
120
134
  import { AIAgent, AIGNE, TeamAgent, ProcessMode } from "@aigne/core";
121
135
  import { OpenAIChatModel } from "@aigne/core/models/openai-chat-model.js";
122
136
 
123
137
  const { OPENAI_API_KEY } = process.env;
124
- assert(OPENAI_API_KEY, "Please set the OPENAI_API_KEY environment variable");
125
138
 
126
139
  const model = new OpenAIChatModel({
127
140
  apiKey: OPENAI_API_KEY,
@@ -150,7 +163,7 @@ const aigne = new AIGNE({ model });
150
163
  // 创建一个 TeamAgent 来处理并行工作流
151
164
  const teamAgent = TeamAgent.from({
152
165
  skills: [featureExtractor, audienceAnalyzer],
153
- mode: ProcessMode.parallel
166
+ mode: ProcessMode.parallel,
154
167
  });
155
168
 
156
169
  const result = await aigne.invoke(teamAgent, {
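With the `assert` guard removed in this hunk, `OPENAI_API_KEY` can silently be `undefined` at model construction. If fail-fast behavior is wanted, a small guard like the following could restore it (a hypothetical sketch; it is not part of the published example):

```typescript
// Hypothetical fail-fast guard replacing the removed assert call.
// Throws with a descriptive message if the variable is unset or empty.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Please set the ${name} environment variable`);
  }
  return value;
}

// Possible usage in the example above:
// const OPENAI_API_KEY = requireEnv("OPENAI_API_KEY");
```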
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@aigne/example-workflow-concurrency",
-   "version": "1.12.7",
+   "version": "1.13.1",
    "description": "A demonstration of using AIGNE Framework to build a concurrency workflow",
    "author": "Arcblock <blocklet@arcblock.io> https://github.com/blocklet",
    "homepage": "https://github.com/AIGNE-io/aigne-framework/tree/main/examples/workflow-concurrency",
@@ -16,14 +16,14 @@
      "README.md"
    ],
    "dependencies": {
-     "@aigne/agent-library": "^1.17.4",
-     "@aigne/cli": "^1.17.0",
-     "@aigne/core": "^1.27.0",
-     "@aigne/openai": "^0.5.0"
+     "@aigne/agent-library": "^1.17.6",
+     "@aigne/cli": "^1.18.1",
+     "@aigne/openai": "^0.6.1",
+     "@aigne/core": "^1.28.1"
    },
    "devDependencies": {
      "@types/bun": "^1.2.9",
-     "@aigne/test-utils": "^0.4.11"
+     "@aigne/test-utils": "^0.4.13"
    },
    "scripts": {
      "start": "bun run index.ts",