@aigne/example-workflow-router 1.14.7 → 1.15.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/.env.local.example CHANGED
@@ -1,5 +1,44 @@
  # Change the name of this file to .env.local and fill in the following values

- DEBUG=aigne:mcp
+ # Uncomment the lines below to enable debug logging
+ # DEBUG="aigne:*"

- OPENAI_API_KEY="" # Your OpenAI API key
+ # Use different Models
+
+ # OpenAI
+ MODEL="openai:gpt-4.1"
+ OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
+
+ # Anthropic claude
+ # MODEL="anthropic:claude-3-7-sonnet-latest"
+ # ANTHROPIC_API_KEY=""
+
+ # Gemini
+ # MODEL="gemini:gemini-2.0-flash"
+ # GEMINI_API_KEY=""
+
+ # Bedrock nova
+ # MODEL=bedrock:us.amazon.nova-premier-v1:0
+ # AWS_ACCESS_KEY_ID=""
+ # AWS_SECRET_ACCESS_KEY=""
+ # AWS_REGION=us-west-2
+
+ # DeepSeek
+ # MODEL="deepseek:deepseek-chat"
+ # DEEPSEEK_API_KEY=""
+
+ # OpenRouter
+ # MODEL="openrouter:openai/gpt-4o"
+ # OPEN_ROUTER_API_KEY=""
+
+ # xAI
+ # MODEL="xai:grok-2-latest"
+ # XAI_API_KEY=""
+
+ # Ollama
+ # MODEL="ollama:llama3.2"
+ # OLLAMA_DEFAULT_BASE_URL="http://localhost:11434/v1";
+
+
+ # Setup proxy if needed
+ # HTTPS_PROXY=http://localhost:7890
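As a reading aid, a filled-in `.env.local` built from the new template might look like the following sketch (all values are placeholders, not real credentials):

```shell
# Sketch of a filled-in .env.local (placeholder values, not real keys)
MODEL="openai:gpt-4.1"
OPENAI_API_KEY="sk-your-key-here"

# Uncomment for verbose framework logging
# DEBUG="aigne:*"
```

Exactly one provider block should be active at a time, since `MODEL` selects which of the API-key variables is consulted.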
package/README.md CHANGED
@@ -29,11 +29,11 @@ class other processing

  ## Prerequisites

- - [Node.js](https://nodejs.org) and npm installed on your machine
- - An [OpenAI API key](https://platform.openai.com/api-keys) for interacting with OpenAI's services
- - Optional dependencies (if running the example from source code):
-   - [Bun](https://bun.sh) for running unit tests & examples
-   - [Pnpm](https://pnpm.io) for package management
+ * [Node.js](https://nodejs.org) and npm installed on your machine
+ * An [OpenAI API key](https://platform.openai.com/api-keys) for interacting with OpenAI's services
+ * Optional dependencies (if running the example from source code):
+   * [Bun](https://bun.sh) for running unit tests & examples
+   * [Pnpm](https://pnpm.io) for package management

  ## Quick Start (No Installation Required)

@@ -74,6 +74,21 @@ Setup your OpenAI API key in the `.env.local` file:
  OPENAI_API_KEY="" # Set your OpenAI API key here
  ```

+ #### Using Different Models
+
+ You can use different AI models by setting the `MODEL` environment variable along with the corresponding API key. The framework supports multiple providers:
+
+ * **OpenAI**: `MODEL="openai:gpt-4.1"` with `OPENAI_API_KEY`
+ * **Anthropic**: `MODEL="anthropic:claude-3-7-sonnet-latest"` with `ANTHROPIC_API_KEY`
+ * **Google Gemini**: `MODEL="gemini:gemini-2.0-flash"` with `GEMINI_API_KEY`
+ * **AWS Bedrock**: `MODEL="bedrock:us.amazon.nova-premier-v1:0"` with AWS credentials
+ * **DeepSeek**: `MODEL="deepseek:deepseek-chat"` with `DEEPSEEK_API_KEY`
+ * **OpenRouter**: `MODEL="openrouter:openai/gpt-4o"` with `OPEN_ROUTER_API_KEY`
+ * **xAI**: `MODEL="xai:grok-2-latest"` with `XAI_API_KEY`
+ * **Ollama**: `MODEL="ollama:llama3.2"` with `OLLAMA_DEFAULT_BASE_URL`
+
+ For detailed configuration examples, please refer to the `.env.local.example` file in this directory.
+
  ### Run the Example

  ```bash
@@ -93,7 +108,7 @@ The example supports the following command-line parameters:
  | Parameter | Description | Default |
  |-----------|-------------|---------|
  | `--chat` | Run in interactive chat mode | Disabled (one-shot mode) |
- | `--model <provider[:model]>` | AI model to use in format 'provider[:model]' where model is optional. Examples: 'openai' or 'openai:gpt-4o-mini' | openai |
+ | `--model <provider[:model]>` | AI model to use in format 'provider\[:model]' where model is optional. Examples: 'openai' or 'openai:gpt-4o-mini' | openai |
  | `--temperature <value>` | Temperature for model generation | Provider default |
  | `--top-p <value>` | Top-p sampling value | Provider default |
  | `--presence-penalty <value>` | Presence penalty value | Provider default |
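The `provider[:model]` format documented above can be resolved by splitting on the first colon only, so provider-specific model ids that themselves contain colons (such as the Bedrock id in the env template) stay intact. A sketch of such a parser — `parseModel` is a hypothetical helper for illustration, not part of the AIGNE CLI:

```typescript
// Hypothetical helper (not part of @aigne/cli): splits a
// "provider[:model]" spec on the FIRST colon only, so ids like
// "bedrock:us.amazon.nova-premier-v1:0" keep their trailing ":0".
function parseModel(spec: string): { provider: string; model?: string } {
  const idx = spec.indexOf(":");
  if (idx === -1) return { provider: spec }; // model part is optional
  return { provider: spec.slice(0, idx), model: spec.slice(idx + 1) };
}

console.log(parseModel("openai"));
console.log(parseModel("openai:gpt-4o-mini"));
console.log(parseModel("bedrock:us.amazon.nova-premier-v1:0"));
```

Splitting on the first colon rather than all colons is the design choice that makes the optional model part free-form.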
@@ -119,12 +134,10 @@ echo "How do I return a product?" | pnpm start
  The following example demonstrates how to build a router workflow:

  ```typescript
- import assert from "node:assert";
  import { AIAgent, AIGNE } from "@aigne/core";
  import { OpenAIChatModel } from "@aigne/core/models/openai-chat-model.js";

  const { OPENAI_API_KEY } = process.env;
- assert(OPENAI_API_KEY, "Please set the OPENAI_API_KEY environment variable");

  const model = new OpenAIChatModel({
    apiKey: OPENAI_API_KEY,
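With the `assert` removed, `OPENAI_API_KEY` may now be `undefined` when this snippet runs. If you want the old fail-fast behavior in your own copy, a small guard along these lines works — `requireEnv` is a hypothetical helper, not an AIGNE API:

```typescript
// Hypothetical helper restoring the fail-fast check the diff removed.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Please set the ${name} environment variable`);
  }
  return value;
}

// Usage sketch: const apiKey = requireEnv("OPENAI_API_KEY");
```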
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@aigne/example-workflow-router",
-   "version": "1.14.7",
+   "version": "1.15.0",
    "description": "A demonstration of using AIGNE Framework to build a router workflow",
    "author": "Arcblock <blocklet@arcblock.io> https://github.com/blocklet",
    "homepage": "https://github.com/AIGNE-io/aigne-framework/tree/main/examples/workflow-router",
@@ -16,14 +16,14 @@
      "README.md"
    ],
    "dependencies": {
-     "@aigne/agent-library": "^1.17.4",
-     "@aigne/cli": "^1.17.0",
-     "@aigne/core": "^1.27.0",
-     "@aigne/openai": "^0.5.0"
+     "@aigne/agent-library": "^1.17.5",
+     "@aigne/cli": "^1.18.0",
+     "@aigne/core": "^1.28.0",
+     "@aigne/openai": "^0.6.0"
    },
    "devDependencies": {
      "@types/bun": "^1.2.9",
-     "@aigne/test-utils": "^0.4.11"
+     "@aigne/test-utils": "^0.4.12"
    },
    "scripts": {
      "start": "bun run index.ts",