@aigne/ollama 0.6.2 → 0.6.3
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +12 -0
- package/README.md +0 -2
- package/package.json +4 -4
- package/README.zh.md +0 -115
package/CHANGELOG.md
CHANGED

@@ -1,5 +1,17 @@
 # Changelog
 
+## [0.6.3](https://github.com/AIGNE-io/aigne-framework/compare/ollama-v0.6.2...ollama-v0.6.3) (2025-07-10)
+
+
+### Dependencies
+
+* The following workspace dependencies were updated
+  * dependencies
+    * @aigne/openai bumped to 0.9.0
+  * devDependencies
+    * @aigne/core bumped to 1.33.0
+    * @aigne/test-utils bumped to 0.5.5
+
 ## [0.6.2](https://github.com/AIGNE-io/aigne-framework/compare/ollama-v0.6.1...ollama-v0.6.2) (2025-07-09)
 
 
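Since 0.6.3 is a dependency-only patch release, consumers on the usual caret range pick it up with a routine update. A minimal sketch, assuming npm as the package manager:

```bash
# Refresh @aigne/ollama within an existing ^0.6.x range
npm update @aigne/ollama

# Or pin the new patch explicitly
npm install @aigne/ollama@0.6.3
```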
package/README.md
CHANGED

@@ -6,8 +6,6 @@
 [](https://www.npmjs.com/package/@aigne/ollama)
 [](https://github.com/AIGNE-io/aigne-framework/blob/main/LICENSE.md)
 
-**English** | [中文](README.zh.md)
-
 AIGNE Ollama SDK for integrating with locally hosted AI models via Ollama within the [AIGNE Framework](https://github.com/AIGNE-io/aigne-framework).
 
 ## Introduction
package/package.json
CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "@aigne/ollama",
-  "version": "0.6.2",
+  "version": "0.6.3",
   "description": "AIGNE Ollama SDK for integrating with locally hosted AI models via Ollama",
   "publishConfig": {
     "access": "public"
@@ -32,7 +32,7 @@
     }
   },
   "dependencies": {
-    "@aigne/openai": "^0.
+    "@aigne/openai": "^0.9.0"
   },
   "devDependencies": {
     "@types/bun": "^1.2.17",
@@ -40,8 +40,8 @@
     "npm-run-all": "^4.1.5",
     "rimraf": "^6.0.1",
     "typescript": "^5.8.3",
-    "@aigne/
-    "@aigne/
+    "@aigne/core": "^1.33.0",
+    "@aigne/test-utils": "^0.5.5"
   },
   "scripts": {
     "lint": "tsc --noEmit",
package/README.zh.md
DELETED

@@ -1,115 +0,0 @@
-# @aigne/ollama
-
-[](https://star-history.com/#AIGNE-io/aigne-framework)
-[](https://github.com/AIGNE-io/aigne-framework/issues)
-[](https://codecov.io/gh/AIGNE-io/aigne-framework)
-[](https://www.npmjs.com/package/@aigne/ollama)
-[](https://github.com/AIGNE-io/aigne-framework/blob/main/LICENSE.md)
-
-[English](README.md) | **中文**
-
-AIGNE Ollama SDK for integrating with locally hosted AI models via Ollama within the [AIGNE Framework](https://github.com/AIGNE-io/aigne-framework).
-
-## Introduction
-
-`@aigne/ollama` provides seamless integration between the AIGNE Framework and AI models hosted locally via Ollama. The package lets developers easily use open-source language models running locally through Ollama in their AIGNE applications, offering a consistent interface within the framework along with private, offline access to AI capabilities.
-
-## Features
-
-* **Ollama Integration**: Connect directly to a local Ollama instance
-* **Local Model Support**: Works with the various open-source models hosted via Ollama
-* **Chat Completions**: Chat completion API support for all available Ollama models
-* **Streaming Responses**: Streaming support for more responsive applications
-* **Type Safety**: Comprehensive TypeScript type definitions for all APIs and models
-* **Consistent Interface**: Compatible with the AIGNE Framework's model interface
-* **Privacy-Focused**: Run models locally without sending data to external API services
-* **Full Configuration**: Rich configuration options for fine-tuning behavior
-
-## Installation
-
-### Using npm
-
-```bash
-npm install @aigne/ollama @aigne/core
-```
-
-### Using yarn
-
-```bash
-yarn add @aigne/ollama @aigne/core
-```
-
-### Using pnpm
-
-```bash
-pnpm add @aigne/ollama @aigne/core
-```
-
-## Prerequisites
-
-Before using this package, you need [Ollama](https://ollama.ai/) installed and running on your machine, with at least one model pulled. Follow the instructions on the [Ollama website](https://ollama.ai/) to set up Ollama.
-
-## Basic Usage
-
-```typescript file="test/ollama-chat-model.test.ts" region="example-ollama-chat-model"
-import { OllamaChatModel } from "@aigne/ollama";
-
-const model = new OllamaChatModel({
-  // Specify base URL (defaults to http://localhost:11434)
-  baseURL: "http://localhost:11434",
-  // Specify Ollama model to use (defaults to 'llama3')
-  model: "llama3",
-  modelOptions: {
-    temperature: 0.8,
-  },
-});
-
-const result = await model.invoke({
-  messages: [{ role: "user", content: "Tell me what model you're using" }],
-});
-
-console.log(result);
-/* Output:
-  {
-    text: "I'm an AI assistant running on Ollama with the llama3 model.",
-    model: "llama3"
-  }
- */
-```
-
-## Streaming Responses
-
-```typescript file="test/ollama-chat-model.test.ts" region="example-ollama-chat-model-streaming"
-import { isAgentResponseDelta } from "@aigne/core";
-import { OllamaChatModel } from "@aigne/ollama";
-
-const model = new OllamaChatModel({
-  baseURL: "http://localhost:11434",
-  model: "llama3",
-});
-
-const stream = await model.invoke(
-  {
-    messages: [{ role: "user", content: "Tell me what model you're using" }],
-  },
-  { streaming: true },
-);
-
-let fullText = "";
-const json = {};
-
-for await (const chunk of stream) {
-  if (isAgentResponseDelta(chunk)) {
-    const text = chunk.delta.text?.text;
-    if (text) fullText += text;
-    if (chunk.delta.json) Object.assign(json, chunk.delta.json);
-  }
-}
-
-console.log(fullText); // Output: "I'm an AI assistant running on Ollama with the llama3 model."
-console.log(json); // { model: "llama3" }
-```
-
-## License
-
-Elastic-2.0
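The Prerequisites section of the deleted README calls for a running Ollama instance with at least one model pulled. A minimal sketch of that setup, assuming the Ollama CLI is installed and using the `llama3` model and default port from the README examples:

```bash
# Pull the model referenced throughout the README examples
ollama pull llama3

# Confirm the local server is reachable and list installed models
curl http://localhost:11434/api/tags
```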