@kenkaiiii/gg-agent 3.6.4 → 3.6.5
- package/README.md +96 -0
- package/package.json +3 -3
package/README.md
ADDED

# @kenkaiiii/gg-agent

<p align="center">
  <strong>Agent loop with multi-turn tool execution. Build agents that think, act, and loop.</strong>
</p>

<p align="center">
  <a href="https://www.npmjs.com/package/@kenkaiiii/gg-agent"><img src="https://img.shields.io/npm/v/@kenkaiiii/gg-agent?style=for-the-badge" alt="npm version"></a>
  <a href="../../LICENSE"><img src="https://img.shields.io/badge/License-MIT-blue.svg?style=for-the-badge" alt="MIT License"></a>
</p>

Give an LLM tools. It calls them. Results go back in. It loops until it's done. That's it.

Built on top of [`@kenkaiiii/gg-ai`](../gg-ai/README.md). Part of the [GG Framework](../../README.md) monorepo.

---

## Install

```bash
npm i @kenkaiiii/gg-agent
```

---

## How it works

Create an `Agent` with a provider, model, and tools. Call `agent.prompt()` to start a conversation.

- **`for await`** gives you streaming events (`text_delta`, `tool_call_start`, `tool_call_end`, `agent_done`, etc.)
- **`await`** gives you the final result (`message`, `totalTurns`, `totalUsage`)

Same dual-nature pattern as `@kenkaiiii/gg-ai`. The `Agent` class maintains conversation history — each `prompt()` call continues the conversation.

For full control, use `agentLoop()` directly — a pure async generator that takes a messages array and options.
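The dual await/iterate behavior can be sketched as a "thenable" async generator: `for await` consumes events, while `await` drains the stream and resolves with the final result. This is an illustrative reconstruction, not the package source; the event and result field names are assumptions taken from the bullets above.

```typescript
// Sketch of the dual-nature pattern: one object that can be
// `for await`-ed for events or `await`-ed for the final result.
// Event/result shapes are assumptions based on this README.
type AgentEvent =
  | { type: "text_delta"; text: string }
  | { type: "agent_done"; totalTurns: number };

type AgentResult = { message: string; totalTurns: number };

function prompt(input: string) {
  async function* events(): AsyncGenerator<AgentEvent, AgentResult> {
    // A real agent would call the LLM here; this fake emits one turn.
    yield { type: "text_delta", text: `echo: ${input}` };
    yield { type: "agent_done", totalTurns: 1 };
    return { message: `echo: ${input}`, totalTurns: 1 };
  }
  const gen = events();
  return {
    [Symbol.asyncIterator]() {
      return gen;
    },
    // Awaiting drains the generator and resolves with its return value.
    then<T>(resolve: (r: AgentResult) => T) {
      return (async () => {
        let next = await gen.next();
        while (!next.done) next = await gen.next();
        return resolve(next.value);
      })();
    },
  };
}
```

With this shape, `for await (const ev of prompt("hi"))` streams events, while `const res = await prompt("hi")` resolves to the final result.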
### Tools

Define tools with a name, description, Zod schema for parameters, and an `execute` function. The execute function receives typed args and a `ToolContext` with `signal`, `toolCallId`, and `onUpdate`.

Return a string, or a `{ content, details }` object for structured results. If `execute` throws, the error becomes a tool result (not a crash). The agent sees the error and can retry or adjust.
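As a rough sketch of that contract (field names follow this README; the real `AgentTool` and `ToolContext` types come from the package, and the Zod schema is omitted here for brevity), a throwing `execute` can be captured as an error result rather than crashing the loop:

```typescript
// Illustrative tool shape and an error-capturing runner.
// This is a sketch of the behavior described above, not package source.
interface ToolContext {
  toolCallId: string;
  signal?: AbortSignal;
}

interface AgentToolSketch<A> {
  name: string;
  description: string;
  execute: (args: A, ctx: ToolContext) => Promise<string>;
}

const weather: AgentToolSketch<{ city: string }> = {
  name: "get_weather",
  description: "Return fake weather for a city",
  async execute({ city }) {
    if (!city) throw new Error("city is required");
    return `Sunny in ${city}`;
  },
};

// A throwing execute becomes a tool result flagged as an error,
// which the agent sees on its next turn.
async function runTool<A>(tool: AgentToolSketch<A>, args: A, ctx: ToolContext) {
  try {
    return { result: await tool.execute(args, ctx), isError: false };
  } catch (err) {
    return { result: String(err), isError: true };
  }
}
```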
### Safety

- `maxTurns` (default: 40) prevents runaway loops
- `AbortSignal` support for cancellation
- Zod validation on tool args
- `maxContinuations` (default: 5) caps consecutive `pause_turn` continuations
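A minimal sketch of how the first two guards might interact, assuming a loop that checks the signal before each model call (`loop` and its stop condition are hypothetical stand-ins, not package internals):

```typescript
// Two loop guards from the list above: a hard turn cap, and an
// AbortSignal check before every (simulated) LLM call.
async function loop(signal: AbortSignal, maxTurns = 40): Promise<number> {
  let turns = 0;
  while (turns < maxTurns) {
    if (signal.aborted) throw new Error("aborted");
    turns++;
    // Stand-in for "the model produced no tool calls, so stop".
    const done = turns >= 3;
    if (done) break;
  }
  return turns;
}
```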
---

## Events

| Event | Description |
|---|---|
| `text_delta` | Incremental text output |
| `thinking_delta` | Extended thinking output |
| `tool_call_start` | Tool invocation started (name, args) |
| `tool_call_update` | Progress update from a running tool |
| `tool_call_end` | Tool finished (result, duration, isError) |
| `server_tool_call` | Server-side tool invocation |
| `server_tool_result` | Server-side tool result |
| `turn_end` | One LLM call completed (stop reason, usage) |
| `agent_done` | All turns finished (total turns, total usage) |
| `error` | Fatal error |
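One way to consume these events is a `switch` on the event type. The payload fields below are assumptions; the table above only documents the event names and summaries.

```typescript
// Hedged sketch of event handling; field names are assumed, not
// taken from the package's type definitions.
type Ev =
  | { type: "text_delta"; text: string }
  | { type: "tool_call_end"; name: string; isError: boolean }
  | { type: "agent_done"; totalTurns: number };

function handle(ev: Ev, out: string[]) {
  switch (ev.type) {
    case "text_delta":
      out.push(ev.text); // incremental model text
      break;
    case "tool_call_end":
      if (ev.isError) out.push(`[${ev.name} failed]`);
      break;
    case "agent_done":
      out.push(`(done in ${ev.totalTurns} turns)`);
      break;
  }
}
```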
---

## Options

| Option | Type | Description |
|---|---|---|
| `provider` | `"anthropic" \| "openai" \| "glm" \| "moonshot"` | Required |
| `model` | `string` | Required |
| `system` | `string` | System prompt |
| `tools` | `AgentTool[]` | Tools with Zod schemas and execute functions |
| `serverTools` | `ServerToolDefinition[]` | Server-side tool definitions |
| `maxTurns` | `number` | Max LLM calls (default: 40) |
| `maxTokens` | `number` | Max output tokens per turn |
| `temperature` | `number` | Sampling temperature |
| `thinking` | `"low" \| "medium" \| "high" \| "max"` | Extended thinking |
| `apiKey` | `string` | Provider API key |
| `baseUrl` | `string` | Custom endpoint |
| `signal` | `AbortSignal` | Cancellation |
| `cacheRetention` | `"none" \| "short" \| "long"` | Prompt cache preference |
| `compaction` | `boolean` | Server-side compaction (Anthropic only) |
| `maxContinuations` | `number` | Max `pause_turn` continuations (default: 5) |
| `transformContext` | `(messages) => messages` | Transform messages before each LLM call |

`transformContext` is called before each LLM call. Use it for compaction, truncation, or injecting dynamic context.
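As one hedged example of a `transformContext` strategy, a simple truncation can keep the first message plus the most recent ones. The `Msg` shape here is an assumption; the real message type comes from `@kenkaiiii/gg-ai`.

```typescript
// Sketch of a transformContext that bounds history length.
// Message shape is assumed for illustration only.
interface Msg {
  role: "user" | "assistant" | "tool";
  content: string;
}

function keepRecent(limit: number) {
  return (messages: Msg[]): Msg[] => {
    if (messages.length <= limit) return messages;
    // Keep the first message (often the original task) plus the tail.
    return [messages[0], ...messages.slice(-(limit - 1))];
  };
}
```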
---

## License

MIT
package/package.json
CHANGED

@@ -1,12 +1,12 @@
 {
   "name": "@kenkaiiii/gg-agent",
-  "version": "3.6.4",
+  "version": "3.6.5",
   "type": "module",
   "description": "Agentic loop system with tool execution for LLMs",
   "license": "MIT",
   "repository": {
     "type": "git",
-    "url": "git+https://github.com/kenkaiiii/gg-
+    "url": "git+https://github.com/kenkaiiii/gg-framework.git",
     "directory": "packages/gg-agent"
   },
   "exports": {
@@ -20,7 +20,7 @@
   ],
   "dependencies": {
     "zod": "^4.3.6",
-    "@kenkaiiii/gg-ai": "3.6.
+    "@kenkaiiii/gg-ai": "3.6.5"
   },
   "devDependencies": {
     "typescript": "^5.9.3",