@getenki/ai-darwin-arm64 0.2.5 → 0.2.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +169 -2
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -1,3 +1,170 @@
- # `@getenki/ai-darwin-arm64`
+ # `@getenki/ai`

- This is the **aarch64-apple-darwin** binary for `@getenki/ai`
+ JavaScript bindings for Enki's Rust agent runtime, published as a native Node.js package via `napi-rs`.
+
+ ## Install
+
+ ```bash
+ npm install @getenki/ai
+ ```
+
+ The package ships prebuilt native binaries for:
+
+ - Windows x64 and arm64
+ - macOS x64 and arm64
+ - Linux x64 and arm64 (GNU libc)
+
+ ## API
+
+ The package exposes two layers:
+
+ - `EnkiAgent`: a thin wrapper over the native runtime
+ - `Agent`: a higher-level JavaScript wrapper for tools, memories, and custom LLM providers
+
+ It also exports `NativeEnkiAgent`, `Tool`, `MemoryModule`, `MemoryBackend`, `LlmProviderBackend`, `RunContext`, and `AgentRunResult`.
+
+ ## `EnkiAgent`
+
+ Use `EnkiAgent` when you want a simple session-oriented interface backed directly by the native runtime.
+
+ ```js
+ const { EnkiAgent } = require('@getenki/ai')
+
+ async function main() {
+   const agent = new EnkiAgent({
+     name: 'Assistant',
+     systemPromptPreamble: 'Answer clearly and keep responses short.',
+     model: 'ollama::llama3.2:latest',
+     maxIterations: 20,
+     workspaceHome: process.cwd(),
+   })
+
+   const output = await agent.run('session-1', 'Explain what this project does.')
+   console.log(output)
+ }
+
+ main().catch(console.error)
+ ```
+
+ Constructor options:
+
+ - `name?: string`
+ - `systemPromptPreamble?: string`
+ - `model?: string`
+ - `maxIterations?: number`
+ - `workspaceHome?: string`
+
+ ## `Agent`
+
+ Use `Agent` when you want to register JavaScript tools or plug in your own LLM provider.
+
+ ```js
+ const { Agent } = require('@getenki/ai')
+
+ async function main() {
+   const agent = new Agent('demo-model', {
+     instructions: 'You are a dice game.',
+     workspaceHome: process.cwd(),
+   })
+
+   agent.toolPlain(
+     function rollDice() {
+       return '4'
+     },
+     {
+       description: 'Roll a six-sided die and return the result.',
+       parametersJson: JSON.stringify({
+         type: 'object',
+         properties: {},
+         additionalProperties: false,
+       }),
+     },
+   )
+
+   const result = await agent.run('My guess is 4', {
+     sessionId: 'session-tools-1',
+   })
+
+   console.log(result.output)
+ }
+
+ main().catch(console.error)
+ ```
+
+ ### Context-aware tools
+
+ `agent.tool()` injects a `RunContext` as the first argument so your tool can access runtime dependencies.
+
+ ```js
+ const { Agent } = require('@getenki/ai')
+
+ const agent = new Agent('demo-model')
+
+ agent.tool(
+   function getPlayerName(ctx) {
+     return ctx.deps.playerName
+   },
+   {
+     description: "Get the player's name.",
+     parametersJson: JSON.stringify({
+       type: 'object',
+       properties: {},
+       additionalProperties: false,
+     }),
+   },
+ )
+ ```
+
+ Then pass dependencies at run time:
+
+ ```js
+ const result = await agent.run('Say hello.', {
+   sessionId: 'session-ctx-1',
+   deps: { playerName: 'Anne' },
+ })
+ ```
+
+ ## Custom LLM providers
+
+ Pass either a subclass of `LlmProviderBackend` or a plain function through the `llm` option.
+
+ ```js
+ const { Agent, LlmProviderBackend } = require('@getenki/ai')
+
+ class DemoProvider extends LlmProviderBackend {
+   complete(model, messages, tools) {
+     return {
+       model,
+       content: `Received ${messages.length} message(s) and ${tools.length} tool(s).`,
+     }
+   }
+ }
+
+ const agent = new Agent('demo-model', {
+   llm: new DemoProvider(),
+ })
+ ```
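The README states that the `llm` option also accepts a function, but only shows the subclass form. A minimal sketch of the function form, assuming the function receives the same `(model, messages, tools)` arguments as `LlmProviderBackend.complete` (the name `demoLlm` and the wiring are illustrative, not part of the published API, so verify against the installed version):

```js
// Hypothetical function-form provider: assumes the same
// (model, messages, tools) signature as LlmProviderBackend.complete.
function demoLlm(model, messages, tools) {
  return {
    model,
    content: `Received ${messages.length} message(s) and ${tools.length} tool(s).`,
  }
}

// Wiring it in (assumes @getenki/ai is installed):
// const { Agent } = require('@getenki/ai')
// const agent = new Agent('demo-model', { llm: demoLlm })
```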
+
+ ## Development
+
+ From `crates/bindings/enki-js`:
+
+ ```bash
+ npm install
+ npm run build
+ npm test
+ ```
+
+ Useful scripts:
+
+ - `npm run build`: build the native addon in release mode
+ - `npm run build:debug`: build without release optimizations
+ - `npm test`: run the AVA test suite
+ - `npm run lint`: run `oxlint`
+ - `npm run format`: run Prettier, `cargo fmt`, and `taplo format`
+
+ ## Notes
+
+ - `Agent` can register tools, memories, and custom LLM providers in JavaScript.
+ - The native constructor defaults to `20` max iterations when none is provided.
+ - `workspaceHome` controls where the runtime creates and resolves workspace state.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@getenki/ai-darwin-arm64",
-   "version": "0.2.5",
+   "version": "0.2.6",
    "cpu": [
      "arm64"
    ],