@rong/agentscript 0.1.0 → 0.1.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
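The change that recurs through nearly every hunk below is the `generate` return-shape syntax: the nested `return { ... }` block of 0.1.0 is replaced in 0.1.1 by an arrow form. A minimal before/after sketch, assembled from the hunks in this diff (the identifiers are illustrative, not from any one file):

Before (0.1.0):

```agentscript
return generate({ input: "Answer the question" }) {
  return {
    ok boolean
    text string
  }
}
```

After (0.1.1):

```agentscript
return generate({ input: "Answer the question" }) -> {
  ok boolean
  text string
}
```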
package/INSTALL.md CHANGED
@@ -4,18 +4,18 @@ AgentScript is distributed as an npm package and can also be run from source.

  ## Requirements

- - Node.js compatible with the runtime features used by this project.
+ - Node.js >= 22.5.
  - npm.
  - Optional: Ollama, OpenAI, or Anthropic credentials when running with `--real-llm`.

- The current development setup uses Node.js 25 types and the SQLite memory backend uses Node's built-in `node:sqlite` module.
+ The SQLite memory backend uses Node's built-in `node:sqlite` module, so Node.js 22.5 or newer is required.

  ## Install from npm

  After the package is published:

  ```bash
- npm install -g agentscript
+ npm install -g @rong/agentscript
  agentscript examples/review.as --input '{"path":"src"}'
  ```

@@ -24,7 +24,7 @@ agentscript examples/review.as --input '{"path":"src"}'
  After the package is published:

  ```bash
- npx agentscript examples/review.as --input '{"path":"src"}'
+ npx @rong/agentscript examples/review.as --input '{"path":"src"}'
  ```

  ## Run from source
package/README.md CHANGED
@@ -1,69 +1,40 @@
  # AgentScript

- > **Prompt context as a first-class citizen.**
- > `use` declares what the model sees. `generate` defines what it returns.
+ > **Agent context as code.**
+ > `use` declares what the model can see.
+ > `generate` defines the only LLM call site and its return shape.
  > Zero runtime dependencies. TypeScript-powered.

  ```agentscript
  use scratch.summary < 2k
- return generate({ input: "Answer from observations" }) {
- return { ok boolean, text string }
+ return generate({ input: "Answer from observations" }) -> {
+ ok boolean
+ text string
  }
  ```

- [![License: ISC](https://img.shields.io/badge/license-ISC-blue.svg)](LICENSE)
+ [![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE)
  ![Zero Dependencies](https://img.shields.io/badge/dependencies-0-brightgreen)
- ![Node >= 25](https://img.shields.io/badge/node-%3E%3D25-green)
+ ![Node >= 22.5](https://img.shields.io/badge/node-%3E%3D22.5-green)

  [中文版](./README-CN.md)

- LLMs are stateless by nature. Each call is a fresh start. To give an agent continuity of thought, every input must be carefully assembled — what researchers and practitioners call context engineering.
-
- After building agents with Python and TypeScript, the author kept running into the same problem: prompt context management. What data actually reaches the LLM? Where does one agent's context end and another's begin? How do you audit what the model saw?
-
- AgentScript was designed to solve this — not as a general-purpose language, nor a declarative config, nor a prompt template, but as a **DSL** that mixes imperative control flow with explicit, scope-governed context declarations.
-
- It gives you two things that general-purpose languages don't: a first-class `use` keyword that declares *which* data enters the LLM prompt, and a first-class `generate` expression that defines *what* the LLM must return. Everything else — variables, functions, agents, imports, loops — exists to support this core workflow. Scopes enforce context boundaries naturally: what's `use`d in one function stays there; child scopes inherit but never leak upward.
-
- The result is a language purpose-built for composing agent patterns — ReAct, Plan-and-Execute, Reflection, Multi-Agent — where prompt context is always visible, auditable, and under your control.
-
- ## How it works
+ ## Install

- ```mermaid
- graph LR
- A[".as source"] --> B["Parser"]
- B --> C["AST"]
- C --> D["Semantic Analyzer"]
- D --> E["Runtime"]
- E --> F["LLM Provider<br/>(OpenAI / Anthropic / Ollama)"]
- E --> G["Tools<br/>(Find / Grep / File / HTTP / ...)"]
- E --> H["Memory<br/>(JSONL / SQLite)"]
- E --> I["Trace Output"]
+ ```bash
+ npm install -g @rong/agentscript
  ```

- ## Agent patterns as composable primitives
-
- AgentScript doesn't hardcode agent patterns as keywords. You compose them from the same primitives:
-
- | Pattern | Tutorial | What it demonstrates |
- |---------|----------|---------------------|
- | **ReAct** | `tutorials/react.as` | Reason → Act → Observe loop with explicit context |
- | **Plan-and-Execute** | `tutorials/plan-execute.as` | Generate plan, execute steps, verify, re-plan on failure |
- | **Reflection / Self-Improvement** | `tutorials/self-improve.as` | Query past lessons → generate → reflect → persist new lessons |
- | **Multi-Agent** | `tutorials/plan-execute.as` | Independent agents with isolated context boundaries |
-
- Every pattern is explicit — which data enters the prompt, which tools each agent can use, and which output shape each LLM call must satisfy.
-
- ## Install
+ Then run the CLI:

  ```bash
- npm install -g agentscript
+ agentscript --help
  ```

  Or run without installing:

  ```bash
- npx agentscript examples/review.as --input '{"path":"src"}'
+ npx @rong/agentscript examples/review.as --input '{"path":"src"}'
  ```

  ## Quick start
@@ -96,13 +67,11 @@ main agent FileSummarizer {
  return generate({
  input: "Summarize the file for a busy teammate"
  limit: 1000
- }) {
- return {
- title string
- summary string
- key_points list[string]
- action_items list[string]
- }
+ }) -> {
+ title string
+ summary string
+ key_points list[string]
+ action_items list[string]
  }
  }
  }
@@ -124,6 +93,76 @@ Expected output (with mock LLM):

  With `--real-llm`, the fields are populated by the model.

+ The block after `generate` is a return schema, not ordinary object construction.
+
+ ## What problem it solves
+
+ LLMs are stateless by nature. Each call is a fresh start. To give an agent continuity of thought, every input must be carefully assembled — what researchers and practitioners call context engineering.
+
+ After building agents with Python and TypeScript, the author kept running into the same problem: prompt context management. What data actually reaches the LLM? Where does one agent's context end and another's begin? How do you audit what the model saw?
+
+ ## What makes AgentScript different?
+
+ AgentScript is not:
+
+ - a prompt template
+ - a YAML config format
+ - a general-purpose agent framework
+
+ It is a small language for one thing:
+
+ > making LLM prompt context explicit, scoped, typed, traceable, and compilable.
+
+ It gives you two things that general-purpose languages don't: a first-class `use` keyword that declares *which* data enters the LLM prompt, and a first-class `generate` expression that defines *what* the LLM must return. Everything else — variables, functions, agents, imports, loops — exists to support this core workflow. Scopes enforce context boundaries naturally: what's `use`d in one function stays there; child scopes inherit but never leak upward.
+
+ ## How it works
+
+ ```mermaid
+ graph LR
+ A[".as source"] --> B["Parser"]
+ B --> C["AST"]
+ C --> D["Semantic Analyzer"]
+ D --> E["Runtime"]
+ E --> F["LLM Provider<br/>(OpenAI / Anthropic / Ollama)"]
+ E --> G["Tools<br/>(Find / Grep / File / HTTP / ...)"]
+ E --> H["Memory<br/>(JSONL / SQLite)"]
+ E --> I["Trace Output"]
+ ```
+
+ ## Status
+
+ AgentScript is experimental.
+
+ Currently implemented:
+
+ - parser
+ - semantic checker
+ - mock runtime
+ - OpenAI / Anthropic / Ollama LLM adapters
+ - file and environment tools
+ - JSONL and SQLite memory backends
+ - trace output
+
+ Planned:
+
+ - stable IR
+ - richer diagnostics
+ - VS Code syntax support
+ - package publishing hardening
+
+ ## Agent patterns as composable primitives
+
+ AgentScript doesn't hardcode agent patterns as keywords. You compose them from the same primitives:
+
+ | Pattern | Tutorial | What it demonstrates |
+ |---------|----------|---------------------|
+ | **ReAct** | `tutorials/react.as` | Reason → Act → Observe loop with explicit context |
+ | **Plan-and-Execute** | `tutorials/plan-execute.as` | Generate plan, execute steps, verify, re-plan on failure |
+ | **Reflection / Self-Improvement** | `tutorials/self-improve.as` | Query past lessons → generate → reflect → persist new lessons |
+ | **Multi-Agent** | `tutorials/plan-execute.as` | Independent agents with isolated context boundaries |
+
+ Every pattern is explicit — which data enters the prompt, which tools each agent can use, and which output shape each LLM call must satisfy.
+
  ## Language at a glance

  ```agentscript
@@ -158,12 +197,10 @@ main agent ResearchAgent {
  func answer(question, scratch) {
  use question
  use scratch.summary < 2k
- return generate({ input: "Answer using only the observations" }) {
- return {
- ok boolean
- text string
- error string
- }
+ return generate({ input: "Answer using only the observations" }) -> {
+ ok boolean
+ text string
+ error string
  }
  }
  }
@@ -408,10 +408,8 @@ class Parser {
  this.consume("(");
  const options = this.parseGenerateOptions();
  this.consume(")");
- this.consume("{");
- this.consume("return");
+ this.consume("->");
  const returnShape = this.parseShapeObject();
- this.consume("}");
  return {
  kind: "GenerateExpr",
  options,
@@ -40,6 +40,7 @@ const SYMBOLS = new Set([
  "=",
  "!",
  "<",
+ "-",
  "*"
  ]);
  export function tokenize(source) {
@@ -77,7 +78,7 @@ class Scanner {
  }
  if (SYMBOLS.has(char)) {
  const twoChar = `${char}${this.peekNext()}`;
- if (twoChar === "==" || twoChar === "!=") {
+ if (twoChar === "==" || twoChar === "!=" || twoChar === "->") {
  this.advance();
  this.advance();
  tokens.push({
@@ -10,13 +10,13 @@ AgentScript has variables, functions, loops, and agent calls on the surface, but its primary

  AgentScript's control flow serves prompt context construction.

- Ordinary statements organize data, call tools, call agents, and update intermediate state. LLM calls can only happen through `generate(...) { return ... }`, and the context visible to `generate` must be declared explicitly with `use`.
+ Ordinary statements organize data, call tools, call agents, and update intermediate state. LLM calls can only happen through `generate(...) -> { ... }`, and the context visible to `generate` must be declared explicitly with `use`.

  AgentScript's core objects are:

  - `Data`: ordinary variables, JSON, lists, file imports, tool observations, and agent return values.
  - `Context Source`: a prompt context origin declared by `use expr < budget`.
- - `Generation Site`: a single LLM call declared by `generate({ input, limit, attempts, debug }) { return shape }`.
+ - `Generation Site`: a single LLM call declared by `generate({ input, limit, attempts, debug }) -> shape`.
  - `Boundary`: context visibility boundaries formed by agent, function, and block scopes.

  ## Semantics of `use`
@@ -35,12 +35,10 @@ func answer(question, scratch) {
  return generate({
  input: "Answer the question using collected facts"
  limit: 800
- }) {
- return {
- ok boolean
- text string
- error string
- }
+ }) -> {
+ ok boolean
+ text string
+ error string
  }
  }
  ```
@@ -63,10 +61,8 @@ main func(input) {
  scratch.add({ fact: "A" })
  scratch.add({ fact: "B" })

- return generate({ input: "Answer from scratch" }) {
- return {
- text string
- }
+ return generate({ input: "Answer from scratch" }) -> {
+ text string
  }
  }
  ```
@@ -128,10 +124,8 @@ func caller(input) {

  func helper(input) {
  use input.detail
- return generate({ input: "Work on detail" }) {
- return {
- ok boolean
- }
+ return generate({ input: "Work on detail" }) -> {
+ ok boolean
  }
  }
  ```
@@ -171,10 +165,8 @@ result = Worker({
  if condition {
  temp = compute(input)
  use temp
- result = generate({ input: "Use temp" }) {
- return {
- ok boolean
- }
+ result = generate({ input: "Use temp" }) -> {
+ ok boolean
  }
  }
  ```
@@ -231,11 +223,9 @@ Source labels help the model understand context and also help human auditing.
  The instruction layer comes from the `input` field of the `generate(...)` options object.

  ```agentscript
- generate({ input: "Answer the question using only collected facts" }) {
- return {
- ok boolean
- text string
- }
+ generate({ input: "Answer the question using only collected facts" }) -> {
+ ok boolean
+ text string
  }
  ```

@@ -273,10 +263,8 @@ use scratch.summary < 2k
  return generate({
  input: "Summarize"
  limit: 500
- }) {
- return {
- text string
- }
+ }) -> {
+ text string
  }
  ```

@@ -1,6 +1,6 @@
  # AgentScript Language Reference

- This document describes the current AgentScript v1.0.0 language specification.
+ This document describes the current AgentScript v0.1.x language specification.

  ## Design principles

@@ -27,11 +27,9 @@ main agent Assistant {
  question string
  }) {
  use input.question
- return generate({ input: "Answer the question" }) {
- return {
- ok boolean
- answer string
- }
+ return generate({ input: "Answer the question" }) -> {
+ ok boolean
+ answer string
  }
  }
  }
@@ -140,13 +138,11 @@ func careful(input) {
  Shapes are used for input validation and `generate` output validation:

  ```agentscript
- return generate({ input: "Extract facts" }) {
- return {
- ok boolean
- title string
- items list[json]
- meta json
- }
+ return generate({ input: "Extract facts" }) -> {
+ ok boolean
+ title string
+ items list[json]
+ meta json
  }
  ```

@@ -189,12 +185,10 @@ answer = generate({
  limit: 800
  attempts: 3
  debug: true
- }) {
- return {
- ok boolean
- answer string
- reason string
- }
+ }) -> {
+ ok boolean
+ answer string
+ reason string
  }
  ```

@@ -204,7 +198,7 @@ answer = generate({
  - `limit`: generation budget (number or `2k` style). Optional.
  - `attempts`: retry count for JSON parse failures or shape mismatches. Optional, defaults to 1.
  - `debug`: prints the full prompt to stderr. Optional, defaults to false.
- - The `return { ... }` block declares the expected output shape.
+ - The `-> { ... }` block declares the expected output shape.
  - Provider errors (auth, network, timeout, missing model) fail directly without retry.
  - Shape validation includes type coercion (e.g. `"true"` -> `true`, `"42"` -> `42`).

@@ -371,11 +365,9 @@ import file Config from "./config.json"
  func answer(input) {
  use Requirements < 4k
  use Config
- return generate({ input: "Answer from the referenced file." }) {
- return {
- ok boolean
- answer string
- }
+ return generate({ input: "Answer from the referenced file." }) -> {
+ ok boolean
+ answer string
  }
  }
  ```
@@ -466,7 +458,7 @@ main agent Controller {

  ## Non-goals

- AgentScript v1.0.0 does not include:
+ AgentScript v0.1.x does not include:

  - A general workflow engine.
  - General parallel execution syntax.
@@ -20,7 +20,7 @@ V0 supports:
  - Functions within agents, variable assignment, object/list literals, member access, function calls.
  - `model`, `role`, `description` scope configuration.
  - `use` context declarations and `< n` context budgets.
- - `generate({ input, limit, attempts, debug }) { return { ... } }` LLM calls.
+ - `generate({ input, limit, attempts, debug }) -> { ... }` LLM calls.
  - `if` / `else`, `==`, `!=`, `and`, `or`, `not`.
  - `loop until condition < n` bounded loops.
  - `repeat * n` bounded repetition.
@@ -56,11 +56,9 @@ main agent ResearchAgent {
  func answer(question) {
  use question

- return generate({ input: "Answer the question" }) {
- return {
- ok boolean
- text string
- }
+ return generate({ input: "Answer the question" }) -> {
+ ok boolean
+ text string
  }
  }
  }
@@ -105,10 +103,8 @@ agent A {
  model Strong
  description "Use a stronger model for this function."

- return generate({ input: "Answer carefully" }) {
- return {
- text string
- }
+ return generate({ input: "Answer carefully" }) -> {
+ text string
  }
  }
  }
@@ -134,13 +130,11 @@ V0 runtime data is JSON-centric:
  `generate` return shapes use lightweight annotations:

  ```agentscript
- return generate({ input: "Extract facts" }) {
- return {
- facts list[string]
- source string
- meta json
- ok boolean
- }
+ return generate({ input: "Extract facts" }) -> {
+ facts list[string]
+ source string
+ meta json
+ ok boolean
  }
  ```

@@ -155,11 +149,9 @@ func compose(question, scratch) {
  use question
  use scratch.summary < 2k

- return generate({ input: "Answer using only the context" }) {
- return {
- ok boolean
- text string
- }
+ return generate({ input: "Answer using only the context" }) -> {
+ ok boolean
+ text string
  }
  }
  ```
@@ -175,19 +167,17 @@ func compose(question, scratch) {

  ## Generate

- `generate({ input, limit, attempts, debug }) { return { ... } }` denotes a single LLM call. `input` is the final user instruction for this call; it can be a string, an object, or any other JSON value. `limit`, `attempts`, and `debug` are all optional.
+ `generate({ input, limit, attempts, debug }) -> { ... }` denotes a single LLM call. `input` is the final user instruction for this call; it can be a string, an object, or any other JSON value. `limit`, `attempts`, and `debug` are all optional.

  ```agentscript
  answer = generate({
  input: "Answer using collected facts"
  limit: 800
  attempts: 3
- }) {
- return {
- ok boolean
- text string
- error string
- }
+ }) -> {
+ ok boolean
+ text string
+ error string
  }
  ```

@@ -198,7 +188,7 @@ answer = generate({
  - `limit` is the budget for this LLM call; budget literals like `800` or `2k` are supported.
  - `attempts` is the maximum number of generation attempts, not the number of extra retries.
  - `debug` is a boolean defaulting to `false`; when `true`, the runtime prints the full prompt to stderr.
- - The `return { ... }` inside the block describes the LLM output JSON shape; it does not return from the enclosing function.
+ - `-> { ... }` describes the LLM output JSON shape; it does not return from the enclosing function.
  - LLM output is first coerced against the shape with limited fault tolerance.
  - When the output is not JSON or still fails the shape after coercion, and `attempts > 1`, the next call appends the previous output and error message to `input` and asks the model to repair.
  - Infrastructure errors (provider network, auth, timeout, missing model) are not retried via repair.
@@ -1,6 +1,6 @@
  # AgentScript V0 Implement

- This document is a historical implementation snapshot of the V0 stage. See `v0-design.md` for the language design. For the current v1.0.0 implementation entry points and module overview, see `../en/language.md` and `../cn/language.md`.
+ This document is a historical implementation snapshot of the V0 stage. See `v0-design.md` for the language design. For the current v0.1.x implementation entry points and module overview, see `../en/language.md` and `../cn/language.md`.

  ## Execution pipeline

@@ -92,7 +92,7 @@ V0 AST nodes:
  - `main agent { ... }` maps to an anonymous entry agent with internal name `__main_agent`.
  - `main func(input) { ... }` maps to an anonymous entry function with internal name `__main`.
  - `AgentName(input)` is an ordinary `CallExpr`; semantics and runtime resolve it to a call of the target agent's `main func`.
- - `generate({ input: "...", limit: 500 }) { return { ... } }` stores the generation budget in the `GenerateExpr` options.
+ - `generate({ input: "...", limit: 500 }) -> { ... }` stores the generation budget in the `GenerateExpr` options.
  - The budget of `use value < 2k` lives on the `UseStmt`.
  - In `loop until done < 6`, `< 6` is the loop cap, not a comparison expression.

@@ -198,7 +198,7 @@ V0 budget trimming is by character count: `2k` is roughly 2000 characters. This is the current

  ```ts
  interface ToolProvider {
- call(request: ToolCallRequest): Promise<RuntimeValue>;
+ call(request: ToolCallRequest): Promise<RuntimeValue>;
  }
  ```

@@ -113,10 +113,8 @@ for item in list < n {
  V1 does not need a dedicated `plan` type. Recommended structure:

  ```agentscript
- return generate({ input: "Create a short executable plan" }) {
- return {
- steps list[json]
- }
+ return generate({ input: "Create a short executable plan" }) -> {
+ steps list[json]
  }
  ```

@@ -280,11 +278,9 @@ func answer(input) {
  use Requirements < 4k
  use input.question

- return generate({ input: "Answer from the referenced file" }) {
- return {
- ok boolean
- text string
- }
+ return generate({ input: "Answer from the referenced file" }) -> {
+ ok boolean
+ text string
  }
  }
  ```
@@ -213,12 +213,10 @@ agent Reflector {
  return generate({
  input: "Extract one reusable lesson from this run.",
  attempts: 3
- }) {
- return {
- insight string
- mistake string
- next_rule string
- }
+ }) -> {
+ insight string
+ mistake string
+ next_rule string
  }
  }
  }
@@ -270,12 +268,10 @@ main agent Learner {
  result = generate({
  input: "Answer the goal using relevant past lessons.",
  attempts: 3
- }) {
- return {
- ok boolean
- answer string
- reason string
- }
+ }) -> {
+ ok boolean
+ answer string
+ reason string
  }

  reflection = reflect({
@@ -299,10 +295,8 @@ main agent Learner {
  return generate({
  input: "Extract one reusable lesson from this run.",
  attempts: 3
- }) {
- return {
- insight string
- }
+ }) -> {
+ insight string
  }
  }
  }
@@ -8,13 +8,13 @@ AgentScript has variables, functions, loops, and agent calls, but its primary pu

  AgentScript's control flow serves prompt context construction.

- Ordinary statements organize data, call tools, call agents, and update intermediate state. LLM calls happen only through `generate(...) { return ... }`, and the context visible to `generate` must be declared explicitly with `use`.
+ Ordinary statements organize data, call tools, call agents, and update intermediate state. LLM calls happen only through `generate(...) -> { ... }`, and the context visible to `generate` must be declared explicitly with `use`.

  The core objects are:

  - **Data**: ordinary variables, JSON values, lists, file imports, tool observations, agent return values.
  - **Context Source**: a prompt context origin declared by `use expr < budget`.
- - **Generation Site**: an LLM call declared by `generate({ input, limit, attempts, debug }) { return shape }`.
+ - **Generation Site**: an LLM call declared by `generate({ input, limit, attempts, debug }) -> shape`.
  - **Boundary**: context visibility boundaries formed by agent, function, and block scopes.

  ## Semantics of `use`
@@ -33,12 +33,10 @@ func answer(question, scratch) {
  return generate({
  input: "Answer the question using collected facts"
  limit: 800
- }) {
- return {
- ok boolean
- text string
- error string
- }
+ }) -> {
+ ok boolean
+ text string
+ error string
  }
  }
  ```
@@ -57,10 +55,8 @@ main func(input) {
  scratch.add({ fact: "A" })
  scratch.add({ fact: "B" })

- return generate({ input: "Answer from scratch" }) {
- return {
- text string
- }
+ return generate({ input: "Answer from scratch" }) -> {
+ text string
  }
  }
  ```
@@ -108,10 +104,8 @@ func caller(input) {

  func helper(input) {
  use input.detail
- return generate({ input: "Work on detail" }) {
- return {
- ok boolean
- }
+ return generate({ input: "Work on detail" }) -> {
+ ok boolean
  }
  }
  ```
@@ -145,10 +139,8 @@ Blocks (`if`, `repeat`, `loop`, `for`) create child scopes. `use` declarations i
  if condition {
  temp = compute(input)
  use temp
- result = generate({ input: "Use temp" }) {
- return {
- ok boolean
- }
+ result = generate({ input: "Use temp" }) -> {
+ ok boolean
  }
  }
  ```
@@ -205,11 +197,9 @@ Source labels help models understand context and help humans audit prompts.
  Comes from the `input` field of `generate(...)` options.

  ```agentscript
- generate({ input: "Answer the question using only collected facts" }) {
- return {
- ok boolean
- text string
- }
+ generate({ input: "Answer the question using only collected facts" }) -> {
+ ok boolean
+ text string
  }
  ```

@@ -1,6 +1,6 @@
  # AgentScript Language

- This document describes the current AgentScript v1.0.0 language.
+ This document describes AgentScript v0.1.x.

  ## Design principles

@@ -27,11 +27,9 @@ main agent Assistant {
  question string
  }) {
  use input.question
- return generate({ input: "Answer the question" }) {
- return {
- ok boolean
- answer string
- }
+ return generate({ input: "Answer the question" }) -> {
+ ok boolean
+ answer string
  }
  }
  }
@@ -140,13 +138,11 @@ Runtime values are JSON-oriented:
  Shapes are used for input validation and `generate` output validation:

  ```agentscript
- return generate({ input: "Extract facts" }) {
- return {
- ok boolean
- title string
- items list[json]
- meta json
- }
+ return generate({ input: "Extract facts" }) -> {
+ ok boolean
+ title string
+ items list[json]
+ meta json
  }
  ```

@@ -189,12 +185,10 @@ answer = generate({
  limit: 800
  attempts: 3
  debug: true
- }) {
- return {
- ok boolean
- answer string
- reason string
- }
+ }) -> {
+ ok boolean
+ answer string
+ reason string
  }
  ```

@@ -204,7 +198,7 @@ answer = generate({
  - `limit` is the generation budget (number or `2k` style). Optional.
  - `attempts` controls retry for JSON parse errors or shape mismatch. Optional, defaults to 1.
  - `debug` prints the full prompt to stderr. Optional, defaults to false.
- - The `return { ... }` block declares the expected output shape.
+ - The `-> { ... }` block declares the expected output shape.
  - Provider errors (auth, network, timeout, missing model) fail directly without retry.
  - Shape validation includes coercion (e.g. `"true"` -> `true`, `"42"` -> `42`).

@@ -371,11 +365,9 @@ import file Config from "./config.json"
  func answer(input) {
  use Requirements < 4k
  use Config
- return generate({ input: "Answer from the referenced file." }) {
- return {
- ok boolean
- answer string
- }
+ return generate({ input: "Answer from the referenced file." }) -> {
+ ok boolean
+ answer string
  }
  }
  ```
@@ -466,7 +458,7 @@ The interpreter is a tree-walking evaluator. There is no IR, bytecode, or compil

  ## Non-goals

- AgentScript v1.0.0 does not provide:
+ AgentScript v0.1.x does not provide:

  - A general workflow engine.
  - General parallel execution syntax.
@@ -16,14 +16,12 @@ main agent ChangelogWriter {
  use input.diff_path
  use diff < 10k

- return generate({ input: "Write a changelog from this git diff", limit: 1200 }) {
- return {
- title string
- highlights list[string]
- breaking_changes list[string]
- fixes list[string]
- notes string
- }
+ return generate({ input: "Write a changelog from this git diff", limit: 1200 }) -> {
+ title string
+ highlights list[string]
+ breaking_changes list[string]
+ fixes list[string]
+ notes string
  }
  }
  }
@@ -17,13 +17,11 @@ main agent ApiExtractor {
  use input.url
  use response < 8k

- return generate({ input: "Extract normalized data from the API response", limit: 1200 }) {
- return {
- records list[json]
- fields list[string]
- warnings list[string]
- summary string
- }
+ return generate({ input: "Extract normalized data from the API response", limit: 1200 }) -> {
+ records list[json]
+ fields list[string]
+ warnings list[string]
+ summary string
  }
  }
  }
@@ -26,13 +26,11 @@ main agent CodeReviewAssistant {
  use todos < 4k
  use fixmes < 4k

- return generate({ input: "Turn TODO and FIXME scan results into prioritized repair suggestions", limit: 1200 }) {
- return {
- summary string
- findings list[string]
- suggested_fixes list[string]
- next_steps list[string]
- }
+ return generate({ input: "Turn TODO and FIXME scan results into prioritized repair suggestions", limit: 1200 }) -> {
+ summary string
+ findings list[string]
+ suggested_fixes list[string]
+ next_steps list[string]
  }
  }
  }
@@ -16,13 +16,11 @@ main agent FileSummarizer {
  use input.path
  use content < 8k

- return generate({ input: "Summarize the file for a busy teammate", limit: 1000 }) {
- return {
- title string
- summary string
- key_points list[string]
- action_items list[string]
- }
+ return generate({ input: "Summarize the file for a busy teammate", limit: 1000 }) -> {
+ title string
+ summary string
+ key_points list[string]
+ action_items list[string]
  }
  }
  }
@@ -21,13 +21,11 @@ main agent MarkdownTranslator {
  use input.target_language
  use files < 4k

- return generate({ input: "Create a practical markdown translation plan", limit: 1000 }) {
- return {
- target_language string
- files list[string]
- glossary_notes list[string]
- instructions string
- }
+ return generate({ input: "Create a practical markdown translation plan", limit: 1000 }) -> {
+ target_language string
+ files list[string]
+ glossary_notes list[string]
+ instructions string
  }
  }
  }
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@rong/agentscript",
- "version": "0.1.0",
+ "version": "0.1.1",
  "description": "AgentScript context engineering language runtime",
  "main": "dist/index.js",
  "bin": {
@@ -47,9 +47,17 @@
  ],
  "author": "Rong Zhou",
  "license": "MIT",
+ "repository": {
+ "type": "git",
+ "url": "git+https://github.com/rongzhou/agentscript.git"
+ },
+ "bugs": {
+ "url": "https://github.com/rongzhou/agentscript/issues"
+ },
+ "homepage": "https://github.com/rongzhou/agentscript#readme",
  "type": "module",
  "engines": {
- "node": ">=25"
+ "node": ">=22.5"
  },
  "devDependencies": {
  "@types/node": "^25.6.2",
package/tutorials/cli.as CHANGED
@@ -12,11 +12,9 @@ main agent {
  use input.name
  use input.request

- return generate({ input: "Reply to the CLI user by name", limit: 300 }) {
- return {
- ok boolean
- message string
- }
+ return generate({ input: "Reply to the CLI user by name", limit: 300 }) -> {
+ ok boolean
+ message string
  }
  }
  }
@@ -75,12 +75,10 @@ main agent PlanAndExecute {
  use goal
  use results.summary < 2k

- return generate({ input: "Create the final answer from executed steps", limit: 800 }) {
- return {
- ok boolean
- text string
- error string
- }
+ return generate({ input: "Create the final answer from executed steps", limit: 800 }) -> {
+ ok boolean
+ text string
+ error string
  }
  }
  }
@@ -95,12 +93,10 @@ agent Planner {
  use input.problem
  use input.previous < 1k

- return generate({ input: "Create a three step plan", limit: 600 }) {
- return {
- step1 string
- step2 string
- step3 string
- }
+ return generate({ input: "Create a three step plan", limit: 600 }) -> {
+ step1 string
+ step2 string
+ step3 string
  }
  }
  }
@@ -125,12 +121,10 @@ agent Executor {

  use observation

- return generate({ input: "Report the result of this step", limit: 500 }) {
- return {
- ok boolean
- output json
- error string
- }
+ return generate({ input: "Report the result of this step", limit: 500 }) -> {
+ ok boolean
+ output json
+ error string
  }
  }
  }
@@ -145,11 +139,9 @@ agent Verifier {
  use input.step
  use input.result

- return generate({ input: "Verify this step result", limit: 300 }) {
- return {
- ok boolean
- reason string
- }
+ return generate({ input: "Verify this step result", limit: 300 }) -> {
+ ok boolean
+ reason string
  }
  }
  }
@@ -32,11 +32,9 @@ main agent ResearchAgent {
  use question
  use scratch.summary < 1k

- return generate({ input: "Choose the next search focus", limit: 300 }) {
- return {
- focus string
- why string
- }
+ return generate({ input: "Choose the next search focus", limit: 300 }) -> {
+ focus string
+ why string
  }
  }

@@ -62,11 +60,9 @@ main agent ResearchAgent {

  use raw

- return generate({ input: "Summarize the useful observation", limit: 400 }) {
- return {
- facts list[string]
- source string
- }
+ return generate({ input: "Summarize the useful observation", limit: 400 }) -> {
+ facts list[string]
+ source string
  }
  }

@@ -74,10 +70,8 @@ main agent ResearchAgent {
  use question
  use scratch.summary < 1k

- verdict = generate({ input: "Decide whether the observations are enough", limit: 200 }) {
- return {
- done boolean
- }
+ verdict = generate({ input: "Decide whether the observations are enough", limit: 200 }) -> {
+ done boolean
  }

  return verdict.done
@@ -87,12 +81,10 @@ main agent ResearchAgent {
  use question
  use scratch.summary < 2k

- return generate({ input: "Answer using only the observations", limit: 800 }) {
- return {
- ok boolean
- text string
- error string
- }
+ return generate({ input: "Answer using only the observations", limit: 800 }) -> {
+ ok boolean
+ text string
+ error string
  }
  }
  }
@@ -21,12 +21,10 @@ main agent SelfImprover {
  result = generate({
  input: "Answer the goal using any relevant lessons.",
  attempts: 3
- }) {
- return {
- ok boolean
- answer string
- reason string
- }
+ }) -> {
+ ok boolean
+ answer string
+ reason string
  }

  lesson = reflect({
@@ -51,10 +49,8 @@ main agent SelfImprover {
  return generate({
  input: "Extract one durable lesson that could improve a future run.",
  attempts: 3
- }) {
- return {
- insight string
- }
+ }) -> {
+ insight string
  }
  }
  }
  }