@rong/agentscript 0.1.1 → 0.1.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -2,6 +2,19 @@
 
 All notable changes to AgentScript will be documented in this file.
 
+## 0.1.3 - 2026-05-08
+
+### Added
+
+- Added final expression return for functions.
+- Added literal context labels with `use context as label`.
+- Added agent role and description to prompt identity construction.
+- Added trace and built context output for context labels.
+
+### Changed
+
+- Updated README, examples, and language references for labeled context usage.
+
 ## 1.0.0 - 2026-05-07
 
 ### Added
package/README.md CHANGED
@@ -1,13 +1,13 @@
 # AgentScript
 
 > **Agent context as code.**
-> `use` declares what the model can see.
-> `generate` defines the only LLM call site and its return shape.
+> `use` declares what the model can see, with optional labels for prompt sections.
+> `generate` defines the only LLM call site and optional output shape.
 > Zero runtime dependencies. TypeScript-powered.
 
 ```agentscript
-use scratch.summary < 2k
-return generate({ input: "Answer from observations" }) -> {
+use scratch.summary < 2k as observations
+generate({ input: "Answer from observations" }) -> {
   ok boolean
   text string
 }
@@ -61,10 +61,10 @@ main agent FileSummarizer {
 
   main func(input { path string }) {
     content = File.read({ path: input.path })
-    use input.path
-    use content < 8k
+    use input.path as source path
+    use content < 8k as file content
 
-    return generate({
+    generate({
      input: "Summarize the file for a busy teammate"
      limit: 1000
    }) -> {
@@ -93,7 +93,7 @@ Expected output (with mock LLM):
 
 With `--real-llm`, the fields are populated by the model.
 
-The block after `generate` is a return schema, not ordinary object construction.
+The optional block after `generate` is an output schema, not ordinary object construction.
 
 ## What problem it solves
 
@@ -113,7 +113,7 @@ It is a small language for one thing:
 
 > making LLM prompt context explicit, scoped, typed, traceable, and compilable.
 
-It gives you two things that general-purpose languages don't: a first-class `use` keyword that declares *which* data enters the LLM prompt, and a first-class `generate` expression that defines *what* the LLM must return. Everything else — variables, functions, agents, imports, loops — exists to support this core workflow. Scopes enforce context boundaries naturally: what's `use`d in one function stays there; child scopes inherit but never leak upward.
+It gives you two things that general-purpose languages don't: a first-class `use` keyword that declares *which* data enters the LLM prompt and what role it plays via `as label`, and a first-class `generate` expression that defines an LLM call with an optional output contract. Everything else — variables, functions, agents, imports, loops — exists to support this core workflow. Scopes enforce context boundaries naturally: what's `use`d in one function stays there; child scopes inherit but never leak upward. Functions can also return their final top-level expression directly, which keeps typical LLM workflows concise.
 
 ## How it works
 
@@ -161,7 +161,7 @@ AgentScript doesn't hardcode agent patterns as keywords. You compose them from t
 | **Reflection / Self-Improvement** | `tutorials/self-improve.as` | Query past lessons → generate → reflect → persist new lessons |
 | **Multi-Agent** | `tutorials/plan-execute.as` | Independent agents with isolated context boundaries |
 
-Every pattern is explicit — which data enters the prompt, which tools each agent can use, and which output shape each LLM call must satisfy.
+Every pattern is explicit — which data enters the prompt, which tools each agent can use, and which output shape each LLM call must satisfy when one is declared.
 
 ## Language at a glance
 
@@ -178,10 +178,10 @@ main agent ResearchAgent {
   main func(input {
     question string
   }) {
-    use input.question
+    use input.question as user question
 
    scratch = []
-    use scratch.summary < 2k
+    use scratch.summary < 2k as observations
 
    done = false
    loop until done < 6 {
@@ -191,13 +191,13 @@ main agent ResearchAgent {
      done = enough(input.question, scratch)
    }
 
-    return answer(input.question, scratch)
+    answer(input.question, scratch)
  }
 
  func answer(question, scratch) {
-    use question
-    use scratch.summary < 2k
-    return generate({ input: "Answer using only the observations" }) -> {
+    use question as user question
+    use scratch.summary < 2k as observations
+    generate({ input: "Answer using only the observations" }) -> {
      ok boolean
      text string
      error string
@@ -208,17 +208,18 @@ main agent ResearchAgent {
 
 ## Key ideas
 
-1. **`use` is explicit context** — nothing enters the LLM prompt unless `use`d
-2. **`generate` is the only LLM call site** — with a required input instruction and a return shape
-3. **Scope is context boundary** — functions, agents, and blocks isolate prompt visibility
-4. **Tools, memory, and files are imported resources** with auditable access
-5. **Trace is built in** every `generate` and `use` is recorded for debugging
+1. **`use` is explicit context** — nothing enters the LLM prompt unless `use`d; `as label` names the context section
+2. **`generate` is the only LLM call site** — with a required input instruction and optional output shape
+3. **Final expression return keeps flows concise** — a function returns its final top-level expression
+4. **Scope is context boundary** — functions, agents, and blocks isolate prompt visibility
+5. **Tools, memory, and files are imported resources** with auditable access
+6. **Trace is built in** — every `generate` and `use` is recorded for debugging
 
 ## Why not just Python or TypeScript?
 
 | | Python / TypeScript | AgentScript |
 |---|---|---|
-| Context management | Implicit (string concatenation, array append) | Explicit (`use` declaration) |
+| Context management | Implicit (string concatenation, array append) | Explicit (`use` declaration, optional `as label`) |
 | LLM call site | Anywhere in the code | One `generate` expression |
 | Context isolation | Manual discipline | Scope-inherited, auto-isolated |
 | Trace / audit | External tooling needed | Built-in, per-call |
@@ -176,13 +176,26 @@ class Parser {
     const start = this.consume("use").range.start;
     const value = this.parseLogicalOr();
     const budget = this.match("<") ? this.parseBudget() : undefined;
+    const label = this.match("as") ? this.parseUseLabel() : undefined;
     return {
       kind: "UseStmt",
       value,
       budget,
+      label,
       range: { start, end: this.previous().range.end }
     };
   }
+  parseUseLabel() {
+    const tokens = [];
+    const line = this.previous().range.start.line;
+    while (!this.isAtEnd() && !this.check("}") && this.peek().range.start.line === line) {
+      tokens.push(this.advance());
+    }
+    if (tokens.length === 0) {
+      throw this.error("Expected context label after 'as'");
+    }
+    return stringifyLabelTokens(tokens);
+  }
   parseConfigValue(key) {
     if (key === "model") {
       return this.parsePostfix();
@@ -408,8 +421,7 @@ class Parser {
     this.consume("(");
     const options = this.parseGenerateOptions();
     this.consume(")");
-    this.consume("->");
-    const returnShape = this.parseShapeObject();
+    const returnShape = this.match("->") ? this.parseShapeObject() : undefined;
     return {
       kind: "GenerateExpr",
       options,
@@ -657,3 +669,15 @@ class Parser {
 function isImportResourceKind(value) {
   return IMPORT_RESOURCE_KINDS.has(value);
 }
+function stringifyLabelTokens(tokens) {
+  let label = tokens[0]?.value ?? "";
+  for (let index = 1; index < tokens.length; index += 1) {
+    const previous = tokens[index - 1];
+    const current = tokens[index];
+    if (current.range.start.column > previous.range.end.column) {
+      label += " ";
+    }
+    label += current.value;
+  }
+  return label;
+}
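The new `stringifyLabelTokens` helper rebuilds a multi-word label from raw tokens, inserting a space only where the tokens were separated on the source line. A minimal standalone sketch of that logic (the token objects here are hypothetical stand-ins for the parser's real tokens, which carry a value plus start/end columns):

```javascript
// Sketch of the label-joining logic added above, with mock tokens.
function stringifyLabelTokens(tokens) {
  let label = tokens[0]?.value ?? "";
  for (let index = 1; index < tokens.length; index += 1) {
    const previous = tokens[index - 1];
    const current = tokens[index];
    // Emit a single space only when the tokens were separated in the source.
    if (current.range.start.column > previous.range.end.column) {
      label += " ";
    }
    label += current.value;
  }
  return label;
}

// Tokens for the label in "use content < 8k as file content":
const tokens = [
  { value: "file", range: { start: { column: 21 }, end: { column: 25 } } },
  { value: "content", range: { start: { column: 26 }, end: { column: 33 } } },
];
console.log(stringifyLabelTokens(tokens)); // "file content"
```

Adjacent tokens (no column gap) are joined without a space, so punctuation-split labels collapse back into one word.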
@@ -19,7 +19,8 @@ export async function callAnthropic(request, parsed, options, fetchImpl, timeout
     "x-api-key": apiKey,
     "anthropic-version": "2023-06-01",
   });
-  return parseJsonText(readAnthropicText(response));
+  const text = readAnthropicText(response);
+  return request.returnShape ? parseJsonText(text) : text;
 }
 function readAnthropicText(value) {
   if (!value || typeof value !== "object" || Array.isArray(value) || !Array.isArray(value.content)) {
@@ -4,16 +4,19 @@ export async function callOllama(request, parsed, fetchImpl, timeoutMs, baseUrl)
     model: parsed.model,
     stream: false,
     think: false,
-    format: request.builtContext.returnSchema,
     messages: [
       { role: "system", content: request.builtContext.system },
       { role: "user", content: request.builtContext.finalUserMessage },
     ],
   };
+  if (request.builtContext.returnSchema) {
+    body.format = request.builtContext.returnSchema;
+  }
   const maxTokens = budgetToTokenLimit(request);
   if (maxTokens) {
     body.options = { num_predict: maxTokens };
   }
   const response = await postJson(fetchImpl, `${baseUrl}/api/chat`, body, timeoutMs);
-  return parseJsonText(readPath(response, ["message", "content"]));
+  const text = readPath(response, ["message", "content"]);
+  return request.returnShape ? parseJsonText(text) : text;
 }
@@ -11,15 +11,17 @@ export async function callOpenAI(request, parsed, options, fetchImpl, timeoutMs,
     { role: "system", content: request.builtContext.system },
     { role: "user", content: request.builtContext.finalUserMessage },
   ],
-  response_format: {
+  };
+  if (request.builtContext.returnSchema) {
+    body.response_format = {
       type: "json_schema",
       json_schema: {
         name: "agentscript_generate",
         strict: true,
         schema: request.builtContext.returnSchema,
       },
-    },
-  };
+    };
+  }
   const maxTokens = budgetToTokenLimit(request);
   if (maxTokens) {
     body.max_completion_tokens = maxTokens;
@@ -27,5 +29,6 @@ export async function callOpenAI(request, parsed, options, fetchImpl, timeoutMs,
   const response = await postJson(fetchImpl, `${baseUrl}/chat/completions`, body, timeoutMs, {
     authorization: `Bearer ${apiKey}`,
   });
-  return parseJsonText(readPath(response, ["choices", 0, "message", "content"]));
+  const text = readPath(response, ["choices", 0, "message", "content"]);
+  return request.returnShape ? parseJsonText(text) : text;
 }
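All three providers now end with the same pattern: parse the model's reply as JSON only when the `generate` expression declared an output shape, and otherwise hand back the raw text. A tiny standalone sketch of that shared tail (`finishResponse` and this `parseJsonText` are hypothetical stand-ins, not the package's exports):

```javascript
// Sketch of the shared provider tail: shape declared → structured JSON,
// no shape → free-form text passes through untouched.
function parseJsonText(text) {
  return JSON.parse(text);
}

function finishResponse(request, text) {
  return request.returnShape ? parseJsonText(text) : text;
}

console.log(finishResponse({ returnShape: { ok: "boolean" } }, '{"ok":true}')); // { ok: true }
console.log(finishResponse({ returnShape: undefined }, "plain prose answer")); // "plain prose answer"
```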
@@ -2,7 +2,7 @@ import { sanitizeForJson } from "../../runtime/json.js";
 import { buildValueFromShape } from "../../runtime/shape.js";
 export class MockLlmProvider {
   async generate(request) {
-    return buildValueFromShape(request.returnShape);
+    return request.returnShape ? buildValueFromShape(request.returnShape) : null;
   }
 }
 export class MockToolProvider {
@@ -4,7 +4,7 @@ export function buildContext(input) {
   const instruction = sanitizeForJson(input.instruction);
   const instructionText = renderJson(instruction);
   const system = buildSystemPrompt(input.agentName, input.identity);
-  const returnSchema = shapeToSchema(input.returnShape);
+  const returnSchema = input.returnShape ? shapeToSchema(input.returnShape) : undefined;
   return {
     agentName: input.agentName,
     model: input.model,
@@ -51,6 +51,7 @@ function buildContextItem(item, index) {
   return {
     index,
     source: item.source,
+    label: item.label,
     value: clipped.value,
     text: clipped.text,
     budget: item.budget,
@@ -60,8 +61,15 @@ function buildContextItem(item, index) {
   };
 }
 function buildSystemPrompt(agentName, identity) {
-  const lines = [`You are ${agentName}.`];
+  const role = typeof identity.role === "string" ? identity.role : agentName;
+  const lines = [`You are ${role}.`];
+  if (typeof identity.description === "string") {
+    lines.push(identity.description);
+  }
   for (const [key, value] of Object.entries(identity)) {
+    if (key === "role" || key === "description") {
+      continue;
+    }
     lines.push(`${key}: ${renderJson(value)}`);
   }
   return lines.join("\n");
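The hunk above is what the changelog's "agent role and description" entry refers to: `identity.role` replaces the agent name in the opening line, `identity.description` becomes the second line, and the remaining identity keys are appended as `key: value` pairs. A standalone sketch of that behavior (using `JSON.stringify` as a stand-in for the package's `renderJson`):

```javascript
// Sketch of the system-prompt construction above.
function buildSystemPrompt(agentName, identity) {
  // Prefer an explicit role; fall back to the agent's name.
  const role = typeof identity.role === "string" ? identity.role : agentName;
  const lines = [`You are ${role}.`];
  if (typeof identity.description === "string") {
    lines.push(identity.description);
  }
  for (const [key, value] of Object.entries(identity)) {
    // role and description were already consumed above.
    if (key === "role" || key === "description") {
      continue;
    }
    lines.push(`${key}: ${JSON.stringify(value)}`);
  }
  return lines.join("\n");
}

console.log(buildSystemPrompt("ResearchAgent", {
  role: "a careful research assistant",
  description: "Answer only from the provided observations.",
  tone: "terse",
}));
// You are a careful research assistant.
// Answer only from the provided observations.
// tone: "terse"
```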
@@ -71,12 +79,15 @@ function buildFinalUserMessage(context, instructionText, returnSchema) {
   if (context.length > 0) {
     sections.push("Context:");
     for (const item of context) {
-      const label = item.source ? `${item.source}:\n` : "";
-      sections.push(`[${item.index}] ${label}${item.text}`);
+      const label = item.label ?? String(item.index);
+      const source = item.source ? `source: ${item.source}\n` : "";
+      sections.push(`[${label}]\n${source}${item.text}`);
     }
   }
   sections.push("Instruction:", instructionText);
-  sections.push("Return JSON matching this schema:", renderJson(returnSchema));
+  if (returnSchema) {
+    sections.push("Return JSON matching this schema:", renderJson(returnSchema));
+  }
   return sections.join("\n");
 }
 function renderJson(value) {
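The context-rendering change above means each `use`d value now becomes a bracketed section headed by its label (falling back to its index), with the source path on its own line. A minimal standalone sketch of just that loop (`renderContext` is a hypothetical extraction, not a package export):

```javascript
// Sketch of the labeled context rendering in buildFinalUserMessage above.
function renderContext(context) {
  const sections = ["Context:"];
  for (const item of context) {
    const label = item.label ?? String(item.index);
    const source = item.source ? `source: ${item.source}\n` : "";
    sections.push(`[${label}]\n${source}${item.text}`);
  }
  return sections.join("\n");
}

const rendered = renderContext([
  { index: 0, source: "scratch.summary", label: "observations", text: "Fact A" },
  { index: 1, source: "input.question", label: undefined, text: "What is A?" },
]);
console.log(rendered);
// Context:
// [observations]
// source: scratch.summary
// Fact A
// [1]
// source: input.question
// What is A?
```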
@@ -145,6 +156,7 @@ export function builtContextToJson(context) {
   context: context.context.map((item) => ({
     index: item.index,
     source: item.source ?? null,
+    label: item.label ?? null,
     value: item.value,
     text: item.text,
     budget: budgetToJson(item.budget),
@@ -153,7 +165,7 @@ export function builtContextToJson(context) {
     clippedSize: item.clippedSize,
   })),
   instruction: context.instruction,
-  returnSchema: context.returnSchema,
+  returnSchema: context.returnSchema ?? null,
   budget: budgetToJson(context.budget),
   finalUserMessage: context.finalUserMessage,
 };
@@ -80,6 +80,7 @@ export class Evaluator {
   for (const item of scope.visibleUses()) {
     uses.push({
       source: item.source,
+      label: item.label,
       value: await this.evaluate(item.expr, item.scope),
       budget: item.budget,
     });
@@ -58,8 +58,10 @@ export class GenerateRuntime {
       continue;
     }
     try {
-      const result = coerceValueToShape(rawResult, expr.returnShape);
-      validateValueAgainstShape(result, expr.returnShape, expr.range);
+      const result = expr.returnShape ? coerceValueToShape(rawResult, expr.returnShape) : rawResult;
+      if (expr.returnShape) {
+        validateValueAgainstShape(result, expr.returnShape, expr.range);
+      }
       this.trace.push({
         kind: "generate",
         data: {
@@ -151,11 +151,8 @@ class Interpreter {
   this.callDepth += 1;
   const scope = await this.buildFunctionScope(agent, fn, args);
   try {
-    const signal = await this.executeBlock(fn.body, scope);
-    if (!signal) {
-      throw new RuntimeError(`Function '${agent.name}.${name}' completed without return`, fn.range);
-    }
-    return signal.value;
+    const signal = await this.executeBlock(fn.body, scope, true);
+    return signal?.value ?? null;
   }
   finally {
     this.callDepth -= 1;
@@ -194,8 +191,11 @@ class Interpreter {
   findFunction(agent, name) {
     return agent.functions.find((fn) => fn.name === name);
   }
-  async executeBlock(statements, scope) {
-    for (const stmt of statements) {
+  async executeBlock(statements, scope, allowFinalExpressionReturn = false) {
+    for (const [index, stmt] of statements.entries()) {
+      if (allowFinalExpressionReturn && index === statements.length - 1 && stmt.kind === "ExprStmt") {
+        return { kind: "return", value: await this.evaluator.evaluate(stmt.expr, scope) };
+      }
       const result = await this.executeStatement(stmt, scope);
       if (result)
         return result;
@@ -209,8 +209,11 @@ class Interpreter {
       return undefined;
     case "UseStmt": {
       const source = formatExpressionSource(stmt.value);
-      scope.addUse(stmt.value, source, stmt.budget);
-      this.trace.push({ kind: "use", data: { source, budget: budgetToJson(stmt.budget) } });
+      scope.addUse(stmt.value, source, stmt.budget, stmt.label);
+      this.trace.push({
+        kind: "use",
+        data: { source, label: stmt.label ?? null, budget: budgetToJson(stmt.budget) },
+      });
       return undefined;
     }
     case "AssignStmt": {
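The interpreter hunks above implement the "final expression return" changelog entry: only a function body (not nested blocks) sets `allowFinalExpressionReturn`, and only a trailing expression statement becomes the return value. A standalone sketch of that rule with simplified statement objects standing in for the AST:

```javascript
// Sketch of the final-expression-return rule above (synchronous, mock AST).
function executeBlock(statements, allowFinalExpressionReturn = false) {
  for (const [index, stmt] of statements.entries()) {
    // Only a function body opts in, and only the last statement qualifies.
    if (allowFinalExpressionReturn && index === statements.length - 1 && stmt.kind === "ExprStmt") {
      return { kind: "return", value: stmt.value };
    }
    if (stmt.kind === "ReturnStmt") {
      return { kind: "return", value: stmt.value };
    }
    // Other statement kinds execute for effect only.
  }
  return undefined;
}

const body = [{ kind: "AssignStmt" }, { kind: "ExprStmt", value: 42 }];
console.log(executeBlock(body, true)?.value ?? null); // 42
console.log(executeBlock(body, false)?.value ?? null); // null
```

This also explains the removal of the "completed without return" error: a function with no trailing expression and no explicit `return` now yields `null` instead of throwing.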
@@ -29,8 +29,8 @@ export class RuntimeScope {
     }
     this.define(name, value);
   }
-  addUse(expr, source, budget) {
-    this.uses.push({ expr, source, budget, scope: this });
+  addUse(expr, source, budget, label) {
+    this.uses.push({ expr, source, budget, label, scope: this });
   }
   setConfig(name, value) {
     this.config.set(name, value);
@@ -171,6 +171,9 @@ class Analyzer {
       if (stmt.budget && stmt.budget.amount <= 0) {
         this.error("INVALID_BUDGET", "Budget amount must be greater than 0", stmt.range);
       }
+      if (stmt.label && RESERVED_CONTEXT_LABELS.has(stmt.label)) {
+        this.error("RESERVED_CONTEXT_LABEL", `Context label '${stmt.label}' is reserved`, stmt.range);
+      }
       break;
     case "AssignStmt":
       this.checkAssignment(stmt, scope);
@@ -366,7 +369,9 @@ class Analyzer {
     this.checkExpression(expr.options.input, scope);
   }
   this.checkGenerateInput(expr);
-  this.checkShapeObject(expr.returnShape);
+  if (expr.returnShape) {
+    this.checkShapeObject(expr.returnShape);
+  }
   this.checkGenerateConfig("model", expr, scope);
   this.checkGenerateConfig("role", expr, scope);
   this.checkGenerateConfig("description", expr, scope);
@@ -499,6 +504,7 @@ class Analyzer {
 const IMPORTED_BINDING_KINDS = new Set(["tool", "llm", "file", "agent", "memory"]);
 const NON_CONTEXT_BINDING_KINDS = new Set(["tool", "llm", "agent", "function", "memory"]);
 const VALID_MEMORY_METHODS = new Set(["add", "query"]);
+const RESERVED_CONTEXT_LABELS = new Set(["system", "assistant", "tool", "developer"]);
 function functionBinding(agentName, fn) {
   return {
     kind: "function",
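The analyzer hunks above reject context labels that collide with chat message roles at analysis time, before any prompt is built. A standalone sketch of that check (`checkLabel` is a hypothetical extraction returning the diagnostic instead of calling `this.error`):

```javascript
// Sketch of the reserved-label check added to the analyzer above.
const RESERVED_CONTEXT_LABELS = new Set(["system", "assistant", "tool", "developer"]);

function checkLabel(label) {
  if (label && RESERVED_CONTEXT_LABELS.has(label)) {
    return { code: "RESERVED_CONTEXT_LABEL", message: `Context label '${label}' is reserved` };
  }
  return undefined; // anything else, including no label, is fine
}

console.log(checkLabel("system")?.code); // "RESERVED_CONTEXT_LABEL"
console.log(checkLabel("observations")); // undefined
```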
@@ -32,7 +32,7 @@ func answer(question, scratch) {
   use question
   use scratch.summary < 2k
 
-  return generate({
+  generate({
     input: "Answer the question using collected facts"
     limit: 800
   }) -> {
@@ -61,7 +61,7 @@ main func(input) {
   scratch.add({ fact: "A" })
   scratch.add({ fact: "B" })
 
-  return generate({ input: "Answer from scratch" }) -> {
+  generate({ input: "Answer from scratch" }) -> {
     text string
   }
 }
@@ -119,12 +119,12 @@ Scopes in AgentScript are not only variable-visibility rules; they are also prompt context
 ```agentscript
 func caller(input) {
   use input.goal
-  return helper(input)
+  helper(input)
 }
 
 func helper(input) {
   use input.detail
-  return generate({ input: "Work on detail" }) -> {
+  generate({ input: "Work on detail" }) -> {
     ok boolean
   }
 }
@@ -235,10 +235,10 @@ The instruction is the local task for this LLM call; it should not be mixed into long-term context.
 
 ### Output contract
 
-The output contract comes from the `return shape`.
+The output contract comes from the optional `-> { ... }` shape on the `generate` expression.
 
 ```agentscript
-return {
+generate({ input: "Answer" }) -> {
   ok boolean
   text string
   error string
@@ -260,7 +260,7 @@ use scratch.summary < 2k
 `generate({ limit: budget }) { ... }` is the generation budget.
 
 ```agentscript
-return generate({
+generate({
   input: "Summarize"
   limit: 500
 }) -> {