@rong/agentscript 0.1.3 → 0.1.5

# `generate`

This document defines the semantics of `generate` in AgentScript: generation sites, prompt layers, agent identity, selected context, output contracts, generation configuration, retries, validation, debug output, and trace output.

For context selection and labels, see [`use ... as ...`](./use-as.md). For the broader design map, see [Context Engineering](./context-engineering.md).

## Purpose

`generate` is the only LLM call site in AgentScript.

```agentscript
generate({ input: "Answer using the selected context." }) -> {
  ok boolean
  answer string
}
```

Ordinary code can compute values, call tools, call agents, and organize state. Only `generate` asks the current model to produce new output.

## Recommended syntax

```agentscript
generate({
  input: "Classify the issue",
  max_output: 300,
  attempts: 2,
  temperature: 0.2,
  think: "medium",
  strict: true,
  debug: false
}) -> {
  category string
  confidence number
}
```

The output shape after `->` is optional:

```agentscript
generate({ input: "Draft a response." })
```

When no shape is declared, the runtime should not inject a schema or require structured JSON output. Free-form generation is allowed but not recommended for agent workflows; prefer an explicit output shape so retries, validation, trace, and downstream agent calls stay auditable.

## Configuration fields

| Field | Required | Type | Meaning |
|---|---:|---|---|
| `input` | yes | `string` | Per-generation task instruction. |
| `max_output` | no | `number` / budget literal | Requested output generation budget. |
| `attempts` | no | `number` | Maximum total attempts to obtain a valid structured result, including the first. |
| `temperature` | no | `number` | Sampling temperature passed to providers that support it. |
| `think` | no | `boolean` / `string` | Request model reasoning / thinking mode. |
| `strict` | no | `boolean` | Controls whether output shape validation is strict. |
| `debug` | no | `boolean` | Enables prompt / trace debug output for this generation. |

Recommended defaults:

```text
max_output: provider/runtime default
attempts: 1
temperature: provider/model default
think: false
strict: false
debug: false
```

The fields fall into three groups:

```text
Generation instruction:
  input

Generation provider hints:
  max_output
  temperature
  think

AgentScript runtime behavior:
  attempts
  strict
  debug
```

## Prompt construction

A `generate` prompt has four conceptual layers.

### Agent identity

Agent identity comes from the current agent configuration:

```agentscript
role "Senior Researcher"
description "Answer questions with search and structured reasoning."
```

It is rendered into the provider system prompt as identity:

```text
You are Senior Researcher.
Answer questions with search and structured reasoning.
```

Agent `role` is an AgentScript identity concept. It is not the same as a provider message role such as `system`, `user`, or `assistant`.
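
Identity rendering can be sketched in Python; `render_identity` is a hypothetical name for illustration, not a runtime API:

```python
def render_identity(role: str, description: str) -> str:
    """Render AgentScript agent identity into a provider system prompt.

    Hypothetical sketch: the real runtime may format identity differently.
    """
    lines = [f"You are {role}."]
    if description:
        lines.append(description)
    return "\n".join(lines)

print(render_identity(
    "Senior Researcher",
    "Answer questions with search and structured reasoning.",
))
```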

### Selected context

Selected context comes from visible `use` declarations:

```agentscript
use input.question as user question
use scratch.summary max 2k as observations
```

A rendered prompt section may look like:

```text
Context:
[user question]
source: input.question
What is AgentScript?

[observations]
source: scratch.summary
[
  { "fact": "..." }
]
```

Context labels organize prompt sections. They do not create provider messages.
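
The rendering above can be sketched as follows; the `render_context` helper and the entry keys (`label`, `source`, `text`) are assumed names, not the runtime's actual internals:

```python
def render_context(entries: list[dict]) -> str:
    """Render selected `use` entries into one labeled Context section."""
    blocks = []
    for e in entries:
        # Each entry becomes a labeled block, not a provider message.
        blocks.append(f"[{e['label']}]\nsource: {e['source']}\n{e['text']}")
    return "Context:\n" + "\n\n".join(blocks)

section = render_context([
    {"label": "user question", "source": "input.question",
     "text": "What is AgentScript?"},
    {"label": "observations", "source": "scratch.summary",
     "text": '[\n  { "fact": "..." }\n]'},
])
print(section)
```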

### Instruction

The instruction comes from the `input` field of `generate(...)`:

```agentscript
generate({ input: "Answer using only the selected context." }) -> {
  answer string
}
```

The instruction is the local task for this one LLM call. It is distinct from long-lived context.

### Output contract

The output contract comes from the optional shape after `->`:

```agentscript
generate({ input: "Answer" }) -> {
  ok boolean
  answer string
  citations list[string]
}
```

The runtime asks the provider for structured output when possible and validates the returned value against the shape.
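
One plausible way to request structured output is to translate the shape into a JSON-Schema-style object. This mapping is a sketch under assumed names, not the runtime's defined encoding:

```python
# Hypothetical mapping from AgentScript scalar types to JSON Schema types.
TYPE_MAP = {"string": "string", "number": "number", "boolean": "boolean"}

def shape_to_schema(shape: dict[str, str]) -> dict:
    """Translate an output shape into a JSON-Schema-style object."""
    props = {}
    for name, t in shape.items():
        if t.startswith("list[") and t.endswith("]"):
            props[name] = {"type": "array",
                           "items": {"type": TYPE_MAP[t[5:-1]]}}
        else:
            props[name] = {"type": TYPE_MAP[t]}
    return {"type": "object", "properties": props,
            "required": list(shape), "additionalProperties": False}

schema = shape_to_schema({"ok": "boolean", "answer": "string",
                          "citations": "list[string]"})
```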

## `max_output`

`max_output` is the requested provider-side generation budget.

```agentscript
generate({
  input: "Answer briefly",
  max_output: 300
}) -> {
  answer string
}
```

Semantics:

```text
max_output = provider-side generation budget requested by AgentScript
```

It is separate from `use` input context budgets:

```agentscript
use docs.summary max 4k

generate({
  input: "Answer from the selected docs",
  max_output: 800
}) -> {
  answer string
}
```

Difference:

```text
use ... max 4k  = input context budget
max_output: 800 = output generation budget
```
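
The separation can be sketched: the `use` budget clips input context before the call, while `max_output` passes through as a provider output cap. The names here (`plan_call`, `max_output_tokens`) are hypothetical, and characters stand in for tokens:

```python
def plan_call(context_text: str, context_budget: int, max_output: int) -> dict:
    """Show where each budget applies (character-based toy clipping)."""
    return {
        # `use ... max N` clips the *input* context before prompting.
        "context": context_text[:context_budget],
        # `max_output` is forwarded as the provider's *output* cap.
        "provider_params": {"max_output_tokens": max_output},
    }

call = plan_call("long summary " * 1000, context_budget=4000, max_output=800)
```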

## `attempts`

`attempts` controls how many times the runtime may try to obtain a valid structured result. It is the maximum total number of attempts, including the first one.

```agentscript
generate({
  input: "Extract metadata",
  max_output: 500,
  attempts: 3
}) -> {
  title string
  tags list[string]
}
```

Retryable failures:

```text
JSON parse failed
shape validation failed
required field missing
type mismatch
strict mode violation
```

Non-retryable failures:

```text
provider auth error
network error
model not found
quota exceeded
timeout, unless runtime policy decides it is retryable
```

Default:

```text
attempts: 1
```
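
The retry behavior can be sketched as a loop that retries only validation-class failures; `call_model` and `validate` are stand-ins for the provider call and shape validator, not runtime API names:

```python
import json

class ValidationError(Exception):
    """Retryable failure: JSON parse or shape validation."""

def generate_with_attempts(call_model, validate, attempts: int = 1):
    """Try up to `attempts` total times; re-raise if all attempts fail.

    Non-retryable errors (auth, network, ...) are not caught here,
    so they propagate immediately.
    """
    last_error = None
    for _ in range(attempts):
        raw = call_model()
        try:
            return validate(raw)      # success: return structured value
        except ValidationError as e:
            last_error = e            # retryable: loop again if budget allows
    raise last_error

def validate(raw: str) -> dict:
    try:
        return json.loads(raw)
    except json.JSONDecodeError as e:
        raise ValidationError(str(e))

# A model that fails to produce JSON once, then succeeds:
outputs = iter(["not json", '{"title": "x", "tags": []}'])
result = generate_with_attempts(lambda: next(outputs), validate, attempts=3)
```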

## `temperature`

`temperature` is the sampling temperature.

```agentscript
generate({
  input: "Brainstorm alternatives",
  max_output: 1000,
  temperature: 0.7
}) -> {
  ideas list[string]
}
```

Semantics:

```text
If supported by the selected provider/model, pass through as sampling temperature.
If unsupported, the adapter may ignore, warn, or fail according to capability policy.
```

## `think`

`think` is a model reasoning / thinking mode request.

Recommended values:

```agentscript
think: false,
think: true,
think: "auto",
think: "low",
think: "medium",
think: "high"
```

Semantics:

```text
false     do not request thinking/reasoning mode
true      request provider default thinking/reasoning mode
"auto"    let the provider/model decide
"low"     request low-intensity reasoning
"medium"  request medium-intensity reasoning
"high"    request high-intensity reasoning
```

Example:

```agentscript
generate({
  input: "Analyze the tradeoffs",
  max_output: 1200,
  think: "high"
}) -> {
  decision string
  tradeoffs list[string]
  risks list[string]
}
```

`think` is a provider/model capability hint. Not every model guarantees support. If unsupported, the adapter may ignore, warn, or fail according to capability policy.

## Capability policy for provider hints

`temperature` and `think` are provider/model capability hints. Adapters may ignore, warn, or fail for unsupported hints according to runtime capability policy, but documentation should treat the default as:

```text
unsupported provider hints default to warn in debug mode and ignore otherwise
```
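
Under that default, an adapter sketch might filter hints as below; `apply_hints` and the parameter names are illustrative assumptions, not the runtime API:

```python
def apply_hints(supported: set[str], hints: dict, debug: bool = False) -> dict:
    """Drop provider hints the adapter does not support.

    Default policy sketch: warn when debug is on, silently ignore otherwise.
    """
    params = {}
    for name, value in hints.items():
        if name in supported:
            params[name] = value          # supported: pass through
        elif debug:
            print(f"warning: provider hint {name!r} unsupported; ignoring")
    return params

# A provider that supports temperature but not thinking mode:
params = apply_hints({"temperature"}, {"temperature": 0.2, "think": "high"})
```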

## `strict`

`strict` controls output shape validation.

```agentscript
generate({
  input: "Classify the issue",
  max_output: 300,
  strict: true
}) -> {
  category string
  confidence number
}
```

Default:

```text
strict: false
```

### `strict: false`

Allows limited coercion:

```text
"true" -> true
"false" -> false
"42" -> 42
"3.14" -> 3.14
```

Still required:

```text
output is parseable
required fields exist
unsafe conversions fail
```

### `strict: true`

Strict validation:

```text
coercion is forbidden
required fields must exist
field types must match exactly
extra fields are rejected
shape mismatch triggers retry if attempts > 1
```

In one sentence:

```text
strict is AgentScript runtime control over the output contract.
```
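
Both modes can be sketched for a single field; `coerce_field` is a hypothetical helper, and only the coercions listed above are attempted:

```python
def coerce_field(value, expected: str, strict: bool):
    """Validate one field against its declared type.

    Sketch of both validation modes. In strict mode any coercion fails;
    in lenient mode only the listed safe conversions are tried.
    """
    checks = {"boolean": bool, "number": (int, float), "string": str}
    # Exact type match always passes (bool is excluded from number,
    # since bool is a subclass of int in Python).
    if isinstance(value, checks[expected]) and not (
        expected == "number" and isinstance(value, bool)
    ):
        return value
    if strict:
        raise ValueError(f"strict: expected {expected}, got {value!r}")
    if expected == "boolean" and value in ("true", "false"):
        return value == "true"
    if expected == "number" and isinstance(value, str):
        try:
            return float(value) if "." in value else int(value)
        except ValueError:
            pass
    raise ValueError(f"unsafe conversion to {expected}: {value!r}")

assert coerce_field("42", "number", strict=False) == 42
assert coerce_field("true", "boolean", strict=False) is True
```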

## `debug`

`debug` controls debug output for this `generate` call.

```agentscript
generate({
  input: "Answer the question",
  max_output: 800,
  debug: true
}) -> {
  answer string
}
```

Recommended debug output:

```text
resolved agent identity
generate input
selected context entries
context labels
budgets
rendered prompt/messages
output shape
raw model output
validation result
```

`debug` only affects debug output. It does not change prompt semantics.

## Trace requirements

A `generate` trace should explain the actual prompt inputs, configuration, validation, and result:

```json
{
  "kind": "generate",
  "data": {
    "instruction": "Answer from observations",
    "config": {
      "max_output": 800,
      "attempts": 1,
      "temperature": 0.2,
      "think": "medium",
      "strict": false,
      "debug": false
    },
    "context": {
      "context": [
        {
          "index": 0,
          "source": "scratch.summary",
          "label": "observations",
          "value": [{ "fact": "A" }],
          "text": "[...]",
          "budget": { "amount": 2, "unit": "k" },
          "clipped": false
        }
      ]
    },
    "attempts": 1,
    "validation": { "ok": true, "strict": false },
    "result": { "answer": "..." }
  }
}
```

Trace answers:

- Which agent identity generated this output?
- What instruction was used?
- What selected context was visible?
- Which context items were clipped?
- What provider hints and runtime behavior were requested?
- What shape was requested?
- How many attempts were needed?
- What validation mode was used?
- What value was returned?

## Final expression return

A `generate` expression can be the final top-level expression in a function body. In that case, the function returns the generated value implicitly.

```agentscript
func answer(question) {
  use question as user question

  generate({ input: "Answer" }) -> {
    answer string
  }
}
```

This is equivalent to explicitly returning the generate result when no earlier explicit `return` is executed.
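
The implicit-return rule can be sketched by modeling a body as a sequence of steps whose last value is returned; this toy evaluator is an illustration, not the AgentScript interpreter:

```python
def run_body(statements):
    """Evaluate a function body; the final expression's value is the
    implicit return value, as with a trailing `generate`."""
    result = None
    for stmt in statements:
        result = stmt()        # each statement may produce a value
    return result              # the last value is returned implicitly

# The final generate result becomes the function's return value:
value = run_body([
    lambda: None,                 # e.g. a `use` declaration
    lambda: {"answer": "..."},    # the trailing generate expression
])
```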

## Design checklist

Before changing `generate`, verify:

- Does it remain the only LLM call site?
- Does it use only visible `use` context sources?
- Does it keep agent identity, selected context, instruction, and output contract distinct?
- Does `role` remain agent identity rather than provider role control?
- Does `max_output` remain an output generation budget, not a context item budget?
- Does `strict` remain runtime validation behavior rather than provider configuration?
- Does `think` remain a provider/model capability hint rather than guaranteed reasoning access?
- Does trace explain the actual prompt, configuration, validation, and result?