@rong/agentscript 0.1.2 → 0.1.4

# `use ... as ...`

This document defines how `use` selects prompt context in AgentScript, including context labels, budgets, scope visibility, deferred evaluation, and trace requirements.

For the larger mental model, see [Context Engineering](./context-engineering.md). For prompt construction and output contracts, see [`generate`](./generate.md).

## Purpose

`use` is not a variable read, assignment, or namespace import. It declares a prompt context source.

```agentscript
use input.question as user question
use scratch.summary < 2k as observations
```

The meaning is:

```text
make this source visible to later generate calls in the current scope and child scopes
```

Local variables not selected with `use` do not enter the prompt.

## Syntax

```agentscript
use expr
use expr < budget
use expr as label
use expr < budget as label
```

The fixed order is:

```text
what to select -> how much to include -> what role it plays as context
```

Examples:

```agentscript
use input.question as user question
use docs.summary < 4k as retrieved evidence
use scratch.summary < 2k as observations
```

## Context labels

The label after `as` is literal label text. It is not an expression, is not evaluated, and does not read variables from scope.

```agentscript
use docs as evidence
use docs.summary < 4k as retrieved evidence
use input.question as user
```

`as evidence` labels the context section as `evidence` even if a variable named `evidence` exists.

Labels affect:

- prompt section labels
- trace display
- context organization
- debug and audit readability

Labels do not affect:

- agent identity
- provider message roles
- tool permissions
- system/user/assistant authority

## Provider roles are not context labels

Provider roles such as `system`, `user`, `assistant`, and `tool` are LLM API transport details. AgentScript context labels are prompt-internal organization labels.

The following labels are reserved to avoid confusion with provider roles:

```text
system
assistant
tool
developer
```

`user` is allowed as a context label because it often means "this context item is user input". It still does not create a separate provider `user` message.
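
The reserved-label rule is simple to enforce mechanically. A minimal Python sketch of such a check, where the `check_label` helper and `RESERVED_LABELS` set are hypothetical names for illustration, not part of the AgentScript runtime:

```python
# Sketch: validate context labels against the reserved provider-role names.
# `user` stays allowed by design; the other role names are rejected.

RESERVED_LABELS = {"system", "assistant", "tool", "developer"}

def check_label(label):
    if label.strip().lower() in RESERVED_LABELS:
        raise ValueError(f"context label {label!r} shadows a provider role")
    return label

assert check_label("user") == "user"                       # allowed
assert check_label("retrieved evidence") == "retrieved evidence"

rejected = False
try:
    check_label("system")                                  # reserved
except ValueError:
    rejected = True
assert rejected
```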

## Deferred evaluation

`use expr` declares a source, not a snapshot. The expression is evaluated when a visible `generate` builds its prompt.

```agentscript
main func(input) {
  scratch = []
  use scratch.summary < 2k as observations

  scratch.add({ fact: "A" })
  scratch.add({ fact: "B" })

  generate({ input: "Answer from observations" }) -> {
    text string
  }
}
```

The `generate` call sees both facts.

This keeps `use` as a context contract instead of a value copy.
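
Deferred evaluation amounts to storing a resolver instead of a value. A minimal Python sketch of the idea, assuming a hypothetical `ContextSource` wrapper rather than AgentScript's actual internals:

```python
# Sketch: a `use` declaration stores a resolver, not a value.
# The resolver runs only when a generate call builds its prompt.

class ContextSource:
    def __init__(self, resolver, label):
        self.resolver = resolver  # callable, evaluated at prompt-build time
        self.label = label

    def resolve(self):
        return self.resolver()

scratch = []

# `use scratch.summary < 2k as observations` declares the source...
observations = ContextSource(lambda: list(scratch), "observations")

# ...so mutations after the declaration are still visible later.
scratch.append({"fact": "A"})
scratch.append({"fact": "B"})

# At generate time, the prompt builder resolves the source.
assert observations.resolve() == [{"fact": "A"}, {"fact": "B"}]
```

A snapshot semantics would instead have captured the empty list at declaration time, which is exactly what the spec rules out.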

## Scope visibility

A `use` declaration is visible to later `generate` calls in the same scope and child scopes.

```agentscript
use input.question as user question

if input.needs_detail {
  use input.detail as detail
  generate({ input: "Answer with detail" }) -> {
    text string
  }
}
```

The inner `generate` can see both `user question` and `detail`. The `detail` context does not leak outside the block.

Function and agent calls form independent context boundaries. A callee does not automatically inherit the caller's selected context.
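
Scope visibility can be modeled as a stack of context frames that is pushed on block entry and popped on exit. A Python sketch, where the `ContextStack` class is illustrative rather than the real runtime:

```python
# Sketch: scope visibility as a stack of context frames.
# Entering a block pushes a frame; leaving it pops the frame,
# so inner `use` declarations never leak outward.

class ContextStack:
    def __init__(self):
        self.frames = [[]]  # outermost scope

    def push(self):
        self.frames.append([])

    def pop(self):
        self.frames.pop()

    def use(self, label):
        self.frames[-1].append(label)

    def visible(self):
        # a generate call sees every frame, outer to inner
        return [label for frame in self.frames for label in frame]

ctx = ContextStack()
ctx.use("user question")

ctx.push()                  # entering the if-block
ctx.use("detail")
inner = ctx.visible()       # what a generate inside the block sees
ctx.pop()                   # leaving the block
outer = ctx.visible()       # what any later generate sees

assert inner == ["user question", "detail"]
assert outer == ["user question"]
```

A function or agent call would start from a fresh `ContextStack` rather than pushing a frame, which is the independent-boundary behavior described above.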

## What cannot be used

Runtime capabilities must not enter prompt context:

- imported tools
- imported LLM/model bindings
- imported agent bindings
- memory handles
- function bindings
- provider URIs and workspace/runtime configuration

Invalid examples:

```agentscript
use Search
use Qwen
use Worker
use helper
```

Use the data returned by these capabilities instead:

```agentscript
results = Search.search(input.question)
use results < 4k as search results
```

## Budget semantics

`use expr < budget` is a context item budget. It limits how much of that source may be rendered into the prompt.

```agentscript
use docs.summary < 4k as evidence
```

This is different from `generate({ max_output: ... })`, which is an output generation budget. See [`generate`](./generate.md).
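
A per-item budget only bounds how much of one source is rendered. A rough Python sketch of clipping, under the assumption, made for illustration only, that a `k` unit means thousands of characters:

```python
# Sketch: a per-item context budget clips the rendered text of one
# source, independently of any output budget on generate.
# Assumption for this sketch: "k" units are thousands of characters.

def clip_to_budget(text, amount, unit="k"):
    limit = amount * 1000 if unit == "k" else amount
    if len(text) <= limit:
        return text, False        # fits: not clipped
    return text[:limit], True     # over budget: clipped

short, clipped = clip_to_budget("tiny summary", 2)   # use ... < 2k
assert (short, clipped) == ("tiny summary", False)

long_text = "x" * 5000
kept, clipped = clip_to_budget(long_text, 4)         # use ... < 4k
assert clipped is True
assert len(kept) == 4000
```

Whether a real runtime measures characters or tokens, the shape is the same: the budget attaches to one context item, never to the generation as a whole.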

## Prompt rendering

A labeled context item is rendered as a context section:

```text
Context:
[user question]
source: input.question
What is AgentScript?

[observations]
source: scratch.summary
[
  { "fact": "..." }
]
```

If no label is provided, the renderer may use the numeric context index and include the source expression separately.
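
The rendering layout can be sketched in a few lines of Python. The `render_context` function and its item dictionaries are hypothetical, but they follow the labeled-section format and the numeric-index fallback described here:

```python
import json

# Sketch: render labeled context items as prompt sections.
# Unlabeled items fall back to their numeric context index.

def render_context(items):
    lines = ["Context:"]
    for index, item in enumerate(items):
        label = item.get("label") or str(index)
        lines.append(f"[{label}]")
        lines.append(f"source: {item['source']}")
        value = item["value"]
        lines.append(value if isinstance(value, str) else json.dumps(value))
        lines.append("")
    return "\n".join(lines).rstrip()

prompt = render_context([
    {"label": "user question", "source": "input.question",
     "value": "What is AgentScript?"},
    {"label": None, "source": "scratch.summary",   # no label: index used
     "value": [{"fact": "A"}]},
])

assert "[user question]" in prompt
assert "[1]" in prompt                 # numeric fallback for item 1
assert "source: scratch.summary" in prompt
```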

## Trace requirements

A `use` trace event records the declaration:

```json
{
  "kind": "use",
  "data": {
    "source": "scratch.summary",
    "label": "observations",
    "budget": { "amount": 2, "unit": "k" }
  }
}
```

The context item built for a `generate` call records the resolved value and rendering metadata:

```json
{
  "index": 0,
  "source": "scratch.summary",
  "label": "observations",
  "value": [{ "fact": "A" }],
  "text": "[...]",
  "budget": { "amount": 2, "unit": "k" },
  "clipped": false
}
```

Trace must make the selected source, label, budget, clipping status, and resolved value auditable.
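
Both trace records are plain structured data, so emitting them is mostly bookkeeping. A Python sketch using hypothetical `use_event` and `context_item` helpers that mirror the JSON shapes above:

```python
# Sketch: build the two trace records, one for the `use` declaration
# and one for the context item resolved at generate time.

def use_event(source, label, budget):
    return {"kind": "use",
            "data": {"source": source, "label": label, "budget": budget}}

def context_item(index, source, label, value, text, budget, clipped):
    return {"index": index, "source": source, "label": label,
            "value": value, "text": text,
            "budget": budget, "clipped": clipped}

declared = use_event("scratch.summary", "observations",
                     {"amount": 2, "unit": "k"})
built = context_item(0, "scratch.summary", "observations",
                     [{"fact": "A"}], '[{"fact": "A"}]',
                     {"amount": 2, "unit": "k"}, clipped=False)

# Together the pair makes source, label, budget, clipping status,
# and resolved value auditable.
assert declared["data"]["label"] == built["label"]
assert built["clipped"] is False
```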

## Design checklist

Before changing `use`, verify:

- Does unused data stay out of prompts?
- Is the source evaluated at `generate` time?
- Does the label remain literal text rather than an expression?
- Are provider roles still separate from context labels?
- Are budgets attached to context items, not the whole generation?
- Do function and agent boundaries prevent context leakage?
@@ -13,10 +13,10 @@ main agent ChangelogWriter {
   path: input.diff_path
  })

- use input.diff_path
- use diff < 10k
+ use input.diff_path as diff path
+ use diff < 10k as git diff

- generate({ input: "Write a changelog from this git diff", limit: 1200 }) -> {
+ generate({ input: "Write a changelog from this git diff", max_output: 1200 }) -> {
   title string
   highlights list[string]
   breaking_changes list[string]
@@ -14,10 +14,10 @@ main agent ApiExtractor {
   timeout: 10000
  })

- use input.url
- use response < 8k
+ use input.url as endpoint
+ use response < 8k as api response

- generate({ input: "Extract normalized data from the API response", limit: 1200 }) -> {
+ generate({ input: "Extract normalized data from the API response", max_output: 1200 }) -> {
   records list[json]
   fields list[string]
   warnings list[string]
@@ -22,11 +22,11 @@ main agent CodeReviewAssistant {
   max: 100
  })

- use input.path
- use todos < 4k
- use fixmes < 4k
+ use input.path as source path
+ use todos < 4k as todo findings
+ use fixmes < 4k as fixme findings

- generate({ input: "Turn TODO and FIXME scan results into prioritized repair suggestions", limit: 1200 }) -> {
+ generate({ input: "Turn TODO and FIXME scan results into prioritized repair suggestions", max_output: 1200 }) -> {
   summary string
   findings list[string]
   suggested_fixes list[string]
@@ -13,10 +13,10 @@ main agent FileSummarizer {
   path: input.path
  })

- use input.path
- use content < 8k
+ use input.path as source path
+ use content < 8k as file content

- generate({ input: "Summarize the file for a busy teammate", limit: 1000 }) -> {
+ generate({ input: "Summarize the file for a busy teammate", max_output: 1000 }) -> {
   title string
   summary string
   key_points list[string]
@@ -17,11 +17,11 @@ main agent MarkdownTranslator {
   max: 50
  })

- use input.path
- use input.target_language
- use files < 4k
+ use input.path as source path
+ use input.target_language as target language
+ use files < 4k as markdown files

- generate({ input: "Create a practical markdown translation plan", limit: 1000 }) -> {
+ generate({ input: "Create a practical markdown translation plan", max_output: 1000 }) -> {
   target_language string
   files list[string]
   glossary_notes list[string]
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@rong/agentscript",
- "version": "0.1.2",
+ "version": "0.1.4",
   "description": "AgentScript context engineering language runtime",
   "main": "dist/index.js",
   "bin": {
package/tutorials/cli.as CHANGED
@@ -12,7 +12,7 @@ main agent {
  use input.name
  use input.request

- generate({ input: "Reply to the CLI user by name", limit: 300 }) -> {
+ generate({ input: "Reply to the CLI user by name", max_output: 300 }) -> {
   ok boolean
   message string
  }
@@ -75,7 +75,7 @@ main agent PlanAndExecute {
  use goal
  use results.summary < 2k

- generate({ input: "Create the final answer from executed steps", limit: 800 }) -> {
+ generate({ input: "Create the final answer from executed steps", max_output: 800 }) -> {
   ok boolean
   text string
   error string
@@ -93,7 +93,7 @@ agent Planner {
  use input.problem
  use input.previous < 1k

- generate({ input: "Create a three step plan", limit: 600 }) -> {
+ generate({ input: "Create a three step plan", max_output: 600 }) -> {
   step1 string
   step2 string
   step3 string
@@ -121,7 +121,7 @@ agent Executor {

  use observation

- generate({ input: "Report the result of this step", limit: 500 }) -> {
+ generate({ input: "Report the result of this step", max_output: 500 }) -> {
   ok boolean
   output json
   error string
@@ -139,7 +139,7 @@ agent Verifier {
  use input.step
  use input.result

- generate({ input: "Verify this step result", limit: 300 }) -> {
+ generate({ input: "Verify this step result", max_output: 300 }) -> {
   ok boolean
   reason string
  }
@@ -32,7 +32,7 @@ main agent ResearchAgent {
  use question
  use scratch.summary < 1k

- generate({ input: "Choose the next search focus", limit: 300 }) -> {
+ generate({ input: "Choose the next search focus", max_output: 300 }) -> {
   focus string
   why string
  }
@@ -60,7 +60,7 @@ main agent ResearchAgent {

  use raw

- generate({ input: "Summarize the useful observation", limit: 400 }) -> {
+ generate({ input: "Summarize the useful observation", max_output: 400 }) -> {
   facts list[string]
   source string
  }
@@ -70,7 +70,7 @@ main agent ResearchAgent {
  use question
  use scratch.summary < 1k

- verdict = generate({ input: "Decide whether the observations are enough", limit: 200 }) -> {
+ verdict = generate({ input: "Decide whether the observations are enough", max_output: 200 }) -> {
   done boolean
  }

@@ -81,7 +81,7 @@ main agent ResearchAgent {
  use question
  use scratch.summary < 2k

- generate({ input: "Answer using only the observations", limit: 800 }) -> {
+ generate({ input: "Answer using only the observations", max_output: 800 }) -> {
   ok boolean
   text string
   error string