@ax-llm/ax 19.0.20 → 19.0.21

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@ax-llm/ax",
- "version": "19.0.20",
+ "version": "19.0.21",
  "type": "module",
  "description": "The best library to work with LLMs",
  "repository": {
package/skills/ax-agent.md CHANGED
@@ -1,7 +1,7 @@
  ---
  name: ax-agent
  description: This skill helps an LLM generate correct AxAgent code using @ax-llm/ax. Use when the user asks about agent(), child agents, namespaced functions, discovery mode, shared fields, llmQuery(...), RLM code execution, or offline tuning with agent.optimize(...).
- version: "19.0.20"
+ version: "19.0.21"
  ---
 
  # AxAgent Codegen Rules (@ax-llm/ax)
@@ -880,11 +880,25 @@ Rules:
 
  - `llmQuery(...)` forwards only the explicit `context` argument.
  - Parent inputs are not automatically available to `llmQuery(...)` children.
+ - In `mode: 'simple'`, `llmQuery(...)` is a direct semantic helper.
+ - In `mode: 'advanced'`, `llmQuery(...)` delegates a focused subtask to a child `AxAgent` with its own runtime and action log, as long as recursion depth remains.
+ - In advanced mode, no parent `contextFields` are auto-inserted into recursive children. Only the explicit `llmQuery(..., context)` payload is available there.
+ - If `context` is a plain object, safe keys are exposed as child runtime globals, and the full payload is also available as `context`.
+ - In advanced mode, use `llmQuery(...)` to offload discovery-heavy, tool-heavy, or multi-turn semantic branches so the parent action log stays smaller and more focused.
+ - In advanced mode, use batched `llmQuery([...])` only for independent subtasks. Use serial calls when later work depends on earlier results.
+ - In advanced mode, a good pattern is: the parent does coarse discovery and JS narrowing, child `llmQuery(...)` calls handle focused branch analysis, then the parent merges child outputs and finishes.
+ - In advanced mode with `functions.discovery: true`, prefer putting noisy tool discovery, `getFunctionDefinitions(...)`, and branch-specific tool chatter inside delegated child calls when those branches are independent or semantically distinct.
+ - In advanced mode, pass compact, named object context to children instead of huge raw parent payloads. This makes the delegated prompt easier to follow and gives the child useful top-level globals.
+ - In advanced mode, do not assume child-created variables, discovered docs, or action-log history come back to the parent. Only the child's return value comes back.
+ - In advanced mode, if a child calls `ask_clarification(...)`, that clarification bubbles up and ends the top-level run.
+ - In advanced mode, recursion is depth-limited: `maxDepth: 0` makes top-level `llmQuery(...)` simple, and `maxDepth: 1` makes top-level `llmQuery(...)` advanced and child `llmQuery(...)` simple.
+ - In advanced mode, batched delegated children are cancelled when a sibling child asks for clarification or aborts, so use the batched form only when those branches are truly independent.
+ - `maxSubAgentCalls` is a shared budget across the whole top-level run, including recursive children.
  - Single-call `llmQuery(...)` may return `[ERROR] ...` on non-abort failures.
  - Batched `llmQuery([...])` returns per-item `[ERROR] ...`.
  - If a result starts with `[ERROR]`, inspect or branch on it instead of assuming success.
 
- Example:
+ Minimal example:
 
  ```javascript
  const summary = await llmQuery('Summarize this incident', inputs.context);
@@ -895,6 +909,70 @@ if (summary.startsWith('[ERROR]')) {
  }
  ```
 
+ Advanced recursive discovery example:
+
+ ```javascript
+ const narrowedIncidents = incidents.map((incident) => ({
+   id: incident.id,
+   timeline: incident.timeline,
+   notes: incident.notes.slice(0, 1200),
+ }));
+
+ const [severityReview, followupReview] = await llmQuery([
+   {
+     query:
+       'Use discovery and available tools to review severity policy alignment. Return compact findings.',
+     context: {
+       incidents: narrowedIncidents,
+       rubric: 'severity-policy',
+     },
+   },
+   {
+     query:
+       'Use discovery and available tools to review postmortem and follow-up obligations. Return compact findings.',
+     context: {
+       incidents: narrowedIncidents,
+       rubric: 'postmortem-followup',
+     },
+   },
+ ]);
+
+ const merged = await llmQuery(
+   'Merge these delegated reviews into one manager-ready summary with next steps.',
+   {
+     severityReview,
+     followupReview,
+     audience: inputs.audience,
+   }
+ );
+ ```
+
+ Delegation decision guide:
+
+ - **JS-only** — deterministic logic (filter, sort, count, regex, date math) → do it inline, don't delegate.
+ - **Single-shot semantic** — needs LLM reasoning but no tools or multi-step exploration → single `llmQuery` with narrow context.
+ - **Full delegation** — needs its own discovery, tool calls, or >2 turns of exploratory work → `llmQuery` as child agent.
+ - **Parallel fan-out** — 2+ independent subtasks each qualifying for delegation → batched `llmQuery([...])`.
+
+ Context handling:
+
+ - In advanced mode, the `context` object is injected into the child's JS runtime as named globals — it does NOT go into the child's LLM prompt. The child's prompt sees only a compact metadata summary (types, sizes, element keys) of the delegated context.
+ - The child actor explores the delegated context with code, the same way the parent explores `inputs.*`.
+ - Always narrow with JS before delegating — never pass raw `inputs.*`. Name context keys semantically (e.g. `{ emails: filtered, rubric: 'classify-urgency' }`).
+ - Estimate total sub-agent calls before fanning out. `maxSubAgentCalls` is a shared budget across all recursion levels.
+
+ Divide-and-conquer patterns:
+
+ - **Fan-Out / Fan-In**: JS narrows into categories → `llmQuery([...])` fans out per category → JS or one more `llmQuery` merges results.
+ - **Pipeline**: serial `llmQuery` calls where each depends on the prior result.
+ - **Scout-then-Execute**: first child explores (e.g. check availability) → parent processes with JS → second child acts (e.g. draft invite).
+
+ Notes:
+
+ - Use these patterns when one task naturally splits into focused semantic branches with their own discovery or tool usage.
+ - Keep the parent responsible for orchestration, cheap JS narrowing, and final assembly.
+ - See `src/examples/rlm-discovery.ts` for the full recursive discovery demo.
+
 
  ## Short API Reference
 
  ### `agentIdentity`
@@ -958,6 +1036,7 @@ agentIdentity?: {
  mode?: 'simple' | 'advanced';
  recursionOptions?: Partial<Omit<AxProgramForwardOptions, 'functions'>> & {
    maxDepth?: number;
+   promptLevel?: 'detailed' | 'basic';
  };
  actorOptions?: Partial<AxProgramForwardOptions & { description?: string; promptLevel?: 'detailed' | 'basic' }>;
  responderOptions?: Partial<AxProgramForwardOptions & { description?: string }>;
@@ -965,6 +1044,8 @@ agentIdentity?: {
  }
  ```
 
+ - `actorTurnCallback` fires for the root agent and for recursive child agents that run actor turns.
+
  ## Examples
 
  Fetch these for full working code:
@@ -975,7 +1056,7 @@ Fetch these for full working code:
  - [Smart Home](https://raw.githubusercontent.com/ax-llm/ax/refs/heads/main/src/examples/smart-home.ts) — state management
  - [RLM](https://raw.githubusercontent.com/ax-llm/ax/refs/heads/main/src/examples/rlm.ts) — RLM basic
  - [RLM Long Task](https://raw.githubusercontent.com/ax-llm/ax/refs/heads/main/src/examples/rlm-long-task.ts) — RLM context policy
- - [RLM Discovery](https://raw.githubusercontent.com/ax-llm/ax/refs/heads/main/src/examples/rlm-discovery.ts) — discovery mode
+ - [RLM Discovery](https://raw.githubusercontent.com/ax-llm/ax/refs/heads/main/src/examples/rlm-discovery.ts) — advanced recursive `llmQuery` plus discovery-heavy delegated subtasks
  - [RLM Shared Fields](https://raw.githubusercontent.com/ax-llm/ax/refs/heads/main/src/examples/rlm-shared-fields.ts) — shared fields
  - [RLM Adaptive Replay](https://raw.githubusercontent.com/ax-llm/ax/refs/heads/main/src/examples/rlm-adaptive-replay.ts) — adaptive replay
  - [RLM Live Runtime State](https://raw.githubusercontent.com/ax-llm/ax/refs/heads/main/src/examples/rlm-live-runtime-state.ts) — structured runtime-state rendering
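The fan-out / fan-in and `[ERROR]`-prefix conventions added to the ax-agent skill above can be sketched outside the Ax runtime. The `llmQuery` below is a hypothetical synchronous stand-in (the real helper is provided by the RLM runtime and must be awaited); only the calling conventions it mirrors are documented behavior: batched arrays of `{ query, context }`, string results prefixed with `[ERROR] ...` on failure, and a shared `maxSubAgentCalls` budget.

```javascript
// Hypothetical stand-in for the runtime-provided llmQuery(...).
// Mirrors only the documented conventions: a single (query, context)
// call or a batched array of { query, context } items, string results
// starting with '[ERROR]' on failure, and a shared sub-agent budget.
let subAgentCalls = 0;
const maxSubAgentCalls = 3; // shared across the whole top-level run

function llmQuery(queryOrBatch, context) {
  if (Array.isArray(queryOrBatch)) {
    // Batched form: independent subtasks, per-item '[ERROR] ...' results.
    return queryOrBatch.map((item) => llmQuery(item.query, item.context));
  }
  if (subAgentCalls >= maxSubAgentCalls) {
    return '[ERROR] maxSubAgentCalls budget exhausted';
  }
  subAgentCalls += 1;
  // Stub "child agent": return a compact finding naming the context keys.
  const keys =
    context && typeof context === 'object' ? Object.keys(context).join(',') : '';
  return `findings(${keys}) for: ${queryOrBatch}`;
}

// Fan-Out / Fan-In: JS narrows first, a batched call fans out over
// independent rubrics, and one final call merges the delegated reviews.
const narrowed = [{ id: 1 }, { id: 2 }];
const [severityReview, followupReview] = llmQuery([
  { query: 'review severity', context: { incidents: narrowed, rubric: 'severity-policy' } },
  { query: 'review follow-ups', context: { incidents: narrowed, rubric: 'postmortem-followup' } },
]);

// Branch on the '[ERROR]' prefix instead of assuming success.
for (const result of [severityReview, followupReview]) {
  if (result.startsWith('[ERROR]')) throw new Error(result);
}

const merged = llmQuery('merge reviews', { severityReview, followupReview });
console.log(merged); // findings(severityReview,followupReview) for: merge reviews
console.log(subAgentCalls); // 3
```

Note that a fourth call here would return the `[ERROR] maxSubAgentCalls budget exhausted` string rather than throwing, which is why results are inspected for the prefix before use.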
package/skills/ax-ai.md CHANGED
@@ -1,7 +1,7 @@
  ---
  name: ax-ai
  description: This skill helps an LLM generate correct AI provider setup and configuration code using @ax-llm/ax. Use when the user asks about ai(), providers, models, presets, embeddings, extended thinking, context caching, or mentions OpenAI/Anthropic/Google/Azure/Groq/DeepSeek/Mistral/Cohere/Together/Ollama/HuggingFace/Reka/OpenRouter with @ax-llm/ax.
- version: "19.0.20"
+ version: "19.0.21"
  ---
 
  # AI Provider Codegen Rules (@ax-llm/ax)
package/skills/ax-flow.md CHANGED
@@ -1,7 +1,7 @@
  ---
  name: ax-flow
  description: This skill helps an LLM generate correct AxFlow workflow code using @ax-llm/ax. Use when the user asks about flow(), AxFlow, workflow orchestration, parallel execution, DAG workflows, conditional routing, map/reduce patterns, or multi-node AI pipelines.
- version: "19.0.20"
+ version: "19.0.21"
  ---
 
  # AxFlow Codegen Rules (@ax-llm/ax)
package/skills/ax-gen.md CHANGED
@@ -1,7 +1,7 @@
  ---
  name: ax-gen
  description: This skill helps an LLM generate correct AxGen code using @ax-llm/ax. Use when the user asks about ax(), AxGen, generators, forward(), streamingForward(), assertions, field processors, step hooks, self-tuning, or structured outputs.
- version: "19.0.20"
+ version: "19.0.21"
  ---
 
  # AxGen Codegen Rules (@ax-llm/ax)
package/skills/ax-gepa.md CHANGED
@@ -1,7 +1,7 @@
  ---
  name: ax-gepa
  description: This skill helps an LLM generate correct AxGEPA optimization code using @ax-llm/ax. Use when the user asks about AxGEPA, GEPA, Pareto optimization, multi-objective prompt tuning, reflective prompt evolution, validationExamples, maxMetricCalls, or optimizing a generator, flow, or agent tree.
- version: "19.0.20"
+ version: "19.0.21"
  ---
 
  # AxGEPA Codegen Rules (@ax-llm/ax)
package/skills/ax-learn.md CHANGED
@@ -1,7 +1,7 @@
  ---
  name: ax-learn
  description: This skill helps an LLM generate correct AxLearn code using @ax-llm/ax. Use when the user asks about self-improving agents, trace-backed learning, feedback-aware updates, or AxLearn modes.
- version: "19.0.20"
+ version: "19.0.21"
  ---
 
  # AxLearn Codegen Rules (@ax-llm/ax)
package/skills/ax-llm.md CHANGED
@@ -1,7 +1,7 @@
  ---
  name: ax
  description: This skill helps with using the @ax-llm/ax TypeScript library for building LLM applications. Use when the user asks about ax(), ai(), f(), s(), agent(), flow(), AxGen, AxAgent, AxFlow, signatures, streaming, or mentions @ax-llm/ax.
- version: "19.0.20"
+ version: "19.0.21"
  ---
 
  # Ax Library (@ax-llm/ax) Quick Reference
package/skills/ax-signature.md CHANGED
@@ -1,7 +1,7 @@
  ---
  name: ax-signature
  description: This skill helps an LLM generate correct DSPy signature code using @ax-llm/ax. Use when the user asks about signatures, s(), f(), field types, string syntax, fluent builder API, validation constraints, or type-safe inputs/outputs.
- version: "19.0.20"
+ version: "19.0.21"
  ---
 
  # Ax Signature Reference