@ax-llm/ax 19.0.26 → 19.0.27

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@ax-llm/ax",
- "version": "19.0.26",
+ "version": "19.0.27",
  "type": "module",
  "description": "The best library to work with LLMs",
  "repository": {
@@ -1,7 +1,7 @@
  ---
  name: ax-agent-optimize
  description: This skill helps an LLM generate correct AxAgent tuning and evaluation code using @ax-llm/ax. Use when the user asks about agent.optimize(...), judgeOptions, eval datasets, optimization targets, saved optimizedProgram artifacts, or recursive optimization guidance.
- version: "19.0.26"
+ version: "19.0.27"
  ---

  # AxAgent Optimize Codegen Rules (@ax-llm/ax)
@@ -143,7 +143,7 @@ const assistant = agent('query:string -> answer:string', {
  contextFields: [],
  runtime: new AxJSRuntime(),
  functions: { local: tools },
- contextPolicy: { preset: 'adaptive' },
+ contextPolicy: { preset: 'checkpointed', budget: 'balanced' },
  judgeOptions: {
  description: 'Prefer correct tool use over polished wording.',
  model: 'judge-model',
@@ -1,7 +1,7 @@
  ---
  name: ax-agent
  description: This skill helps an LLM generate correct AxAgent code using @ax-llm/ax. Use when the user asks about agent(), child agents, namespaced functions, discovery mode, shared fields, llmQuery(...), RLM code execution, recursionOptions, or agent runtime behavior. For tuning and eval with agent.optimize(...), use ax-agent-optimize.
- version: "19.0.26"
+ version: "19.0.27"
  ---

  # AxAgent Codegen Rules (@ax-llm/ax)
@@ -25,9 +25,9 @@ Your job is not just to write valid code. Your job is to choose the smallest cor
  - If `functions.discovery` is `true`, discover callables from modules before using them.
  - In stdout-mode RLM, use one observable `console.log(...)` step per non-final actor turn.
  - Prefer `promptLevel: 'default'` for normal use; use `promptLevel: 'detailed'` when you want extra anti-pattern examples and tighter teaching scaffolding in the actor prompt.
- - For long RLM tasks, prefer `contextPolicy: { preset: 'adaptive' }` so older successful turns collapse into checkpoint summaries while live runtime state stays visible.
- - Prefer `contextPolicy: { preset: 'checkpointed' }` when you want debugging-friendly full replay first and only want summaries after prompt pressure becomes real.
- - Prefer `actorModelPolicy` when the actor may need to upgrade under whole-prompt pressure or repeated error turns without also upgrading the responder.
+ - Default to `contextPolicy: { preset: 'checkpointed', budget: 'balanced' }` for most RLM tasks.
+ - Prefer `contextPolicy: { preset: 'adaptive', budget: 'balanced' }` when older successful turns should collapse sooner while live runtime state stays visible.
+ - Prefer `actorModelPolicy` when the actor may need to upgrade after repeated error turns or discovery in specific namespaces without also upgrading the responder.
  - Use `actorTurnCallback` when the user needs per-turn observability into generated code, raw runtime result, formatted output, or provider thoughts.

  ## Decision Guide
@@ -35,7 +35,7 @@ Your job is not just to write valid code. Your job is to choose the smallest cor
  Map user intent to agent shape before writing code:

  - "Use tools and answer" -> plain `agent(...)` with local functions, no recursion, no extra observability.
- - "Inspect large context with code" -> add `runtime`, `contextFields`, and usually `contextPolicy: { preset: 'adaptive' }`.
+ - "Inspect large context with code" -> add `runtime`, `contextFields`, and usually `contextPolicy: { preset: 'checkpointed', budget: 'balanced' }`.
  - "Delegate focused semantic subtasks" -> use `llmQuery(...)`; add `mode: 'advanced'` only when child tasks need their own runtime, tools, or discovery loop.
  - "Need child agents with distinct responsibilities" -> use `agents.local`, and add `fields.shared` only when parent inputs truly need to flow into children.
  - "Need tool discovery because names/schemas are not stable" -> use `functions.discovery: true` and generate discovery-first code.
@@ -46,7 +46,7 @@ Choose options based on user needs, not feature completeness:

  - Prefer `mode: 'simple'` unless recursive child agents materially improve the task.
  - Prefer `maxSubAgentCalls` only when advanced recursion is enabled or the user needs explicit delegation limits.
- - Prefer `contextPolicy.preset: 'adaptive'` for long RLM tasks, `checkpointed` when you want "full first, summarize later", `full` for debugging, and `lean` only under real token pressure.
+ - Prefer `contextPolicy: { preset: 'checkpointed', budget: 'balanced' }` by default, switch to `adaptive` when you want earlier summarization, use `full` for debugging, and reserve `lean` for real prompt pressure.

  ## Mental Model

@@ -62,14 +62,14 @@ Treat `AxAgent` as a long-running JavaScript REPL that the actor steers over mul
  Use these meanings consistently when writing or explaining `contextPolicy.preset`:

  - `full`: Keep prior actions fully replayed. Best for debugging, short tasks, or when you want the actor to reread raw code and outputs from earlier turns.
- - `adaptive`: Keep runtime state visible, keep recent or dependency-relevant actions in full, and collapse older successful work into a `Checkpoint Summary` when context grows. This is the default recommendation for long multi-turn tasks.
- - `checkpointed`: Keep full replay until the rendered actor prompt crosses the checkpoint threshold, then replace older successful history with a `Checkpoint Summary` while keeping recent actions and unresolved errors fully visible.
+ - `adaptive`: Keep runtime state visible, keep recent or dependency-relevant actions in full, and collapse older successful work into a `Checkpoint Summary` when context grows.
+ - `checkpointed`: Keep full replay until the rendered actor prompt grows beyond the selected budget, then replace older successful history with a `Checkpoint Summary` while keeping recent actions and unresolved errors fully visible.
  - `lean`: Most aggressive compression. Keep `Live Runtime State`, checkpoint older successful work, and summarize replay-pruned successful turns instead of showing their full code blocks. Use when token pressure matters more than raw replay detail.

  Practical rule:

- - Start with `adaptive` for most long RLM tasks.
- - Use `checkpointed` when you want conservative replay until there is actual pressure to summarize.
+ - Start with `checkpointed + balanced` for most tasks.
+ - Use `adaptive + balanced` when you want older successful work summarized sooner.
  - Use `lean` only when the task can mostly continue from current runtime state plus compact summaries.
  - Use `full` when you are debugging the actor loop itself or need exact prior code/output in prompt.

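The practical rule in the hunk above can be condensed into a small standalone sketch. The `choosePreset` helper below is hypothetical and not part of @ax-llm/ax; it only encodes the decision order stated in the bullets, with the `ContextPolicy` shape transcribed from this diff:

```typescript
// Hypothetical helper encoding the practical rule above; not an @ax-llm/ax export.
type ContextPolicy = {
  preset: 'full' | 'adaptive' | 'checkpointed' | 'lean';
  budget?: 'compact' | 'balanced' | 'expanded';
};

function choosePreset(task: {
  debugging?: boolean; // need exact prior code/output in the prompt
  tokenPressure?: boolean; // can continue from runtime state plus compact summaries
  summarizeEarly?: boolean; // older successful work should collapse sooner
}): ContextPolicy {
  if (task.debugging) return { preset: 'full' };
  if (task.tokenPressure) return { preset: 'lean' };
  if (task.summarizeEarly) return { preset: 'adaptive', budget: 'balanced' };
  // New default in this release for most tasks.
  return { preset: 'checkpointed', budget: 'balanced' };
}

console.log(choosePreset({})); // { preset: 'checkpointed', budget: 'balanced' }
console.log(choosePreset({ summarizeEarly: true })); // { preset: 'adaptive', budget: 'balanced' }
```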
@@ -93,8 +93,8 @@ Treat these knobs as a bundle:
  Recommended combinations:

  - Short task, debugging, or weaker/cheaper model: `preset: 'full'`.
- - Long multi-turn task, general default, medium-to-strong model: `preset: 'adaptive'`.
- - Long task where you want raw replay until the log is actually large: `preset: 'checkpointed'`.
+ - Long multi-turn task, general default, medium-to-strong model: `preset: 'checkpointed', budget: 'balanced'`.
+ - Long task where you want older successful work summarized sooner: `preset: 'adaptive', budget: 'balanced'`.
  - Very long task under token pressure, stronger model only: `preset: 'lean'`.
  - Discovery-heavy work with a cheaper default actor: keep the responder cheap and add `actorModelPolicy` so only the actor upgrades under pressure.

@@ -102,8 +102,8 @@ Practical rule:

  - The leaner the replay policy, the stronger the model should usually be.
  - `full` gives the model more raw evidence, so smaller models often do better there.
- - `adaptive` is the default middle ground for real agent work.
- - `checkpointed` is the conservative middle ground when you want full replay first and summarization only after a threshold.
+ - `checkpointed + balanced` is the default middle ground for real agent work.
+ - `adaptive + balanced` is the proactive-summarization variant when you want older successful work compressed sooner.
  - `lean` should be reserved for models that can reason well from runtime state plus summaries instead of exact old code/output.
  - `actorModelPolicy` is usually better than globally upgrading the whole agent when the bottleneck is actor exploration rather than responder synthesis.

@@ -117,7 +117,7 @@ Practical rule:
  - If a host-side `AxAgentFunction` needs to end the current actor turn, use `extra.protocol.final(...)` or `extra.protocol.askClarification(...)`.
  - If a child agent needs parent inputs such as `audience`, use `fields.shared` or `fields.globallyShared`.
  - `llmQuery(...)` failures may come back as `[ERROR] ...`; do not assume success.
- - If `contextPolicy.state.summary` is on, rely on the `Live Runtime State` block for current variables instead of re-reading old action log code.
+ - If `contextPolicy.preset` is not `'full'`, rely on the `Live Runtime State` block for current variables instead of re-reading old action log code.
  - If `contextPolicy.preset` is `'adaptive'`, `'checkpointed'`, or `'lean'`, assume older successful turns may be replaced by a `Checkpoint Summary` and that replay-pruned successful turns may appear as compact summaries instead of full code blocks.
  - In public `forward()` and `streamingForward()` flows, `askClarification(...)` does not go through the responder; it throws `AxAgentClarificationError`.
  - When resuming after clarification, prefer `error.getState()` from the thrown `AxAgentClarificationError`, then call `agent.setState(savedState)` before the next `forward(...)`.
@@ -293,7 +293,7 @@ Rules:
  - `extra.protocol` is only available when the function call comes from an active AxAgent actor runtime session.
  - Use `extra.protocol.final(...)`, `extra.protocol.askClarification(...)`, or `extra.protocol.guideAgent(...)` only inside host-side function handlers.
  - Inside actor-authored JavaScript, keep using the runtime globals `final(...)` and `askClarification(...)`.
- - `extra.protocol.guideAgent(...)` is handler-only internal control flow. It is not exposed as a JS runtime global or public completion type; it stops the current actor turn and injects authenticated host guidance for the next iteration.
+ - `extra.protocol.guideAgent(...)` is handler-only internal control flow. It is not exposed as a JS runtime global or public completion type; it stops the current actor turn and appends trusted guidance to `guidanceLog` for the next iteration.
  - `askClarification(...)` accepts either a simple string or a structured object with `question` plus optional UI hints such as `type: 'date' | 'number' | 'single_choice' | 'multiple_choice'` and `choices`.
  - Do not model these protocol completions as normal registered tool functions or discovery entries.

@@ -385,8 +385,8 @@ Practical notes:

  - `runtimeBindings` restores execution state; `runtimeEntries`, `actionLogEntries`, and `checkpointState` restore prompt context.
  - Resume does not create a fake rehydration action-log turn; provenance still points to the original actor code that set the value.
- - When `contextPolicy.state.summary` is enabled, resumed prompts include `Runtime Restore` plus `Live Runtime State`.
- - When `contextPolicy.state.summary` is disabled, restore still happens, but the prompt only shows the restore notice and omits the `Live Runtime State` block.
+ - When `contextPolicy.preset` is `'adaptive'`, `'checkpointed'`, or `'lean'`, resumed prompts include `Runtime Restore` plus `Live Runtime State`.
+ - When `contextPolicy.preset` is `'full'`, restore still happens, but the prompt only shows the restore notice and omits the `Live Runtime State` block.
  - Only serializable/structured-clone-friendly values are guaranteed to round-trip through `getState()` / `setState(...)`.
  - Reserved runtime globals such as `inputs`, tools, and protocol helpers are rebuilt fresh and are not part of saved state.
  - Treat one agent instance as conversation-scoped when using `setState(...)`; do not share one mutable resumed instance across unrelated concurrent conversations.
@@ -517,7 +517,7 @@ const harness = agent('query:string -> answer:string', {
  contextFields: ['query'],
  runtime,
  functions: { local: tools },
- contextPolicy: { preset: 'adaptive' },
+ contextPolicy: { preset: 'checkpointed', budget: 'balanced' },
  });

  const output = await harness.test(
@@ -552,29 +552,7 @@ const analyst = agent(
  maxTurns: 10,
  contextPolicy: {
  preset: 'adaptive',
- summarizerOptions: {
- model: 'summary-model',
- modelConfig: { temperature: 0.2, maxTokens: 180 },
- },
- state: {
- summary: true,
- inspect: true,
- inspectThresholdChars: 8_000,
- maxEntries: 6,
- maxChars: 1_200,
- },
- checkpoints: {
- enabled: true,
- triggerChars: 12_000,
- },
- pruneErrors: true,
- expert: {
- rankPruning: { enabled: true, minRank: 2 },
- tombstones: {
- model: 'summary-model',
- modelConfig: { maxTokens: 80 },
- },
- },
+ budget: 'balanced',
  },
  }
  );
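For reference, the pared-down policy object in the hunk above fits the shape transcribed below. This is a sketch inferred from this diff, not the library's exported `AxContextPolicyConfig`, which may carry additional optional fields:

```typescript
// Shape sketch transcribed from this diff; the real AxContextPolicyConfig
// in @ax-llm/ax may include more fields than shown here.
type ContextPolicySketch = {
  preset: 'full' | 'adaptive' | 'checkpointed' | 'lean';
  budget?: 'compact' | 'balanced' | 'expanded';
};

// Per the rules in this file: 'compact' summarizes earlier with tighter
// truncation, 'balanced' is the default, 'expanded' lets the actor prompt
// grow more before compression starts.
const analystPolicy: ContextPolicySketch = { preset: 'adaptive', budget: 'balanced' };

console.log(analystPolicy); // { preset: 'adaptive', budget: 'balanced' }
```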
@@ -584,20 +562,18 @@ Rules:

  - Use `preset: 'full'` when the actor should keep seeing raw prior code and outputs with minimal compression.
  - Use `preset: 'adaptive'` when the task needs runtime state across many turns but older successful work should collapse into checkpoint summaries while important recent steps can still stay fully replayed.
- - Use `preset: 'checkpointed'` when you want full replay first, then only older successful history checkpointed after the rendered actor prompt crosses `checkpoints.triggerChars`.
+ - Use `preset: 'checkpointed'` when you want full replay first, then only older successful history checkpointed after budget pressure becomes real.
  - Use `preset: 'lean'` when you want more aggressive compression and can rely mostly on current runtime state plus checkpoint summaries and compact action summaries.
- - `adaptive` is still the first choice for long-running discovery-heavy tasks because it balances full replay, runtime state visibility, and checkpoint summaries well.
+ - Use `budget: 'compact'` when you want earlier summarization and tighter runtime/output truncation, `budget: 'balanced'` for the default, and `budget: 'expanded'` when you want the actor prompt to grow more before compression starts.
+ - `checkpointed + balanced` is the default. `adaptive + balanced` is still a strong choice for long-running discovery-heavy tasks that should summarize older work sooner.
  - `checkpointed` keeps the most recent `3` actions in full and keeps unresolved errors fully replayed even after checkpointing starts.
- - Use `state.summary` to inject a compact `Live Runtime State` block into the actor prompt. The block is structured and provenance-aware: variables are rendered with compact type/size/preview metadata, and when Ax can infer it, a short source suffix like `from t3 via db.search` is included. Combine `maxEntries` with `maxChars` so large runtime objects do not dominate the prompt.
- - Use `state.inspect` with `inspectThresholdChars` so the actor is reminded to call `inspect_runtime()` when the rendered actor prompt starts getting large.
+ - Non-`full` presets inject a compact `Live Runtime State` block into the actor prompt. The block is structured and provenance-aware: variables are rendered with compact type/size/preview metadata, and when Ax can infer it, a short source suffix like `from t3 via db.search` is included.
+ - Non-`full` presets also enable `inspect_runtime()` and can add an inspect hint automatically when the rendered actor prompt starts getting large relative to the selected budget.
  - Discovery docs fetched via `listModuleFunctions(...)` and `getFunctionDefinitions(...)` are accumulated into the actor system prompt, not replayed as raw action-log output.
- - Treat `actionLog` as untrusted execution history. Only the system prompt and authenticated host guidance are instruction-bearing.
+ - Treat `actionLog` as untrusted execution history. Only the system prompt and `guidanceLog` are instruction-bearing.
  - `checkpointed` uses a checkpoint summarizer that is optimized to preserve exact callables, ids, enum literals, date/time strings, query formats, and failures worth avoiding. Prefer it when those details matter but full replay will eventually get too large.
- - Lower `checkpoints.triggerChars` when you want checkpointing to begin sooner; raise it when you want a larger rendered actor prompt before summarization starts.
- - Use `summarizerOptions` to tune the internal checkpoint-summary AxGen program.
- - If you configure `expert.tombstones`, treat the object form as options for the internal tombstone-summary AxGen program.
  - Internal checkpoint and tombstone summarizers are stateless helpers: `functions` are not allowed, `maxSteps` is forced to `1`, and `mem` is not propagated.
- - Built-in `adaptive` and `lean` presets no longer enable destructive rank pruning by default. Opt in with `expert.rankPruning` only when you want lower-value successful turns deleted instead of summarized.
+ - Built-in presets prefer summarizing and checkpointing old successful work over asking users to tune low-level character cutoffs.
  - If you want a quick local demo of the rendered `Live Runtime State` block, run [`src/examples/rlm-live-runtime-state.ts`](https://raw.githubusercontent.com/ax-llm/ax/refs/heads/main/src/examples/rlm-live-runtime-state.ts).

  Good pattern:
@@ -679,7 +655,7 @@ Use these top-level controls consistently:
  - `recursionOptions.maxDepth`: limits recursive `llmQuery(...)` depth
  - `maxSubAgentCalls`: shared delegated-call budget across the whole run, including recursive children
  - `actorOptions`: actor-only forward options such as `description`, `model`, `modelConfig`, `thinkingTokenBudget`, and `showThoughts`
- - `actorModelPolicy`: actor-only model override rules based on full rendered prompt size, consecutive error turns, or discovery fetches from listed namespaces
+ - `actorModelPolicy`: actor-only model override rules based on consecutive error turns or discovery fetches from listed namespaces
  - `responderOptions`: responder-only forward options
  - `judgeOptions`: built-in judge options for `agent.optimize(...)`; for tuning workflows use the `ax-agent-optimize` skill

@@ -695,6 +671,7 @@ const researchAgent = agent('query:string -> answer:string', {
  },
  contextPolicy: {
  preset: 'checkpointed',
+ budget: 'balanced',
  },
  actorOptions: {
  description: 'Use tools first and keep JS steps small.',
@@ -703,7 +680,6 @@ const researchAgent = agent('query:string -> answer:string', {
  actorModelPolicy: [
  {
  model: 'gpt-5.4',
- abovePromptChars: 16_000,
  aboveErrorTurns: 2,
  namespaces: ['db', 'kb'],
  },
@@ -721,7 +697,6 @@ Semantics:
  - `actorModelPolicy` only switches the actor model. It does not change `responderOptions.model`.
  - Recursive child agents can inherit `actorModelPolicy`; use a child override only when that child needs different routing behavior.
  - `actorModelPolicy` entries are ordered from weaker to stronger. If multiple rules match, the last matching entry wins.
- - If one entry defines both `abovePromptChars` and `aboveErrorTurns`, it matches when either threshold is crossed.
  - If one entry also defines `namespaces`, any successful `getFunctionDefinitions(...)` fetch from one of those namespaces marks the rule as matched starting on the next actor turn.

  When choosing these options for a user:
@@ -740,7 +715,7 @@ Key fields:

  - `actorOptions.description`: append extra actor-specific instructions without changing the responder prompt
  - `actorOptions.model` / `responderOptions.model`: split model choice across actor and responder when needed
- - `actorModelPolicy`: auto-switch only the actor when the rendered actor prompt is large, the run is on a consecutive error streak, or discovery fetches land in specific namespaces
+ - `actorModelPolicy`: auto-switch only the actor when the run is on a consecutive error streak or discovery fetches land in specific namespaces

  Good split-model pattern:

@@ -748,7 +723,7 @@ Good split-model pattern:
  const researchAgent = agent('query:string -> answer:string', {
  contextFields: ['query'],
  runtime,
- contextPolicy: { preset: 'adaptive' },
+ contextPolicy: { preset: 'checkpointed', budget: 'balanced' },
  actorOptions: {
  model: 'gpt-5.4',
  },
@@ -764,8 +739,7 @@ Model guidance:
  - Put the stronger model on the responder only when the hard part is final synthesis/formatting rather than exploration.
  - For cost-sensitive setups, a common pattern is stronger actor + cheaper responder, not the other way around.
  - Prefer `actorModelPolicy` over globally upgrading the whole agent when the actor only needs help after context grows or the run starts thrashing.
- - `actorModelPolicy` uses full rendered actor prompt chars, not raw `actionLog.length`. That prompt includes the actor definition, user inputs, context metadata, replayed actions, live runtime state, delegated context summaries, and checkpoint summaries.
- - Pair `contextPolicy: { preset: 'checkpointed' }` with `actorModelPolicy` when you want "full first, then summarize and upgrade the actor only if needed."
+ - Pair `contextPolicy: { preset: 'checkpointed', budget: 'balanced' }` with `actorModelPolicy` when you want full replay first and actor-only upgrades triggered by errors or discovered tool domains.

  Invalid pattern:

@@ -1004,7 +978,6 @@ agentIdentity?: {
  runtime?: AxCodeRuntime;
  promptLevel?: 'default' | 'detailed';
  maxSubAgentCalls?: number;
- maxRuntimeChars?: number;
  maxBatchedLlmQueryConcurrency?: number;
  maxTurns?: number;
  contextPolicy?: AxContextPolicyConfig;
@@ -1023,38 +996,22 @@ agentIdentity?: {
  actorModelPolicy?: readonly [
  | {
  model: string;
- abovePromptChars: number;
- aboveErrorTurns?: number;
- namespaces?: string[];
- }
- | {
- model: string;
- abovePromptChars?: number;
  aboveErrorTurns: number;
  namespaces?: string[];
  }
  | {
  model: string;
- abovePromptChars?: number;
  aboveErrorTurns?: number;
  namespaces: string[];
  },
  ...Array<
  | {
  model: string;
- abovePromptChars: number;
- aboveErrorTurns?: number;
- namespaces?: string[];
- }
- | {
- model: string;
- abovePromptChars?: number;
  aboveErrorTurns: number;
  namespaces?: string[];
  }
  | {
  model: string;
- abovePromptChars?: number;
  aboveErrorTurns?: number;
  namespaces: string[];
  }
@@ -1071,7 +1028,6 @@ agentIdentity?: {

  - `actorTurnCallback` fires for the root agent and for recursive child agents that run actor turns.
  - `actorModelPolicy` applies to the actor loop and can be inherited by recursive child agents unless you override it there.
- - `abovePromptChars` is measured from the full rendered actor prompt, not just replayed action log text.
  - `namespaces` matches exact discovery namespaces from successful `getFunctionDefinitions(...)` lookups and starts affecting model choice on the next actor turn.
  - Consecutive error turns reset after a successful non-error turn and when checkpoint summarization refreshes to a new fingerprint.
  - `maxSubAgentCalls` is a shared delegated-call budget across the entire run.
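The `actorModelPolicy` semantics collected across this diff (entries ordered weaker to stronger, last matching entry wins, a matched namespace fetch counting from the next turn) can be illustrated with a standalone sketch. `pickActorModel` is hypothetical, not an @ax-llm/ax export, and the strict `>` comparison for `aboveErrorTurns` plus either-condition matching are assumptions:

```typescript
// Hypothetical illustration of actorModelPolicy matching; not an @ax-llm/ax
// export. Assumptions: "above" means strictly greater, and a rule with both
// conditions matches when either one holds.
type PolicyEntry = {
  model: string;
  aboveErrorTurns?: number;
  namespaces?: string[];
};

function pickActorModel(
  entries: readonly PolicyEntry[],
  state: { errorTurns: number; discoveredNamespaces: Set<string> },
  defaultModel: string
): string {
  let chosen = defaultModel;
  for (const entry of entries) {
    const errorHit =
      entry.aboveErrorTurns !== undefined && state.errorTurns > entry.aboveErrorTurns;
    const nsHit =
      entry.namespaces !== undefined &&
      entry.namespaces.some((ns) => state.discoveredNamespaces.has(ns));
    // Entries are ordered weaker -> stronger; the last matching entry wins.
    if (errorHit || nsHit) chosen = entry.model;
  }
  return chosen;
}

const policy: PolicyEntry[] = [
  { model: 'mid-model', aboveErrorTurns: 1 },
  { model: 'strong-model', aboveErrorTurns: 2, namespaces: ['db'] },
];

console.log(
  pickActorModel(policy, { errorTurns: 2, discoveredNamespaces: new Set() }, 'base')
); // mid-model
console.log(
  pickActorModel(policy, { errorTurns: 0, discoveredNamespaces: new Set(['db']) }, 'base')
); // strong-model
```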
package/skills/ax-ai.md CHANGED
@@ -1,7 +1,7 @@
  ---
  name: ax-ai
  description: This skill helps an LLM generate correct AI provider setup and configuration code using @ax-llm/ax. Use when the user asks about ai(), providers, models, presets, embeddings, extended thinking, context caching, or mentions OpenAI/Anthropic/Google/Azure/Groq/DeepSeek/Mistral/Cohere/Together/Ollama/HuggingFace/Reka/OpenRouter with @ax-llm/ax.
- version: "19.0.26"
+ version: "19.0.27"
  ---

  # AI Provider Codegen Rules (@ax-llm/ax)
package/skills/ax-flow.md CHANGED
@@ -1,7 +1,7 @@
  ---
  name: ax-flow
  description: This skill helps an LLM generate correct AxFlow workflow code using @ax-llm/ax. Use when the user asks about flow(), AxFlow, workflow orchestration, parallel execution, DAG workflows, conditional routing, map/reduce patterns, or multi-node AI pipelines.
- version: "19.0.26"
+ version: "19.0.27"
  ---

  # AxFlow Codegen Rules (@ax-llm/ax)
package/skills/ax-gen.md CHANGED
@@ -1,7 +1,7 @@
  ---
  name: ax-gen
  description: This skill helps an LLM generate correct AxGen code using @ax-llm/ax. Use when the user asks about ax(), AxGen, generators, forward(), streamingForward(), assertions, field processors, step hooks, self-tuning, or structured outputs.
- version: "19.0.26"
+ version: "19.0.27"
  ---

  # AxGen Codegen Rules (@ax-llm/ax)
package/skills/ax-gepa.md CHANGED
@@ -1,7 +1,7 @@
  ---
  name: ax-gepa
  description: This skill helps an LLM generate correct AxGEPA optimization code using @ax-llm/ax. Use when the user asks about AxGEPA, GEPA, Pareto optimization, multi-objective prompt tuning, reflective prompt evolution, validationExamples, maxMetricCalls, or optimizing a generator, flow, or agent tree.
- version: "19.0.26"
+ version: "19.0.27"
  ---

  # AxGEPA Codegen Rules (@ax-llm/ax)
@@ -1,7 +1,7 @@
  ---
  name: ax-learn
  description: This skill helps an LLM generate correct AxLearn code using @ax-llm/ax. Use when the user asks about self-improving agents, trace-backed learning, feedback-aware updates, or AxLearn modes.
- version: "19.0.26"
+ version: "19.0.27"
  ---

  # AxLearn Codegen Rules (@ax-llm/ax)
package/skills/ax-llm.md CHANGED
@@ -1,7 +1,7 @@
  ---
  name: ax
  description: This skill helps with using the @ax-llm/ax TypeScript library for building LLM applications. Use when the user asks about ax(), ai(), f(), s(), agent(), flow(), AxGen, AxAgent, AxFlow, signatures, streaming, or mentions @ax-llm/ax.
- version: "19.0.26"
+ version: "19.0.27"
  ---

  # Ax Library (@ax-llm/ax) Quick Reference
@@ -1,7 +1,7 @@
  ---
  name: ax-signature
  description: This skill helps an LLM generate correct DSPy signature code using @ax-llm/ax. Use when the user asks about signatures, s(), f(), field types, string syntax, fluent builder API, validation constraints, or type-safe inputs/outputs.
- version: "19.0.26"
+ version: "19.0.27"
  ---

  # Ax Signature Reference