mia-code 0.2.0 → 0.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (410)
  1. package/.miette/260321.md +1 -0
  2. package/.miette/260323.md +9 -0
  3. package/.miette/260331.md +2 -0
  4. package/.pde/2604011511--83a2d7f9-24a5-4cf4-98d5-036c82f872e8/2604020008--d3417f2c-df12-4f0f-8a1b-d88e7968f822/d3417f2c-df12-4f0f-8a1b-d88e7968f822.md +63 -0
  5. package/.pde/2604011511--83a2d7f9-24a5-4cf4-98d5-036c82f872e8/2604020008--e6c3fc5d-4a70-4523-ba7d-a3250da4c235/e6c3fc5d-4a70-4523-ba7d-a3250da4c235.md +72 -0
  6. package/.pde/2604011511--83a2d7f9-24a5-4cf4-98d5-036c82f872e8/2604020008--efeb00a2-b17a-4d32-b1f0-b90c37a8d24e/efeb00a2-b17a-4d32-b1f0-b90c37a8d24e.md +62 -0
  7. package/.pde/2604011511--83a2d7f9-24a5-4cf4-98d5-036c82f872e8/83a2d7f9-24a5-4cf4-98d5-036c82f872e8.json +302 -0
  8. package/.pde/2604011511--83a2d7f9-24a5-4cf4-98d5-036c82f872e8/83a2d7f9-24a5-4cf4-98d5-036c82f872e8.md +149 -0
  9. package/.pde/2604011511--83a2d7f9-24a5-4cf4-98d5-036c82f872e8/AGENTS.md +31 -0
  10. package/.pde/2604011511--83a2d7f9-24a5-4cf4-98d5-036c82f872e8/meta-decomposition-3-children.md +67 -0
  11. package/.pde/2604040129--61f9dd4d-7aa6-45e6-a58b-e480b1aa6737/61f9dd4d-7aa6-45e6-a58b-e480b1aa6737--from-mia-openclaw-workspace.md +125 -0
  12. package/.pde/2604040129--61f9dd4d-7aa6-45e6-a58b-e480b1aa6737/STATUS.md +1 -0
  13. package/.pde/4f02ba94-9f52-422e-9389-b16f9b37f358.json +177 -0
  14. package/.pde/4f02ba94-9f52-422e-9389-b16f9b37f358.md +77 -0
  15. package/.pde/6ad9244d-5340-490f-b76c-c86728b9de52.json +222 -0
  16. package/.pde/6ad9244d-5340-490f-b76c-c86728b9de52.md +99 -0
  17. package/.pde/8b566792-ed15-4606-96f9-2b6f593d7e6b.json +111 -0
  18. package/.pde/8b566792-ed15-4606-96f9-2b6f593d7e6b.md +67 -0
  19. package/.pde/c7f1e74b-05a5-40e2-9f01-4cc48d2528f7.json +349 -0
  20. package/.pde/c7f1e74b-05a5-40e2-9f01-4cc48d2528f7.md +147 -0
  21. package/.pde/dfc00a78-1da0-4c09-8a16-c6982644051b.json +118 -0
  22. package/.pde/dfc00a78-1da0-4c09-8a16-c6982644051b.md +64 -0
  23. package/GUILLAUME.md +8 -0
  24. package/KINSHIP.md +9 -0
  25. package/MIA_CODE_ARCHITECTURE_REPORT.md +718 -0
  26. package/contextual_research/260119-MIA-CODE--98090899-8aff-4e11-9dc3-8b99466d1.md +1101 -0
  27. package/contextual_research/MIA.md +38 -0
  28. package/contextual_research/MIAWAPASCONE.md +59 -0
  29. package/contextual_research/MIETTE.md +38 -0
  30. package/contextual_research/PDE-generalization--caefee82-efb1-4dbb-8733-691b01581464--260130/2504.00218v2.pdf +7483 -12
  31. package/contextual_research/PDE-generalization--caefee82-efb1-4dbb-8733-691b01581464--260130/2505.00212v3.pdf +0 -0
  32. package/contextual_research/PDE-generalization--caefee82-efb1-4dbb-8733-691b01581464--260130/CONTENT.md +1014 -0
  33. package/contextual_research/PDE-generalization--caefee82-efb1-4dbb-8733-691b01581464--260130/DESIGN.gemini.md +242 -0
  34. package/contextual_research/PDE-generalization--caefee82-efb1-4dbb-8733-691b01581464--260130/INDEX.md +45 -0
  35. package/contextual_research/PDE-generalization--caefee82-efb1-4dbb-8733-691b01581464--260130/sources/2504.00218v2.md +2025 -0
  36. package/contextual_research/PDE-generalization--caefee82-efb1-4dbb-8733-691b01581464--260130/sources/2504.00218v2.pdf +7483 -12
  37. package/contextual_research/PDE-generalization--caefee82-efb1-4dbb-8733-691b01581464--260130/sources/2505.00212v3.md +1755 -0
  38. package/contextual_research/PDE-generalization--caefee82-efb1-4dbb-8733-691b01581464--260130/sources/2505.00212v3.pdf +0 -0
  39. package/contextual_research/PDE-generalization--caefee82-efb1-4dbb-8733-691b01581464--260130/sources/footnote_1_12_decomposed_prompting.pdf +0 -0
  40. package/contextual_research/PDE-generalization--caefee82-efb1-4dbb-8733-691b01581464--260130/sources/footnote_1_19_hugginggpt_planning.pdf +0 -0
  41. package/contextual_research/PDE-generalization--caefee82-efb1-4dbb-8733-691b01581464--260130/sources/footnote_1_1_coordination_challenges.md +766 -0
  42. package/contextual_research/PDE-generalization--caefee82-efb1-4dbb-8733-691b01581464--260130/sources/footnote_1_1_coordination_challenges.pdf +3431 -4
  43. package/contextual_research/PDE-generalization--caefee82-efb1-4dbb-8733-691b01581464--260130/sources/footnote_1_28_guardrails_multi_agent.md +260 -0
  44. package/contextual_research/PDE-generalization--caefee82-efb1-4dbb-8733-691b01581464--260130/sources/footnote_1_28_guardrails_multi_agent.pdf +0 -0
  45. package/contextual_research/PDE-generalization--caefee82-efb1-4dbb-8733-691b01581464--260130/sources/footnote_1_2_navigating_complexity.md +558 -0
  46. package/contextual_research/PDE-generalization--caefee82-efb1-4dbb-8733-691b01581464--260130/sources/footnote_1_2_navigating_complexity.pdf +0 -0
  47. package/contextual_research/PDE-generalization--caefee82-efb1-4dbb-8733-691b01581464--260130/sources/footnote_1_34_hierarchical_multi_agent.pdf +0 -0
  48. package/contextual_research/PDE-generalization--caefee82-efb1-4dbb-8733-691b01581464--260130/sources/footnote_1_5_open_intent_extraction.pdf +0 -0
  49. package/contextual_research/PODCAST.md +109 -0
  50. package/contextual_research/langchain-principles-roadmap.md +157 -0
  51. package/contextual_research/persona-to-narrative-character-inquiry_260201.md +50 -0
  52. package/dist/cli.js +35 -11
  53. package/dist/geminiHeadless.js +8 -2
  54. package/dist/index.js +2 -1
  55. package/dist/mcp/miaco-server.js +10 -1
  56. package/dist/mcp/miatel-server.js +10 -1
  57. package/dist/mcp/miawa-server.js +10 -1
  58. package/dist/mcp/utils.d.ts +6 -1
  59. package/dist/mcp/utils.js +24 -3
  60. package/dist/sessionStore.d.ts +8 -2
  61. package/dist/sessionStore.js +39 -3
  62. package/dist/types.d.ts +1 -0
  63. package/miaco/README.md +124 -0
  64. package/miaco/dist/commands/chart.d.ts +6 -0
  65. package/miaco/dist/commands/chart.d.ts.map +1 -0
  66. package/miaco/dist/commands/chart.js +222 -0
  67. package/miaco/dist/commands/chart.js.map +1 -0
  68. package/miaco/dist/commands/decompose.d.ts +6 -0
  69. package/miaco/dist/commands/decompose.d.ts.map +1 -0
  70. package/miaco/dist/commands/decompose.js +98 -0
  71. package/miaco/dist/commands/decompose.js.map +1 -0
  72. package/miaco/dist/commands/schema.d.ts +6 -0
  73. package/miaco/dist/commands/schema.d.ts.map +1 -0
  74. package/miaco/dist/commands/schema.js +66 -0
  75. package/miaco/dist/commands/schema.js.map +1 -0
  76. package/miaco/dist/commands/stc.d.ts +11 -0
  77. package/miaco/dist/commands/stc.d.ts.map +1 -0
  78. package/miaco/dist/commands/stc.js +590 -0
  79. package/miaco/dist/commands/stc.js.map +1 -0
  80. package/miaco/dist/commands/trace.d.ts +6 -0
  81. package/miaco/dist/commands/trace.d.ts.map +1 -0
  82. package/miaco/dist/commands/trace.js +83 -0
  83. package/miaco/dist/commands/trace.js.map +1 -0
  84. package/miaco/dist/commands/validate.d.ts +6 -0
  85. package/miaco/dist/commands/validate.d.ts.map +1 -0
  86. package/miaco/dist/commands/validate.js +58 -0
  87. package/miaco/dist/commands/validate.js.map +1 -0
  88. package/miaco/dist/decompose.d.ts +93 -0
  89. package/miaco/dist/decompose.d.ts.map +1 -0
  90. package/miaco/dist/decompose.js +562 -0
  91. package/miaco/dist/decompose.js.map +1 -0
  92. package/miaco/dist/index.d.ts +18 -0
  93. package/miaco/dist/index.d.ts.map +1 -0
  94. package/miaco/dist/index.js +83 -0
  95. package/miaco/dist/index.js.map +1 -0
  96. package/miaco/dist/storage.d.ts +60 -0
  97. package/miaco/dist/storage.d.ts.map +1 -0
  98. package/miaco/dist/storage.js +100 -0
  99. package/miaco/dist/storage.js.map +1 -0
  100. package/miaco/package-lock.json +4103 -0
  101. package/miaco/package.json +40 -0
  102. package/miaco/tsconfig.json +18 -0
  103. package/miaco/version-patch-commit-and-publish.sh +1 -0
  104. package/miatel/MISSION_251231.md +3 -0
  105. package/miatel/README.md +107 -0
  106. package/miatel/dist/commands/analyze.d.ts +6 -0
  107. package/miatel/dist/commands/analyze.d.ts.map +1 -0
  108. package/miatel/dist/commands/analyze.js +100 -0
  109. package/miatel/dist/commands/analyze.js.map +1 -0
  110. package/miatel/dist/commands/arc.d.ts +6 -0
  111. package/miatel/dist/commands/arc.d.ts.map +1 -0
  112. package/miatel/dist/commands/arc.js +71 -0
  113. package/miatel/dist/commands/arc.js.map +1 -0
  114. package/miatel/dist/commands/beat.d.ts +6 -0
  115. package/miatel/dist/commands/beat.d.ts.map +1 -0
  116. package/miatel/dist/commands/beat.js +165 -0
  117. package/miatel/dist/commands/beat.js.map +1 -0
  118. package/miatel/dist/commands/theme.d.ts +6 -0
  119. package/miatel/dist/commands/theme.d.ts.map +1 -0
  120. package/miatel/dist/commands/theme.js +54 -0
  121. package/miatel/dist/commands/theme.js.map +1 -0
  122. package/miatel/dist/index.d.ts +18 -0
  123. package/miatel/dist/index.d.ts.map +1 -0
  124. package/miatel/dist/index.js +80 -0
  125. package/miatel/dist/index.js.map +1 -0
  126. package/miatel/dist/storage.d.ts +55 -0
  127. package/miatel/dist/storage.d.ts.map +1 -0
  128. package/miatel/dist/storage.js +100 -0
  129. package/miatel/dist/storage.js.map +1 -0
  130. package/miatel/package-lock.json +4103 -0
  131. package/miatel/package.json +35 -0
  132. package/miatel/src/commands/analyze.ts +109 -0
  133. package/miatel/src/commands/arc.ts +78 -0
  134. package/miatel/src/commands/beat.ts +176 -0
  135. package/miatel/src/commands/theme.ts +60 -0
  136. package/miatel/src/index.ts +94 -0
  137. package/miatel/src/storage.ts +156 -0
  138. package/miatel/tsconfig.json +18 -0
  139. package/miawa/MISSION_251231.md +144 -0
  140. package/miawa/README.md +133 -0
  141. package/miawa/dist/commands/beat.d.ts +6 -0
  142. package/miawa/dist/commands/beat.d.ts.map +1 -0
  143. package/miawa/dist/commands/beat.js +69 -0
  144. package/miawa/dist/commands/beat.js.map +1 -0
  145. package/miawa/dist/commands/ceremony.d.ts +6 -0
  146. package/miawa/dist/commands/ceremony.d.ts.map +1 -0
  147. package/miawa/dist/commands/ceremony.js +239 -0
  148. package/miawa/dist/commands/ceremony.js.map +1 -0
  149. package/miawa/dist/commands/circle.d.ts +6 -0
  150. package/miawa/dist/commands/circle.d.ts.map +1 -0
  151. package/miawa/dist/commands/circle.js +75 -0
  152. package/miawa/dist/commands/circle.js.map +1 -0
  153. package/miawa/dist/commands/eva.d.ts +6 -0
  154. package/miawa/dist/commands/eva.d.ts.map +1 -0
  155. package/miawa/dist/commands/eva.js +73 -0
  156. package/miawa/dist/commands/eva.js.map +1 -0
  157. package/miawa/dist/commands/wound.d.ts +6 -0
  158. package/miawa/dist/commands/wound.d.ts.map +1 -0
  159. package/miawa/dist/commands/wound.js +74 -0
  160. package/miawa/dist/commands/wound.js.map +1 -0
  161. package/miawa/dist/index.d.ts +19 -0
  162. package/miawa/dist/index.d.ts.map +1 -0
  163. package/miawa/dist/index.js +91 -0
  164. package/miawa/dist/index.js.map +1 -0
  165. package/miawa/dist/storage.d.ts +73 -0
  166. package/miawa/dist/storage.d.ts.map +1 -0
  167. package/miawa/dist/storage.js +100 -0
  168. package/miawa/dist/storage.js.map +1 -0
  169. package/miawa/package-lock.json +4103 -0
  170. package/miawa/package.json +36 -0
  171. package/miawa/src/commands/beat.ts +74 -0
  172. package/miawa/src/commands/ceremony.ts +256 -0
  173. package/miawa/src/commands/circle.ts +83 -0
  174. package/miawa/src/commands/eva.ts +84 -0
  175. package/miawa/src/commands/wound.ts +79 -0
  176. package/miawa/src/index.ts +108 -0
  177. package/miawa/src/storage.ts +179 -0
  178. package/miawa/tsconfig.json +18 -0
  179. package/package.json +7 -5
  180. package/references/acp/CLAUDE.md +7 -0
  181. package/references/acp/agent-plan.md +84 -0
  182. package/references/acp/clients.md +31 -0
  183. package/references/acp/extensibility.md +137 -0
  184. package/references/acp/initialization.md +225 -0
  185. package/references/acp/prompt-turn.md +321 -0
  186. package/references/acp/proxy-chains.md +562 -0
  187. package/references/acp/schema.md +3171 -0
  188. package/references/acp/session-list.md +334 -0
  189. package/references/acp/session-modes.md +170 -0
  190. package/references/acp/slash-commands.md +99 -0
  191. package/references/acp/terminals.md +281 -0
  192. package/references/acp/tool-calls.md +311 -0
  193. package/references/acp/typescript.md +29 -0
  194. package/references/claude/agent-teams.md +399 -0
  195. package/references/claude/chrome.md +231 -0
  196. package/references/claude/headless.md +158 -0
  197. package/references/claude/hooks-guide.md +708 -0
  198. package/references/claude/output-styles.md +112 -0
  199. package/references/claude/plugins.md +432 -0
  200. package/references/claude/skills.md +693 -0
  201. package/references/claude/sub-agents.md +816 -0
  202. package/references/copilot/acp/agents.md +32 -0
  203. package/references/copilot/acp/architecture.md +37 -0
  204. package/references/copilot/acp/clients.md +31 -0
  205. package/references/copilot/acp/introduction.md +42 -0
  206. package/references/copilot/acp/registry.md +339 -0
  207. package/references/copilot/acp-server.md +117 -0
  208. package/references/copilot/create-copilot-instructions.md +840 -0
  209. package/references/langchain/llms.txt +833 -0
  210. package/references/langchain/python/agents.md +677 -0
  211. package/references/langchain/python/context-engineering.md +1195 -0
  212. package/references/langchain/python/human-in-the-loop.md +326 -0
  213. package/references/langchain/python/long-term-memory.md +168 -0
  214. package/references/langchain/python/mcp.md +949 -0
  215. package/references/langchain/python/multi-agents/custom-workflow.md +187 -0
  216. package/references/langchain/python/multi-agents/handoffs.md +436 -0
  217. package/references/langchain/python/multi-agents/overview.md +295 -0
  218. package/references/langchain/python/multi-agents/router.md +150 -0
  219. package/references/langchain/python/multi-agents/skills.md +92 -0
  220. package/references/langchain/python/multi-agents/subagents.md +486 -0
  221. package/references/langchain/python/retrieval.md +320 -0
  222. package/references/langchain/python/runtime.md +141 -0
  223. package/references/langchain/python/short-term-memory.md +658 -0
  224. package/references/langchain/python/structured-output.md +712 -0
  225. package/references/langfuse/llms.txt +148 -0
  226. package/references/langgraph/javascript/llms.txt +275 -0
  227. package/references/skills/home.md +259 -0
  228. package/references/skills/integrate-skills.md +103 -0
  229. package/references/skills/specification.md +254 -0
  230. package/references/skills/what-are-skills.md +74 -0
  231. package/rispecs/README.md +164 -0
  232. package/rispecs/_sync_/miadi-code/SPEC.md +313 -0
  233. package/rispecs/_sync_/miadi-code/STATUS.md +177 -0
  234. package/rispecs/_sync_/miadi-code/dashboard/SPEC.md +465 -0
  235. package/rispecs/_sync_/miadi-code/dashboard/STATUS.md +212 -0
  236. package/rispecs/_sync_/miadi-code/multiline-input/SPEC.md +232 -0
  237. package/rispecs/_sync_/miadi-code/multiline-input/STATUS.md +108 -0
  238. package/rispecs/_sync_/miadi-code/pde/SPEC.md +253 -0
  239. package/rispecs/_sync_/miadi-code/pde/STATUS.md +56 -0
  240. package/rispecs/_sync_/miadi-code/stc/SPEC.md +397 -0
  241. package/rispecs/_sync_/miadi-code/stc/STATUS.md +70 -0
  242. package/rispecs/ava-langstack/inquiry-routing-upgrade.spec.md +119 -0
  243. package/rispecs/borrowed_from_opencode/001-client-server-architecture.rispec.md +98 -0
  244. package/rispecs/borrowed_from_opencode/002-event-bus-system.rispec.md +125 -0
  245. package/rispecs/borrowed_from_opencode/003-instance-state-pattern.rispec.md +136 -0
  246. package/rispecs/borrowed_from_opencode/004-namespace-module-pattern.rispec.md +151 -0
  247. package/rispecs/borrowed_from_opencode/005-zod-schema-validation.rispec.md +139 -0
  248. package/rispecs/borrowed_from_opencode/006-named-error-system.rispec.md +155 -0
  249. package/rispecs/borrowed_from_opencode/007-structured-logging.rispec.md +138 -0
  250. package/rispecs/borrowed_from_opencode/008-lazy-initialization.rispec.md +127 -0
  251. package/rispecs/borrowed_from_opencode/009-multi-agent-system.rispec.md +97 -0
  252. package/rispecs/borrowed_from_opencode/010-agent-definition-config.rispec.md +135 -0
  253. package/rispecs/borrowed_from_opencode/011-agent-permission-rulesets.rispec.md +151 -0
  254. package/rispecs/borrowed_from_opencode/012-agent-prompt-templates.rispec.md +141 -0
  255. package/rispecs/borrowed_from_opencode/013-agent-generation.rispec.md +142 -0
  256. package/rispecs/borrowed_from_opencode/014-plan-build-mode-toggle.rispec.md +155 -0
  257. package/rispecs/borrowed_from_opencode/015-subagent-task-delegation.rispec.md +146 -0
  258. package/rispecs/borrowed_from_opencode/016-agent-model-selection.rispec.md +151 -0
  259. package/rispecs/borrowed_from_opencode/017-compaction-agent.rispec.md +150 -0
  260. package/rispecs/borrowed_from_opencode/018-session-persistence.rispec.md +125 -0
  261. package/rispecs/borrowed_from_opencode/019-session-compaction.rispec.md +132 -0
  262. package/rispecs/borrowed_from_opencode/020-session-forking.rispec.md +134 -0
  263. package/rispecs/borrowed_from_opencode/021-session-revert-snapshot.rispec.md +135 -0
  264. package/rispecs/borrowed_from_opencode/022-session-sharing.rispec.md +165 -0
  265. package/rispecs/borrowed_from_opencode/023-session-summary-diffs.rispec.md +165 -0
  266. package/rispecs/borrowed_from_opencode/024-child-sessions.rispec.md +164 -0
  267. package/rispecs/borrowed_from_opencode/025-session-title-generation.rispec.md +162 -0
  268. package/rispecs/borrowed_from_opencode/026-message-parts-model.rispec.md +201 -0
  269. package/rispecs/borrowed_from_opencode/027-streaming-message-deltas.rispec.md +212 -0
  270. package/rispecs/borrowed_from_opencode/028-multi-provider-architecture.rispec.md +184 -0
  271. package/rispecs/borrowed_from_opencode/029-provider-authentication.rispec.md +225 -0
  272. package/rispecs/borrowed_from_opencode/030-model-registry.rispec.md +222 -0
  273. package/rispecs/borrowed_from_opencode/031-cost-tracking.rispec.md +243 -0
  274. package/rispecs/borrowed_from_opencode/032-provider-transform-pipeline.rispec.md +282 -0
  275. package/rispecs/borrowed_from_opencode/033-provider-sdk-abstraction.rispec.md +338 -0
  276. package/rispecs/borrowed_from_opencode/034-tool-registry.rispec.md +110 -0
  277. package/rispecs/borrowed_from_opencode/035-tool-context-injection.rispec.md +155 -0
  278. package/rispecs/borrowed_from_opencode/036-tool-output-truncation.rispec.md +138 -0
  279. package/rispecs/borrowed_from_opencode/037-batch-tool.rispec.md +129 -0
  280. package/rispecs/borrowed_from_opencode/038-multi-edit-tool.rispec.md +167 -0
  281. package/rispecs/borrowed_from_opencode/039-apply-patch-tool.rispec.md +161 -0
  282. package/rispecs/borrowed_from_opencode/040-code-search-tool.rispec.md +143 -0
  283. package/rispecs/borrowed_from_opencode/041-web-fetch-tool.rispec.md +131 -0
  284. package/rispecs/borrowed_from_opencode/042-web-search-tool.rispec.md +159 -0
  285. package/rispecs/borrowed_from_opencode/043-todo-tool.rispec.md +156 -0
  286. package/rispecs/borrowed_from_opencode/044-plan-mode-tool.rispec.md +139 -0
  287. package/rispecs/borrowed_from_opencode/045-task-tool.rispec.md +146 -0
  288. package/rispecs/borrowed_from_opencode/046-question-tool.rispec.md +170 -0
  289. package/rispecs/borrowed_from_opencode/047-external-directory-tool.rispec.md +166 -0
  290. package/rispecs/borrowed_from_opencode/048-file-read-write-tools.rispec.md +205 -0
  291. package/rispecs/borrowed_from_opencode/049-lsp-server-management.rispec.md +104 -0
  292. package/rispecs/borrowed_from_opencode/050-lsp-hover-completion.rispec.md +102 -0
  293. package/rispecs/borrowed_from_opencode/051-lsp-diagnostics.rispec.md +86 -0
  294. package/rispecs/borrowed_from_opencode/052-lsp-root-detection.rispec.md +109 -0
  295. package/rispecs/borrowed_from_opencode/053-remote-mcp-servers.rispec.md +119 -0
  296. package/rispecs/borrowed_from_opencode/054-mcp-oauth-flow.rispec.md +107 -0
  297. package/rispecs/borrowed_from_opencode/055-mcp-tool-conversion.rispec.md +118 -0
  298. package/rispecs/borrowed_from_opencode/056-mcp-connection-monitoring.rispec.md +106 -0
  299. package/rispecs/borrowed_from_opencode/057-local-mcp-servers.rispec.md +116 -0
  300. package/rispecs/borrowed_from_opencode/058-rich-tui.rispec.md +108 -0
  301. package/rispecs/borrowed_from_opencode/059-streaming-display.rispec.md +116 -0
  302. package/rispecs/borrowed_from_opencode/060-permission-prompts.rispec.md +130 -0
  303. package/rispecs/borrowed_from_opencode/061-session-navigation.rispec.md +155 -0
  304. package/rispecs/borrowed_from_opencode/062-syntax-highlighting.rispec.md +151 -0
  305. package/rispecs/borrowed_from_opencode/063-keybinding-system.rispec.md +181 -0
  306. package/rispecs/borrowed_from_opencode/064-multi-level-config.rispec.md +155 -0
  307. package/rispecs/borrowed_from_opencode/065-jsonc-config.rispec.md +190 -0
  308. package/rispecs/borrowed_from_opencode/066-config-env-variables.rispec.md +153 -0
  309. package/rispecs/borrowed_from_opencode/067-config-deep-merging.rispec.md +178 -0
  310. package/rispecs/borrowed_from_opencode/068-remote-org-config.rispec.md +183 -0
  311. package/rispecs/borrowed_from_opencode/069-config-markdown-frontmatter.rispec.md +206 -0
  312. package/rispecs/borrowed_from_opencode/070-managed-config-directory.rispec.md +232 -0
  313. package/rispecs/borrowed_from_opencode/071-plugin-architecture.rispec.md +104 -0
  314. package/rispecs/borrowed_from_opencode/072-plugin-hooks.rispec.md +123 -0
  315. package/rispecs/borrowed_from_opencode/073-plugin-auto-install.rispec.md +115 -0
  316. package/rispecs/borrowed_from_opencode/074-permission-system.rispec.md +133 -0
  317. package/rispecs/borrowed_from_opencode/075-git-worktree-management.rispec.md +126 -0
  318. package/rispecs/borrowed_from_opencode/076-snapshot-system.rispec.md +124 -0
  319. package/rispecs/borrowed_from_opencode/077-snapshot-diff.rispec.md +117 -0
  320. package/rispecs/borrowed_from_opencode/078-snapshot-restore.rispec.md +128 -0
  321. package/rispecs/borrowed_from_opencode/079-worktree-branch-naming.rispec.md +122 -0
  322. package/rispecs/borrowed_from_opencode/080-sqlite-storage.rispec.md +134 -0
  323. package/rispecs/borrowed_from_opencode/081-database-migrations.rispec.md +148 -0
  324. package/rispecs/borrowed_from_opencode/082-database-transactions.rispec.md +138 -0
  325. package/rispecs/borrowed_from_opencode/083-deferred-effects.rispec.md +148 -0
  326. package/rispecs/borrowed_from_opencode/084-permission-rules.rispec.md +123 -0
  327. package/rispecs/borrowed_from_opencode/085-permission-glob-patterns.rispec.md +113 -0
  328. package/rispecs/borrowed_from_opencode/086-permission-merging.rispec.md +134 -0
  329. package/rispecs/borrowed_from_opencode/087-permission-modes.rispec.md +145 -0
  330. package/rispecs/borrowed_from_opencode/088-http-api-server.rispec.md +165 -0
  331. package/rispecs/borrowed_from_opencode/089-openapi-spec-generation.rispec.md +164 -0
  332. package/rispecs/borrowed_from_opencode/090-websocket-support.rispec.md +136 -0
  333. package/rispecs/borrowed_from_opencode/091-sse-streaming.rispec.md +168 -0
  334. package/rispecs/borrowed_from_opencode/092-mdns-discovery.rispec.md +145 -0
  335. package/rispecs/borrowed_from_opencode/093-javascript-sdk.rispec.md +200 -0
  336. package/rispecs/borrowed_from_opencode/094-skill-system.rispec.md +187 -0
  337. package/rispecs/borrowed_from_opencode/095-skill-discovery.rispec.md +182 -0
  338. package/rispecs/borrowed_from_opencode/096-desktop-remote-driving.rispec.md +175 -0
  339. package/rispecs/borrowed_from_opencode/INDEX.md +255 -0
  340. package/rispecs/core.rispecs.md +261 -0
  341. package/rispecs/engines.rispecs.md +241 -0
  342. package/rispecs/formatting.rispecs.md +252 -0
  343. package/rispecs/living-specifications.rispecs.md +361 -0
  344. package/rispecs/mcp.rispecs.md +197 -0
  345. package/rispecs/pde.rispecs.md +399 -0
  346. package/rispecs/pi-mono-envisionning/ENVISIONING.md +366 -0
  347. package/rispecs/pi-mono-envisionning/storytelling-horizon.rispecs.md +76 -0
  348. package/rispecs/pi-mono-envisionning/widget.rispecs.md +2 -0
  349. package/rispecs/relation-to-mcp-structural-thinking.kin.md +72 -0
  350. package/rispecs/research-for-better-framework/CLAUDE.md +7 -0
  351. package/rispecs/research-for-better-framework/survey-pi-openclaw-opencode-openhands.md +210 -0
  352. package/rispecs/session.rispecs.md +277 -0
  353. package/rispecs/stc.rispecs.md +138 -0
  354. package/rispecs/unifier.rispecs.md +317 -0
  355. package/scripts/LAUNCH--mcp-mia-code--testing--2603141315--ac705a66-2c15-4a1c-a26d-9491018c5ba8.sh +2 -0
  356. package/scripts/RESUME--mia-code--mcps--260313--ac705a66-2c15-4a1c-a26d-9491018c5ba8.sh +1 -0
  357. package/scripts/install-widget-in-home-pi-agent-extensions.sh +4 -0
  358. package/scripts/sample-decompose--2604011535-prompt.sh +1 -0
  359. package/skills/deep-search/AGENTS.md +17 -0
  360. package/skills/deep-search/SKILL.md +281 -0
  361. package/skills/deep-search/agent-templates.md +224 -0
  362. package/skills/deep-search/orchestration-patterns.md +95 -0
  363. package/skills/miaco-pde-inquiry-routing-deep-search/AGENTS.md +13 -0
  364. package/skills/miaco-pde-inquiry-routing-deep-search/SKILL.md +136 -0
  365. package/skills/miaco-pde-inquiry-routing-internal-external-relationship/AGENTS.md +4 -0
  366. package/skills/miaco-pde-inquiry-routing-internal-external-relationship/SKILL.md +157 -0
  367. package/skills/miaco-pde-inquiry-routing-local-qmd/AGENTS.md +42 -0
  368. package/skills/miaco-pde-inquiry-routing-local-qmd/SKILL.md +135 -0
  369. package/skills/qmd/AGENTS.md +3 -0
  370. package/skills/qmd/SKILL.md +144 -0
  371. package/skills/qmd/references/mcp-setup.md +102 -0
  372. package/skills/rise-pde-inquiry-session-multi-agents-v3/SKILL.md +234 -0
  373. package/skills/rise-pde-inquiry-session-multi-agents-v3/agent-templates.md +436 -0
  374. package/skills/rise-pde-inquiry-session-multi-agents-v3/orchestration-patterns.md +197 -0
  375. package/skills/rise-pde-inquiry-session-multi-agents-v3/references/ceremonial-technology.md +102 -0
  376. package/skills/rise-pde-inquiry-session-multi-agents-v3/references/creative-orientation.md +99 -0
  377. package/skills/rise-pde-inquiry-session-multi-agents-v3/references/prompt-decomposition.md +73 -0
  378. package/skills/rise-pde-inquiry-session-multi-agents-v3/references/rise-framework.md +74 -0
  379. package/skills/rise-pde-inquiry-session-multi-agents-v3/references/structural-tension.md +82 -0
  380. package/src/cli.ts +35 -11
  381. package/src/geminiHeadless.ts +7 -2
  382. package/src/index.ts +2 -1
  383. package/src/mcp/miaco-server.ts +13 -1
  384. package/src/mcp/miatel-server.ts +13 -1
  385. package/src/mcp/miawa-server.ts +13 -1
  386. package/src/mcp/utils.ts +41 -8
  387. package/src/sessionStore.ts +44 -4
  388. package/src/types.ts +2 -1
  389. package/widget/mia-ceremony/README.md +36 -0
  390. package/widget/mia-ceremony/index.ts +143 -0
  391. package/widget/mia-interceptor/README.md +39 -0
  392. package/widget/mia-interceptor/index.ts +221 -0
  393. package/widget/mia-tools/README.md +37 -0
  394. package/widget/mia-tools/index.ts +569 -0
  395. package/widget/miette-echo/README.md +44 -0
  396. package/widget/miette-echo/index.ts +164 -0
  397. package/.claude/settings.local.json +0 -9
  398. package/.hch/issue_.env +0 -4
  399. package/.hch/issue_add__2601211715.json +0 -77
  400. package/.hch/issue_add__2601211715.md +0 -4
  401. package/.hch/issue_add__2602242020.json +0 -78
  402. package/.hch/issue_add__2602242020.md +0 -7
  403. package/.hch/issues.json +0 -2312
  404. package/.hch/issues.md +0 -30
  405. package/WS__mia-code__260214__IAIP_PDE.code-workspace +0 -29
  406. package/WS__mia-code__src332__260122.code-workspace +0 -23
  407. package/samples/copilot/session-state/be76abaa-a27f-4725-b2a9-22fb45f7e0f7/checkpoints/index.md +0 -6
  408. package/samples/copilot/session-state/be76abaa-a27f-4725-b2a9-22fb45f7e0f7/events.jsonl +0 -213
  409. package/samples/copilot/session-state/be76abaa-a27f-4725-b2a9-22fb45f7e0f7/plan.md +0 -243
  410. package/samples/copilot/session-state/be76abaa-a27f-4725-b2a9-22fb45f7e0f7/workspace.yaml +0 -5
@@ -0,0 +1,1195 @@
+ > ## Documentation Index
+ > Fetch the complete documentation index at: https://docs.langchain.com/llms.txt
+ > Use this file to discover all available pages before exploring further.
+
+ # Context engineering in agents
+
+ ## Overview
+
+ The hard part of building agents (or any LLM application) is making them reliable enough. While they may work for a prototype, they often fail in real-world use cases.
+
+ ### Why do agents fail?
+
+ When agents fail, it's usually because the LLM call inside the agent took the wrong action or didn't do what we expected. LLMs fail for one of two reasons:
+
+ 1. The underlying LLM is not capable enough
+ 2. The "right" context was not passed to the LLM
+
+ More often than not, it's the second reason that makes agents unreliable.
+
+ **Context engineering** is providing the right information and tools in the right format so the LLM can accomplish a task. It is the number one job of AI engineers, and a lack of the right context is the main blocker for more reliable agents. LangChain's agent abstractions are uniquely designed to facilitate context engineering.
+
+ <Tip>
+ New to context engineering? Start with the [conceptual overview](/oss/python/concepts/context) to understand the different types of context and when to use them.
+ </Tip>
+
+ ### The agent loop
+
+ A typical agent loop consists of two main steps:
+
+ 1. **Model call** - calls the LLM with a prompt and available tools, returns either a response or a request to execute tools
+ 2. **Tool execution** - executes the tools that the LLM requested, returns tool results
+
+ <div style={{ display: "flex", justifyContent: "center" }}>
+ <img src="https://mintcdn.com/langchain-5e9cc07a/Tazq8zGc0yYUYrDl/oss/images/core_agent_loop.png" alt="Core agent loop diagram" className="rounded-lg" width="300" height="268" />
+ </div>
+
+ This loop continues until the LLM decides to finish.
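The two-step loop described in the diffed doc can be sketched in plain, framework-agnostic Python. Note this is an illustrative stub, not the LangChain API: `run_agent`, `fake_model`, and the message/tool-call dictionary shapes are all hypothetical.

```python
def run_agent(call_model, tools, messages):
    """Minimal agent loop: call the model, execute requested tools, repeat."""
    while True:
        response = call_model(messages)       # step 1: model call
        if not response["tool_calls"]:        # model chose to finish
            return response["content"]
        for call in response["tool_calls"]:   # step 2: tool execution
            result = tools[call["name"]](**call["args"])
            messages.append({"role": "tool", "name": call["name"], "content": result})

# A stub model: it requests one tool call, then finishes once it sees the result.
def fake_model(messages):
    if any(m["role"] == "tool" for m in messages):
        return {"content": "4", "tool_calls": []}
    return {"content": "", "tool_calls": [{"name": "add", "args": {"a": 2, "b": 2}}]}

answer = run_agent(fake_model, {"add": lambda a, b: str(a + b)},
                   [{"role": "user", "content": "2+2?"}])
print(answer)  # prints "4"
```

The loop terminates only when the model returns no tool calls, which is exactly the "until the LLM decides to finish" condition above.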
38
+
39
+ ### What you can control
40
+
41
+ To build reliable agents, you need to control what happens at each step of the agent loop, as well as what happens between steps.
42
+
43
+ | Context Type | What You Control | Transient or Persistent |
44
+ | --------------------------------------------- | ------------------------------------------------------------------------------------ | ----------------------- |
45
+ | **[Model Context](#model-context)** | What goes into model calls (instructions, message history, tools, response format) | Transient |
46
+ | **[Tool Context](#tool-context)** | What tools can access and produce (reads/writes to state, store, runtime context) | Persistent |
47
+ | **[Life-cycle Context](#life-cycle-context)** | What happens between model and tool calls (summarization, guardrails, logging, etc.) | Persistent |
48
+
49
+ <CardGroup>
50
+ <Card title="Transient context" icon="bolt" iconType="duotone">
51
+ What the LLM sees for a single call. You can modify messages, tools, or prompts without changing what's saved in state.
52
+ </Card>
53
+
54
+ <Card title="Persistent context" icon="database" iconType="duotone">
55
+ What gets saved in state across turns. Life-cycle hooks and tool writes modify this permanently.
56
+ </Card>
57
+ </CardGroup>
58
+
59
+ ### Data sources
60
+
61
+ Throughout this process, your agent accesses (reads / writes) different sources of data:
62
+
63
+ | Data Source | Also Known As | Scope | Examples |
64
+ | ------------------- | -------------------- | ------------------- | -------------------------------------------------------------------------- |
65
+ | **Runtime Context** | Static configuration | Conversation-scoped | User ID, API keys, database connections, permissions, environment settings |
66
+ | **State** | Short-term memory | Conversation-scoped | Current messages, uploaded files, authentication status, tool results |
67
+ | **Store** | Long-term memory | Cross-conversation | User preferences, extracted insights, memories, historical data |
68
+
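As a rough mental model of the scopes in the table above (a plain-Python sketch with illustrative dicts, not LangChain's actual types): runtime context is fixed configuration for a conversation, state is keyed per conversation, and the store is keyed per user and outlives any single conversation.

```python
# Fixed per conversation, read-only during the run
runtime_context = {"user_id": "u1", "permissions": ["read"]}

# One entry per conversation (short-term memory)
state = {"conversation-1": {"messages": [], "authenticated": False}}

# Keyed by user, shared across conversations (long-term memory)
store = {("preferences", "u1"): {"communication_style": "concise"}}

# A new conversation starts with fresh state but sees the same store entry
state["conversation-2"] = {"messages": [], "authenticated": False}
assert store[("preferences", "u1")]["communication_style"] == "concise"
```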
69
+ ### How it works
70
+
71
+ LangChain [middleware](/oss/python/langchain/middleware) is the mechanism under the hood that makes context engineering practical for developers using LangChain.
72
+
73
+ Middleware allows you to hook into any step in the agent lifecycle and:
74
+
75
+ * Update context
76
+ * Jump to a different step in the agent lifecycle
77
+
78
+ Throughout this guide, you'll see frequent use of the middleware API as a means to the context engineering end.
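In spirit, a wrap-style middleware is just a function that receives the request plus a handler for the next step, and may update the request before passing it along. This hand-rolled sketch shows the pattern only; `call_model` and `trim_history` are illustrative names, not the actual middleware API.

```python
def call_model(request: dict) -> str:
    """Stand-in for the model-call step."""
    return f"responding to {len(request['messages'])} messages"

def trim_history(request: dict, handler) -> str:
    """Middleware: update context (keep the last 2 messages) before the model call."""
    request = {**request, "messages": request["messages"][-2:]}
    return handler(request)

request = {"messages": ["m1", "m2", "m3", "m4"]}
print(trim_history(request, call_model))  # → responding to 2 messages
```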
79
+
80
+ ## Model context
81
+
82
+ Control what goes into each model call - instructions, available tools, which model to use, and output format. These decisions directly impact reliability and cost.
83
+
84
+ <CardGroup cols={2}>
85
+ <Card title="System Prompt" icon="message-lines" href="#system-prompt">
86
+ Base instructions from the developer to the LLM.
87
+ </Card>
88
+
89
+ <Card title="Messages" icon="comments" href="#messages">
90
+ The full list of messages (conversation history) sent to the LLM.
91
+ </Card>
92
+
93
+ <Card title="Tools" icon="wrench" href="#tools">
94
+ Utilities the agent can use to take actions.
95
+ </Card>
96
+
97
+ <Card title="Model" icon="brain-circuit" href="#model">
98
+ The actual model (including configuration) to be called.
99
+ </Card>
100
+
101
+ <Card title="Response Format" icon="brackets-curly" href="#response-format">
102
+ Schema specification for the model's final response.
103
+ </Card>
104
+ </CardGroup>
105
+
106
+ All of these types of model context can draw from **state** (short-term memory), **store** (long-term memory), or **runtime context** (static configuration).
107
+
108
+ ### System Prompt
109
+
110
+ The system prompt sets the LLM's behavior and capabilities. Different users, contexts, or conversation stages need different instructions. Successful agents draw on memories, preferences, and configuration to provide the right instructions for the current state of the conversation.
111
+
112
+ <Tabs>
113
+ <Tab title="State">
114
+ Access message count or conversation context from state:
115
+
116
+ ```python theme={null}
117
+ from langchain.agents import create_agent
118
+ from langchain.agents.middleware import dynamic_prompt, ModelRequest
119
+
120
+ @dynamic_prompt
121
+ def state_aware_prompt(request: ModelRequest) -> str:
122
+ # request.messages is a shortcut for request.state["messages"]
123
+ message_count = len(request.messages)
124
+
125
+ base = "You are a helpful assistant."
126
+
127
+ if message_count > 10:
128
+ base += "\nThis is a long conversation - be extra concise."
129
+
130
+ return base
131
+
132
+ agent = create_agent(
133
+ model="gpt-4.1",
134
+ tools=[...],
135
+ middleware=[state_aware_prompt]
136
+ )
137
+ ```
138
+ </Tab>
139
+
140
+ <Tab title="Store">
141
+ Access user preferences from long-term memory:
142
+
143
+ ```python theme={null}
144
+ from dataclasses import dataclass
145
+ from langchain.agents import create_agent
146
+ from langchain.agents.middleware import dynamic_prompt, ModelRequest
147
+ from langgraph.store.memory import InMemoryStore
148
+
149
+ @dataclass
150
+ class Context:
151
+ user_id: str
152
+
153
+ @dynamic_prompt
154
+ def store_aware_prompt(request: ModelRequest) -> str:
155
+ user_id = request.runtime.context.user_id
156
+
157
+ # Read from Store: get user preferences
158
+ store = request.runtime.store
159
+ user_prefs = store.get(("preferences",), user_id)
160
+
161
+ base = "You are a helpful assistant."
162
+
163
+ if user_prefs:
164
+ style = user_prefs.value.get("communication_style", "balanced")
165
+ base += f"\nUser prefers {style} responses."
166
+
167
+ return base
168
+
169
+ agent = create_agent(
170
+ model="gpt-4.1",
171
+ tools=[...],
172
+ middleware=[store_aware_prompt],
173
+ context_schema=Context,
174
+ store=InMemoryStore()
175
+ )
176
+ ```
177
+ </Tab>
178
+
179
+ <Tab title="Runtime Context">
180
+ Access user ID or configuration from Runtime Context:
181
+
182
+ ```python theme={null}
183
+ from dataclasses import dataclass
184
+ from langchain.agents import create_agent
185
+ from langchain.agents.middleware import dynamic_prompt, ModelRequest
186
+
187
+ @dataclass
188
+ class Context:
189
+ user_role: str
190
+ deployment_env: str
191
+
192
+ @dynamic_prompt
193
+ def context_aware_prompt(request: ModelRequest) -> str:
194
+ # Read from Runtime Context: user role and environment
195
+ user_role = request.runtime.context.user_role
196
+ env = request.runtime.context.deployment_env
197
+
198
+ base = "You are a helpful assistant."
199
+
200
+ if user_role == "admin":
201
+ base += "\nYou have admin access. You can perform all operations."
202
+ elif user_role == "viewer":
203
+ base += "\nYou have read-only access. Guide users to read operations only."
204
+
205
+ if env == "production":
206
+ base += "\nBe extra careful with any data modifications."
207
+
208
+ return base
209
+
210
+ agent = create_agent(
211
+ model="gpt-4.1",
212
+ tools=[...],
213
+ middleware=[context_aware_prompt],
214
+ context_schema=Context
215
+ )
216
+ ```
217
+ </Tab>
218
+ </Tabs>
219
+
220
+ ### Messages
221
+
222
+ Messages make up the prompt that is sent to the LLM.
223
+ It's critical to manage the content of messages to ensure that the LLM has the right information to respond well.
224
+
225
+ <Tabs>
226
+ <Tab title="State">
227
+ Inject uploaded file context from State when relevant to current query:
228
+
229
+ ```python theme={null}
230
+ from langchain.agents import create_agent
231
+ from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse
232
+ from typing import Callable
233
+
234
+ @wrap_model_call
235
+ def inject_file_context(
236
+ request: ModelRequest,
237
+ handler: Callable[[ModelRequest], ModelResponse]
238
+ ) -> ModelResponse:
239
+ """Inject context about files user has uploaded this session."""
240
+ # Read from State: get uploaded files metadata
241
+ uploaded_files = request.state.get("uploaded_files", []) # [!code highlight]
242
+
243
+ if uploaded_files:
244
+ # Build context about available files
245
+ file_descriptions = []
246
+ for file in uploaded_files:
247
+ file_descriptions.append(
248
+ f"- {file['name']} ({file['type']}): {file['summary']}"
249
+ )
250
+
251
+ file_context = f"""Files you have access to in this conversation:
252
+ {chr(10).join(file_descriptions)}
253
+
254
+ Reference these files when answering questions."""
255
+
256
+ # Append at end - models pay more attention to final messages
257
+ messages = [ # [!code highlight]
258
+ *request.messages,
259
+ {"role": "user", "content": file_context},
260
+ ]
261
+ request = request.override(messages=messages) # [!code highlight]
262
+
263
+ return handler(request)
264
+
265
+ agent = create_agent(
266
+ model="gpt-4.1",
267
+ tools=[...],
268
+ middleware=[inject_file_context]
269
+ )
270
+ ```
271
+ </Tab>
272
+
273
+ <Tab title="Store">
274
+ Inject user's email writing style from Store to guide drafting:
275
+
276
+ ```python theme={null}
277
+ from dataclasses import dataclass
278
+ from langchain.agents import create_agent
279
+ from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse
280
+ from typing import Callable
281
+ from langgraph.store.memory import InMemoryStore
282
+
283
+ @dataclass
284
+ class Context:
285
+ user_id: str
286
+
287
+ @wrap_model_call
288
+ def inject_writing_style(
289
+ request: ModelRequest,
290
+ handler: Callable[[ModelRequest], ModelResponse]
291
+ ) -> ModelResponse:
292
+ """Inject user's email writing style from Store."""
293
+ user_id = request.runtime.context.user_id # [!code highlight]
294
+
295
+ # Read from Store: get user's writing style examples
296
+ store = request.runtime.store # [!code highlight]
297
+ writing_style = store.get(("writing_style",), user_id) # [!code highlight]
298
+
299
+ if writing_style:
300
+ style = writing_style.value
301
+ # Build style guide from stored examples
302
+ style_context = f"""Your writing style:
303
+ - Tone: {style.get('tone', 'professional')}
304
+ - Typical greeting: "{style.get('greeting', 'Hi')}"
305
+ - Typical sign-off: "{style.get('sign_off', 'Best')}"
306
+ - Example email you've written:
307
+ {style.get('example_email', '')}"""
308
+
309
+ # Append at end - models pay more attention to final messages
310
+ messages = [
311
+ *request.messages,
312
+ {"role": "user", "content": style_context}
313
+ ]
314
+ request = request.override(messages=messages) # [!code highlight]
315
+
316
+ return handler(request)
317
+
318
+ agent = create_agent(
319
+ model="gpt-4.1",
320
+ tools=[...],
321
+ middleware=[inject_writing_style],
322
+ context_schema=Context,
323
+ store=InMemoryStore()
324
+ )
325
+ ```
326
+ </Tab>
327
+
328
+ <Tab title="Runtime Context">
329
+ Inject compliance rules from Runtime Context based on user's jurisdiction:
330
+
331
+ ```python theme={null}
332
+ from dataclasses import dataclass
333
+ from langchain.agents import create_agent
334
+ from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse
335
+ from typing import Callable
336
+
337
+ @dataclass
338
+ class Context:
339
+ user_jurisdiction: str
340
+ industry: str
341
+ compliance_frameworks: list[str]
342
+
343
+ @wrap_model_call
344
+ def inject_compliance_rules(
345
+ request: ModelRequest,
346
+ handler: Callable[[ModelRequest], ModelResponse]
347
+ ) -> ModelResponse:
348
+ """Inject compliance constraints from Runtime Context."""
349
+ # Read from Runtime Context: get compliance requirements
350
+ jurisdiction = request.runtime.context.user_jurisdiction # [!code highlight]
351
+ industry = request.runtime.context.industry # [!code highlight]
352
+ frameworks = request.runtime.context.compliance_frameworks # [!code highlight]
353
+
354
+ # Build compliance constraints
355
+ rules = []
356
+ if "GDPR" in frameworks:
357
+ rules.append("- Must obtain explicit consent before processing personal data")
358
+ rules.append("- Users have right to data deletion")
359
+ if "HIPAA" in frameworks:
360
+ rules.append("- Cannot share patient health information without authorization")
361
+ rules.append("- Must use secure, encrypted communication")
362
+ if industry == "finance":
363
+ rules.append("- Cannot provide financial advice without proper disclaimers")
364
+
365
+ if rules:
366
+ compliance_context = f"""Compliance requirements for {jurisdiction}:
367
+ {chr(10).join(rules)}"""
368
+
369
+ # Append at end - models pay more attention to final messages
370
+ messages = [
371
+ *request.messages,
372
+ {"role": "user", "content": compliance_context}
373
+ ]
374
+ request = request.override(messages=messages) # [!code highlight]
375
+
376
+ return handler(request)
377
+
378
+ agent = create_agent(
379
+ model="gpt-4.1",
380
+ tools=[...],
381
+ middleware=[inject_compliance_rules],
382
+ context_schema=Context
383
+ )
384
+ ```
385
+ </Tab>
386
+ </Tabs>
387
+
388
+ <Note>
389
+ **Transient vs Persistent Message Updates:**
390
+
391
+ The examples above use `wrap_model_call` to make **transient** updates - modifying what messages are sent to the model for a single call without changing what's saved in state.
392
+
393
+ For **persistent** updates that modify state (like the summarization example in [Life-cycle Context](#summarization)), use life-cycle hooks like `before_model` or `after_model` to permanently update the conversation history. See the [middleware documentation](/oss/python/langchain/middleware) for more details.
394
+ </Note>
395
+
396
+ ### Tools
397
+
398
+ Tools let the model interact with databases, APIs, and external systems. How you define and select tools directly impacts whether the model can complete tasks effectively.
399
+
400
+ #### Defining tools
401
+
402
+ Each tool needs a clear name, description, argument names, and argument descriptions. These aren't just metadata—they guide the model's reasoning about when and how to use the tool.
403
+
404
+ ```python theme={null}
405
+ from langchain.tools import tool
406
+
407
+ @tool(parse_docstring=True)
408
+ def search_orders(
409
+ user_id: str,
410
+ status: str,
411
+ limit: int = 10
412
+ ) -> str:
413
+ """Search for user orders by status.
414
+
415
+ Use this when the user asks about order history or wants to check
416
+ order status. Always filter by the provided status.
417
+
418
+ Args:
419
+ user_id: Unique identifier for the user
420
+ status: Order status: 'pending', 'shipped', or 'delivered'
421
+ limit: Maximum number of results to return
422
+ """
423
+ # Implementation here
424
+ pass
425
+ ```
426
+
427
+ #### Selecting tools
428
+
429
+ Not every tool is appropriate for every situation. Too many tools can overload the model's context and increase errors; too few limit its capabilities. Dynamic tool selection adapts the available toolset based on authentication state, user permissions, feature flags, or conversation stage.
430
+
431
+ <Tabs>
432
+ <Tab title="State">
433
+ Enable advanced tools only after certain conversation milestones:
434
+
435
+ ```python theme={null}
436
+ from langchain.agents import create_agent
437
+ from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse
438
+ from typing import Callable
439
+
440
+ @wrap_model_call
441
+ def state_based_tools(
442
+ request: ModelRequest,
443
+ handler: Callable[[ModelRequest], ModelResponse]
444
+ ) -> ModelResponse:
445
+ """Filter tools based on conversation State."""
446
+ # Read from State: check if user has authenticated
447
+ state = request.state # [!code highlight]
448
+ is_authenticated = state.get("authenticated", False) # [!code highlight]
449
+ message_count = len(state["messages"])
450
+
451
+ # Only enable sensitive tools after authentication
452
+ if not is_authenticated:
453
+ tools = [t for t in request.tools if t.name.startswith("public_")]
454
+ request = request.override(tools=tools) # [!code highlight]
455
+ elif message_count < 5:
456
+ # Limit tools early in conversation
457
+ tools = [t for t in request.tools if t.name != "advanced_search"]
458
+ request = request.override(tools=tools) # [!code highlight]
459
+
460
+ return handler(request)
461
+
462
+ agent = create_agent(
463
+ model="gpt-4.1",
464
+ tools=[public_search, private_search, advanced_search],
465
+ middleware=[state_based_tools]
466
+ )
467
+ ```
468
+ </Tab>
469
+
470
+ <Tab title="Store">
471
+ Filter tools based on user preferences or feature flags in Store:
472
+
473
+ ```python theme={null}
474
+ from dataclasses import dataclass
475
+ from langchain.agents import create_agent
476
+ from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse
477
+ from typing import Callable
478
+ from langgraph.store.memory import InMemoryStore
479
+
480
+ @dataclass
481
+ class Context:
482
+ user_id: str
483
+
484
+ @wrap_model_call
485
+ def store_based_tools(
486
+ request: ModelRequest,
487
+ handler: Callable[[ModelRequest], ModelResponse]
488
+ ) -> ModelResponse:
489
+ """Filter tools based on Store preferences."""
490
+ user_id = request.runtime.context.user_id
491
+
492
+ # Read from Store: get user's enabled features
493
+ store = request.runtime.store
494
+ feature_flags = store.get(("features",), user_id)
495
+
496
+ if feature_flags:
497
+ enabled_features = feature_flags.value.get("enabled_tools", [])
498
+ # Only include tools that are enabled for this user
499
+ tools = [t for t in request.tools if t.name in enabled_features]
500
+ request = request.override(tools=tools)
501
+
502
+ return handler(request)
503
+
504
+ agent = create_agent(
505
+ model="gpt-4.1",
506
+ tools=[search_tool, analysis_tool, export_tool],
507
+ middleware=[store_based_tools],
508
+ context_schema=Context,
509
+ store=InMemoryStore()
510
+ )
511
+ ```
512
+ </Tab>
513
+
514
+ <Tab title="Runtime Context">
515
+ Filter tools based on user permissions from Runtime Context:
516
+
517
+ ```python theme={null}
518
+ from dataclasses import dataclass
519
+ from langchain.agents import create_agent
520
+ from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse
521
+ from typing import Callable
522
+
523
+ @dataclass
524
+ class Context:
525
+ user_role: str
526
+
527
+ @wrap_model_call
528
+ def context_based_tools(
529
+ request: ModelRequest,
530
+ handler: Callable[[ModelRequest], ModelResponse]
531
+ ) -> ModelResponse:
532
+ """Filter tools based on Runtime Context permissions."""
533
+ # Read from Runtime Context: get user role
534
+ user_role = request.runtime.context.user_role
535
+
536
+ if user_role == "admin":
537
+ # Admins get all tools
538
+ pass
539
+ elif user_role == "editor":
540
+ # Editors can't delete
541
+ tools = [t for t in request.tools if t.name != "delete_data"]
542
+ request = request.override(tools=tools)
543
+ else:
544
+ # Viewers get read-only tools
545
+ tools = [t for t in request.tools if t.name.startswith("read_")]
546
+ request = request.override(tools=tools)
547
+
548
+ return handler(request)
549
+
550
+ agent = create_agent(
551
+ model="gpt-4.1",
552
+ tools=[read_data, write_data, delete_data],
553
+ middleware=[context_based_tools],
554
+ context_schema=Context
555
+ )
556
+ ```
557
+ </Tab>
558
+ </Tabs>
559
+
560
+ See [Dynamic tools](/oss/python/langchain/agents#dynamic-tools) for both filtering pre-registered tools and registering tools at runtime (e.g., from MCP servers).
561
+
562
+ ### Model
563
+
564
+ Different models have different strengths, costs, and context windows. Select the right model for the task at hand, which
565
+ might change during an agent run.
566
+
567
+ <Tabs>
568
+ <Tab title="State">
569
+ Use different models based on conversation length from State:
570
+
571
+ ```python theme={null}
572
+ from langchain.agents import create_agent
573
+ from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse
574
+ from langchain.chat_models import init_chat_model
575
+ from typing import Callable
576
+
577
+ # Initialize models once outside the middleware
578
+ large_model = init_chat_model("claude-sonnet-4-5-20250929")
579
+ standard_model = init_chat_model("gpt-4.1")
580
+ efficient_model = init_chat_model("gpt-4.1-mini")
581
+
582
+ @wrap_model_call
583
+ def state_based_model(
584
+ request: ModelRequest,
585
+ handler: Callable[[ModelRequest], ModelResponse]
586
+ ) -> ModelResponse:
587
+ """Select model based on State conversation length."""
588
+ # request.messages is a shortcut for request.state["messages"]
589
+ message_count = len(request.messages) # [!code highlight]
590
+
591
+ if message_count > 20:
592
+ # Long conversation - use model with larger context window
593
+ model = large_model
594
+ elif message_count > 10:
595
+ # Medium conversation
596
+ model = standard_model
597
+ else:
598
+ # Short conversation - use efficient model
599
+ model = efficient_model
600
+
601
+ request = request.override(model=model) # [!code highlight]
602
+
603
+ return handler(request)
604
+
605
+ agent = create_agent(
606
+ model="gpt-4.1-mini",
607
+ tools=[...],
608
+ middleware=[state_based_model]
609
+ )
610
+ ```
611
+ </Tab>
612
+
613
+ <Tab title="Store">
614
+ Use user's preferred model from Store:
615
+
616
+ ```python theme={null}
617
+ from dataclasses import dataclass
618
+ from langchain.agents import create_agent
619
+ from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse
620
+ from langchain.chat_models import init_chat_model
621
+ from typing import Callable
622
+ from langgraph.store.memory import InMemoryStore
623
+
624
+ @dataclass
625
+ class Context:
626
+ user_id: str
627
+
628
+ # Initialize available models once
629
+ MODEL_MAP = {
630
+ "gpt-4.1": init_chat_model("gpt-4.1"),
631
+ "gpt-4.1-mini": init_chat_model("gpt-4.1-mini"),
632
+ "claude-sonnet": init_chat_model("claude-sonnet-4-5-20250929"),
633
+ }
634
+
635
+ @wrap_model_call
636
+ def store_based_model(
637
+ request: ModelRequest,
638
+ handler: Callable[[ModelRequest], ModelResponse]
639
+ ) -> ModelResponse:
640
+ """Select model based on Store preferences."""
641
+ user_id = request.runtime.context.user_id
642
+
643
+ # Read from Store: get user's preferred model
644
+ store = request.runtime.store
645
+ user_prefs = store.get(("preferences",), user_id)
646
+
647
+ if user_prefs:
648
+ preferred_model = user_prefs.value.get("preferred_model")
649
+ if preferred_model and preferred_model in MODEL_MAP:
650
+ request = request.override(model=MODEL_MAP[preferred_model])
651
+
652
+ return handler(request)
653
+
654
+ agent = create_agent(
655
+ model="gpt-4.1",
656
+ tools=[...],
657
+ middleware=[store_based_model],
658
+ context_schema=Context,
659
+ store=InMemoryStore()
660
+ )
661
+ ```
662
+ </Tab>
663
+
664
+ <Tab title="Runtime Context">
665
+ Select model based on cost limits or environment from Runtime Context:
666
+
667
+ ```python theme={null}
668
+ from dataclasses import dataclass
669
+ from langchain.agents import create_agent
670
+ from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse
671
+ from langchain.chat_models import init_chat_model
672
+ from typing import Callable
673
+
674
+ @dataclass
675
+ class Context:
676
+ cost_tier: str
677
+ environment: str
678
+
679
+ # Initialize models once outside the middleware
680
+ premium_model = init_chat_model("claude-sonnet-4-5-20250929")
681
+ standard_model = init_chat_model("gpt-4.1")
682
+ budget_model = init_chat_model("gpt-4.1-mini")
683
+
684
+ @wrap_model_call
685
+ def context_based_model(
686
+ request: ModelRequest,
687
+ handler: Callable[[ModelRequest], ModelResponse]
688
+ ) -> ModelResponse:
689
+ """Select model based on Runtime Context."""
690
+ # Read from Runtime Context: cost tier and environment
691
+ cost_tier = request.runtime.context.cost_tier
692
+ environment = request.runtime.context.environment
693
+
694
+ if environment == "production" and cost_tier == "premium":
695
+ # Production premium users get best model
696
+ model = premium_model
697
+ elif cost_tier == "budget":
698
+ # Budget tier gets efficient model
699
+ model = budget_model
700
+ else:
701
+ # Standard tier
702
+ model = standard_model
703
+
704
+ request = request.override(model=model)
705
+
706
+ return handler(request)
707
+
708
+ agent = create_agent(
709
+ model="gpt-4.1",
710
+ tools=[...],
711
+ middleware=[context_based_model],
712
+ context_schema=Context
713
+ )
714
+ ```
715
+ </Tab>
716
+ </Tabs>
717
+
718
+ See [Dynamic model](/oss/python/langchain/agents#dynamic-model) for more examples.
719
+
720
+ ### Response format
721
+
722
+ Structured output transforms unstructured text into validated, structured data. When extracting specific fields or returning data for downstream systems, free-form text isn't sufficient.
723
+
724
+ **How it works:** When you provide a schema as the response format, the model's final response is guaranteed to conform to that schema. The agent runs the model / tool calling loop until the model is done calling tools, then the final response is coerced into the provided format.
725
+
726
+ #### Defining formats
727
+
728
+ Schema definitions guide the model. Field names, types, and descriptions specify exactly what format the output should adhere to.
729
+
730
+ ```python theme={null}
731
+ from pydantic import BaseModel, Field
732
+
733
+ class CustomerSupportTicket(BaseModel):
734
+ """Structured ticket information extracted from customer message."""
735
+
736
+ category: str = Field(
737
+ description="Issue category: 'billing', 'technical', 'account', or 'product'"
738
+ )
739
+ priority: str = Field(
740
+ description="Urgency level: 'low', 'medium', 'high', or 'critical'"
741
+ )
742
+ summary: str = Field(
743
+ description="One-sentence summary of the customer's issue"
744
+ )
745
+ customer_sentiment: str = Field(
746
+ description="Customer's emotional tone: 'frustrated', 'neutral', or 'satisfied'"
747
+ )
748
+ ```
749
+
750
+ #### Selecting formats
751
+
752
+ Dynamic response format selection adapts schemas based on user preferences, conversation stage, or role—returning simple formats early and detailed formats as complexity increases.
753
+
754
+ <Tabs>
755
+ <Tab title="State">
756
+ Configure structured output based on conversation state:
757
+
758
+ ```python theme={null}
759
+ from langchain.agents import create_agent
760
+ from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse
761
+ from pydantic import BaseModel, Field
762
+ from typing import Callable
763
+
764
+ class SimpleResponse(BaseModel):
765
+ """Simple response for early conversation."""
766
+ answer: str = Field(description="A brief answer")
767
+
768
+ class DetailedResponse(BaseModel):
769
+ """Detailed response for established conversation."""
770
+ answer: str = Field(description="A detailed answer")
771
+ reasoning: str = Field(description="Explanation of reasoning")
772
+ confidence: float = Field(description="Confidence score 0-1")
773
+
774
+ @wrap_model_call
775
+ def state_based_output(
776
+ request: ModelRequest,
777
+ handler: Callable[[ModelRequest], ModelResponse]
778
+ ) -> ModelResponse:
779
+ """Select output format based on State."""
780
+ # request.messages is a shortcut for request.state["messages"]
781
+ message_count = len(request.messages) # [!code highlight]
782
+
783
+ if message_count < 3:
784
+ # Early conversation - use simple format
785
+ request = request.override(response_format=SimpleResponse) # [!code highlight]
786
+ else:
787
+ # Established conversation - use detailed format
788
+ request = request.override(response_format=DetailedResponse) # [!code highlight]
789
+
790
+ return handler(request)
791
+
792
+ agent = create_agent(
793
+ model="gpt-4.1",
794
+ tools=[...],
795
+ middleware=[state_based_output]
796
+ )
797
+ ```
798
+ </Tab>
799
+
800
+ <Tab title="Store">
801
+ Configure output format based on user preferences in Store:
802
+
803
+ ```python theme={null}
804
+ from dataclasses import dataclass
805
+ from langchain.agents import create_agent
806
+ from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse
807
+ from pydantic import BaseModel, Field
808
+ from typing import Callable
809
+ from langgraph.store.memory import InMemoryStore
810
+
811
+ @dataclass
812
+ class Context:
813
+ user_id: str
814
+
815
+ class VerboseResponse(BaseModel):
816
+ """Verbose response with details."""
817
+ answer: str = Field(description="Detailed answer")
818
+ sources: list[str] = Field(description="Sources used")
819
+
820
+ class ConciseResponse(BaseModel):
821
+ """Concise response."""
822
+ answer: str = Field(description="Brief answer")
823
+
824
+ @wrap_model_call
825
+ def store_based_output(
826
+ request: ModelRequest,
827
+ handler: Callable[[ModelRequest], ModelResponse]
828
+ ) -> ModelResponse:
829
+ """Select output format based on Store preferences."""
830
+ user_id = request.runtime.context.user_id
831
+
832
+ # Read from Store: get user's preferred response style
833
+ store = request.runtime.store
834
+ user_prefs = store.get(("preferences",), user_id)
835
+
836
+ if user_prefs:
837
+ style = user_prefs.value.get("response_style", "concise")
838
+ if style == "verbose":
839
+ request = request.override(response_format=VerboseResponse)
840
+ else:
841
+ request = request.override(response_format=ConciseResponse)
842
+
843
+ return handler(request)
844
+
845
+ agent = create_agent(
846
+ model="gpt-4.1",
847
+ tools=[...],
848
+ middleware=[store_based_output],
849
+ context_schema=Context,
850
+ store=InMemoryStore()
851
+ )
852
+ ```
853
+ </Tab>
854
+
855
+ <Tab title="Runtime Context">
856
+ Configure output format based on Runtime Context like user role or environment:
857
+
858
+ ```python theme={null}
859
+ from dataclasses import dataclass
860
+ from langchain.agents import create_agent
861
+ from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse
862
+ from pydantic import BaseModel, Field
863
+ from typing import Callable
864
+
865
+ @dataclass
866
+ class Context:
867
+ user_role: str
868
+ environment: str
869
+
870
+ class AdminResponse(BaseModel):
871
+ """Response with technical details for admins."""
872
+ answer: str = Field(description="Answer")
873
+ debug_info: dict = Field(description="Debug information")
874
+ system_status: str = Field(description="System status")
875
+
876
+ class UserResponse(BaseModel):
877
+ """Simple response for regular users."""
878
+ answer: str = Field(description="Answer")
879
+
880
+ @wrap_model_call
881
+ def context_based_output(
882
+ request: ModelRequest,
883
+ handler: Callable[[ModelRequest], ModelResponse]
884
+ ) -> ModelResponse:
885
+ """Select output format based on Runtime Context."""
886
+ # Read from Runtime Context: user role and environment
887
+ user_role = request.runtime.context.user_role
888
+ environment = request.runtime.context.environment
889
+
890
+ if user_role == "admin" and environment == "production":
891
+ # Admins in production get detailed output
892
+ request = request.override(response_format=AdminResponse)
893
+ else:
894
+ # Regular users get simple output
895
+ request = request.override(response_format=UserResponse)
896
+
897
+ return handler(request)
898
+
899
+ agent = create_agent(
900
+ model="gpt-4.1",
901
+ tools=[...],
902
+ middleware=[context_based_output],
903
+ context_schema=Context
904
+ )
905
+ ```
906
+ </Tab>
907
+ </Tabs>
908
+
909
+ ## Tool context
910
+
911
+ Tools are special in that they both read and write context.
912
+
913
+ In the most basic case, when a tool executes, it receives the arguments the LLM requested, does its work, and returns a tool message with the result.
914
+
915
+ Tools can also fetch information the model needs in order to complete its tasks.
916

### Reads

Most real-world tools need more than just the LLM's parameters. They need user IDs for database queries, API keys for external services, or current session state to make decisions. Tools read from state, store, and runtime context to access this information.

<Tabs>
<Tab title="State">
Read from State to check current session information:

```python theme={null}
from langchain.tools import tool, ToolRuntime
from langchain.agents import create_agent

@tool
def check_authentication(
    runtime: ToolRuntime
) -> str:
    """Check if user is authenticated."""
    # Read from State: check current auth status
    current_state = runtime.state
    is_authenticated = current_state.get("authenticated", False)

    if is_authenticated:
        return "User is authenticated"
    else:
        return "User is not authenticated"

agent = create_agent(
    model="gpt-4.1",
    tools=[check_authentication]
)
```
</Tab>

<Tab title="Store">
Read from Store to access persisted user preferences:

```python theme={null}
from dataclasses import dataclass
from langchain.tools import tool, ToolRuntime
from langchain.agents import create_agent
from langgraph.store.memory import InMemoryStore

@dataclass
class Context:
    user_id: str

@tool
def get_preference(
    preference_key: str,
    runtime: ToolRuntime[Context]
) -> str:
    """Get user preference from Store."""
    user_id = runtime.context.user_id

    # Read from Store: get existing preferences
    store = runtime.store
    existing_prefs = store.get(("preferences",), user_id)

    if existing_prefs:
        value = existing_prefs.value.get(preference_key)
        return f"{preference_key}: {value}" if value else f"No preference set for {preference_key}"
    else:
        return "No preferences found"

agent = create_agent(
    model="gpt-4.1",
    tools=[get_preference],
    context_schema=Context,
    store=InMemoryStore()
)
```
</Tab>

<Tab title="Runtime Context">
Read from Runtime Context for configuration like API keys and user IDs:

```python theme={null}
from dataclasses import dataclass
from langchain.tools import tool, ToolRuntime
from langchain.agents import create_agent

@dataclass
class Context:
    user_id: str
    api_key: str
    db_connection: str

@tool
def fetch_user_data(
    query: str,
    runtime: ToolRuntime[Context]
) -> str:
    """Fetch data using Runtime Context configuration."""
    # Read from Runtime Context: get API key and DB connection
    user_id = runtime.context.user_id
    api_key = runtime.context.api_key
    db_connection = runtime.context.db_connection

    # Use configuration to fetch data
    results = perform_database_query(db_connection, query, api_key)

    return f"Found {len(results)} results for user {user_id}"

agent = create_agent(
    model="gpt-4.1",
    tools=[fetch_user_data],
    context_schema=Context
)

# Invoke with runtime context
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Get my data"}]},
    context=Context(
        user_id="user_123",
        api_key="sk-...",
        db_connection="postgresql://..."
    )
)
```
</Tab>
</Tabs>

### Writes

A tool's result helps the agent complete its task. Tools can both return results directly to the model and update the agent's memory, making important context available to future steps.
<Tabs>
<Tab title="State">
Write to State to track session-specific information using Command:

```python theme={null}
from langchain.tools import tool, ToolRuntime
from langchain.agents import create_agent
from langchain.messages import ToolMessage
from langgraph.types import Command

@tool
def authenticate_user(
    password: str,
    runtime: ToolRuntime
) -> Command:
    """Authenticate user and update State."""
    # Perform authentication (simplified)
    authenticated = password == "correct"

    # Write to State: record the auth status using Command, and include
    # a ToolMessage so the tool call still receives a response
    return Command(
        update={
            "authenticated": authenticated,
            "messages": [
                ToolMessage(
                    "Authentication succeeded" if authenticated else "Authentication failed",
                    tool_call_id=runtime.tool_call_id,
                )
            ],
        },
    )

agent = create_agent(
    model="gpt-4.1",
    tools=[authenticate_user]
)
```
</Tab>

<Tab title="Store">
Write to Store to persist data across sessions:

```python theme={null}
from dataclasses import dataclass
from langchain.tools import tool, ToolRuntime
from langchain.agents import create_agent
from langgraph.store.memory import InMemoryStore

@dataclass
class Context:
    user_id: str

@tool
def save_preference(
    preference_key: str,
    preference_value: str,
    runtime: ToolRuntime[Context]
) -> str:
    """Save user preference to Store."""
    user_id = runtime.context.user_id

    # Read existing preferences
    store = runtime.store
    existing_prefs = store.get(("preferences",), user_id)

    # Merge with new preference
    prefs = existing_prefs.value if existing_prefs else {}
    prefs[preference_key] = preference_value

    # Write to Store: save updated preferences
    store.put(("preferences",), user_id, prefs)

    return f"Saved preference: {preference_key} = {preference_value}"

agent = create_agent(
    model="gpt-4.1",
    tools=[save_preference],
    context_schema=Context,
    store=InMemoryStore()
)
```
</Tab>
</Tabs>
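
The read-merge-write pattern in `save_preference` works against any namespaced key-value store. Here is a minimal dict-backed sketch of the same flow; the `InMemoryKV` class is a stand-in for illustration, not the LangGraph Store API:

```python theme={null}
class InMemoryKV:
    """Minimal namespaced key-value store (illustrative stand-in only)."""

    def __init__(self):
        self._data = {}

    def get(self, namespace: tuple, key: str):
        return self._data.get((namespace, key))

    def put(self, namespace: tuple, key: str, value: dict):
        self._data[(namespace, key)] = value

def save_preference(store, user_id: str, key: str, value: str) -> str:
    # Read existing preferences, merge in the new one, write back
    prefs = store.get(("preferences",), user_id) or {}
    prefs[key] = value
    store.put(("preferences",), user_id, prefs)
    return f"Saved preference: {key} = {value}"

store = InMemoryKV()
save_preference(store, "user_123", "theme", "dark")
save_preference(store, "user_123", "lang", "en")
print(store.get(("preferences",), "user_123"))  # {'theme': 'dark', 'lang': 'en'}
```

The read-before-write step is what makes the update a merge rather than an overwrite: later preferences accumulate instead of clobbering earlier ones.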

See [Tools](/oss/python/langchain/tools) for comprehensive examples of accessing state, store, and runtime context in tools.

## Life-cycle context

Control what happens **between** the core agent steps - intercepting data flow to implement cross-cutting concerns like summarization, guardrails, and logging.

As you've seen in [Model Context](#model-context) and [Tool Context](#tool-context), [middleware](/oss/python/langchain/middleware) is the mechanism that makes context engineering practical. Middleware allows you to hook into any step in the agent lifecycle and either:

1. **Update context** - Modify state and store to persist changes, update conversation history, or save insights
2. **Jump in the lifecycle** - Move to different steps in the agent cycle based on context (e.g., skip tool execution if a condition is met, repeat model call with modified context)
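
To make those two options concrete, here is a toy agent turn with before-model hooks. This is purely illustrative; LangChain's real middleware API offers more hooks and richer jump targets:

```python theme={null}
def run_turn(state, call_model, hooks):
    """One agent turn: run before-model hooks, then (maybe) call the model."""
    for hook in hooks:
        outcome = hook(state) or {}
        state.update(outcome.get("update", {}))  # 1. update context
        if outcome.get("jump_to") == "end":      # 2. jump in the lifecycle
            return state                         # skip the model call entirely
    state["messages"].append(call_model(state["messages"]))
    return state

def guardrail(state):
    # Cross-cutting concern: block the model call for flagged sessions
    if state.get("blocked"):
        return {
            "update": {"messages": state["messages"] + ["Access denied."]},
            "jump_to": "end",
        }

result = run_turn(
    {"messages": ["hi"], "blocked": True},
    call_model=lambda msgs: "model reply",
    hooks=[guardrail],
)
print(result["messages"])  # ['hi', 'Access denied.']
```

Note how the guardrail both persists a state change (the denial message) and redirects the lifecycle (the model is never called) - exactly the two capabilities listed above.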

<div style={{ display: "flex", justifyContent: "center" }}>
  <img src="https://mintcdn.com/langchain-5e9cc07a/RAP6mjwE5G00xYsA/oss/images/middleware_final.png?fit=max&auto=format&n=RAP6mjwE5G00xYsA&q=85&s=eb4404b137edec6f6f0c8ccb8323eaf1" alt="Middleware hooks in the agent loop" className="rounded-lg" data-og-width="500" width="500" data-og-height="560" height="560" data-path="oss/images/middleware_final.png" data-optimize="true" data-opv="3" srcset="https://mintcdn.com/langchain-5e9cc07a/RAP6mjwE5G00xYsA/oss/images/middleware_final.png?w=280&fit=max&auto=format&n=RAP6mjwE5G00xYsA&q=85&s=483413aa87cf93323b0f47c0dd5528e8 280w, https://mintcdn.com/langchain-5e9cc07a/RAP6mjwE5G00xYsA/oss/images/middleware_final.png?w=560&fit=max&auto=format&n=RAP6mjwE5G00xYsA&q=85&s=41b7dd647447978ff776edafe5f42499 560w, https://mintcdn.com/langchain-5e9cc07a/RAP6mjwE5G00xYsA/oss/images/middleware_final.png?w=840&fit=max&auto=format&n=RAP6mjwE5G00xYsA&q=85&s=e9b14e264f68345de08ae76f032c52d4 840w, https://mintcdn.com/langchain-5e9cc07a/RAP6mjwE5G00xYsA/oss/images/middleware_final.png?w=1100&fit=max&auto=format&n=RAP6mjwE5G00xYsA&q=85&s=ec45e1932d1279b1beee4a4b016b473f 1100w, https://mintcdn.com/langchain-5e9cc07a/RAP6mjwE5G00xYsA/oss/images/middleware_final.png?w=1650&fit=max&auto=format&n=RAP6mjwE5G00xYsA&q=85&s=3bca5ebf8aa56632b8a9826f7f112e57 1650w, https://mintcdn.com/langchain-5e9cc07a/RAP6mjwE5G00xYsA/oss/images/middleware_final.png?w=2500&fit=max&auto=format&n=RAP6mjwE5G00xYsA&q=85&s=437f141d1266f08a95f030c2804691d9 2500w" />
</div>

### Example: Summarization

One of the most common life-cycle patterns is automatically condensing conversation history when it gets too long. Unlike the transient message trimming shown in [Model Context](#messages), summarization **persistently updates state** - permanently replacing old messages with a summary that's saved for all future turns.

LangChain offers built-in middleware for this:
+
1141
+ ```python theme={null}
1142
+ from langchain.agents import create_agent
1143
+ from langchain.agents.middleware import SummarizationMiddleware
1144
+
1145
+ agent = create_agent(
1146
+ model="gpt-4.1",
1147
+ tools=[...],
1148
+ middleware=[
1149
+ SummarizationMiddleware(
1150
+ model="gpt-4.1-mini",
1151
+ trigger={"tokens": 4000},
1152
+ keep={"messages": 20},
1153
+ ),
1154
+ ],
1155
+ )
1156
+ ```
1157
+
1158
+ When the conversation exceeds the token limit, `SummarizationMiddleware` automatically:
1159
+
1160
+ 1. Summarizes older messages using a separate LLM call
1161
+ 2. Replaces them with a summary message in State (permanently)
1162
+ 3. Keeps recent messages intact for context
1163
+
1164
+ The summarized conversation history is permanently updated - future turns will see the summary instead of the original messages.
1165

<Note>
For a complete list of built-in middleware, available hooks, and how to create custom middleware, see the [Middleware documentation](/oss/python/langchain/middleware).
</Note>

## Best practices

1. **Start simple** - Begin with static prompts and tools, add dynamics only when needed
2. **Test incrementally** - Add one context engineering feature at a time
3. **Monitor performance** - Track model calls, token usage, and latency
4. **Use built-in middleware** - Leverage [`SummarizationMiddleware`](/oss/python/langchain/middleware#summarization), [`LLMToolSelectorMiddleware`](/oss/python/langchain/middleware#llm-tool-selector), etc.
5. **Document your context strategy** - Make it clear what context is being passed and why
6. **Understand transient vs persistent** - Model context changes are transient (per-call), while life-cycle context changes persist to state
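
The transient-versus-persistent distinction in the last point can be shown with a toy state dict: trimming prepares a per-call copy and leaves state alone, while a summarization-style update writes back into state (illustrative code, not LangChain APIs):

```python theme={null}
state = {"messages": [f"msg {i}" for i in range(50)]}

# Transient (model context): trim a *copy* for one call; state is untouched
def prepare_model_input(state, last_n=10):
    return state["messages"][-last_n:]

model_input = prepare_model_input(state)
print(len(model_input), len(state["messages"]))  # 10 50

# Persistent (life-cycle context): write the change back into state
def apply_summarization(state, keep=10):
    state["messages"] = ["[summary of older messages]"] + state["messages"][-keep:]

apply_summarization(state)
print(len(state["messages"]))  # 11
```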

## Related resources

* [Context conceptual overview](/oss/python/concepts/context) - Understand context types and when to use them
* [Middleware](/oss/python/langchain/middleware) - Complete middleware guide
* [Tools](/oss/python/langchain/tools) - Tool creation and context access
* [Memory](/oss/python/concepts/memory) - Short-term and long-term memory patterns
* [Agents](/oss/python/langchain/agents) - Core agent concepts