agentir-langgraph-annotation 0.1.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2026 Krish Modi
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
@@ -0,0 +1,7 @@
+ Metadata-Version: 2.4
+ Name: agentir-langgraph-annotation
+ Version: 0.1.0
+ Summary: Public AgentIR SDK, decorators, contract tooling, and client logging helpers.
+ License-Expression: MIT
+ License-File: LICENSE
+ Dynamic: license-file
@@ -0,0 +1,265 @@
+ # AgentIR SDK
+
+ Copy-ready SDK for annotating LangGraph nodes so external schedulers and analyzers can reason about your graph. Install from this tree (or move `agentir_sdk/` into its own repo and `pip install -e` there) when running AgentIR graph benchmarks.
+
+ ## Purpose
+
+ - Annotate graph nodes with read/write metadata.
+ - Build scheduler-facing graph contracts.
+ - Propagate RID and node-name metadata into model calls.
+ - Bind scheduler headers into custom `.invoke()` wrappers.
+ - Provide lightweight client-side logging helpers.
+
+ ## What you get
+
+ With decorators plus `GraphProxy`, you can expose:
+
+ - Which state keys each node writes.
+ - Which state keys each LLM call reads.
+ - Static prompt components per call.
+ - Conditional routing structure.
+ - A machine-readable graph contract (`Contract.to_dict()`).
+
+ ## Install
+
+ ```bash
+ pip install -e <path-to-agentir-sdk-repo>
+ ```
+
+ ## Package layout
+
+ | Area | Location |
+ |------|----------|
+ | `@writes`, `@llm_call` | `agentir_sdk/decorators.py` |
+ | Contract data structures | `agentir_sdk/contract.py` |
+ | `GraphProxy` | `agentir_sdk/graph_proxy.py` |
+ | RID helpers | `agentir_sdk/rid.py` |
+ | Client logging | `agentir_sdk/client_logger.py` |
+
+ ## Core API
+
+ ### `@writes(*keys: str)`
+
+ Declare state keys produced by a node.
+
+ ```python
+ from agentir_sdk.decorators import writes
+
+ @writes("draft", "messages")
+ def composer(state, config=None):
+     ...
+     return {"draft": text, "messages": [...]}
+ ```
+
+ **Guidance**
+
+ - List only keys the node really produces.
+ - Keep names aligned with your `TypedDict` state.
+ - Include keys even if they are sometimes empty.
+
+ **Scheduling implication:** `writes` defines outputs that downstream nodes may depend on.
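To make the mechanism concrete, here is a minimal stand-in for a `writes`-style decorator. This is an illustrative sketch only, not the SDK's actual implementation, and the `__agentir_writes__` attribute name is hypothetical:

```python
from functools import wraps

def writes(*keys):
    """Illustrative stand-in: record declared output keys on the node function."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(state, config=None):
            return fn(state, config=config)
        # Hypothetical attribute name; the real SDK may store metadata differently.
        wrapper.__agentir_writes__ = tuple(keys)
        return wrapper
    return decorator

@writes("draft", "messages")
def composer(state, config=None):
    return {"draft": "text", "messages": []}

print(composer.__agentir_writes__)  # -> ('draft', 'messages')
```

The point is that the declaration lives on the function object itself, so a contract builder can read it without executing the node.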
+
+ ### `@llm_call(model=None, reads=None, static_vars=None)`
+
+ Declare one LLM call site inside a node.
+
+ ```python
+ from agentir_sdk.decorators import llm_call
+
+ @llm_call(
+     model="gpt-4o-mini",
+     reads=["task_brief", "research_notes"],
+     static_vars=["You are a precise writing assistant."]
+ )
+ def composer(state, config=None):
+     ...
+ ```
+
+ **Parameters**
+
+ - `model`: logical model identifier.
+ - `reads`: state keys used to build that prompt.
+ - `static_vars`: static prompt constants/templates.
+
+ You can stack multiple `@llm_call` decorators on one function for multiple call sites.
+
+ **Scheduling implications**
+
+ - `reads` exposes data dependencies for that call.
+ - `model` and `static_vars` provide stable metadata about call shape.
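Stacking can be sketched the same way. The following is a hedged illustration of how stacked decorators could accumulate one record per call site (the attribute name is invented; the ordering simply follows plain Python decorator semantics):

```python
def llm_call(model=None, reads=None, static_vars=None):
    """Illustrative stand-in: append one call-site record per decorator."""
    def decorator(fn):
        calls = list(getattr(fn, "__agentir_llm_calls__", []))
        calls.append({
            "model": model,
            "reads": list(reads or []),
            "static_vars": list(static_vars or []),
        })
        fn.__agentir_llm_calls__ = calls
        return fn
    return decorator

@llm_call(model="critic-model", reads=["draft"])
@llm_call(model="writer-model", reads=["task_brief"])
def node(state, config=None):
    ...

# Decorators apply bottom-up, so the inner (writer) call site is recorded first.
print([c["model"] for c in node.__agentir_llm_calls__])  # -> ['writer-model', 'critic-model']
```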
+
+ ## Build graphs with `GraphProxy`
+
+ Use `GraphProxy` as the builder wrapper around `StateGraph`.
+
+ ```python
+ from langgraph.graph import StateGraph
+ from agentir_sdk.graph_proxy import GraphProxy
+
+ workflow = StateGraph(MyState)
+ G = GraphProxy(workflow)
+
+ G.add_node("router", router)
+ G.add_node("composer", composer)
+ G.add_edge("router", "composer")
+
+ G.set_entry_point("router")
+ G.set_finish_point("composer")
+
+ contract = G.build_contract()
+ graph = G.materialize().compile()
+ ```
+
+ ### Other `GraphProxy` features
+
+ - `attach_graph(node_name, subgraph)`: merge a subgraph view into the parent contract representation.
+ - Entry-point RID guard insertion via `set_entry_point(...)` for per-run tracing context.
+ - LLM handle detection and wrapping helpers to improve RID and node-name propagation.
+
+ ### RID and custom scheduler clients
+
+ `agentir_sdk/rid.py` provides low-level helpers (`get_rid`, node-name context helpers, and wrappers) if you build custom LLM adapters.
+
+ If you keep a user-owned scheduler client with its own `.invoke()` method, opt it into SDK header binding instead of reading `get_rid()` manually inside each node:
+
+ ```python
+ from agentir_sdk.rid import SchedulerHeaderBindableMixin
+
+
+ class SchedulerGateway(SchedulerHeaderBindableMixin):
+     def invoke(self, node_name: str, prompt: str) -> str:
+         headers = self.bound_scheduler_headers()
+         ...
+ ```
+
+ `GraphProxy` can then wrap the client object in place, bind the scheduler RID and node name for each node call, and leave your node-level call sites alone.
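One plausible way such context plumbing can work is `contextvars`. This is an assumption for illustration only: the helper name and the header names below are invented, not the SDK's API.

```python
import contextvars

# Hypothetical context slots, analogous to what RID/node-name helpers might manage.
_rid = contextvars.ContextVar("agentir_rid", default=None)
_node_name = contextvars.ContextVar("agentir_node_name", default=None)

def bound_scheduler_headers():
    """Build outbound headers from whatever RID/node-name context is bound."""
    headers = {}
    if _rid.get() is not None:
        headers["X-AgentIR-RID"] = _rid.get()         # header name invented
    if _node_name.get() is not None:
        headers["X-AgentIR-Node"] = _node_name.get()  # header name invented
    return headers

# A wrapper would bind these per node call; shown here as direct sets.
_rid.set("run-123")
_node_name.set("composer")
print(bound_scheduler_headers())
```

The design benefit of context variables over threading arguments through every call: the node body never needs to know about RIDs at all.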
+
+ ## Conditional routing annotation
+
+ If you use `add_conditional_edges`, annotate route possibilities first:
+
+ ```python
+ def route(state):
+     return "qa" if state.get("mode") == "qa" else "planner"
+
+ G.annotate_conditional_edge("router", [["qa"], ["planner"]])
+ G.add_conditional_edges("router", route, {
+     "qa": "qa",
+     "planner": "planner",
+ })
+ ```
+
+ **Why this matters**
+
+ - It makes branch possibilities explicit in the generated contract.
+ - It improves dependency visibility for schedulers and analyzers.
+
+ If a conditional branch first passes through non-LLM nodes, `GraphProxy` will infer the first downstream LLM frontier for that branch automatically. When a branch merges with another branch before the first LLM node, that frontier is ambiguous; use `frontiers=` to pin it explicitly:
+
+ ```python
+ G.annotate_conditional_edge(
+     "router",
+     [["non_llm_gateway"], ["direct_llm"]],
+     frontiers=[["writer", "critic"], ["direct_llm"]],
+ )
+ ```
+
+ ## Contract shape
+
+ `build_contract()` returns a `Contract` object with:
+
+ - `entry`, `end`
+ - `nodes[name].writes`
+ - `nodes[name].llm_calls[]` with `model`, `reads`, `static_vars`
+ - `edges[]` (`src`, `dst`, optional `label`)
+
+ Serialize via:
+
+ ```python
+ payload = contract.to_dict()
+ ```
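Based on the fields listed above, the serialized payload plausibly looks something like the following. The shape is an assumption for illustration; exact key names in the SDK may differ:

```python
import json

# Hypothetical payload shape assembled from the documented Contract fields.
payload = {
    "entry": "router",
    "end": ["composer"],
    "nodes": {
        "composer": {
            "writes": ["draft", "messages"],
            "llm_calls": [
                {
                    "model": "gpt-4o-mini",
                    "reads": ["task_brief", "research_notes"],
                    "static_vars": ["You are a precise writing assistant."],
                }
            ],
        }
    },
    "edges": [{"src": "router", "dst": "composer", "label": None}],
}

# The contract stays plain-JSON serializable for scheduler hand-off.
assert json.loads(json.dumps(payload)) == payload
```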
+
+ ## High-level scheduling implications
+
+ Without exposing internal policy details, annotations generally enable:
+
+ - Better dependency tracking between nodes.
+ - More accurate readiness analysis for downstream work.
+ - Clearer branch and fanout structure for conditional paths.
+ - Better observability of LLM call footprints.
+
+ Poor or missing annotations usually mean less precise planning and fewer optimization opportunities.
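As a concrete sketch of readiness analysis, a scheduler-side consumer could derive per-node dependencies from the contract's `edges` list. This helper is illustrative, operating on the dict shape documented above, and is not part of the SDK:

```python
from collections import defaultdict

def ready_nodes(edges, done):
    """Nodes whose upstream dependencies (per contract edges) are all complete."""
    deps = defaultdict(set)
    nodes = set()
    for e in edges:
        deps[e["dst"]].add(e["src"])
        nodes.update((e["src"], e["dst"]))
    return sorted(n for n in nodes if n not in done and deps[n] <= set(done))

edges = [{"src": "router", "dst": "writer"}, {"src": "router", "dst": "qa"}]
print(ready_nodes(edges, done=set()))       # -> ['router']
print(ready_nodes(edges, done={"router"}))  # -> ['qa', 'writer']
```

Missing `writes`/`reads` annotations shrink the edge information this kind of analysis can use, which is why sparse annotation degrades planning precision.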
+
+ ## Best practices
+
+ - Keep `writes` minimal and accurate.
+ - Keep `reads` specific to actual prompt inputs.
+ - Use stable names for state keys and model IDs.
+ - Annotate every node that performs LLM calls.
+ - Annotate conditional structures before adding conditional edges.
+ - Rebuild contracts after graph topology or annotation changes.
+
+ ## Common mistakes
+
+ - Declaring `writes` keys that are never returned.
+ - Omitting keys in `reads` that are used in prompt construction.
+ - Forgetting to annotate conditional route options.
+ - Treating `static_vars` as dynamic runtime data.
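The first of these mistakes is mechanically checkable. A small illustrative helper (not part of the SDK) can compare declared `writes` keys against a node's actual return keys:

```python
def check_writes(declared, returned_state):
    """Report declared-but-missing and returned-but-undeclared state keys."""
    declared, returned = set(declared), set(returned_state)
    return {
        "missing": sorted(declared - returned),      # declared but never returned
        "undeclared": sorted(returned - declared),   # returned but not declared
    }

result = check_writes(["draft", "messages"], {"draft": "hello", "route": "qa"})
print(result)  # -> {'missing': ['messages'], 'undeclared': ['route']}
```

Running a check like this in tests catches drift between annotations and node behavior before the contract ships.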
+
+ ## End-to-end example
+
+ ```python
+ from typing import TypedDict
+ from langchain_core.messages import AIMessage
+ from langgraph.graph import StateGraph
+ from agentir_sdk.decorators import writes, llm_call
+ from agentir_sdk.graph_proxy import GraphProxy
+
+ class DocState(TypedDict, total=False):
+     messages: list
+     route: str
+     draft: str
+
+ @writes("route", "messages")
+ @llm_call(model="router-model", reads=["messages"], static_vars=["route_prompt_v1"])
+ def router(state: DocState, config=None):
+     user = state["messages"][-1].content
+     route = "qa" if "qa" in user.lower() else "write"
+     return {"route": route, "messages": [AIMessage(content=f"[route] {route}")]}
+
+ @writes("draft", "messages")
+ @llm_call(model="writer-model", reads=["messages"], static_vars=["writer_prompt_v2"])
+ def writer(state: DocState, config=None):
+     return {"draft": "hello", "messages": [AIMessage(content="[writer] done")]}
+
+ @writes("draft", "messages")
+ @llm_call(model="qa-model", reads=["messages"], static_vars=["qa_prompt_v1"])
+ def qa(state: DocState, config=None):
+     return {"draft": "answer", "messages": [AIMessage(content="[qa] done")]}
+
+ wf = StateGraph(DocState)
+ G = GraphProxy(wf)
+
+ G.add_node("router", router)
+ G.add_node("writer", writer)
+ G.add_node("qa", qa)
+
+ G.set_entry_point("router")
+
+ G.annotate_conditional_edge("router", [["writer"], ["qa"]])
+ G.add_conditional_edges("router", lambda s: "qa" if s.get("route") == "qa" else "writer", {
+     "writer": "writer",
+     "qa": "qa",
+ })
+
+ G.set_finish_point("writer")
+ G.set_finish_point("qa")
+
+ contract = G.build_contract()
+ ```
+
+ ## Important invariants
+
+ - `@writes` and `@llm_call` annotations should reflect the real graph contract.
+ - `GraphProxy.build_contract()` is the source of truth for scheduler-facing contract serialization.
+ - RID propagation should remain explicit and observable; missing RID or node-name metadata is a correctness problem.
@@ -0,0 +1,13 @@
+ [build-system]
+ requires = ["setuptools>=68"]
+ build-backend = "setuptools.build_meta"
+
+ [project]
+ name = "agentir-langgraph-annotation"
+ version = "0.1.0"
+ description = "Public AgentIR SDK, decorators, contract tooling, and client logging helpers."
+ license = "MIT"
+
+ [tool.setuptools.packages.find]
+ where = ["src"]
+ include = ["agentir_langgraph_annotation*"]
@@ -0,0 +1,4 @@
+ [egg_info]
+ tag_build =
+ tag_date = 0
+
@@ -0,0 +1,7 @@
+ Metadata-Version: 2.4
+ Name: agentir-langgraph-annotation
+ Version: 0.1.0
+ Summary: Public AgentIR SDK, decorators, contract tooling, and client logging helpers.
+ License-Expression: MIT
+ License-File: LICENSE
+ Dynamic: license-file
@@ -0,0 +1,7 @@
+ LICENSE
+ README.md
+ pyproject.toml
+ src/agentir_langgraph_annotation.egg-info/PKG-INFO
+ src/agentir_langgraph_annotation.egg-info/SOURCES.txt
+ src/agentir_langgraph_annotation.egg-info/dependency_links.txt
+ src/agentir_langgraph_annotation.egg-info/top_level.txt