mycode-sdk 0.5.2__tar.gz → 0.5.3__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,7 +1,7 @@
1
1
  Metadata-Version: 2.4
2
2
  Name: mycode-sdk
3
- Version: 0.5.2
4
- Summary: Multi-turn tool-calling agent runtime for embedding the mycode agent loop.
3
+ Version: 0.5.3
4
+ Summary: Lightweight Python SDK for building AI agents.
5
5
  Project-URL: Homepage, https://github.com/legibet/mycode
6
6
  Project-URL: Repository, https://github.com/legibet/mycode
7
7
  Project-URL: Issues, https://github.com/legibet/mycode/issues
@@ -25,7 +25,7 @@ Description-Content-Type: text/markdown
25
25
 
26
26
  # mycode-sdk
27
27
 
28
- Lightweight Python SDK for building agents.
28
+ Lightweight Python SDK for building AI agents.
29
29
 
30
30
  ## Install
31
31
 
@@ -58,18 +58,42 @@ async def main() -> None:
58
58
  asyncio.run(main())
59
59
  ```
60
60
 
61
- `Agent(...)` infers the provider from the model id and defaults `cwd` to the current working directory. No tools are registered unless you pass `tools=[...]`, and nothing is persisted unless you pass `session_dir=`.
61
+ `Agent(...)` infers the provider from the model id. No tools are registered unless you pass `tools=[...]`.
62
62
 
63
- For a synchronous call, use `run()` — it collects the stream into a `RunResult`:
63
+ For a simple synchronous call, use `run()`:
64
64
 
65
65
  ```python
66
66
  result = agent.run("Read pyproject.toml and tell me the project name.")
67
67
  print(result.text)
68
68
  ```
69
69
 
70
- Call `achat` or `run` again on the same `Agent` to continue the conversation — history accumulates in `agent.messages`.
70
+ ## Multi-turn conversations
71
71
 
72
- To persist across processes, pass `session_dir` as the root directory; each `session_id` becomes a subdirectory. Reconstruct with the same `(session_dir, session_id)` to resume.
72
+ Call `achat()` or `run()` again on the same `Agent` — history accumulates automatically:
73
+
74
+ ```python
75
+ agent = Agent(model="claude-sonnet-4-6", api_key="...")
76
+
77
+ agent.run("What is 2 + 2?")
78
+ agent.run("Now multiply that by 10.") # remembers the earlier answer
79
+ ```
80
+
81
+ ## Saving sessions
82
+
83
+ Pass `session_dir` to persist the conversation to disk. Each session lives in a subdirectory named by `session_id`:
84
+
85
+ ```python
86
+ from pathlib import Path
87
+
88
+ agent = Agent(
89
+ model="claude-sonnet-4-6",
90
+ api_key="...",
91
+ session_dir=Path("./chats"),
92
+ session_id="my-chat",
93
+ )
94
+ ```
95
+
96
+ Construct another `Agent` with the same `(session_dir, session_id)` later to resume the conversation — the history is loaded automatically.
73
97
 
74
98
  ## Built-in tools
75
99
 
@@ -77,33 +101,42 @@ To persist across processes, pass `session_dir` as the root directory; each `ses
77
101
  from mycode import read_tool, write_tool, edit_tool, bash_tool
78
102
  ```
79
103
 
80
- Only `bash_tool` streams incremental output as `tool_output` events; the others return a single result.
104
+ Four tools for reading, writing, and editing files, plus running shell commands. Opt in by passing them to `tools=[...]`.
81
105
 
82
106
  ## Custom tools
83
107
 
84
- `@tool` wraps a sync or `async def` Python function as a `ToolSpec`. Parameter type hints become the JSON schema sent to the provider.
85
-
86
- Annotate the first parameter as `ToolContext` to have the context injected. Use `ctx.read / ctx.write / ctx.edit / ctx.bash` to invoke the built-ins, or `ctx.call(name, args)` for any registered tool by name.
108
+ Decorate any function with `@tool`. Parameter type hints become the JSON schema sent to the provider; the docstring becomes the description:
87
109
 
88
110
  ```python
89
- from mycode import Agent, ToolContext, read_tool, tool
111
+ from mycode import Agent, tool
90
112
 
91
113
 
92
114
  @tool
93
- def summarize_file(ctx: ToolContext, path: str) -> str:
94
- """Return the first line of a text file."""
115
+ def greet(name: str) -> str:
116
+ """Return a friendly greeting."""
95
117
 
96
- result = ctx.read(path)
97
- return result.output.splitlines()[0] if result.output else ""
118
+ return f"hello, {name}"
98
119
 
99
120
 
100
121
  agent = Agent(
101
122
  model="claude-sonnet-4-6",
102
- api_key="YOUR_API_KEY",
103
- tools=[read_tool, summarize_file],
123
+ api_key="...",
124
+ tools=[greet],
104
125
  )
105
126
  ```
106
127
 
107
- A bare `str` return becomes the tool `output`; any other JSON-serializable value is dumped to JSON. For finer control, return a `ToolExecutionResult` to set `output`, `content` (multimodal blocks such as images), `metadata` (structured UI data), and `is_error` independently.
128
+ To call a built-in tool from inside your own, type the first parameter as `ToolContext`:
129
+
130
+ ```python
131
+ from mycode import ToolContext, tool
132
+
133
+
134
+ @tool
135
+ def summarize_file(ctx: ToolContext, path: str) -> str:
136
+ """Return the first line of a text file."""
137
+
138
+ result = ctx.read(path)
139
+ return result.output.splitlines()[0] if result.output else ""
140
+ ```
108
141
 
109
- See [docs/sdk.md](../docs/sdk.md) for the event stream, cancellation, session rules, and the full `Agent` / `@tool` reference.
142
+ See [docs/sdk.md](../docs/sdk.md) for the event stream, cancellation, sessions, and the full `Agent` / `@tool` reference.
@@ -0,0 +1,117 @@
1
+ # mycode-sdk
2
+
3
+ Lightweight Python SDK for building AI agents.
4
+
5
+ ## Install
6
+
7
+ ```bash
8
+ uv add mycode-sdk
9
+ # or
10
+ pip install mycode-sdk
11
+ ```
12
+
13
+ ## Quick start
14
+
15
+ ```python
16
+ import asyncio
17
+
18
+ from mycode import Agent, bash_tool, read_tool
19
+
20
+
21
+ async def main() -> None:
22
+ agent = Agent(
23
+ model="claude-sonnet-4-6",
24
+ api_key="YOUR_API_KEY",
25
+ tools=[read_tool, bash_tool],
26
+ )
27
+
28
+ async for event in agent.achat("Read pyproject.toml and tell me the project name."):
29
+ if event.type == "text":
30
+ print(event.data["delta"], end="", flush=True)
31
+
32
+
33
+ asyncio.run(main())
34
+ ```
35
+
36
+ `Agent(...)` infers the provider from the model id. No tools are registered unless you pass `tools=[...]`.
37
+
38
+ For a simple synchronous call, use `run()`:
39
+
40
+ ```python
41
+ result = agent.run("Read pyproject.toml and tell me the project name.")
42
+ print(result.text)
43
+ ```
44
+
45
+ ## Multi-turn conversations
46
+
47
+ Call `achat()` or `run()` again on the same `Agent` — history accumulates automatically:
48
+
49
+ ```python
50
+ agent = Agent(model="claude-sonnet-4-6", api_key="...")
51
+
52
+ agent.run("What is 2 + 2?")
53
+ agent.run("Now multiply that by 10.") # remembers the earlier answer
54
+ ```
55
+
56
+ ## Saving sessions
57
+
58
+ Pass `session_dir` to persist the conversation to disk. Each session lives in a subdirectory named by `session_id`:
59
+
60
+ ```python
61
+ from pathlib import Path
62
+
63
+ agent = Agent(
64
+ model="claude-sonnet-4-6",
65
+ api_key="...",
66
+ session_dir=Path("./chats"),
67
+ session_id="my-chat",
68
+ )
69
+ ```
70
+
71
+ Construct another `Agent` with the same `(session_dir, session_id)` later to resume the conversation — the history is loaded automatically.
72
+
73
+ ## Built-in tools
74
+
75
+ ```python
76
+ from mycode import read_tool, write_tool, edit_tool, bash_tool
77
+ ```
78
+
79
+ Four tools for reading, writing, and editing files, plus running shell commands. Opt in by passing them to `tools=[...]`.
80
+
81
+ ## Custom tools
82
+
83
+ Decorate any function with `@tool`. Parameter type hints become the JSON schema sent to the provider; the docstring becomes the description:
84
+
85
+ ```python
86
+ from mycode import Agent, tool
87
+
88
+
89
+ @tool
90
+ def greet(name: str) -> str:
91
+ """Return a friendly greeting."""
92
+
93
+ return f"hello, {name}"
94
+
95
+
96
+ agent = Agent(
97
+ model="claude-sonnet-4-6",
98
+ api_key="...",
99
+ tools=[greet],
100
+ )
101
+ ```
102
+
103
+ To call a built-in tool from inside your own, type the first parameter as `ToolContext`:
104
+
105
+ ```python
106
+ from mycode import ToolContext, tool
107
+
108
+
109
+ @tool
110
+ def summarize_file(ctx: ToolContext, path: str) -> str:
111
+ """Return the first line of a text file."""
112
+
113
+ result = ctx.read(path)
114
+ return result.output.splitlines()[0] if result.output else ""
115
+ ```
116
+
117
+ See [docs/sdk.md](../docs/sdk.md) for the event stream, cancellation, sessions, and the full `Agent` / `@tool` reference.
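The README above says that `@tool` turns parameter type hints into the JSON schema sent to the provider. The SDK's actual implementation is not part of this diff; as a rough stdlib-only illustration of the idea, a sketch might look like this (the `_JSON_TYPES` table and `schema_from_signature` helper are hypothetical, and the real SDK presumably handles a much richer set of annotations):

```python
import inspect
from typing import get_type_hints

# Hypothetical mapping from Python annotations to JSON-schema type names.
_JSON_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean"}


def schema_from_signature(fn) -> dict:
    """Build a JSON-schema-like dict from a function's type hints and docstring."""
    hints = get_type_hints(fn)
    hints.pop("return", None)  # only parameters go into the schema
    properties = {
        name: {"type": _JSON_TYPES.get(tp, "string")} for name, tp in hints.items()
    }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": list(properties),
        },
    }


def greet(name: str, excited: bool = False) -> str:
    """Return a friendly greeting."""
    return f"hello, {name}{'!' if excited else ''}"


print(schema_from_signature(greet)["parameters"]["properties"])
# {'name': {'type': 'string'}, 'excited': {'type': 'boolean'}}
```

This also shows why the docstring matters: it becomes the tool description the model sees when deciding whether to call the tool.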
@@ -4,8 +4,8 @@ build-backend = "hatchling.build"
4
4
 
5
5
  [project]
6
6
  name = "mycode-sdk"
7
- version = "0.5.2"
8
- description = "Multi-turn tool-calling agent runtime for embedding the mycode agent loop."
7
+ version = "0.5.3"
8
+ description = "Lightweight Python SDK for building AI agents."
9
9
  readme = "README.md"
10
10
  requires-python = ">=3.12"
11
11
  license = "MIT"
@@ -12,16 +12,14 @@ from mycode.utils import as_bool, as_int
12
12
 
13
13
  _MODELS_CATALOG_PATH = Path(__file__).with_name("models_catalog.json")
14
14
 
15
- # Catalogs consulted only for capability bits (context window, image / pdf
16
- # support, …) when the requested provider has no entry for the model. They
17
- # are NOT registered providers; the metadata returned from a fallback hit is
18
- # always attributed to a real provider type the caller already has in hand.
19
- _FALLBACK_CAPABILITY_CATALOGS: tuple[str, ...] = ("aihubmix",)
20
-
21
15
 
22
16
  @dataclass(frozen=True)
23
17
  class ModelMetadata:
24
- """Normalized metadata used by provider resolution."""
18
+ """Model metadata for the requested provider/model.
19
+
20
+ ``provider`` and ``model`` keep the original query identity. Other fields
21
+ may come from a fallback catalog entry.
22
+ """
25
23
 
26
24
  provider: str
27
25
  model: str
@@ -91,71 +89,74 @@ def lookup_model_metadata(
91
89
  provider_type: str | None,
92
90
  model: str | None,
93
91
  ) -> ModelMetadata | None:
94
- """Resolve metadata for one provider type and model.
92
+ """Return model metadata for a provider/model request.
95
93
 
96
- Lookup tiers, in order:
94
+ Catalog lookup order:
97
95
 
98
- 1. Exact ``(provider_type, model)`` entry.
99
- 2. Canonical provider inferred from the model id prefix (``gpt-`` →
100
- ``openai``, ``claude-`` → ``anthropic``, …) when the requested
101
- provider had no hit.
102
- 3. Capability fallback from a secondary catalog (currently
103
- ``aihubmix``), attributed to the caller's real provider — never
104
- to the secondary catalog id.
96
+ 1. Requested provider and requested model.
97
+ 2. Inferred provider and unprefixed model name.
98
+ 3. Unique OpenRouter model whose suffix matches the model name.
105
99
  """
106
100
 
107
- raw = (model or "").strip()
108
- if not raw:
101
+ requested_model = (model or "").strip()
102
+ if not provider_type or not requested_model:
109
103
  return None
110
104
  catalog = load_models_catalog()
111
105
  if not catalog:
112
106
  return None
113
107
 
114
- bare = raw.split("/", 1)[1].strip() if "/" in raw else raw
115
- inferred = infer_provider_from_model(bare)
108
+ model_name = requested_model.split("/", 1)[1].strip() if "/" in requested_model else requested_model
109
+ catalog_entry = _get_catalog_entry(catalog, provider_type, requested_model)
116
110
 
117
- if provider_type:
118
- hit = _match(catalog, lookup=provider_type, model_id=raw, attributed=provider_type)
119
- if hit is not None:
120
- return hit
111
+ inferred_provider = infer_provider_from_model(model_name)
112
+ if catalog_entry is None and inferred_provider and inferred_provider != provider_type:
113
+ catalog_entry = _get_catalog_entry(catalog, inferred_provider, model_name)
121
114
 
122
- if inferred and inferred != provider_type:
123
- hit = _match(catalog, lookup=inferred, model_id=bare, attributed=inferred)
124
- if hit is not None:
125
- return hit
115
+ if catalog_entry is None:
116
+ catalog_entry = _get_openrouter_suffix_entry(catalog, model_name)
126
117
 
127
- attributed = provider_type or inferred
128
- if attributed is None:
118
+ if catalog_entry is None:
129
119
  return None
130
- for source in _FALLBACK_CAPABILITY_CATALOGS:
131
- hit = _match(catalog, lookup=source, model_id=bare, attributed=attributed)
132
- if hit is not None:
133
- return hit
134
120
 
135
- return None
121
+ return ModelMetadata(
122
+ provider=provider_type,
123
+ model=requested_model,
124
+ context_window=as_int(catalog_entry.get("context_window")),
125
+ max_output_tokens=as_int(catalog_entry.get("max_output_tokens")),
126
+ supports_reasoning=as_bool(catalog_entry.get("supports_reasoning")),
127
+ supports_image_input=as_bool(catalog_entry.get("supports_image_input")),
128
+ supports_pdf_input=as_bool(catalog_entry.get("supports_pdf_input")),
129
+ )
136
130
 
137
131
 
138
- def _match(
132
+ def _get_catalog_entry(
139
133
  catalog: dict[str, Any],
140
- *,
141
- lookup: str,
134
+ provider: str,
142
135
  model_id: str,
143
- attributed: str,
144
- ) -> ModelMetadata | None:
145
- """Look up one model in a catalog section and build metadata if present."""
136
+ ) -> dict[str, Any] | None:
137
+ """Return a catalog entry for provider/model_id."""
146
138
 
147
- section = catalog.get(lookup)
139
+ section = catalog.get(provider)
148
140
  if not isinstance(section, dict):
149
141
  return None
150
- raw = section.get(model_id)
151
- if not isinstance(raw, dict):
142
+ catalog_entry = section.get(model_id)
143
+ return catalog_entry if isinstance(catalog_entry, dict) else None
144
+
145
+
146
+ def _get_openrouter_suffix_entry(catalog: dict[str, Any], model_name: str) -> dict[str, Any] | None:
147
+ """Return an OpenRouter entry with a unique matching model suffix."""
148
+
149
+ openrouter = catalog.get("openrouter")
150
+ if not isinstance(openrouter, dict):
152
151
  return None
153
- return ModelMetadata(
154
- provider=attributed,
155
- model=model_id,
156
- context_window=as_int(raw.get("context_window")),
157
- max_output_tokens=as_int(raw.get("max_output_tokens")),
158
- supports_reasoning=as_bool(raw.get("supports_reasoning")),
159
- supports_image_input=as_bool(raw.get("supports_image_input")),
160
- supports_pdf_input=as_bool(raw.get("supports_pdf_input")),
161
- )
152
+
153
+ match: dict[str, Any] | None = None
154
+ for model_id, catalog_entry in openrouter.items():
155
+ if not isinstance(model_id, str) or "/" not in model_id or not isinstance(catalog_entry, dict):
156
+ continue
157
+ if model_id.split("/", 1)[1].strip() != model_name:
158
+ continue
159
+ if match is not None:
160
+ return None
161
+ match = catalog_entry
162
+ return match
@@ -1,370 +1,4 @@
1
1
  {
2
- "aihubmix": {
3
- "Kimi-K2-0905": {
4
- "context_window": 262144,
5
- "max_output_tokens": 262144,
6
- "supports_image_input": false,
7
- "supports_pdf_input": false,
8
- "supports_reasoning": false
9
- },
10
- "claude-haiku-4-5": {
11
- "context_window": 200000,
12
- "max_output_tokens": 64000,
13
- "supports_image_input": true,
14
- "supports_pdf_input": true,
15
- "supports_reasoning": true
16
- },
17
- "claude-opus-4-1": {
18
- "context_window": 200000,
19
- "max_output_tokens": 32000,
20
- "supports_image_input": true,
21
- "supports_pdf_input": true,
22
- "supports_reasoning": true
23
- },
24
- "claude-opus-4-5": {
25
- "context_window": 200000,
26
- "max_output_tokens": 32000,
27
- "supports_image_input": true,
28
- "supports_pdf_input": false,
29
- "supports_reasoning": true
30
- },
31
- "claude-opus-4-6": {
32
- "context_window": 200000,
33
- "max_output_tokens": 128000,
34
- "supports_image_input": true,
35
- "supports_pdf_input": true,
36
- "supports_reasoning": true
37
- },
38
- "claude-opus-4-6-think": {
39
- "context_window": 200000,
40
- "max_output_tokens": 128000,
41
- "supports_image_input": true,
42
- "supports_pdf_input": true,
43
- "supports_reasoning": true
44
- },
45
- "claude-sonnet-4-5": {
46
- "context_window": 200000,
47
- "max_output_tokens": 64000,
48
- "supports_image_input": true,
49
- "supports_pdf_input": true,
50
- "supports_reasoning": true
51
- },
52
- "claude-sonnet-4-6": {
53
- "context_window": 200000,
54
- "max_output_tokens": 64000,
55
- "supports_image_input": true,
56
- "supports_pdf_input": true,
57
- "supports_reasoning": true
58
- },
59
- "claude-sonnet-4-6-think": {
60
- "context_window": 200000,
61
- "max_output_tokens": 64000,
62
- "supports_image_input": true,
63
- "supports_pdf_input": true,
64
- "supports_reasoning": true
65
- },
66
- "coding-glm-4.7": {
67
- "context_window": 204800,
68
- "max_output_tokens": 131072,
69
- "supports_image_input": false,
70
- "supports_pdf_input": false,
71
- "supports_reasoning": true
72
- },
73
- "coding-glm-4.7-free": {
74
- "context_window": 204800,
75
- "max_output_tokens": 131072,
76
- "supports_image_input": false,
77
- "supports_pdf_input": false,
78
- "supports_reasoning": true
79
- },
80
- "coding-glm-5-free": {
81
- "context_window": 204800,
82
- "max_output_tokens": 131072,
83
- "supports_image_input": false,
84
- "supports_pdf_input": false,
85
- "supports_reasoning": true
86
- },
87
- "coding-glm-5.1": {
88
- "context_window": 200000,
89
- "max_output_tokens": 128000,
90
- "supports_image_input": false,
91
- "supports_pdf_input": false,
92
- "supports_reasoning": true
93
- },
94
- "coding-minimax-m2.1-free": {
95
- "context_window": 204800,
96
- "max_output_tokens": 131072,
97
- "supports_image_input": false,
98
- "supports_pdf_input": false,
99
- "supports_reasoning": true
100
- },
101
- "deepseek-v3.2": {
102
- "context_window": 131000,
103
- "max_output_tokens": 64000,
104
- "supports_image_input": false,
105
- "supports_pdf_input": false,
106
- "supports_reasoning": true
107
- },
108
- "deepseek-v3.2-fast": {
109
- "context_window": 128000,
110
- "max_output_tokens": 128000,
111
- "supports_image_input": false,
112
- "supports_pdf_input": false,
113
- "supports_reasoning": false
114
- },
115
- "deepseek-v3.2-think": {
116
- "context_window": 131000,
117
- "max_output_tokens": 64000,
118
- "supports_image_input": false,
119
- "supports_pdf_input": false,
120
- "supports_reasoning": true
121
- },
122
- "gemini-2.5-flash": {
123
- "context_window": 1000000,
124
- "max_output_tokens": 65000,
125
- "supports_image_input": true,
126
- "supports_pdf_input": false,
127
- "supports_reasoning": false
128
- },
129
- "gemini-2.5-pro": {
130
- "context_window": 2000000,
131
- "max_output_tokens": 65000,
132
- "supports_image_input": true,
133
- "supports_pdf_input": false,
134
- "supports_reasoning": true
135
- },
136
- "gemini-3-pro-preview": {
137
- "context_window": 1000000,
138
- "max_output_tokens": 65000,
139
- "supports_image_input": true,
140
- "supports_pdf_input": false,
141
- "supports_reasoning": true
142
- },
143
- "gemini-3-pro-preview-search": {
144
- "context_window": 1000000,
145
- "max_output_tokens": 65000,
146
- "supports_image_input": true,
147
- "supports_pdf_input": false,
148
- "supports_reasoning": true
149
- },
150
- "glm-4.6v": {
151
- "context_window": 128000,
152
- "max_output_tokens": 32768,
153
- "supports_image_input": true,
154
- "supports_pdf_input": false,
155
- "supports_reasoning": true
156
- },
157
- "glm-4.7": {
158
- "context_window": 204800,
159
- "max_output_tokens": 131072,
160
- "supports_image_input": false,
161
- "supports_pdf_input": false,
162
- "supports_reasoning": true
163
- },
164
- "glm-5": {
165
- "context_window": 204800,
166
- "max_output_tokens": 131072,
167
- "supports_image_input": false,
168
- "supports_pdf_input": false,
169
- "supports_reasoning": true
170
- },
171
- "glm-5.1": {
172
- "context_window": 200000,
173
- "max_output_tokens": 128000,
174
- "supports_image_input": false,
175
- "supports_pdf_input": false,
176
- "supports_reasoning": true
177
- },
178
- "gpt-4.1": {
179
- "context_window": 1047576,
180
- "max_output_tokens": 32768,
181
- "supports_image_input": true,
182
- "supports_pdf_input": false,
183
- "supports_reasoning": false
184
- },
185
- "gpt-4.1-mini": {
186
- "context_window": 1047576,
187
- "max_output_tokens": 32768,
188
- "supports_image_input": true,
189
- "supports_pdf_input": false,
190
- "supports_reasoning": false
191
- },
192
- "gpt-4.1-nano": {
193
- "context_window": 1047576,
194
- "max_output_tokens": 32768,
195
- "supports_image_input": true,
196
- "supports_pdf_input": false,
197
- "supports_reasoning": false
198
- },
199
- "gpt-4o": {
200
- "context_window": 128000,
201
- "max_output_tokens": 16384,
202
- "supports_image_input": true,
203
- "supports_pdf_input": false,
204
- "supports_reasoning": false
205
- },
206
- "gpt-5": {
207
- "context_window": 400000,
208
- "max_output_tokens": 128000,
209
- "supports_image_input": true,
210
- "supports_pdf_input": false,
211
- "supports_reasoning": true
212
- },
213
- "gpt-5-codex": {
214
- "context_window": 400000,
215
- "max_output_tokens": 128000,
216
- "supports_image_input": true,
217
- "supports_pdf_input": false,
218
- "supports_reasoning": true
219
- },
220
- "gpt-5-mini": {
221
- "context_window": 200000,
222
- "max_output_tokens": 64000,
223
- "supports_image_input": true,
224
- "supports_pdf_input": false,
225
- "supports_reasoning": true
226
- },
227
- "gpt-5-nano": {
228
- "context_window": 128000,
229
- "max_output_tokens": 16384,
230
- "supports_image_input": true,
231
- "supports_pdf_input": false,
232
- "supports_reasoning": false
233
- },
234
- "gpt-5-pro": {
235
- "context_window": 400000,
236
- "max_output_tokens": 128000,
237
- "supports_image_input": true,
238
- "supports_pdf_input": false,
239
- "supports_reasoning": true
240
- },
241
- "gpt-5.1": {
242
- "context_window": 400000,
243
- "max_output_tokens": 128000,
244
- "supports_image_input": true,
245
- "supports_pdf_input": false,
246
- "supports_reasoning": true
247
- },
248
- "gpt-5.1-codex": {
249
- "context_window": 400000,
250
- "max_output_tokens": 128000,
251
- "supports_image_input": true,
252
- "supports_pdf_input": false,
253
- "supports_reasoning": true
254
- },
255
- "gpt-5.1-codex-max": {
256
- "context_window": 400000,
257
- "max_output_tokens": 128000,
258
- "supports_image_input": true,
259
- "supports_pdf_input": false,
260
- "supports_reasoning": true
261
- },
262
- "gpt-5.1-codex-mini": {
263
- "context_window": 400000,
264
- "max_output_tokens": 128000,
265
- "supports_image_input": true,
266
- "supports_pdf_input": false,
267
- "supports_reasoning": true
268
- },
269
- "gpt-5.2": {
270
- "context_window": 400000,
271
- "max_output_tokens": 128000,
272
- "supports_image_input": true,
273
- "supports_pdf_input": false,
274
- "supports_reasoning": true
275
- },
276
- "gpt-5.2-codex": {
277
- "context_window": 400000,
278
- "max_output_tokens": 128000,
279
- "supports_image_input": true,
280
- "supports_pdf_input": false,
281
- "supports_reasoning": true
282
- },
283
- "gpt-5.4": {
284
- "context_window": 400000,
285
- "max_output_tokens": 128000,
286
- "supports_image_input": true,
287
- "supports_pdf_input": false,
288
- "supports_reasoning": true
289
- },
290
- "gpt-5.4-mini": {
291
- "context_window": 400000,
292
- "max_output_tokens": 128000,
293
- "supports_image_input": true,
294
- "supports_pdf_input": false,
295
- "supports_reasoning": false
296
- },
297
- "kimi-k2.5": {
298
- "context_window": 262144,
299
- "max_output_tokens": 262144,
300
- "supports_image_input": true,
301
- "supports_pdf_input": false,
302
- "supports_reasoning": true
303
- },
304
- "minimax-m2.1": {
305
- "context_window": 204800,
306
- "max_output_tokens": 131072,
307
- "supports_image_input": false,
308
- "supports_pdf_input": false,
309
- "supports_reasoning": true
310
- },
311
- "minimax-m2.5": {
312
- "context_window": 204800,
313
- "max_output_tokens": 131072,
314
- "supports_image_input": false,
315
- "supports_pdf_input": false,
316
- "supports_reasoning": true
317
- },
318
- "o4-mini": {
319
- "context_window": 200000,
320
- "max_output_tokens": 65536,
321
- "supports_image_input": false,
322
- "supports_pdf_input": false,
323
- "supports_reasoning": true
324
- },
325
- "qwen3-235b-a22b-instruct-2507": {
326
- "context_window": 262144,
327
- "max_output_tokens": 262144,
328
- "supports_image_input": false,
329
- "supports_pdf_input": false,
330
- "supports_reasoning": false
331
- },
332
- "qwen3-235b-a22b-thinking-2507": {
333
- "context_window": 262144,
334
- "max_output_tokens": 262144,
335
- "supports_image_input": false,
336
- "supports_pdf_input": false,
337
- "supports_reasoning": true
338
- },
339
- "qwen3-coder-480b-a35b-instruct": {
340
- "context_window": 262144,
341
- "max_output_tokens": 131000,
342
- "supports_image_input": false,
343
- "supports_pdf_input": false,
344
- "supports_reasoning": false
345
- },
346
- "qwen3-coder-next": {
347
- "context_window": 262144,
348
- "max_output_tokens": 65536,
349
- "supports_image_input": false,
350
- "supports_pdf_input": false,
351
- "supports_reasoning": false
352
- },
353
- "qwen3-max-2026-01-23": {
354
- "context_window": 262144,
355
- "max_output_tokens": 65536,
356
- "supports_image_input": false,
357
- "supports_pdf_input": false,
358
- "supports_reasoning": false
359
- },
360
- "qwen3.5-plus": {
361
- "context_window": 1000000,
362
- "max_output_tokens": 65536,
363
- "supports_image_input": true,
364
- "supports_pdf_input": false,
365
- "supports_reasoning": true
366
- }
367
- },
368
2
  "anthropic": {
369
3
  "claude-3-5-haiku-20241022": {
370
4
  "context_window": 200000,
@@ -891,6 +525,13 @@
891
525
  "supports_image_input": true,
892
526
  "supports_pdf_input": false,
893
527
  "supports_reasoning": true
528
+ },
529
+ "kimi-k2.6": {
530
+ "context_window": 262144,
531
+ "max_output_tokens": 262144,
532
+ "supports_image_input": true,
533
+ "supports_pdf_input": false,
534
+ "supports_reasoning": true
894
535
  }
895
536
  },
896
537
  "openai": {
@@ -1827,6 +1468,13 @@
1827
1468
  "supports_pdf_input": false,
1828
1469
  "supports_reasoning": true
1829
1470
  },
1471
+ "moonshotai/kimi-k2.6": {
1472
+ "context_window": 262144,
1473
+ "max_output_tokens": 262144,
1474
+ "supports_image_input": true,
1475
+ "supports_pdf_input": false,
1476
+ "supports_reasoning": true
1477
+ },
1830
1478
  "nousresearch/hermes-3-llama-3.1-405b:free": {
1831
1479
  "context_window": 131072,
1832
1480
  "max_output_tokens": 131072,
@@ -311,7 +311,7 @@ class AnthropicAdapter(AnthropicLikeAdapter):
311
311
  provider_id = "anthropic"
312
312
  label = "Anthropic"
313
313
  default_base_url = "https://api.anthropic.com"
314
- env_api_key_names = ("ANTHROPIC_API_KEY", "ANTHROPIC_AUTH_TOKEN")
314
+ env_api_key_names = ("ANTHROPIC_API_KEY",)
315
315
  default_models = ("claude-sonnet-4-6", "claude-opus-4-7")
316
316
  supports_reasoning_effort = True
317
317
 
@@ -352,7 +352,7 @@ class AnthropicAdapter(AnthropicLikeAdapter):
352
352
  class MoonshotAIAdapter(AnthropicLikeAdapter):
353
353
  """Moonshot's Anthropic-compatible Messages endpoint.
354
354
 
355
- kimi-k2.5 tool loops work through this endpoint. When thinking is enabled,
355
+ kimi-k2.6 tool loops work through this endpoint. When thinking is enabled,
356
356
  prior reasoning blocks must be replayed in the conversation history —
357
357
  Moonshot does not strip them on the server side.
358
358
  """
@@ -361,7 +361,7 @@ class MoonshotAIAdapter(AnthropicLikeAdapter):
361
361
  label = "Moonshot"
362
362
  default_base_url = "https://api.moonshot.ai/anthropic"
363
363
  env_api_key_names = ("MOONSHOT_API_KEY",)
364
- default_models = ("kimi-k2.5",)
364
+ default_models = ("kimi-k2.6",)
365
365
  supports_reasoning_effort = True
366
366
 
367
367
  def thinking_config(self, request: ProviderRequest) -> dict[str, Any] | None:
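The docstring change above notes that Moonshot does not strip prior reasoning blocks server-side, so when thinking is enabled the client must replay them in the conversation history. A stdlib-only sketch of that client-side choice, using hypothetical message shapes (the SDK's real message types are not shown in this diff):

```python
def prepare_history(messages: list[dict], replay_thinking: bool) -> list[dict]:
    """Drop prior 'thinking' blocks unless the provider requires replaying them."""
    if replay_thinking:
        return messages  # Moonshot-style: resend reasoning blocks verbatim
    out = []
    for msg in messages:
        blocks = [b for b in msg["content"] if b.get("type") != "thinking"]
        out.append({**msg, "content": blocks})
    return out


# Hypothetical prior turn containing a reasoning block and a text block.
history = [
    {"role": "assistant", "content": [
        {"type": "thinking", "text": "..."},
        {"type": "text", "text": "4"},
    ]}
]

print(len(prepare_history(history, replay_thinking=True)[0]["content"]))   # 2: thinking kept
print(len(prepare_history(history, replay_thinking=False)[0]["content"]))  # 1: thinking dropped
```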
@@ -1,84 +0,0 @@
1
- # mycode-sdk
2
-
3
- Lightweight Python SDK for building agents.
4
-
5
- ## Install
6
-
7
- ```bash
8
- uv add mycode-sdk
9
- # or
10
- pip install mycode-sdk
11
- ```
12
-
13
- ## Quick start
14
-
15
- ```python
16
- import asyncio
17
-
18
- from mycode import Agent, bash_tool, read_tool
19
-
20
-
21
- async def main() -> None:
22
- agent = Agent(
23
- model="claude-sonnet-4-6",
24
- api_key="YOUR_API_KEY",
25
- tools=[read_tool, bash_tool],
26
- )
27
-
28
- async for event in agent.achat("Read pyproject.toml and tell me the project name."):
29
- if event.type == "text":
30
- print(event.data["delta"], end="", flush=True)
31
-
32
-
33
- asyncio.run(main())
34
- ```
35
-
36
- `Agent(...)` infers the provider from the model id and defaults `cwd` to the current working directory. No tools are registered unless you pass `tools=[...]`, and nothing is persisted unless you pass `session_dir=`.
37
-
38
- For a synchronous call, use `run()` — it collects the stream into a `RunResult`:
39
-
40
- ```python
41
- result = agent.run("Read pyproject.toml and tell me the project name.")
42
- print(result.text)
43
- ```
44
-
45
- Call `achat` or `run` again on the same `Agent` to continue the conversation — history accumulates in `agent.messages`.
46
-
47
- To persist across processes, pass `session_dir` as the root directory; each `session_id` becomes a subdirectory. Reconstruct with the same `(session_dir, session_id)` to resume.
48
-
49
- ## Built-in tools
50
-
51
- ```python
52
- from mycode import read_tool, write_tool, edit_tool, bash_tool
53
- ```
54
-
55
- Only `bash_tool` streams incremental output as `tool_output` events; the others return a single result.
56
-
57
- ## Custom tools
58
-
59
- `@tool` wraps a sync or `async def` Python function as a `ToolSpec`. Parameter type hints become the JSON schema sent to the provider.
60
-
61
- Annotate the first parameter as `ToolContext` to have the context injected. Use `ctx.read / ctx.write / ctx.edit / ctx.bash` to invoke the built-ins, or `ctx.call(name, args)` for any registered tool by name.
62
-
63
- ```python
64
- from mycode import Agent, ToolContext, read_tool, tool
65
-
66
-
67
- @tool
68
- def summarize_file(ctx: ToolContext, path: str) -> str:
69
- """Return the first line of a text file."""
70
-
71
- result = ctx.read(path)
72
- return result.output.splitlines()[0] if result.output else ""
73
-
74
-
75
- agent = Agent(
76
- model="claude-sonnet-4-6",
77
- api_key="YOUR_API_KEY",
78
- tools=[read_tool, summarize_file],
79
- )
80
- ```
81
-
82
- A bare `str` return becomes the tool `output`; any other JSON-serializable value is dumped to JSON. For finer control, return a `ToolExecutionResult` to set `output`, `content` (multimodal blocks such as images), `metadata` (structured UI data), and `is_error` independently.
83
-
84
- See [docs/sdk.md](../docs/sdk.md) for the event stream, cancellation, session rules, and the full `Agent` / `@tool` reference.
File without changes