opencode-agent 0.1.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,209 @@
1
+ Metadata-Version: 2.4
2
+ Name: opencode-agent
3
+ Version: 0.1.0
4
+ Summary: Python orchestration layer for the OpenCode Agent API
5
+ License-Expression: MIT
6
+ Keywords: opencode,agent,ai,orchestration,llm
7
+ Classifier: Development Status :: 3 - Alpha
8
+ Classifier: Intended Audience :: Developers
9
+ Classifier: Programming Language :: Python :: 3
10
+ Classifier: Programming Language :: Python :: 3.11
11
+ Classifier: Programming Language :: Python :: 3.12
12
+ Classifier: Topic :: Software Development :: Libraries
13
+ Requires-Python: >=3.11
14
+ Description-Content-Type: text/markdown
15
+ Requires-Dist: httpx<1.0,>=0.27
16
+
17
+ # OpenCode Orchestrator
18
+
19
+ `opencode-orchestrator` is a Python 3.11+ client wrapper for the OpenCode server. The main API is a simple `Agent` class; defaults come from `.env`, MCP definitions come from `mcp.json`, and file or shell work runs on the OpenCode server inside the configured project path.
20
+
21
+ ## Features
22
+
23
+ - Defaults loaded from `.env`
24
+ - MCP server definitions loaded from `mcp.json`
25
+ - Required `OPENCODE_PROJECT_PATH` for project-scoped file and shell operations
26
+ - Simple `Agent` wrapper for server-side execution
27
+ - File, terminal, tool, streaming, and MCP usage through the OpenCode server
28
+ - Optional lower-level client and runner APIs if needed
29
+ - Shell execution via `POST /session/:id/shell`
30
+ - Optional SSE event streaming via `GET /event`
31
+ - Live examples for running against a real OpenCode server
32
+
33
+ ## Install
34
+
35
+ ```bash
36
+ python -m venv .venv
37
+ source .venv/bin/activate
38
+ pip install -e .
39
+ ```
40
+
41
+ ## Configuration
42
+
43
+ Configuration is loaded from `.env`; MCP server definitions are loaded from `mcp.json`.
44
+
45
+ The included `.env` file sets:
46
+
47
+ ```bash
48
+ OPENCODE_SERVER=http://localhost:4096
49
+ OPENCODE_PROJECT_PATH=/absolute/path/to/project
50
+ OPENCODE_MODEL=openai/gpt-5.4-mini
51
+ ```
52
+
53
+ `OPENCODE_PROJECT_PATH` is required. The client uses it as the default scope for file calls and prefixes shell commands with `cd <project_path> && ...` so the code can be run from any local directory while still operating inside the intended OpenCode project.
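The `cd` prefixing described above can be sketched roughly as follows. `scope_command` is a hypothetical helper for illustration, not the package's actual implementation; quoting the path guards against spaces in `OPENCODE_PROJECT_PATH`:

```python
import shlex

def scope_command(project_path: str, command: str) -> str:
    """Prefix a shell command so it runs inside the project directory.

    Illustrative only: mirrors the `cd <project_path> && ...` behavior
    described above, with the path shell-quoted for safety.
    """
    return f"cd {shlex.quote(project_path)} && {command}"

# scope_command("/my proj", "ls") -> "cd '/my proj' && ls"
```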
54
+
55
+ ## Minimal Usage
56
+
57
+ ```python
58
+ from opencode_orchestrator import Agent
59
+
60
+ agent = Agent(
61
+ name="MainAgent",
62
+ system_prompt="You are a highly capable AI assistant that uses the OpenCode server.",
63
+ )
64
+
65
+ reply = agent.run(
66
+ user_prompt="Who are you and what time is it? Do not use tools."
67
+ )
68
+ print(reply)
69
+
70
+ agent.close()
71
+ ```
72
+
73
+ If you want tool access and full metadata instead of only the text response:
74
+
75
+ ```python
76
+ from opencode_orchestrator import Agent
77
+
78
+ agent = Agent(agent="explore", tools=["glob", "read"])
79
+ result = agent.run("Summarize this project.", return_result=True)
80
+ print(result.output)
81
+ print(result.metadata.session_id)
82
+ agent.close()
83
+ ```
84
+
85
+ Terminal execution is also minimal:
86
+
87
+ ```python
88
+ from opencode_orchestrator import Agent
89
+
90
+ agent = Agent(name="TerminalAgent")
91
+ result = agent.run_shell("pwd && ls", return_result=True)
92
+ print(result.metadata.tool_events)
93
+ agent.close()
94
+ ```
95
+
96
+ You can list only the models that are currently usable with the configured provider credentials:
97
+
98
+ ```python
99
+ from opencode_orchestrator import Agent
100
+
101
+ agent = Agent()
102
+ print(agent.list_models())
103
+ agent.close()
104
+ ```
105
+
106
+ You can also pass a custom model directly, either as the default for the agent or per call:
107
+
108
+ ```python
109
+ from opencode_orchestrator import Agent
110
+
111
+ agent = Agent(model="openai/gpt-5.4")
112
+ reply = agent.run("Reply in one line.")
113
+
114
+ other = agent.run(
115
+ "Reply in one line with a different model.",
116
+ model="openai/gpt-5.4-mini",
117
+ )
118
+
119
+ agent.close()
120
+ ```
121
+
122
+ ## Live Examples
123
+
124
+ These examples are intended to run against a real OpenCode server at `http://localhost:4096`.
125
+
126
+ Simple usage:
127
+
128
+ ```bash
129
+ .venv/bin/python examples/simple_usage.py
130
+ ```
131
+
132
+ Tool usage:
133
+
134
+ ```bash
135
+ .venv/bin/python examples/tool_usage.py
136
+ ```
137
+
138
+ Multiple agents:
139
+
140
+ ```bash
141
+ .venv/bin/python examples/multi_agent_usage.py
142
+ ```
143
+
144
+ File operations:
145
+
146
+ ```bash
147
+ .venv/bin/python examples/file_operations.py
148
+ ```
149
+
150
+ Terminal operations:
151
+
152
+ ```bash
153
+ .venv/bin/python examples/terminal_operations.py
154
+ ```
155
+
156
+ Streaming:
157
+
158
+ ```bash
159
+ .venv/bin/python examples/streaming_usage.py
160
+ ```
161
+
162
+ ## CLI Usage
163
+
164
+ Run a single task:
165
+
166
+ ```bash
167
+ opencode-orchestrator run "Create a release checklist" --agent plan
168
+ ```
169
+
170
+ Run a multi-agent task:
171
+
172
+ ```bash
173
+ opencode-orchestrator orchestrate "Implement and review a feature" --parallel
174
+ ```
175
+
176
+ Inspect sessions:
177
+
178
+ ```bash
179
+ opencode-orchestrator sessions list
180
+ opencode-orchestrator sessions status
181
+ opencode-orchestrator sessions abort ses_123
182
+ ```
183
+
184
+ Inspect tools and MCP servers:
185
+
186
+ ```bash
187
+ opencode-orchestrator tools --ids
188
+ opencode-orchestrator mcp list
189
+ ```
190
+
191
+ Run a shell command through OpenCode:
192
+
193
+ ```bash
194
+ opencode-orchestrator shell ses_123 "pytest -q"
195
+ ```
196
+
197
+ Read a few streaming events:
198
+
199
+ ```bash
200
+ opencode-orchestrator stream --limit 5
201
+ ```
202
+
203
+ ## Notes
204
+
205
+ - Defaults are taken from `.env` unless you override fields directly in `Agent(...)` or `OpenCodeConfig`.
206
+ - MCP registration settings come from `mcp.json`.
207
+ - Tool allowlists are filtered against the live tool IDs reported by the server.
208
+ - `Agent.run()` returns plain text by default so the common case stays minimal.
209
+ - `Agent.list_models()` filters the server provider config down to models that are directly usable with the currently configured credentials.
@@ -0,0 +1,193 @@
1
+ # OpenCode Orchestrator
2
+
3
+ `opencode-orchestrator` is a Python 3.11+ client wrapper for the OpenCode server. The main API is a simple `Agent` class; defaults come from `.env`, MCP definitions come from `mcp.json`, and file or shell work runs on the OpenCode server inside the configured project path.
4
+
5
+ ## Features
6
+
7
+ - Defaults loaded from `.env`
8
+ - MCP server definitions loaded from `mcp.json`
9
+ - Required `OPENCODE_PROJECT_PATH` for project-scoped file and shell operations
10
+ - Simple `Agent` wrapper for server-side execution
11
+ - File, terminal, tool, streaming, and MCP usage through the OpenCode server
12
+ - Optional lower-level client and runner APIs if needed
13
+ - Shell execution via `POST /session/:id/shell`
14
+ - Optional SSE event streaming via `GET /event`
15
+ - Live examples for running against a real OpenCode server
16
+
17
+ ## Install
18
+
19
+ ```bash
20
+ python -m venv .venv
21
+ source .venv/bin/activate
22
+ pip install -e .
23
+ ```
24
+
25
+ ## Configuration
26
+
27
+ Configuration is loaded from `.env`; MCP server definitions are loaded from `mcp.json`.
28
+
29
+ The included `.env` file sets:
30
+
31
+ ```bash
32
+ OPENCODE_SERVER=http://localhost:4096
33
+ OPENCODE_PROJECT_PATH=/absolute/path/to/project
34
+ OPENCODE_MODEL=openai/gpt-5.4-mini
35
+ ```
36
+
37
+ `OPENCODE_PROJECT_PATH` is required. The client uses it as the default scope for file calls and prefixes shell commands with `cd <project_path> && ...` so the code can be run from any local directory while still operating inside the intended OpenCode project.
38
+
39
+ ## Minimal Usage
40
+
41
+ ```python
42
+ from opencode_orchestrator import Agent
43
+
44
+ agent = Agent(
45
+ name="MainAgent",
46
+ system_prompt="You are a highly capable AI assistant that uses the OpenCode server.",
47
+ )
48
+
49
+ reply = agent.run(
50
+ user_prompt="Who are you and what time is it? Do not use tools."
51
+ )
52
+ print(reply)
53
+
54
+ agent.close()
55
+ ```
56
+
57
+ If you want tool access and full metadata instead of only the text response:
58
+
59
+ ```python
60
+ from opencode_orchestrator import Agent
61
+
62
+ agent = Agent(agent="explore", tools=["glob", "read"])
63
+ result = agent.run("Summarize this project.", return_result=True)
64
+ print(result.output)
65
+ print(result.metadata.session_id)
66
+ agent.close()
67
+ ```
68
+
69
+ Terminal execution is also minimal:
70
+
71
+ ```python
72
+ from opencode_orchestrator import Agent
73
+
74
+ agent = Agent(name="TerminalAgent")
75
+ result = agent.run_shell("pwd && ls", return_result=True)
76
+ print(result.metadata.tool_events)
77
+ agent.close()
78
+ ```
79
+
80
+ You can list only the models that are currently usable with the configured provider credentials:
81
+
82
+ ```python
83
+ from opencode_orchestrator import Agent
84
+
85
+ agent = Agent()
86
+ print(agent.list_models())
87
+ agent.close()
88
+ ```
89
+
90
+ You can also pass a custom model directly, either as the default for the agent or per call:
91
+
92
+ ```python
93
+ from opencode_orchestrator import Agent
94
+
95
+ agent = Agent(model="openai/gpt-5.4")
96
+ reply = agent.run("Reply in one line.")
97
+
98
+ other = agent.run(
99
+ "Reply in one line with a different model.",
100
+ model="openai/gpt-5.4-mini",
101
+ )
102
+
103
+ agent.close()
104
+ ```
105
+
106
+ ## Live Examples
107
+
108
+ These examples are intended to run against a real OpenCode server at `http://localhost:4096`.
109
+
110
+ Simple usage:
111
+
112
+ ```bash
113
+ .venv/bin/python examples/simple_usage.py
114
+ ```
115
+
116
+ Tool usage:
117
+
118
+ ```bash
119
+ .venv/bin/python examples/tool_usage.py
120
+ ```
121
+
122
+ Multiple agents:
123
+
124
+ ```bash
125
+ .venv/bin/python examples/multi_agent_usage.py
126
+ ```
127
+
128
+ File operations:
129
+
130
+ ```bash
131
+ .venv/bin/python examples/file_operations.py
132
+ ```
133
+
134
+ Terminal operations:
135
+
136
+ ```bash
137
+ .venv/bin/python examples/terminal_operations.py
138
+ ```
139
+
140
+ Streaming:
141
+
142
+ ```bash
143
+ .venv/bin/python examples/streaming_usage.py
144
+ ```
145
+
146
+ ## CLI Usage
147
+
148
+ Run a single task:
149
+
150
+ ```bash
151
+ opencode-orchestrator run "Create a release checklist" --agent plan
152
+ ```
153
+
154
+ Run a multi-agent task:
155
+
156
+ ```bash
157
+ opencode-orchestrator orchestrate "Implement and review a feature" --parallel
158
+ ```
159
+
160
+ Inspect sessions:
161
+
162
+ ```bash
163
+ opencode-orchestrator sessions list
164
+ opencode-orchestrator sessions status
165
+ opencode-orchestrator sessions abort ses_123
166
+ ```
167
+
168
+ Inspect tools and MCP servers:
169
+
170
+ ```bash
171
+ opencode-orchestrator tools --ids
172
+ opencode-orchestrator mcp list
173
+ ```
174
+
175
+ Run a shell command through OpenCode:
176
+
177
+ ```bash
178
+ opencode-orchestrator shell ses_123 "pytest -q"
179
+ ```
180
+
181
+ Read a few streaming events:
182
+
183
+ ```bash
184
+ opencode-orchestrator stream --limit 5
185
+ ```
186
+
187
+ ## Notes
188
+
189
+ - Defaults are taken from `.env` unless you override fields directly in `Agent(...)` or `OpenCodeConfig`.
190
+ - MCP registration settings come from `mcp.json`.
191
+ - Tool allowlists are filtered against the live tool IDs reported by the server.
192
+ - `Agent.run()` returns plain text by default so the common case stays minimal.
193
+ - `Agent.list_models()` filters the server provider config down to models that are directly usable with the currently configured credentials.
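The allowlist filtering mentioned in the notes can be sketched as below. `filter_allowlist` is a hypothetical helper, assuming the server reports its live tool IDs as a set of strings; the package's real filtering logic may differ:

```python
def filter_allowlist(requested: list[str], live_tool_ids: set[str]) -> list[str]:
    """Keep only the requested tools that the server actually reports.

    Hypothetical sketch of the allowlist filtering noted above; order of
    the original request is preserved, unknown tool IDs are dropped.
    """
    return [tool for tool in requested if tool in live_tool_ids]
```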
@@ -0,0 +1,32 @@
1
+ [build-system]
2
+ requires = ["setuptools>=68", "wheel"]
3
+ build-backend = "setuptools.build_meta"
4
+
5
+ [project]
6
+ name = "opencode-agent"
7
+ version = "0.1.0"
8
+ description = "Python orchestration layer for the OpenCode Agent API"
9
+ readme = "README.md"
10
+ requires-python = ">=3.11"
11
+ license = "MIT"
12
+ keywords = ["opencode", "agent", "ai", "orchestration", "llm"]
13
+ classifiers = [
14
+ "Development Status :: 3 - Alpha",
15
+ "Intended Audience :: Developers",
16
+ "Programming Language :: Python :: 3",
17
+ "Programming Language :: Python :: 3.11",
18
+ "Programming Language :: Python :: 3.12",
19
+ "Topic :: Software Development :: Libraries",
20
+ ]
21
+ dependencies = [
22
+ "httpx>=0.27,<1.0",
23
+ ]
24
+
25
+ [project.scripts]
26
+ opencode-agent = "opencode_agent.cli:main"
27
+
28
+ [tool.setuptools]
29
+ package-dir = {"" = "src"}
30
+
31
+ [tool.setuptools.packages.find]
32
+ where = ["src"]
@@ -0,0 +1,4 @@
1
+ [egg_info]
2
+ tag_build =
3
+ tag_date = 0
4
+
@@ -0,0 +1,26 @@
1
+ from .agent import Agent
2
+ from .client import OpenCodeClient
3
+ from .config import OpenCodeConfig
4
+ from .exceptions import (
5
+ MCPReadinessError,
6
+ OpenCodeError,
7
+ OpenCodeHTTPError,
8
+ SessionNotFoundError,
9
+ TaskTimeoutError,
10
+ ToolNotFoundError,
11
+ )
12
+ from .orchestrator import MultiAgentRunner, SingleAgentRunner
13
+
14
+ __all__ = [
15
+ "Agent",
16
+ "MCPReadinessError",
17
+ "MultiAgentRunner",
18
+ "OpenCodeClient",
19
+ "OpenCodeConfig",
20
+ "OpenCodeError",
21
+ "OpenCodeHTTPError",
22
+ "SessionNotFoundError",
23
+ "SingleAgentRunner",
24
+ "TaskTimeoutError",
25
+ "ToolNotFoundError",
26
+ ]
@@ -0,0 +1,137 @@
1
+ from __future__ import annotations
2
+
3
+ from dataclasses import asdict, is_dataclass
4
+ from typing import Any
5
+
6
+ from .client import OpenCodeClient
7
+ from .config import OpenCodeConfig
8
+ from .managers import MCPManager, SessionManager, ToolManager
9
+ from .models import ExecutionMetadata, ExecutionResult, StreamEvent
10
+ from .orchestrator import EventStreamer, SingleAgentRunner
11
+
12
+
13
+ class Agent:
14
+ def __init__(
15
+ self,
16
+ name: str = "OpenCodeAgent",
17
+ system_prompt: str | None = None,
18
+ *,
19
+ agent: str | None = None,
20
+ model: str | dict[str, Any] | None = None,
21
+ tools: list[str] | None = None,
22
+ mcp_servers: list[str] | None = None,
23
+ project_path: str | None = None,
24
+ base_url: str | None = None,
25
+ streaming: bool | None = None,
26
+ verbose: bool | None = None,
27
+ ):
28
+ config = OpenCodeConfig.from_env()
29
+ if project_path is not None:
30
+ config.project_path = project_path
31
+ if base_url is not None:
32
+ config.base_url = base_url
33
+ if streaming is not None:
34
+ config.streaming = streaming
35
+ if verbose is not None:
36
+ config.verbose = verbose
37
+ if model is not None:
38
+ config.default_model = model
39
+ if agent is not None:
40
+ config.default_agent = agent
41
+ if tools is not None:
42
+ config.default_tools = tools
43
+
44
+ self.name = name
45
+ self.system_prompt = system_prompt
46
+ self.default_mcp_servers = mcp_servers or []
47
+ self.config = config
48
+ self.client = OpenCodeClient(config=config)
49
+ self.session_manager = SessionManager(self.client)
50
+ self.tool_manager = ToolManager(self.client)
51
+ self.mcp_manager = MCPManager(self.client)
52
+ self.runner = SingleAgentRunner(
53
+ client=self.client,
54
+ config=self.config,
55
+ session_manager=self.session_manager,
56
+ tool_manager=self.tool_manager,
57
+ mcp_manager=self.mcp_manager,
58
+ )
59
+ self.event_streamer = EventStreamer(self.client)
60
+ self.session_id: str | None = None
61
+
62
+ def close(self) -> None:
63
+ self.client.close()
64
+
65
+ def set_model(self, model: str | dict[str, Any] | None) -> None:
66
+ self.config.default_model = model
67
+
68
+ def list_models(self, include_details: bool = False) -> list[Any]:
69
+ return self.client.list_available_models(include_details=include_details)
70
+
71
+ def run(
72
+ self,
73
+ user_prompt: str,
74
+ *,
75
+ agent: str | None = None,
76
+ model: str | dict[str, Any] | None = None,
77
+ tools: list[str] | None = None,
78
+ mcp_servers: list[str] | None = None,
79
+ session_id: str | None = None,
80
+ return_result: bool = False,
81
+ ) -> str | ExecutionResult:
82
+ result = self.runner.run(
83
+ user_prompt,
84
+ agent=agent,
85
+ model=model,
86
+ tools=tools,
87
+ mcp_servers=mcp_servers or self.default_mcp_servers or None,
88
+ session_id=session_id or self.session_id,
89
+ title=self.name,
90
+ system=self.system_prompt,
91
+ )
92
+ self.session_id = result.metadata.session_id
93
+ return result if return_result else result.output
94
+
95
+ def run_async(
96
+ self,
97
+ user_prompt: str,
98
+ *,
99
+ agent: str | None = None,
100
+ model: str | dict[str, Any] | None = None,
101
+ tools: list[str] | None = None,
102
+ session_id: str | None = None,
103
+ ) -> ExecutionMetadata:
104
+ metadata = self.runner.run_async(
105
+ user_prompt,
106
+ agent=agent,
107
+ model=model,
108
+ tools=tools,
109
+ session_id=session_id or self.session_id,
110
+ title=self.name,
111
+ )
112
+ self.session_id = metadata.session_id
113
+ return metadata
114
+
115
+ def run_shell(
116
+ self,
117
+ command: str,
118
+ *,
119
+ agent: str | None = None,
120
+ model: str | dict[str, Any] | None = None,
121
+ return_result: bool = False,
122
+ ) -> str | ExecutionResult:
123
+ if not self.session_id:
124
+ self.session_id = self.client.create_session(title=self.name).id
125
+ result = self.runner.run_shell(command, session_id=self.session_id, agent=agent, model=model)
126
+ return result if return_result else result.output
127
+
128
+ def stream(self, *, limit: int = 10) -> list[StreamEvent]:
129
+ return self.event_streamer.stream(limit=limit)
130
+
131
+ @staticmethod
132
+ def to_dict(value: Any) -> Any:
133
+ if isinstance(value, list):
134
+ return [Agent.to_dict(item) for item in value]
135
+ if is_dataclass(value):
136
+ return asdict(value)
137
+ return value
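As a standalone illustration of the `to_dict` helper at the end of `agent.py`, the same recursion can be restated without the package imports (the `Event` dataclass here is invented for the example):

```python
from dataclasses import asdict, dataclass, is_dataclass
from typing import Any

def to_dict(value: Any) -> Any:
    # Same recursion as Agent.to_dict: lists recurse element-wise,
    # dataclasses become plain dicts, everything else passes through.
    if isinstance(value, list):
        return [to_dict(item) for item in value]
    if is_dataclass(value):
        return asdict(value)
    return value

@dataclass
class Event:
    name: str
    payload: dict[str, Any]
```

Note that `asdict` itself recurses into nested dataclass fields, so the explicit list branch is only needed for top-level lists of results.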