oc_cc_proxy-0.1.0-py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
oc_cc_proxy-0.1.0.dist-info/METADATA ADDED
@@ -0,0 +1,115 @@
+ Metadata-Version: 2.4
+ Name: oc-cc-proxy
+ Version: 0.1.0
+ Summary: Local LiteLLM proxy for routing Claude Code to OpenCode Go.
+ Requires-Python: >=3.12
+ Description-Content-Type: text/markdown
+ Requires-Dist: httpx>=0.28.1
+ Requires-Dist: litellm[proxy]>=1.83.14
+ Requires-Dist: python-dotenv>=1.2.2
+ Requires-Dist: pyyaml>=6.0.3
+
+ # oc-cc-proxy
+
+ `oc-cc-proxy` runs a local LiteLLM Proxy that lets Claude Code send Anthropic Messages API requests to OpenCode Go's OpenAI-compatible endpoint.
+
+ The proxy listens on `http://127.0.0.1:4000` by default and routes all Claude Code model names through LiteLLM wildcard passthrough to `https://opencode.ai/zen/go/v1/chat/completions`.
+
+ ## Setup
+
+ 1. Install dependencies:
+
+ ```bash
+ uv sync
+ ```
+
+ 2. Configure the OpenCode Go API key:
+
+ ```bash
+ cp .env.example .env
+ ```
+
+ 3. Edit `.env` and set `OPENCODE_GO_API_KEY`.
+
+ 4. Start the proxy:
+
+ ```bash
+ uv run oc-cc-proxy
+ ```
+
+ If `OPENCODE_GO_API_KEY` is missing, the proxy exits with a configuration error before accepting any requests.
+
+ ## Claude Code Settings
+
+ Add the following environment values to your user-scope Claude Code `settings.json`:
+
+ ```json
+ {
+   "env": {
+     "ANTHROPIC_BASE_URL": "http://127.0.0.1:4000",
+     "ANTHROPIC_API_KEY": "not-a-real-anthropic-key",
+     "ANTHROPIC_DEFAULT_SONNET_MODEL": "deepseek-v4-pro",
+     "ANTHROPIC_DEFAULT_HAIKU_MODEL": "deepseek-v4-pro",
+     "ANTHROPIC_DEFAULT_OPUS_MODEL": "deepseek-v4-pro"
+   }
+ }
+ ```
+
+ `ANTHROPIC_API_KEY` only needs to be a non-empty placeholder; the proxy does not validate it. Authentication to OpenCode Go uses the `OPENCODE_GO_API_KEY` set on the proxy process.
+
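+ As a quick manual smoke test (a minimal sketch assuming the proxy is running with default settings; the model name is only an example), you can post an Anthropic-style Messages request directly to the proxy with `httpx`:
+
+ ```python
+ import httpx
+
+ # Any non-empty x-api-key is accepted locally; real authentication happens
+ # upstream via OPENCODE_GO_API_KEY on the proxy process.
+ response = httpx.post(
+     "http://127.0.0.1:4000/v1/messages",
+     headers={"x-api-key": "not-a-real-anthropic-key", "anthropic-version": "2023-06-01"},
+     json={
+         "model": "deepseek-v4-pro",
+         "max_tokens": 64,
+         "messages": [{"role": "user", "content": "Reply with exactly: ok"}],
+     },
+     timeout=60,
+ )
+ response.raise_for_status()
+ print(response.json()["content"])
+ ```
+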
+ ## Validation
+
+ With the proxy running, validate non-streaming text, wildcard model passthrough, streaming terminal events, tool definitions, tool results, and streamed tool requests:
+
+ ```bash
+ uv run oc-cc-proxy-validate --model deepseek-v4-pro
+ ```
+
+ Each successful check prints a line starting with `ok:`. Treat any dropped, malformed, or ignored tool-call behavior as a blocking compatibility issue before describing the proxy as Claude Code-ready.
+
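+ With a healthy proxy, the six checks print in this order (these are the script's own `ok:` lines):
+
+ ```
+ ok: non-streaming text
+ ok: wildcard model passthrough
+ ok: tool definitions and tool_use response
+ ok: streaming text and terminal event
+ ok: streamed tool response preserves name, input, and identifier information
+ ok: tool result follow-up
+ ```
+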
70
+ Project-local checks:
71
+
72
+ ```bash
73
+ uv run pytest
74
+ uv run ruff check .
75
+ ```
76
+
77
+ ## Debugging
78
+
79
+ Enable LiteLLM verbose logging and local request-shape diagnostics:
80
+
81
+ ```bash
82
+ uv run oc-cc-proxy --debug
83
+ ```
84
+
85
+ Debug helpers redact sensitive headers such as `Authorization`, `x-api-key`, and `api-key`. Avoid pasting raw upstream logs publicly unless you have checked them for secrets.
86
+
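+ For reference, `redact_headers` in `oc_proxy/config.py` performs the redaction case-insensitively:
+
+ ```python
+ from oc_proxy.config import redact_headers
+
+ print(redact_headers({"Authorization": "Bearer secret", "accept": "application/json"}))
+ # {'Authorization': '<redacted>', 'accept': 'application/json'}
+ ```
+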
+ To print the path of the generated LiteLLM config without starting the server:
+
+ ```bash
+ uv run oc-cc-proxy --print-config --config /tmp/oc-cc-proxy-litellm.yaml
+ ```
+
+ ## Configuration
+
+ - `OPENCODE_GO_API_KEY`: required OpenCode Go API key.
+ - `OC_PROXY_HOST`: optional host override; defaults to `127.0.0.1`.
+ - `OC_PROXY_PORT`: optional port override; defaults to `4000`.
+
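+ A matching `.env` might look like this (the key value is a placeholder):
+
+ ```bash
+ OPENCODE_GO_API_KEY=your-opencode-go-key
+ OC_PROXY_PORT=4000
+ ```
+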
+ The generated LiteLLM route uses:
+
+ ```yaml
+ model_list:
+   - model_name: "*"
+     litellm_params:
+       model: "openai/*"
+       api_base: "https://opencode.ai/zen/go/v1"
+ ```
+
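+ With this route, a Claude Code request for the model `deepseek-v4-pro`, for example, is forwarded as `openai/deepseek-v4-pro` to `https://opencode.ai/zen/go/v1/chat/completions`.
+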
+ ## Current Limitations
+
+ - Tool-use compatibility must be validated against a live OpenCode Go account and target model before claiming a model is known-good for Claude Code.
+ - `deepseek-v4-pro` is the current known-good validation target for text, streaming, wildcard routing, tool calls, streamed tool-call metadata, and `tool_result` follow-up turns.
+ - DeepSeek V4 requires its returned reasoning metadata to be replayed on assistant tool-call history. The proxy installs a LiteLLM callback that converts Anthropic `thinking` blocks back into upstream `reasoning_content` before `tool_result` follow-up turns; see the sketch after this list.
+ - Anthropic-specific features such as prompt caching, extended thinking, and any endpoints beyond `/v1/messages` are not claimed as supported unless separately validated.
+ - Invalid Claude Code model names are forwarded to OpenCode Go by design: wildcard model passthrough keeps Claude Code settings as the source of truth for model selection.
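+
+ For illustration (field values here are hypothetical), the callback in `oc_proxy/reasoning.py` rewrites an assistant history turn roughly like this:
+
+ ```python
+ # Assistant turn as Claude Code replays it (Anthropic shape):
+ anthropic_turn = {
+     "role": "assistant",
+     "thinking_blocks": [{"type": "thinking", "thinking": "call get_status"}],
+     "tool_calls": [{"id": "call_1", "type": "function",
+                     "function": {"name": "get_status", "arguments": "{\"target\": \"proxy\"}"}}],
+     "content": None,
+ }
+
+ # The same turn after the pre-call hook, as sent upstream (OpenAI shape):
+ upstream_turn = {
+     "role": "assistant",
+     "reasoning_content": "call get_status",
+     "tool_calls": anthropic_turn["tool_calls"],
+     "content": "",  # the hook replaces None content with "" when tool_calls are present
+ }
+ ```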
oc_cc_proxy-0.1.0.dist-info/RECORD ADDED
@@ -0,0 +1,10 @@
+ oc_proxy/__init__.py,sha256=wQ5ZuKi7Ka_ObJ_Ai7ynzCt9QyNrZFkxx3m2ermYNXc,105
+ oc_proxy/cli.py,sha256=2I48yZLWPI4yiF_tg6JyFvatsyvdVOE1io-zRb1o0Ic,3060
+ oc_proxy/config.py,sha256=WgVOH0HN8h_3WyI76etxCDh5dgUWqaQZ-TDbSNBg0_Q,3167
+ oc_proxy/reasoning.py,sha256=bo8LU4ZJfxH50EUmvmOwem8xo46e3n24buzAwV_7xQQ,4434
+ oc_proxy/validation.py,sha256=nqhAZlPLoYnDhF3VBz7mjrH269gMb78JN7ofYPso0bs,6295
+ oc_cc_proxy-0.1.0.dist-info/METADATA,sha256=O5cKpID5RtB48NiOxcNwDGMiAMQf3voQGkf_bmVknKI,3879
+ oc_cc_proxy-0.1.0.dist-info/WHEEL,sha256=aeYiig01lYGDzBgS8HxWXOg3uV61G9ijOsup-k9o1sk,91
+ oc_cc_proxy-0.1.0.dist-info/entry_points.txt,sha256=0vIUCZBN5eJJpM7ovwMopNP4RH8dl6qFdDya_76kRPM,98
+ oc_cc_proxy-0.1.0.dist-info/top_level.txt,sha256=11eGVx9Bi9dmxBE7mIP1My_Hl5epQ0eA61PZCxR8Oa8,9
+ oc_cc_proxy-0.1.0.dist-info/RECORD,,
oc_cc_proxy-0.1.0.dist-info/WHEEL ADDED
@@ -0,0 +1,5 @@
+ Wheel-Version: 1.0
+ Generator: setuptools (82.0.1)
+ Root-Is-Purelib: true
+ Tag: py3-none-any
+
oc_cc_proxy-0.1.0.dist-info/entry_points.txt ADDED
@@ -0,0 +1,3 @@
+ [console_scripts]
+ oc-cc-proxy = oc_proxy.cli:main
+ oc-cc-proxy-validate = oc_proxy.validation:main
oc_cc_proxy-0.1.0.dist-info/top_level.txt ADDED
@@ -0,0 +1 @@
+ oc_proxy
oc_proxy/__init__.py ADDED
@@ -0,0 +1,5 @@
+ """Local Claude Code proxy wrapper for OpenCode Go."""
+
+ __all__ = ["__version__"]
+
+ __version__ = "0.1.0"
oc_proxy/cli.py ADDED
@@ -0,0 +1,85 @@
+ from __future__ import annotations
+
+ import argparse
+ import json
+ import shutil
+ import subprocess
+ import sys
+ import tempfile
+ from pathlib import Path
+
+ from .config import ConfigurationError, load_settings, summarize_payload_shape, write_litellm_config
+
+
+ def build_parser() -> argparse.ArgumentParser:
+     parser = argparse.ArgumentParser(description="Run a local LiteLLM proxy for Claude Code and OpenCode Go.")
+     parser.add_argument("--env-file", help="Path to a .env file containing OPENCODE_GO_API_KEY.")
+     parser.add_argument("--host", help="Override OC_PROXY_HOST for this run.")
+     parser.add_argument("--port", type=int, help="Override OC_PROXY_PORT for this run.")
+     parser.add_argument("--config", help="Write LiteLLM config to this path instead of a temporary file.")
+     parser.add_argument("--print-config", action="store_true", help="Print generated LiteLLM config path and exit.")
+     parser.add_argument("--debug", action="store_true", help="Enable LiteLLM verbose logging and request-shape diagnostics.")
+     return parser
+
+
+ def main(argv: list[str] | None = None) -> int:
+     args = build_parser().parse_args(argv)
+     try:
+         settings = load_settings(env_file=args.env_file, debug=args.debug)
+     except ConfigurationError as exc:
+         print(f"Configuration error: {exc}", file=sys.stderr)
+         return 2
+
+     if args.host or args.port:
+         # Rebuild the frozen dataclass with CLI overrides applied.
+         settings = settings.__class__(
+             api_key=settings.api_key,
+             host=args.host or settings.host,
+             port=args.port or settings.port,
+             api_base=settings.api_base,
+             debug=settings.debug,
+         )
+
+     if args.config:
+         config_path = Path(args.config)
+         write_litellm_config(settings, config_path)
+         temp_dir = None
+     else:
+         temp_dir = tempfile.TemporaryDirectory(prefix="oc-cc-proxy-")
+         config_path = Path(temp_dir.name) / "litellm.yaml"
+         write_litellm_config(settings, config_path)
+
+     if args.debug:
+         shape = summarize_payload_shape({"model": "deepseek-v4-pro", "messages": [{"role": "user", "content": "..."}], "tools": []})
+         print(f"Debug request shape example: {json.dumps(shape, sort_keys=True)}", file=sys.stderr)
+
+     if args.print_config:
+         print(config_path)
+         if temp_dir:
+             temp_dir.cleanup()
+         return 0
+
+     litellm_executable = shutil.which("litellm")
+     if not litellm_executable:
+         print("Configuration error: litellm executable was not found in PATH.", file=sys.stderr)
+         return 2
+
+     command = [
+         litellm_executable,
+         "--config",
+         str(config_path),
+         "--host",
+         settings.host,
+         "--port",
+         str(settings.port),
+     ]
+     print(f"Starting oc-cc-proxy on http://{settings.host}:{settings.port}")
+     print("Set Claude Code ANTHROPIC_BASE_URL to this URL and use any non-empty ANTHROPIC_API_KEY value.")
+     try:
+         return subprocess.call(command)
+     finally:
+         if temp_dir:
+             temp_dir.cleanup()
+
+
+ if __name__ == "__main__":
+     raise SystemExit(main())
oc_proxy/config.py ADDED
@@ -0,0 +1,96 @@
+ from __future__ import annotations
+
+ import os
+ import shutil
+ from dataclasses import dataclass
+ from pathlib import Path
+ from typing import Any
+
+ import yaml
+ from dotenv import load_dotenv
+
+ DEFAULT_HOST = "127.0.0.1"
+ DEFAULT_PORT = 4000
+ OPENCODE_GO_API_BASE = "https://opencode.ai/zen/go/v1"
+ OPENCODE_GO_API_KEY_ENV = "OPENCODE_GO_API_KEY"
+
+
+ class ConfigurationError(RuntimeError):
+     """Raised when required proxy configuration is missing or invalid."""
+
+
+ @dataclass(frozen=True)
+ class ProxySettings:
+     api_key: str
+     host: str = DEFAULT_HOST
+     port: int = DEFAULT_PORT
+     api_base: str = OPENCODE_GO_API_BASE
+     debug: bool = False
+
+
+ def load_settings(*, env_file: str | None = None, debug: bool = False) -> ProxySettings:
+     if env_file:
+         load_dotenv(env_file)
+     else:
+         load_dotenv()
+
+     api_key = os.getenv(OPENCODE_GO_API_KEY_ENV)
+     if not api_key:
+         raise ConfigurationError(
+             f"Missing {OPENCODE_GO_API_KEY_ENV}. Set it in the environment or a .env file before starting oc-cc-proxy."
+         )
+
+     host = os.getenv("OC_PROXY_HOST", DEFAULT_HOST)
+     port_value = os.getenv("OC_PROXY_PORT", str(DEFAULT_PORT))
+     try:
+         port = int(port_value)
+     except ValueError as exc:
+         raise ConfigurationError(f"OC_PROXY_PORT must be an integer, got {port_value!r}.") from exc
+
+     return ProxySettings(api_key=api_key, host=host, port=port, debug=debug)
+
+
+ def build_litellm_config(settings: ProxySettings) -> dict[str, Any]:
+     config: dict[str, Any] = {
+         "model_list": [
+             {
+                 "model_name": "*",
+                 "litellm_params": {
+                     "model": "openai/*",
+                     "api_base": settings.api_base,
+                     "api_key": settings.api_key,
+                 },
+             }
+         ],
+         "litellm_settings": {
+             "callbacks": ["oc_proxy.reasoning.deepseek_reasoning_content_callback"],
+             "drop_params": True,
+             "set_verbose": settings.debug,
+             "use_chat_completions_url_for_anthropic_messages": True,
+         },
+     }
+     return config
+
+
+ def write_litellm_config(settings: ProxySettings, path: Path) -> Path:
+     path.parent.mkdir(parents=True, exist_ok=True)
+     # Copy the callback module next to the generated config so the
+     # "oc_proxy.reasoning" callback path resolves when LiteLLM loads it.
+     callback_dir = path.parent / "oc_proxy"
+     callback_dir.mkdir(exist_ok=True)
+     (callback_dir / "__init__.py").write_text("", encoding="utf-8")
+     shutil.copyfile(Path(__file__).with_name("reasoning.py"), callback_dir / "reasoning.py")
+     with path.open("w", encoding="utf-8") as config_file:
+         yaml.safe_dump(build_litellm_config(settings), config_file, sort_keys=False)
+     return path
+
+
+ def redact_headers(headers: dict[str, str]) -> dict[str, str]:
+     sensitive = {"authorization", "x-api-key", "api-key"}
+     return {key: ("<redacted>" if key.lower() in sensitive else value) for key, value in headers.items()}
+
+
+ def summarize_payload_shape(payload: Any) -> Any:
+     # Describe structure with type names only, so debug output never leaks values.
+     if isinstance(payload, dict):
+         return {key: summarize_payload_shape(value) for key, value in payload.items()}
+     if isinstance(payload, list):
+         return [summarize_payload_shape(payload[0])] if payload else []
+     return type(payload).__name__
oc_proxy/reasoning.py ADDED
@@ -0,0 +1,105 @@
+ from __future__ import annotations
+
+ from typing import Any
+
+ from litellm.integrations.custom_logger import CustomLogger
+ from litellm.types.utils import CallTypes
+
+
+ def _patch_litellm_empty_stream_choices() -> None:
+     """Patch LiteLLM internals that mishandle streamed chunks with empty choices."""
+     try:
+         from litellm.litellm_core_utils.streaming_handler import CustomStreamWrapper
+         from litellm.llms.anthropic.experimental_pass_through.adapters.streaming_iterator import (
+             AnthropicStreamWrapper,
+         )
+         from litellm.llms.anthropic.experimental_pass_through.adapters.transformation import (
+             LiteLLMAnthropicMessagesAdapter,
+         )
+     except ImportError:
+         return
+
+     original_should_start = AnthropicStreamWrapper._should_start_new_content_block
+     original_translate = LiteLLMAnthropicMessagesAdapter.translate_streaming_openai_response_to_anthropic
+     original_raise_on_repetition = CustomStreamWrapper.raise_on_model_repetition
+
+     # Guard against double-patching if this module is imported more than once.
+     if getattr(original_translate, "_oc_proxy_empty_choices_patch", False):
+         return
+
+     def patched_should_start(self: Any, chunk: Any) -> bool:
+         if _has_empty_choices(chunk):
+             return False
+         return original_should_start(self, chunk)
+
+     def patched_translate(self: Any, response: Any, current_content_block_index: int) -> dict[str, Any]:
+         # Translate empty-choice chunks into harmless empty text deltas.
+         if _has_empty_choices(response):
+             return {
+                 "type": "content_block_delta",
+                 "index": current_content_block_index,
+                 "delta": {"type": "text_delta", "text": ""},
+             }
+         return original_translate(self, response, current_content_block_index)
+
+     def patched_raise_on_repetition(self: Any) -> None:
+         if len(self.chunks) < 2:
+             return
+         # Empty-choice chunks look identical; do not count them as repetition.
+         if _has_empty_choices(self.chunks[-1]) or _has_empty_choices(self.chunks[-2]):
+             self._repeated_messages_count = 1
+             return
+         return original_raise_on_repetition(self)
+
+     patched_translate._oc_proxy_empty_choices_patch = True  # type: ignore[attr-defined]
+     CustomStreamWrapper.raise_on_model_repetition = patched_raise_on_repetition
+     AnthropicStreamWrapper._should_start_new_content_block = patched_should_start
+     LiteLLMAnthropicMessagesAdapter.translate_streaming_openai_response_to_anthropic = patched_translate
+
+
+ class DeepSeekReasoningContentCallback(CustomLogger):
+     """Preserve DeepSeek reasoning metadata and drop Anthropic-only replay fields."""
+
+     async def async_pre_call_deployment_hook(self, kwargs: dict[str, Any], call_type: CallTypes | None) -> dict[str, Any]:
+         if call_type not in {CallTypes.completion, CallTypes.acompletion}:
+             return kwargs
+
+         model = str(kwargs.get("model") or "")
+         should_preserve_reasoning = model.startswith(("deepseek-v4", "kimi-k2"))
+         messages = kwargs.get("messages")
+         if not isinstance(messages, list):
+             return kwargs
+
+         for message in messages:
+             if not isinstance(message, dict) or message.get("role") != "assistant":
+                 continue
+             # Always strip Anthropic-only thinking_blocks; replay them as
+             # upstream reasoning_content only for models that require it.
+             thinking_blocks = message.pop("thinking_blocks", None)
+             reasoning_content = _thinking_blocks_to_reasoning_content(thinking_blocks) if should_preserve_reasoning else None
+             if reasoning_content:
+                 message["reasoning_content"] = reasoning_content
+             if message.get("tool_calls") and message.get("content") is None:
+                 message["content"] = ""
+         return kwargs
+
+     async def async_post_call_streaming_iterator_hook(self, user_api_key_dict: Any, response: Any, request_data: dict) -> Any:
+         model = str(request_data.get("model") or "")
+         async for item in response:
+             if model == "minimax-m2.7" and _has_empty_choices(item):
+                 continue
+             yield item
+
+
+ def _has_empty_choices(item: Any) -> bool:
+     choices = getattr(item, "choices", None)
+     return isinstance(choices, list) and not choices
+
+
+ def _thinking_blocks_to_reasoning_content(thinking_blocks: Any) -> str | None:
+     if not isinstance(thinking_blocks, list):
+         return None
+
+     parts: list[str] = []
+     for block in thinking_blocks:
+         if isinstance(block, dict) and block.get("type") == "thinking" and block.get("thinking"):
+             parts.append(str(block["thinking"]))
+     return "\n".join(parts) or None
+
+
+ deepseek_reasoning_content_callback = DeepSeekReasoningContentCallback()
+ _patch_litellm_empty_stream_choices()
oc_proxy/validation.py ADDED
@@ -0,0 +1,153 @@
+ from __future__ import annotations
+
+ import argparse
+ import json
+ from typing import Any
+
+ import httpx
+
+
+ def raise_for_status_with_body(response: httpx.Response, check_name: str) -> None:
+     try:
+         response.raise_for_status()
+     except httpx.HTTPStatusError as exc:
+         try:
+             body = response.text[:1000]
+         except httpx.ResponseNotRead:
+             # Streamed error responses must be read before .text is available.
+             body = response.read().decode("utf-8", errors="replace")[:1000]
+         raise RuntimeError(f"{check_name} failed with HTTP {response.status_code}: {body}") from exc
+
+
+ def messages_payload(*, model: str, stream: bool = False, tools: bool = False) -> dict[str, Any]:
+     if tools:
+         prompt = "Use the get_status tool for target proxy. Do not answer in text."
+     else:
+         prompt = "Reply with exactly: ok"
+     messages: list[dict[str, Any]] = [{"role": "user", "content": prompt}]
+     payload: dict[str, Any] = {
+         "model": model,
+         "max_tokens": 256 if tools else 64,
+         "stream": stream,
+         "messages": messages,
+     }
+     if tools:
+         payload["tools"] = [
+             {
+                 "name": "get_status",
+                 "description": "Return a short status string.",
+                 "input_schema": {
+                     "type": "object",
+                     "properties": {"target": {"type": "string"}},
+                     "required": ["target"],
+                 },
+             }
+         ]
+     return payload
+
+
+ def has_tool_use(content: Any) -> bool:
+     return isinstance(content, list) and any(
+         isinstance(block, dict) and block.get("type") == "tool_use" for block in content
+     )
+
+
+ def get_tool_use_id(content: Any) -> str:
+     if not isinstance(content, list):
+         raise RuntimeError("tool response content is not a list")
+     for block in content:
+         if isinstance(block, dict) and block.get("type") == "tool_use" and block.get("id"):
+             return str(block["id"])
+     raise RuntimeError(f"tool response did not include a tool_use id: {json.dumps(content)[:500]}")
+
+
+ def tool_result_payload(*, model: str, assistant_content: Any) -> dict[str, Any]:
+     return {
+         "model": model,
+         "max_tokens": 64,
+         "messages": [
+             {"role": "user", "content": "Use the get_status tool for target proxy."},
+             {"role": "assistant", "content": assistant_content},
+             {
+                 "role": "user",
+                 "content": [{"type": "tool_result", "tool_use_id": get_tool_use_id(assistant_content), "content": "ok"}],
+             },
+         ],
+     }
+
+
+ def validate(base_url: str, model: str, api_key: str) -> int:
+     headers = {"x-api-key": api_key, "anthropic-version": "2023-06-01"}
+     checks = [
+         ("non-streaming text", messages_payload(model=model)),
+         ("wildcard model passthrough", messages_payload(model=model)),
+     ]
+     with httpx.Client(timeout=60) as client:
+         for name, payload in checks:
+             response = client.post(f"{base_url.rstrip('/')}/v1/messages", headers=headers, json=payload)
+             raise_for_status_with_body(response, name)
+             body = response.json()
+             if body.get("type") != "message" or "content" not in body:
+                 raise RuntimeError(f"{name} did not return an Anthropic-shaped message: {json.dumps(body)[:500]}")
+             print(f"ok: {name}")
+
+         response = client.post(
+             f"{base_url.rstrip('/')}/v1/messages",
+             headers=headers,
+             json=messages_payload(model=model, tools=True),
+         )
+         raise_for_status_with_body(response, "tool definitions and tool_use response")
+         body = response.json()
+         tool_response_content = body.get("content")
+         if not has_tool_use(tool_response_content):
+             raise RuntimeError(f"tool definitions did not produce an Anthropic tool_use block: {json.dumps(body)[:500]}")
+         print("ok: tool definitions and tool_use response")
+
+         with client.stream(
+             "POST",
+             f"{base_url.rstrip('/')}/v1/messages",
+             headers=headers,
+             json=messages_payload(model=model, stream=True),
+         ) as response:
+             raise_for_status_with_body(response, "streaming text")
+             events = list(response.iter_lines())
+             if not any("message_stop" in event or "[DONE]" in event for event in events):
+                 raise RuntimeError("streaming response did not include a terminal event")
+             print("ok: streaming text and terminal event")
+
+         with client.stream(
+             "POST",
+             f"{base_url.rstrip('/')}/v1/messages",
+             headers=headers,
+             json=messages_payload(model=model, stream=True, tools=True),
+         ) as response:
+             raise_for_status_with_body(response, "streamed tool request")
+             events = list(response.iter_lines())
+             if not events:
+                 raise RuntimeError("streamed tool validation returned no events")
+             event_text = "\n".join(events)
+             required_parts = ["get_status", "tool", "id"]
+             missing_parts = [part for part in required_parts if part not in event_text]
+             if missing_parts:
+                 raise RuntimeError(f"streamed tool response missing expected parts {missing_parts}: {event_text[:1000]}")
+             print("ok: streamed tool response preserves name, input, and identifier information")
+
+         response = client.post(
+             f"{base_url.rstrip('/')}/v1/messages",
+             headers=headers,
+             json=tool_result_payload(model=model, assistant_content=tool_response_content),
+         )
+         raise_for_status_with_body(response, "tool result follow-up")
+         body = response.json()
+         if body.get("type") != "message" or "content" not in body:
+             raise RuntimeError(f"tool result follow-up did not return an Anthropic-shaped message: {json.dumps(body)[:500]}")
+         print("ok: tool result follow-up")
+     return 0
+
+
+ def main() -> int:
+     parser = argparse.ArgumentParser(description="Validate Claude Code proxy compatibility against a running oc-cc-proxy.")
+     parser.add_argument("--base-url", default="http://127.0.0.1:4000")
+     parser.add_argument("--model", default="qwen3.6-plus")
+     parser.add_argument("--api-key", default="not-a-real-anthropic-key")
+     args = parser.parse_args()
+     return validate(args.base_url, args.model, args.api_key)
+
+
+ if __name__ == "__main__":
+     raise SystemExit(main())