opencode-llama-cpp-launcher 0.1.3__tar.gz → 0.1.5__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (33)
  1. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/PKG-INFO +42 -30
  2. opencode_llama_cpp_launcher-0.1.5/README.md +117 -0
  3. opencode_llama_cpp_launcher-0.1.5/docs/demo.tape +41 -0
  4. opencode_llama_cpp_launcher-0.1.5/docs/opencode-llama-demo.gif +0 -0
  5. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/pyproject.toml +1 -1
  6. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/uv.lock +1 -1
  7. opencode_llama_cpp_launcher-0.1.3/README.md +0 -105
  8. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/.github/workflows/release.yml +0 -0
  9. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/.gitignore +0 -0
  10. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/LICENSE +0 -0
  11. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/opencode-llama.example.yaml +0 -0
  12. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/opencode_llama_cpp_launcher/__init__.py +0 -0
  13. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/opencode_llama_cpp_launcher/cli/__init__.py +0 -0
  14. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/opencode_llama_cpp_launcher/cli/entrypoint.py +0 -0
  15. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/opencode_llama_cpp_launcher/models/__init__.py +0 -0
  16. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/opencode_llama_cpp_launcher/models/launch_config.py +0 -0
  17. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/opencode_llama_cpp_launcher/models/launch_status.py +0 -0
  18. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/opencode_llama_cpp_launcher/services/__init__.py +0 -0
  19. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/opencode_llama_cpp_launcher/services/binaries.py +0 -0
  20. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/opencode_llama_cpp_launcher/services/errors.py +0 -0
  21. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/opencode_llama_cpp_launcher/services/health.py +0 -0
  22. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/opencode_llama_cpp_launcher/services/launch_config_loader.py +0 -0
  23. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/opencode_llama_cpp_launcher/services/launch_preparer.py +0 -0
  24. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/opencode_llama_cpp_launcher/services/launcher.py +0 -0
  25. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/opencode_llama_cpp_launcher/services/opencode_config.py +0 -0
  26. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/opencode_llama_cpp_launcher/services/ports.py +0 -0
  27. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/opencode_llama_cpp_launcher/storage/__init__.py +0 -0
  28. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/opencode_llama_cpp_launcher/storage/config_loader.py +0 -0
  29. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/tests/test_cli.py +0 -0
  30. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/tests/test_config_loader.py +0 -0
  31. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/tests/test_launcher.py +0 -0
  32. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/tests/test_opencode_config.py +0 -0
  33. {opencode_llama_cpp_launcher-0.1.3 → opencode_llama_cpp_launcher-0.1.5}/tests/test_ports.py +0 -0
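The unchanged service modules in this list hint at how the launcher is organized; `services/binaries.py`, for example, presumably backs the `opencode-llama doctor` check described in the README diff below. A minimal sketch of that kind of binary lookup, using only the standard library (illustrative only, not this package's actual code):

```python
# Hypothetical sketch of a doctor-style check: confirm the external binaries
# the launcher depends on can be found on PATH.
import shutil

for binary in ("opencode", "llama-server"):
    path = shutil.which(binary)
    print(f"{binary}: {path or 'not found on PATH'}")
```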
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: opencode-llama-cpp-launcher
- Version: 0.1.3
+ Version: 0.1.5
  Summary: One command launcher for running OpenCode with a local llama.cpp model.
  License-Expression: MIT
  License-File: LICENSE
@@ -22,52 +22,58 @@ Description-Content-Type: text/markdown
 
  # OpenCode llama.cpp Launcher
 
- A one command solution for launching [OpenCode](https://opencode.ai/) with any
- local LLM that `llama-server` can serve, including models like Qwen, DeepSeek,
- and Gemma. This launcher starts `llama-server`, waits for it to become ready,
- wires the OpenAI compatible provider config into OpenCode, and cleans up when
- the local agentic coding session ends.
+ Launch [OpenCode](https://opencode.ai/) with a local model served by
+ [llama.cpp](https://github.com/ggml-org/llama.cpp). The launcher starts
+ `llama-server`, wires OpenCode to it, and cleans up when your session ends.
+
+ ![OpenCode llama.cpp Launcher demo](https://raw.githubusercontent.com/ribomo/opencode-llama-cpp-launcher/main/docs/opencode-llama-demo.gif)
 
  ## Requirements
 
- - Python 3.12+
- - OpenCode
- - llama.cpp's `llama-server`
- - A local model supported by `llama-server`, for example Qwen, DeepSeek, or
- Gemma
+ - [OpenCode](https://opencode.ai/)
+ - [llama.cpp](https://github.com/ggml-org/llama.cpp)'s `llama-server`
+ - A local GGUF model, such as Qwen, DeepSeek, or Gemma
 
  The launcher finds `llama-server` on `PATH`, or you can set `llama_server` in
  your config.
 
+ Install OpenCode using its
+ [GitHub installation instructions](https://github.com/anomalyco/opencode#installation).
+ Install llama.cpp using its
+ [installation guide](https://github.com/ggml-org/llama.cpp/blob/master/docs/install.md).
+
  ## Install
 
- From this repository:
+ For most users, install with `pipx`:
 
  ```bash
- uv sync --dev
+ pipx install opencode-llama-cpp-launcher
  ```
 
- Check that the required external binaries are available:
+ Or install with `pip`:
 
  ```bash
- uv run opencode-llama doctor
+ python -m pip install opencode-llama-cpp-launcher
  ```
 
- ## Configure
-
- Create a project-local config in the project where you want OpenCode to run:
+ Check that the required external binaries are available:
 
  ```bash
- cp opencode-llama.example.yaml opencode-llama.yaml
+ opencode-llama doctor
  ```
 
- Then edit `opencode-llama.yaml`:
+ ## Configure
+
+ Create `opencode-llama.yaml` in the project where you want OpenCode to run, or
+ create `~/.config/opencode-llama.yaml` for a user-wide default:
 
  ```yaml
  model: /absolute/path/to/model.gguf
- llama_server: /optional/path/to/llama-server
- port: 8080
  ctx_size: 8192
+
+ # Optional
+ port: 8080
+ llama_server: /optional/path/to/llama-server
  ```
 
  Config lookup order:
@@ -81,24 +87,24 @@ Config lookup order:
  Run with an explicit config file:
 
  ```bash
- uv run opencode-llama --config opencode-llama.yaml
+ opencode-llama --config opencode-llama.yaml
  ```
 
  Or pass the model directly:
 
  ```bash
- uv run opencode-llama --model /absolute/path/to/model.gguf
+ opencode-llama --model /absolute/path/to/model.gguf
  ```
 
  Useful options:
 
  ```bash
- uv run opencode-llama --help
- uv run opencode-llama --dry-run
- uv run opencode-llama --config opencode-llama.yaml
- uv run opencode-llama --port 9001
- uv run opencode-llama --ctx-size 8192
- uv run opencode-llama --llama-server /absolute/path/to/llama-server
+ opencode-llama --help
+ opencode-llama --dry-run
+ opencode-llama --config opencode-llama.yaml
+ opencode-llama --port 9001
+ opencode-llama --ctx-size 8192
+ opencode-llama --llama-server /absolute/path/to/llama-server
  ```
 
  If `llama-server` fails before becoming healthy, the launcher includes a bounded
@@ -107,6 +113,12 @@ quiet.
 
  ## Development
 
+ Install dependencies from this repository:
+
+ ```bash
+ uv sync --dev
+ ```
+
  Run the test suite:
 
  ```bash
@@ -0,0 +1,117 @@
+ # OpenCode llama.cpp Launcher
+
+ Launch [OpenCode](https://opencode.ai/) with a local model served by
+ [llama.cpp](https://github.com/ggml-org/llama.cpp). The launcher starts
+ `llama-server`, wires OpenCode to it, and cleans up when your session ends.
+
+ ![OpenCode llama.cpp Launcher demo](https://raw.githubusercontent.com/ribomo/opencode-llama-cpp-launcher/main/docs/opencode-llama-demo.gif)
+
+ ## Requirements
+
+ - [OpenCode](https://opencode.ai/)
+ - [llama.cpp](https://github.com/ggml-org/llama.cpp)'s `llama-server`
+ - A local GGUF model, such as Qwen, DeepSeek, or Gemma
+
+ The launcher finds `llama-server` on `PATH`, or you can set `llama_server` in
+ your config.
+
+ Install OpenCode using its
+ [GitHub installation instructions](https://github.com/anomalyco/opencode#installation).
+ Install llama.cpp using its
+ [installation guide](https://github.com/ggml-org/llama.cpp/blob/master/docs/install.md).
+
+ ## Install
+
+ For most users, install with `pipx`:
+
+ ```bash
+ pipx install opencode-llama-cpp-launcher
+ ```
+
+ Or install with `pip`:
+
+ ```bash
+ python -m pip install opencode-llama-cpp-launcher
+ ```
+
+ Check that the required external binaries are available:
+
+ ```bash
+ opencode-llama doctor
+ ```
+
+ ## Configure
+
+ Create `opencode-llama.yaml` in the project where you want OpenCode to run, or
+ create `~/.config/opencode-llama.yaml` for a user-wide default:
+
+ ```yaml
+ model: /absolute/path/to/model.gguf
+ ctx_size: 8192
+
+ # Optional
+ port: 8080
+ llama_server: /optional/path/to/llama-server
+ ```
+
+ Config lookup order:
+
+ 1. The path passed with `--config`
+ 2. `opencode-llama.yaml` or `opencode-llama.yml` in the project directory
+ 3. `~/.config/opencode-llama.yaml`
+
+ ## Usage
+
+ Run with an explicit config file:
+
+ ```bash
+ opencode-llama --config opencode-llama.yaml
+ ```
+
+ Or pass the model directly:
+
+ ```bash
+ opencode-llama --model /absolute/path/to/model.gguf
+ ```
+
+ Useful options:
+
+ ```bash
+ opencode-llama --help
+ opencode-llama --dry-run
+ opencode-llama --config opencode-llama.yaml
+ opencode-llama --port 9001
+ opencode-llama --ctx-size 8192
+ opencode-llama --llama-server /absolute/path/to/llama-server
+ ```
+
+ If `llama-server` fails before becoming healthy, the launcher includes a bounded
+ tail of the server's startup output in the error message. Successful runs stay
+ quiet.
+
+ ## Development
+
+ Install dependencies from this repository:
+
+ ```bash
+ uv sync --dev
+ ```
+
+ Run the test suite:
+
+ ```bash
+ uv run pytest
+ ```
+
+ Before publishing, check for local files:
+
+ ```bash
+ git status --short --ignored
+ ```
+
+ Do not commit local launcher configs, virtual environments, caches, build
+ artifacts, or model paths.
+
+ ## License
+
+ MIT
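The config lookup order documented in the new README above can be pictured with a short sketch. This is a hypothetical illustration only; the packaged loader (`opencode_llama_cpp_launcher/services/launch_config_loader.py`) may differ in detail, and the function name here is invented:

```python
from pathlib import Path

def find_config(explicit: str | None, project_dir: Path) -> Path | None:
    """Illustrative lookup mirroring the documented order."""
    if explicit:                                   # 1. the path passed with --config wins
        return Path(explicit)
    for name in ("opencode-llama.yaml", "opencode-llama.yml"):
        candidate = project_dir / name             # 2. project-local config
        if candidate.is_file():
            return candidate
    user_default = Path.home() / ".config" / "opencode-llama.yaml"
    return user_default if user_default.is_file() else None  # 3. user-wide default
```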
@@ -0,0 +1,41 @@
+ Output docs/opencode-llama-demo.gif
+
+ Require opencode-llama
+ Require opencode
+
+ Set Shell "bash"
+ Set FontSize 24
+ Set FontFamily "JetBrains Mono"
+ Set Width 1200
+ Set Height 720
+ Set Padding 18
+ Set Framerate 30
+ Set Theme "iTerm2 Tango Dark"
+ Set BorderRadius 8
+ Set TypingSpeed 25ms
+
+ Hide
+ Type 'DEMO_PROJECT="$PWD"; DEMO_HOME="$(mktemp -d)"; mkdir "$DEMO_HOME/this-project"; cp -a "$DEMO_PROJECT/." "$DEMO_HOME/this-project"; cd "$DEMO_HOME"'
+ Enter
+ Type "export PS1='> '"
+ Enter
+ Type "clear"
+ Enter
+ Show
+
+ Type "cd this-project"
+ Sleep 0.5s
+ Enter
+ Sleep 1s
+
+ Type "opencode-llama"
+ Sleep 0.8s
+ Enter
+ Sleep 5s
+
+ Type "Say hello from the local model."
+ Sleep 0.8s
+ Enter
+ Sleep 8s
+
+ Ctrl+C
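The `demo.tape` script added above uses the syntax of the VHS terminal recorder (`Output`, `Set`, `Type`, `Sleep`), so, assuming VHS is installed, the GIF referenced by the README can presumably be regenerated with `vhs docs/demo.tape`; the output path comes from the tape's first line.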
@@ -1,6 +1,6 @@
  [project]
  name = "opencode-llama-cpp-launcher"
- version = "0.1.3"
+ version = "0.1.5"
  description = "One command launcher for running OpenCode with a local llama.cpp model."
  readme = "README.md"
  license = "MIT"
@@ -64,7 +64,7 @@ wheels = [
 
  [[package]]
  name = "opencode-llama-cpp-launcher"
- version = "0.1.3"
+ version = "0.1.5"
  source = { editable = "." }
  dependencies = [
  { name = "pyyaml" },
@@ -1,105 +0,0 @@
- # OpenCode llama.cpp Launcher
-
- A one command solution for launching [OpenCode](https://opencode.ai/) with any
- local LLM that `llama-server` can serve, including models like Qwen, DeepSeek,
- and Gemma. This launcher starts `llama-server`, waits for it to become ready,
- wires the OpenAI compatible provider config into OpenCode, and cleans up when
- the local agentic coding session ends.
-
- ## Requirements
-
- - Python 3.12+
- - OpenCode
- - llama.cpp's `llama-server`
- - A local model supported by `llama-server`, for example Qwen, DeepSeek, or
- Gemma
-
- The launcher finds `llama-server` on `PATH`, or you can set `llama_server` in
- your config.
-
- ## Install
-
- From this repository:
-
- ```bash
- uv sync --dev
- ```
-
- Check that the required external binaries are available:
-
- ```bash
- uv run opencode-llama doctor
- ```
-
- ## Configure
-
- Create a project-local config in the project where you want OpenCode to run:
-
- ```bash
- cp opencode-llama.example.yaml opencode-llama.yaml
- ```
-
- Then edit `opencode-llama.yaml`:
-
- ```yaml
- model: /absolute/path/to/model.gguf
- llama_server: /optional/path/to/llama-server
- port: 8080
- ctx_size: 8192
- ```
-
- Config lookup order:
-
- 1. The path passed with `--config`
- 2. `opencode-llama.yaml` or `opencode-llama.yml` in the project directory
- 3. `~/.config/opencode-llama.yaml`
-
- ## Usage
-
- Run with an explicit config file:
-
- ```bash
- uv run opencode-llama --config opencode-llama.yaml
- ```
-
- Or pass the model directly:
-
- ```bash
- uv run opencode-llama --model /absolute/path/to/model.gguf
- ```
-
- Useful options:
-
- ```bash
- uv run opencode-llama --help
- uv run opencode-llama --dry-run
- uv run opencode-llama --config opencode-llama.yaml
- uv run opencode-llama --port 9001
- uv run opencode-llama --ctx-size 8192
- uv run opencode-llama --llama-server /absolute/path/to/llama-server
- ```
-
- If `llama-server` fails before becoming healthy, the launcher includes a bounded
- tail of the server's startup output in the error message. Successful runs stay
- quiet.
-
- ## Development
-
- Run the test suite:
-
- ```bash
- uv run pytest
- ```
-
- Before publishing, check for local files:
-
- ```bash
- git status --short --ignored
- ```
-
- Do not commit local launcher configs, virtual environments, caches, build
- artifacts, or model paths.
-
- ## License
-
- MIT