liter_llm 1.1.1 → 1.2.0

This diff shows the content of publicly available package versions released to a supported registry. It is provided for informational purposes only and reflects the changes between the versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 87c0ce7287f3e000b825d496b84662586cb3d8899f11a9f7a8bac55587ddc26c
- data.tar.gz: 5099baea9360fb98d4949324773b9ef5b3dafc870150f1992f3887162b79994e
+ metadata.gz: 1b6cf155e1ca65bf3dc5166f8f59178a8580cdd9cd6f9f4a1f3dde0ecc5385be
+ data.tar.gz: 07b5f2e4f5ecb3bf1e3bff6dda1cd9c261cb4a6198b727a2ecfe0ac9d97ab55c
  SHA512:
- metadata.gz: 9452263146c7206ebad3b3ca9b0bc163e251ad832b3b4b4b7352132356e59d29779f2741ebe095607a3cdf7a668ff3eb5cf7ae0b2e6593a043e1f92922761897
- data.tar.gz: 991f0752533062901b10252157ff0c4fa9e6e69bb69ee5e8218e2124ed70d01cc84c51a1a469ef2a167504b6ff369239712116ed80e1873fb0d662da8c9d3bdd
+ metadata.gz: 8b82f5e04d2eaee3b096c61c199dd9b17463989c1ce11665d65a2258e34fbc92ac545b22d6632d5426275137c33b32c9253382b0025e3a427d82755eea1cf43e
+ data.tar.gz: d530111003fbd0f490264f52e76a2e34ca76072508cd30446b617f58797a2d76b345e029b6ed57301d9c4e475b910ff4cbf3c993f6075ed2cef95b451a84bfbf
data/README.md CHANGED
@@ -35,6 +35,9 @@
  <a href="https://github.com/kreuzberg-dev/liter-llm/pkgs/container/liter-llm">
  <img src="https://img.shields.io/badge/Docker-007ec6?logo=docker&logoColor=white" alt="Docker">
  </a>
+ <a href="https://github.com/kreuzberg-dev/homebrew-tap/blob/main/Formula/liter-llm.rb">
+ <img src="https://img.shields.io/badge/Homebrew-007ec6?logo=homebrew&logoColor=white" alt="Homebrew">
+ </a>
  <a href="https://github.com/kreuzberg-dev/liter-llm/tree/main/crates/liter-llm-ffi">
  <img src="https://img.shields.io/badge/C-FFI-007ec6" alt="C FFI">
  </a>
@@ -63,7 +66,7 @@
  </div>


- Universal LLM API client for Ruby. Access 142+ LLM providers through a single interface with idiomatic Ruby API and native performance.
+ Universal LLM API client for Ruby. Access 143+ LLM providers through a single interface with idiomatic Ruby API and native performance.


  ## Installation
@@ -161,7 +164,7 @@ chunks.each { |chunk| puts chunk }

  ## Features

- ### Supported Providers (142+)
+ ### Supported Providers (143+)

  Route to any provider using the `provider/model` prefix convention:

@@ -181,7 +184,8 @@ Route to any provider using the `provider/model` prefix convention:

  ### Key Capabilities

- - **Provider Routing** -- Single client for 142+ LLM providers via `provider/model` prefix
+ - **Provider Routing** -- Single client for 143+ LLM providers via `provider/model` prefix
+ - **Local LLMs** — Connect to locally-hosted models via Ollama, LM Studio, vLLM, llama.cpp, and other local inference servers
  - **Unified API** -- Consistent `chat`, `chat_stream`, `embeddings`, `list_models` interface

  - **Streaming** -- Real-time token streaming via `chat_stream`
@@ -207,7 +211,7 @@ Built on a compiled Rust core for speed and safety:

  ## Provider Routing

- Route to 142+ providers using the `provider/model` prefix convention:
+ Route to 143+ providers using the `provider/model` prefix convention:

  ```text
  openai/gpt-4o
@@ -235,7 +239,7 @@ See the [proxy server documentation](https://docs.liter-llm.kreuzberg.dev/server

  - **[Documentation](https://docs.liter-llm.kreuzberg.dev)** -- Full docs and API reference
  - **[GitHub Repository](https://github.com/kreuzberg-dev/liter-llm)** -- Source, issues, and discussions
- - **[Provider Registry](https://github.com/kreuzberg-dev/liter-llm/blob/main/schemas/providers.json)** -- 142 supported providers
+ - **[Provider Registry](https://github.com/kreuzberg-dev/liter-llm/blob/main/schemas/providers.json)** -- 143 supported providers

  Part of [kreuzberg.dev](https://kreuzberg.dev).
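The `provider/model` prefix convention used throughout the README can be illustrated with a short standalone sketch. This is not code from the liter_llm gem, and `split_model_id` is a hypothetical helper name: the idea is simply to split on the first `/` for the provider and keep the remainder (which may itself contain `:` tags, as in `ollama/qwen2:0.5b`) as the provider-native model id.

```ruby
# Hypothetical illustration of the `provider/model` prefix convention;
# not an API of the liter_llm gem itself.
def split_model_id(id)
  # Split only on the first "/" so provider-native ids keep any further
  # separators or version tags intact.
  provider, model = id.split("/", 2)
  raise ArgumentError, "expected provider/model, got #{id.inspect}" if model.nil? || provider.empty?
  [provider, model]
end

p split_model_id("openai/gpt-4o")      # => ["openai", "gpt-4o"]
p split_model_id("ollama/qwen2:0.5b")  # => ["ollama", "qwen2:0.5b"]
```

A missing prefix raises rather than guessing a default provider, which matches the convention that routing is always explicit.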
 
@@ -1,6 +1,6 @@
  [package]
  name = "liter-llm-rb"
- version = "1.1.1"
+ version = "1.2.0"
  edition = "2024"
  authors = ["Na'aman Hirschfeld <naaman@kreuzberg.dev>"]
  license = "MIT"
data/vendor/Cargo.toml CHANGED
@@ -2,7 +2,7 @@
  members = ["liter-llm", "liter-llm-ffi"]

  [workspace.package]
- version = "1.1.1"
+ version = "1.2.0"
  edition = "2024"
  authors = ["Na'aman Hirschfeld <naaman@kreuzberg.dev>"]
  license = "MIT"
@@ -1,6 +1,6 @@
  [package]
  name = "liter-llm"
- version = "1.1.1"
+ version = "1.2.0"
  edition = "2024"
  license = "MIT"
  repository.workspace = true
@@ -35,6 +35,9 @@
  <a href="https://github.com/kreuzberg-dev/liter-llm/pkgs/container/liter-llm">
  <img src="https://img.shields.io/badge/Docker-007ec6?logo=docker&logoColor=white" alt="Docker">
  </a>
+ <a href="https://github.com/kreuzberg-dev/homebrew-tap/blob/main/Formula/liter-llm.rb">
+ <img src="https://img.shields.io/badge/Homebrew-007ec6?logo=homebrew&logoColor=white" alt="Homebrew">
+ </a>
  <a href="https://github.com/kreuzberg-dev/liter-llm/tree/main/crates/liter-llm-ffi">
  <img src="https://img.shields.io/badge/C-FFI-007ec6" alt="C FFI">
  </a>
@@ -63,7 +66,7 @@
  </div>


- Universal LLM API client for Rust. Access 142+ LLM providers — OpenAI, Anthropic, Groq, Mistral, and more — through a single unified interface. Async/await with Tokio, streaming via BoxStream, composable Tower middleware stack, and compile-time type safety.
+ Universal LLM API client for Rust. Access 143+ LLM providers — OpenAI, Anthropic, Groq, Mistral, and more — through a single unified interface. Async/await with Tokio, streaming via BoxStream, composable Tower middleware stack, and compile-time type safety.


  ## Installation
@@ -174,7 +177,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {

  ## Features

- ### Supported Providers (142+)
+ ### Supported Providers (143+)

  Route to any provider using the `provider/model` prefix convention:

@@ -194,7 +197,8 @@ Route to any provider using the `provider/model` prefix convention:

  ### Key Capabilities

- - **Provider Routing** -- Single client for 142+ LLM providers via `provider/model` prefix
+ - **Provider Routing** -- Single client for 143+ LLM providers via `provider/model` prefix
+ - **Local LLMs** — Connect to locally-hosted models via Ollama, LM Studio, vLLM, llama.cpp, and other local inference servers
  - **Unified API** -- Consistent `chat`, `chat_stream`, `embeddings`, `list_models` interface

  - **Streaming** -- Real-time token streaming via `chat_stream`
@@ -220,7 +224,7 @@ Built on a compiled Rust core for speed and safety:

  ## Provider Routing

- Route to 142+ providers using the `provider/model` prefix convention:
+ Route to 143+ providers using the `provider/model` prefix convention:

  ```text
  openai/gpt-4o
@@ -248,7 +252,7 @@ See the [proxy server documentation](https://docs.liter-llm.kreuzberg.dev/server

  - **[Documentation](https://docs.liter-llm.kreuzberg.dev)** -- Full docs and API reference
  - **[GitHub Repository](https://github.com/kreuzberg-dev/liter-llm)** -- Source, issues, and discussions
- - **[Provider Registry](https://github.com/kreuzberg-dev/liter-llm/blob/main/schemas/providers.json)** -- 142 supported providers
+ - **[Provider Registry](https://github.com/kreuzberg-dev/liter-llm/blob/main/schemas/providers.json)** -- 143 supported providers

  Part of [kreuzberg.dev](https://kreuzberg.dev).
 
@@ -0,0 +1,134 @@
+ //! Integration tests against local LLM providers (Ollama).
+ //!
+ //! These tests require a running Ollama instance with models pulled.
+ //! Start with: `task local:up`
+ //! Run with: `cargo test -p liter-llm --test local_llm -- --ignored`
+
+ use futures_util::StreamExt;
+ use liter_llm::{
+     ChatCompletionRequest, ClientConfigBuilder, DefaultClient, EmbeddingInput, EmbeddingRequest, LlmClient,
+ };
+
+ const OLLAMA_CHAT_MODEL: &str = "ollama/qwen2:0.5b";
+ const OLLAMA_EMBED_MODEL: &str = "ollama/all-minilm";
+
+ /// Check whether an Ollama instance is reachable.
+ async fn is_ollama_available() -> bool {
+     let base = std::env::var("OLLAMA_BASE_URL").unwrap_or_else(|_| "http://localhost:11434".into());
+     reqwest::get(format!("{base}/v1/models")).await.is_ok()
+ }
+
+ fn ollama_client(model_hint: &str) -> DefaultClient {
+     let config = ClientConfigBuilder::new("").max_retries(2).build();
+     DefaultClient::new(config, Some(model_hint)).expect("failed to build Ollama client")
+ }
+
+ fn simple_chat_request(model: &str) -> ChatCompletionRequest {
+     serde_json::from_value(serde_json::json!({
+         "model": model,
+         "messages": [{"role": "user", "content": "Say hello in one word."}],
+         "max_tokens": 16,
+     }))
+     .expect("failed to build chat request from JSON")
+ }
+
+ fn simple_embed_request(model: &str) -> EmbeddingRequest {
+     EmbeddingRequest {
+         model: model.into(),
+         input: EmbeddingInput::Single("hello world".into()),
+         encoding_format: None,
+         dimensions: None,
+         user: None,
+     }
+ }
+
+ #[tokio::test]
+ #[ignore]
+ async fn local_chat_ollama() {
+     if !is_ollama_available().await {
+         eprintln!("SKIP: Ollama not available, skipping");
+         return;
+     }
+
+     let client = ollama_client(OLLAMA_CHAT_MODEL);
+     let resp = client.chat(simple_chat_request(OLLAMA_CHAT_MODEL)).await.unwrap();
+
+     assert!(!resp.choices.is_empty(), "should have at least one choice");
+     let choice = &resp.choices[0];
+     assert!(
+         choice.message.content.as_ref().is_some_and(|c| !c.is_empty()),
+         "first choice content should be non-empty"
+     );
+     assert!(choice.finish_reason.is_some(), "finish_reason should be present");
+     assert!(!resp.model.is_empty(), "model field should be non-empty");
+ }
+
+ #[tokio::test]
+ #[ignore]
+ async fn local_stream_ollama() {
+     if !is_ollama_available().await {
+         eprintln!("SKIP: Ollama not available, skipping");
+         return;
+     }
+
+     let client = ollama_client(OLLAMA_CHAT_MODEL);
+     let mut stream = client
+         .chat_stream(simple_chat_request(OLLAMA_CHAT_MODEL))
+         .await
+         .unwrap();
+
+     let mut content = String::new();
+     let mut chunk_count = 0u32;
+     let mut saw_finish = false;
+
+     while let Some(result) = stream.next().await {
+         let chunk = result.unwrap();
+         chunk_count += 1;
+         if let Some(choice) = chunk.choices.first() {
+             if let Some(text) = &choice.delta.content {
+                 content.push_str(text);
+             }
+             if choice.finish_reason.is_some() {
+                 saw_finish = true;
+             }
+         }
+         if chunk_count > 200 {
+             break;
+         }
+     }
+
+     assert!(chunk_count >= 1, "should receive at least 1 chunk");
+     assert!(!content.is_empty(), "concatenated content should be non-empty");
+     assert!(saw_finish, "should see a finish_reason in the stream");
+ }
+
+ #[tokio::test]
+ #[ignore]
+ async fn local_embed_ollama() {
+     if !is_ollama_available().await {
+         eprintln!("SKIP: Ollama not available, skipping");
+         return;
+     }
+
+     let client = ollama_client(OLLAMA_EMBED_MODEL);
+     let resp = client.embed(simple_embed_request(OLLAMA_EMBED_MODEL)).await.unwrap();
+
+     assert!(!resp.data.is_empty(), "should have embedding data");
+     assert!(!resp.data[0].embedding.is_empty(), "embedding should have dimensions");
+     assert!(!resp.model.is_empty(), "model field should be non-empty");
+ }
+
+ #[tokio::test]
+ #[ignore]
+ async fn local_list_models_ollama() {
+     if !is_ollama_available().await {
+         eprintln!("SKIP: Ollama not available, skipping");
+         return;
+     }
+
+     let client = ollama_client(OLLAMA_CHAT_MODEL);
+     let resp = client.list_models().await.unwrap();
+
+     assert!(!resp.data.is_empty(), "should list at least one model");
+     assert!(!resp.data[0].id.is_empty(), "first model id should be non-empty");
+ }
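The availability probe in the new test file above (`GET {base}/v1/models` against Ollama's default port 11434, honoring `OLLAMA_BASE_URL`) is a reusable pattern. A minimal Ruby sketch of the same idea follows; `ollama_available?` is a name introduced here for illustration, not part of the gem:

```ruby
# Sketch of the Ollama reachability probe used by the Rust tests above.
# Returns false on any connection or timeout error instead of raising,
# so callers can skip local-LLM tests when no server is running.
require "net/http"
require "uri"

def ollama_available?(base_url = ENV.fetch("OLLAMA_BASE_URL", "http://localhost:11434"))
  uri = URI.parse("#{base_url}/v1/models")
  Net::HTTP.start(uri.host, uri.port, open_timeout: 2, read_timeout: 2) do |http|
    http.get(uri.path).is_a?(Net::HTTPResponse)
  end
rescue StandardError
  false
end

puts ollama_available? ? "Ollama reachable" : "SKIP: Ollama not available"
```

Swallowing all `StandardError`s is deliberate for a probe: a refused connection, DNS failure, or timeout all mean the same thing here, "skip the local tests".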
@@ -1,6 +1,6 @@
  [package]
  name = "liter-llm-ffi"
- version = "1.1.1"
+ version = "1.2.0"
  edition = "2024"
  license = "MIT"
  repository.workspace = true
@@ -20,8 +20,8 @@ default = []
  base64.workspace = true
  bytes.workspace = true
  futures-core.workspace = true
- liter-llm = { path = "../liter-llm", version = "1.1.1", features = ["full"] }
- liter-llm-bindings-core = { path = "../liter-llm-bindings-core", version = "1.1.1" }
+ liter-llm = { path = "../liter-llm", version = "1.2.0", features = ["full"] }
+ liter-llm-bindings-core = { path = "../liter-llm-bindings-core", version = "1.2.0" }
  serde.workspace = true
  serde_json.workspace = true
  tokio.workspace = true
@@ -8,9 +8,9 @@
  /* Warning, this file is autogenerated by cbindgen. Don't modify this manually. */

  #define LITER_LLM_VERSION_MAJOR 1
- #define LITER_LLM_VERSION_MINOR 1
- #define LITER_LLM_VERSION_PATCH 1
- #define LITER_LLM_VERSION "1.1.1"
+ #define LITER_LLM_VERSION_MINOR 2
+ #define LITER_LLM_VERSION_PATCH 0
+ #define LITER_LLM_VERSION "1.2.0"


  #include <stdarg.h>
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: liter_llm
  version: !ruby/object:Gem::Version
- version: 1.1.1
+ version: 1.2.0
  platform: ruby
  authors:
  - Na'aman Hirschfeld
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2026-03-29 00:00:00.000000000 Z
+ date: 2026-04-07 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: rb_sys
@@ -274,6 +274,7 @@ files:
  - vendor/liter-llm/tests/live_providers/mistral.rs
  - vendor/liter-llm/tests/live_providers/openai.rs
  - vendor/liter-llm/tests/live_providers/vertex_ai.rs
+ - vendor/liter-llm/tests/local_llm.rs
  - vendor/liter-llm/tests/middleware_integration.rs
  - vendor/liter-llm/tests/operations_integration.rs
  - vendor/liter-llm/tests/routing_integration.rs