liter_llm 1.0.0.pre.rc.6-aarch64-linux
- checksums.yaml +7 -0
- data/README.md +239 -0
- data/lib/liter_llm.rb +8 -0
- data/lib/liter_llm_rb.so +0 -0
- data/sig/liter_llm.rbs +416 -0
- metadata +199 -0
checksums.yaml
ADDED
@@ -0,0 +1,7 @@
---
SHA256:
  metadata.gz: 2837ee8e05be49794fa8b1e1885b2a8149584ab2d79de3d98c7bec95cbdaf978
  data.tar.gz: 4ef79499547e416da331335556cdcfbaec7502dd7ebbaf5be5a4207b5072b77e
SHA512:
  metadata.gz: 1dbdd75dd67268d3f69c543ef5ac97d7f539f1976d65f44838b83f7e2213dd87611e74b5eb08c30cb86ad59688086438878701f97e3483ea9b170d0c209bc1d2
  data.tar.gz: 351ddc8832eb89dc60d09a9c1f23f46ed85eeb5885b072b125ab076baaa65cf1f1b7a03981493c48fd491a21554acbf560c468157ccade978a63da8365abbea3
data/README.md
ADDED
@@ -0,0 +1,239 @@
# Ruby

<div align="center" style="display: flex; flex-wrap: wrap; gap: 8px; justify-content: center; margin: 20px 0;">
  <!-- Language Bindings -->
  <a href="https://crates.io/crates/liter-llm">
    <img src="https://img.shields.io/crates/v/liter-llm?label=Rust&color=007ec6" alt="Rust">
  </a>
  <a href="https://pypi.org/project/liter-llm/">
    <img src="https://img.shields.io/pypi/v/liter-llm?label=Python&color=007ec6" alt="Python">
  </a>
  <a href="https://www.npmjs.com/package/@kreuzberg/liter-llm">
    <img src="https://img.shields.io/npm/v/@kreuzberg/liter-llm?label=Node.js&color=007ec6" alt="Node.js">
  </a>
  <a href="https://www.npmjs.com/package/@kreuzberg/liter-llm-wasm">
    <img src="https://img.shields.io/npm/v/@kreuzberg/liter-llm-wasm?label=WASM&color=007ec6" alt="WASM">
  </a>
  <a href="https://central.sonatype.com/artifact/dev.kreuzberg/liter-llm">
    <img src="https://img.shields.io/maven-central/v/dev.kreuzberg/liter-llm?label=Java&color=007ec6" alt="Java">
  </a>
  <a href="https://github.com/kreuzberg-dev/liter-llm/tree/main/packages/go">
    <img src="https://img.shields.io/github/v/tag/kreuzberg-dev/liter-llm?label=Go&color=007ec6" alt="Go">
  </a>
  <a href="https://www.nuget.org/packages/LiterLlm">
    <img src="https://img.shields.io/nuget/v/LiterLlm?label=C%23&color=007ec6" alt="C#">
  </a>
  <a href="https://packagist.org/packages/kreuzberg/liter-llm">
    <img src="https://img.shields.io/packagist/v/kreuzberg/liter-llm?label=PHP&color=007ec6" alt="PHP">
  </a>
  <a href="https://rubygems.org/gems/liter_llm">
    <img src="https://img.shields.io/gem/v/liter_llm?label=Ruby&color=007ec6" alt="Ruby">
  </a>
  <a href="https://hex.pm/packages/liter_llm">
    <img src="https://img.shields.io/hexpm/v/liter_llm?label=Elixir&color=007ec6" alt="Elixir">
  </a>
  <a href="https://github.com/kreuzberg-dev/liter-llm/pkgs/container/liter-llm">
    <img src="https://img.shields.io/badge/Docker-007ec6?logo=docker&logoColor=white" alt="Docker">
  </a>
  <a href="https://github.com/kreuzberg-dev/liter-llm/tree/main/crates/liter-llm-ffi">
    <img src="https://img.shields.io/badge/C-FFI-007ec6" alt="C FFI">
  </a>

  <!-- Project Info -->
  <a href="https://github.com/kreuzberg-dev/liter-llm/blob/main/LICENSE">
    <img src="https://img.shields.io/badge/License-MIT-007ec6" alt="License">
  </a>
  <a href="https://docs.liter-llm.kreuzberg.dev">
    <img src="https://img.shields.io/badge/docs-kreuzberg.dev-007ec6" alt="Docs">
  </a>
</div>

<div align="center" style="margin: 20px 0;">
  <picture>
    <img width="100%" alt="kreuzberg.dev" src="https://github.com/user-attachments/assets/1b6c6ad7-3b6d-4171-b1c9-f2026cc9deb8" />
  </picture>
</div>

<div align="center" style="margin-bottom: 20px;">
  <a href="https://discord.gg/xt9WY3GnKR">
    <img height="22" src="https://img.shields.io/badge/Discord-Join%20our%20community-7289da?logo=discord&logoColor=white" alt="Discord">
  </a>
</div>

Universal LLM API client for Ruby. Access 142+ LLM providers through a single interface with an idiomatic Ruby API and native performance.
## Installation

### Package Installation

Install via one of the supported package managers:

**gem:**

```bash
gem install liter_llm
```

**Bundler:**

```ruby
gem 'liter_llm'
```

### System Requirements

- **Ruby 3.2+** required
- API keys via environment variables (e.g. `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`)
## Quick Start

### Basic Chat

Send a message to any provider using the `provider/model` prefix:

```ruby
# frozen_string_literal: true

require "liter_llm"
require "json"

client = LiterLlm::LlmClient.new(ENV.fetch("OPENAI_API_KEY"), {})

response = JSON.parse(client.chat(JSON.generate(
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Hello!" }]
)))

puts response.dig("choices", 0, "message", "content")
```
### Common Use Cases

#### Streaming Responses

Stream tokens in real time:

```ruby
# frozen_string_literal: true

require "liter_llm"
require "json"

client = LiterLlm::LlmClient.new(ENV.fetch("OPENAI_API_KEY"), {})

chunks = JSON.parse(client.chat_stream(JSON.generate(
  model: "openai/gpt-4o-mini",
  messages: [{ role: "user", content: "Hello" }]
)))

chunks.each { |chunk| puts chunk }
```
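#### Tool Calling

The same JSON-string interface carries OpenAI-style tool definitions. A minimal sketch, assuming the request shape given by the gem's RBS signatures (`tools`, `tool_choice`); the `get_weather` tool and the sample response fragment below are illustrative, not real API output:

```ruby
# frozen_string_literal: true

require "json"

# Build a chat request carrying a tool definition.
request = JSON.generate(
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "What's the weather in Berlin?" }],
  tools: [{
    type: "function",
    function: {
      name: "get_weather", # hypothetical tool
      description: "Look up current weather for a city",
      parameters: {
        type: "object",
        properties: { city: { type: "string" } },
        required: ["city"]
      }
    }
  }],
  tool_choice: "auto"
)
# In real use: response = JSON.parse(client.chat(request))

# Illustrative response fragment showing where tool calls appear:
response = {
  "choices" => [{
    "message" => {
      "tool_calls" => [{
        "id" => "call_1",
        "type" => "function",
        "function" => { "name" => "get_weather", "arguments" => '{"city":"Berlin"}' }
      }]
    }
  }]
}

call = response.dig("choices", 0, "message", "tool_calls", 0)
puts call.dig("function", "name")
puts JSON.parse(call.dig("function", "arguments"))["city"]
```

To complete the loop, append the tool result as a `role: "tool"` message (with the matching `tool_call_id`) and call `chat` again so the model can finish its answer.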
### Next Steps

- **[Provider Registry](https://github.com/kreuzberg-dev/liter-llm/blob/main/schemas/providers.json)** - Full list of supported providers
- **[GitHub Repository](https://github.com/kreuzberg-dev/liter-llm)** - Source, issues, and discussions
## Features

### Supported Providers (142+)

Route to any provider using the `provider/model` prefix convention:

| Provider | Example Model |
|----------|--------------|
| **OpenAI** | `openai/gpt-4o`, `openai/gpt-4o-mini` |
| **Anthropic** | `anthropic/claude-3-5-sonnet-20241022` |
| **Groq** | `groq/llama-3.1-70b-versatile` |
| **Mistral** | `mistral/mistral-large-latest` |
| **Cohere** | `cohere/command-r-plus` |
| **Together AI** | `together/meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo` |
| **Fireworks** | `fireworks/accounts/fireworks/models/llama-v3p1-70b-instruct` |
| **Google Vertex** | `vertexai/gemini-1.5-pro` |
| **Amazon Bedrock** | `bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0` |

**[Complete Provider List](https://github.com/kreuzberg-dev/liter-llm/blob/main/schemas/providers.json)**

### Key Capabilities

- **Provider Routing** -- Single client for 142+ LLM providers via `provider/model` prefix
- **Unified API** -- Consistent `chat`, `chat_stream`, `embeddings`, `list_models` interface
- **Streaming** -- Real-time token streaming via `chat_stream`
- **Tool Calling** -- Function calling and tool use across all supporting providers
- **Type Safe** -- Schema-driven types compiled from JSON schemas
- **Secure** -- API keys never logged or serialized, managed via environment variables
- **Observability** -- Built-in [OpenTelemetry](https://opentelemetry.io/docs/specs/semconv/gen-ai/) with GenAI semantic conventions
- **Error Handling** -- Structured errors with provider context and retry hints
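The schema-driven types also cover structured output: `response_format` accepts a JSON schema, following the `response_format_param` type in `sig/liter_llm.rbs`. A sketch of building such a request (the `person` schema is illustrative):

```ruby
# frozen_string_literal: true

require "json"

# response_format in its json_schema variant.
request = JSON.generate(
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Extract: Ada Lovelace, born 1815." }],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "person", # illustrative schema name
      schema: {
        type: "object",
        properties: {
          name: { type: "string" },
          born: { type: "integer" }
        },
        required: %w[name born]
      },
      strict: true
    }
  }
)
# In real use: response = JSON.parse(client.chat(request));
# choices[0]["message"]["content"] is then a JSON string matching the schema.

puts JSON.parse(request).dig("response_format", "json_schema", "name")
```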
### Performance

Built on a compiled Rust core for speed and safety:

- **Provider resolution** at client construction -- zero per-request overhead
- **Configurable timeouts** and connection pooling
- **Zero-copy streaming** with SSE and AWS EventStream support
- **API keys** wrapped in secure memory, zeroed on drop
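The RBS signatures also expose an `embed` method using the same JSON-in/JSON-out convention as `chat`. A sketch of an embedding request and of reading vectors back out; the response fragment below is illustrative, shaped like the gem's `embedding_response` type:

```ruby
# frozen_string_literal: true

require "json"

# embedding_request shape from sig/liter_llm.rbs.
request = JSON.generate(
  model: "openai/text-embedding-3-small",
  input: ["hello world", "goodbye world"]
)
# In real use: response = JSON.parse(client.embed(request))

# Illustrative payload in the embedding_response shape (vectors truncated):
response = {
  "object" => "list",
  "data" => [
    { "object" => "embedding", "embedding" => [0.1, 0.2], "index" => 0 },
    { "object" => "embedding", "embedding" => [0.3, 0.4], "index" => 1 }
  ],
  "model" => "openai/text-embedding-3-small",
  "usage" => { "prompt_tokens" => 4, "completion_tokens" => 0, "total_tokens" => 4 }
}

# Order by index so vectors line up with the inputs.
vectors = response["data"].sort_by { |d| d["index"] }.map { |d| d["embedding"] }
puts vectors.length
```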
## Provider Routing

Route to 142+ providers using the `provider/model` prefix convention:

```text
openai/gpt-4o
anthropic/claude-3-5-sonnet-20241022
groq/llama-3.1-70b-versatile
mistral/mistral-large-latest
```

See the [provider registry](https://github.com/kreuzberg-dev/liter-llm/blob/main/schemas/providers.json) for the full list.
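To see which models a configured provider actually exposes, `list_models` returns the same JSON-string convention. A sketch of parsing it; the payload below is illustrative, shaped like the `models_response` type in `sig/liter_llm.rbs`:

```ruby
# frozen_string_literal: true

require "json"

# In real use: models = JSON.parse(client.list_models)
# Illustrative payload in the models_response shape:
models = {
  "object" => "list",
  "data" => [
    { "id" => "gpt-4o", "object" => "model", "created" => 1_715_000_000, "owned_by" => "openai" },
    { "id" => "gpt-4o-mini", "object" => "model", "created" => 1_715_000_000, "owned_by" => "openai" }
  ]
}

ids = models["data"].map { |m| m["id"] }
puts ids.inspect
```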
## Documentation

- **[Documentation](https://docs.liter-llm.kreuzberg.dev)** -- Full docs and API reference
- **[GitHub Repository](https://github.com/kreuzberg-dev/liter-llm)** -- Source, issues, and discussions
- **[Provider Registry](https://github.com/kreuzberg-dev/liter-llm/blob/main/schemas/providers.json)** -- 142 supported providers

Part of [kreuzberg.dev](https://kreuzberg.dev).

## Contributing

Contributions are welcome! See [CONTRIBUTING.md](https://github.com/kreuzberg-dev/liter-llm/blob/main/CONTRIBUTING.md) for guidelines.

Join our [Discord community](https://discord.gg/xt9WY3GnKR) for questions and discussion.

## License

MIT -- see [LICENSE](https://github.com/kreuzberg-dev/liter-llm/blob/main/LICENSE) for details.
data/lib/liter_llm.rb
ADDED
data/lib/liter_llm_rb.so
ADDED
Binary file
data/sig/liter_llm.rbs
ADDED
@@ -0,0 +1,416 @@
# Type signatures for the LiterLlm Ruby binding.
# Generated from crates/liter-llm-rb Rust source — keep in sync with the
# Magnus extension and lib/liter_llm.rb public API.

module LiterLlm
  VERSION: String

  # ─── Shared types ────────────────────────────────────────────────────────────

  # Token usage counts returned in both chat and embedding responses.
  type usage_response = { prompt_tokens: Integer, completion_tokens: Integer, total_tokens: Integer }

  # ─── Content types ───────────────────────────────────────────────────────────

  type image_url_param = { url: String, detail: ("low" | "high" | "auto")? }

  # A single part of a multipart user message.
  type content_part_param =
    { type: "text", text: String } |
    { type: "image_url", image_url: image_url_param }

  # ─── Message types ───────────────────────────────────────────────────────────

  # A single message in the conversation history.
  # The `content` field is a plain string for system/tool/developer/function
  # roles and either a string or a list of content parts for user messages.
  type message_param = {
    role: ("system" | "user" | "assistant" | "tool" | "developer" | "function"),
    content: String | Array[content_part_param],
    name: String?,
    tool_call_id: String?
  }

  # ─── Tool / function call types ──────────────────────────────────────────────

  type function_definition = {
    name: String,
    description: String?,
    parameters: Hash[String, untyped]?,
    strict: bool?
  }

  type tool_param = { type: "function", function: function_definition }

  type specific_tool_choice = { type: "function", function: { name: String } }

  type tool_choice_param = ("auto" | "required" | "none") | specific_tool_choice

  type function_call = { name: String, arguments: String }

  type tool_call = { id: String, type: "function", function: function_call }

  # ─── Response format ─────────────────────────────────────────────────────────

  type json_schema_format = {
    name: String,
    description: String?,
    schema: Hash[String, untyped],
    strict: bool?
  }

  type response_format_param =
    { type: "text" } |
    { type: "json_object" } |
    { type: "json_schema", json_schema: json_schema_format }

  # ─── Chat request / response ─────────────────────────────────────────────────

  type stream_options = { include_usage: bool? }

  # Full OpenAI-compatible chat completion request.
  type chat_request = {
    model: String,
    messages: Array[message_param],
    temperature: Float?,
    top_p: Float?,
    n: Integer?,
    stream: bool?,
    stop: (String | Array[String])?,
    max_tokens: Integer?,
    presence_penalty: Float?,
    frequency_penalty: Float?,
    logit_bias: Hash[String, Float]?,
    user: String?,
    tools: Array[tool_param]?,
    tool_choice: tool_choice_param?,
    parallel_tool_calls: bool?,
    response_format: response_format_param?,
    stream_options: stream_options?,
    seed: Integer?
  }

  type assistant_message = {
    content: String?,
    name: String?,
    tool_calls: Array[tool_call]?,
    refusal: String?,
    function_call: function_call?
  }

  type choice_response = {
    index: Integer,
    message: assistant_message,
    finish_reason: ("stop" | "length" | "tool_calls" | "content_filter" | "function_call" | String)?
  }

  # Full OpenAI-compatible chat completion response.
  type chat_response = {
    id: String,
    object: String,
    created: Integer,
    model: String,
    choices: Array[choice_response],
    usage: usage_response?,
    system_fingerprint: String?,
    service_tier: String?
  }

  # ─── Streaming chunk types ───────────────────────────────────────────────────

  type stream_function_call = { name: String?, arguments: String? }

  type stream_tool_call = {
    index: Integer,
    id: String?,
    type: "function"?,
    function: stream_function_call?
  }

  type stream_delta = {
    role: String?,
    content: String?,
    tool_calls: Array[stream_tool_call]?,
    function_call: stream_function_call?,
    refusal: String?
  }

  type stream_choice = { index: Integer, delta: stream_delta, finish_reason: String? }

  type chat_completion_chunk = {
    id: String,
    object: String,
    created: Integer,
    model: String,
    choices: Array[stream_choice],
    usage: usage_response?,
    service_tier: String?
  }

  # ─── Embedding types ─────────────────────────────────────────────────────────

  type embedding_request = {
    model: String,
    input: String | Array[String],
    encoding_format: String?,
    dimensions: Integer?,
    user: String?
  }

  type embedding_object = { object: String, embedding: Array[Float], index: Integer }

  type embedding_response = {
    object: String,
    data: Array[embedding_object],
    model: String,
    usage: usage_response
  }

  # ─── Models types ────────────────────────────────────────────────────────────

  type model_object = { id: String, object: String, created: Integer, owned_by: String }

  type models_response = { object: String, data: Array[model_object] }

  # ─── Image Generation types ──────────────────────────────────────────────────

  type create_image_request = {
    prompt: String,
    model: String?,
    n: Integer?,
    quality: String?,
    response_format: String?,
    size: String?,
    style: String?,
    user: String?
  }

  type image_data = { url: String?, b64_json: String?, revised_prompt: String? }

  type images_response = { created: Integer, data: Array[image_data] }

  # ─── Speech types ──────────────────────────────────────────────────────────

  type create_speech_request = {
    model: String,
    input: String,
    voice: String,
    response_format: String?,
    speed: Float?
  }

  # ─── Transcription types ───────────────────────────────────────────────────

  type create_transcription_request = {
    file: String,
    model: String,
    language: String?,
    prompt: String?,
    response_format: String?,
    temperature: Float?
  }

  type transcription_response = { text: String }

  # ─── Moderation types ──────────────────────────────────────────────────────

  type moderation_request = {
    input: untyped,
    model: String?
  }

  type moderation_categories = {
    sexual: bool,
    hate: bool,
    harassment: bool,
    violence: bool
  }

  type moderation_category_scores = {
    sexual: Float,
    hate: Float,
    harassment: Float,
    violence: Float
  }

  type moderation_result = {
    flagged: bool,
    categories: moderation_categories,
    category_scores: moderation_category_scores
  }

  type moderation_response = { id: String, model: String, results: Array[moderation_result] }

  # ─── Rerank types ──────────────────────────────────────────────────────────

  type rerank_request = {
    model: String,
    query: String,
    documents: untyped,
    top_n: Integer?
  }

  type rerank_result = { index: Integer, relevance_score: Float }

  type rerank_response = { results: Array[rerank_result], model: String, usage: usage_response? }

  # ─── File types ────────────────────────────────────────────────────────────

  type create_file_request = { file: String, purpose: String, filename: String? }

  type file_object = {
    id: String,
    object: String,
    bytes: Integer,
    created_at: Integer,
    filename: String,
    purpose: String,
    status: String?,
    status_details: String?
  }

  type delete_response = { id: String, object: String, deleted: bool }

  type file_list_query = { purpose: String?, limit: Integer?, after: String? }

  type file_list_response = { object: String, data: Array[file_object] }

  # ─── Batch types ───────────────────────────────────────────────────────────

  type create_batch_request = {
    input_file_id: String,
    endpoint: String,
    completion_window: String,
    metadata: Hash[String, String]?
  }

  type batch_request_counts = { total: Integer, completed: Integer, failed: Integer }

  type batch_object = {
    id: String,
    object: String,
    endpoint: String,
    input_file_id: String,
    completion_window: String,
    status: String,
    output_file_id: String?,
    error_file_id: String?,
    created_at: Integer,
    request_counts: batch_request_counts?,
    metadata: Hash[String, String]?
  }

  type batch_list_query = { limit: Integer?, after: String? }

  type batch_list_response = { object: String, data: Array[batch_object] }

  # ─── Response types ────────────────────────────────────────────────────────

  type create_response_request = {
    model: String,
    input: untyped,
    instructions: String?,
    max_output_tokens: Integer?,
    temperature: Float?,
    top_p: Float?,
    stream: bool?,
    metadata: Hash[String, String]?
  }

  type response_object = {
    id: String,
    object: String,
    created_at: Integer,
    status: String,
    model: String,
    output: untyped?,
    usage: usage_response?,
    metadata: Hash[String, String]?,
    error: untyped?
  }

  # ─── LlmClient ───────────────────────────────────────────────────────────────

  # Unified LLM client backed by the Rust core.
  #
  # All I/O methods accept a JSON-encoded request string and return a
  # JSON-encoded response string. The thin Ruby layer is responsible for
  # serialising/deserialising as needed.
  class LlmClient
    # Create a new client.
    #
    # @param api_key API key for authentication.
    # @param base_url Optional provider base URL override.
    # @param max_retries Retries on 429 / 5xx (default: 3).
    # @param timeout_secs Request timeout in seconds (default: 60).
    def initialize: (
      String api_key,
      ?base_url: String?,
      ?model_hint: String?,
      ?max_retries: Integer,
      ?timeout_secs: Integer
    ) -> void

    # Send a chat completion request.
    def chat: (String request_json) -> String

    # Send an embedding request.
    def embed: (String request_json) -> String

    # List models available from the configured provider.
    def list_models: () -> String

    # Generate an image from a text prompt.
    def image_generate: (String request_json) -> String

    # Generate audio speech from text, returning base64-encoded audio bytes.
    def speech: (String request_json) -> String

    # Transcribe audio to text.
    def transcribe: (String request_json) -> String

    # Check content against moderation policies.
    def moderate: (String request_json) -> String

    # Rerank documents by relevance to a query.
    def rerank: (String request_json) -> String

    # Upload a file.
    def create_file: (String request_json) -> String

    # Retrieve metadata for a file by ID.
    def retrieve_file: (String file_id) -> String

    # Delete a file by ID.
    def delete_file: (String file_id) -> String

    # List files, optionally filtered by query parameters.
    def list_files: (String? query_json) -> String

    # Retrieve the raw content of a file (base64-encoded).
    def file_content: (String file_id) -> String

    # Create a new batch job.
    def create_batch: (String request_json) -> String

    # Retrieve a batch by ID.
    def retrieve_batch: (String batch_id) -> String

    # List batches, optionally filtered by query parameters.
    def list_batches: (String? query_json) -> String

    # Cancel an in-progress batch.
    def cancel_batch: (String batch_id) -> String

    # Create a new response via the Responses API.
    def create_response: (String request_json) -> String

    # Retrieve a response by ID.
    def retrieve_response: (String response_id) -> String

    # Cancel an in-progress response.
    def cancel_response: (String response_id) -> String

    def inspect: () -> String
  end
end
metadata
ADDED
@@ -0,0 +1,199 @@
--- !ruby/object:Gem::Specification
name: liter_llm
version: !ruby/object:Gem::Version
  version: 1.0.0.pre.rc.6
platform: aarch64-linux
authors:
- Na'aman Hirschfeld
autorequire:
bindir: bin
cert_chain: []
date: 2026-03-28 00:00:00.000000000 Z
dependencies:
- !ruby/object:Gem::Dependency
  name: bundler
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '4.0'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '4.0'
- !ruby/object:Gem::Dependency
  name: rake
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '13.0'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '13.0'
- !ruby/object:Gem::Dependency
  name: rake-compiler
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '1.2'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '1.2'
- !ruby/object:Gem::Dependency
  name: rspec
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '3.12'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '3.12'
- !ruby/object:Gem::Dependency
  name: rbs
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '3.0'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '3.0'
- !ruby/object:Gem::Dependency
  name: rubocop
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '1.66'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '1.66'
- !ruby/object:Gem::Dependency
  name: rubocop-performance
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '1.21'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '1.21'
- !ruby/object:Gem::Dependency
  name: rubocop-rspec
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '3.0'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '3.0'
- !ruby/object:Gem::Dependency
  name: steep
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '1.8'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '1.8'
- !ruby/object:Gem::Dependency
  name: yard
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '0.9'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '0.9'
description: |
  liter-llm is a universal LLM API client with a Rust core and native Ruby bindings
  via Magnus. Provides a unified interface for streaming completions, tool calling,
  and provider routing across 142+ LLM providers. Rust-powered.
email:
- naaman@kreuzberg.dev
executables: []
extensions: []
extra_rdoc_files: []
files:
- README.md
- lib/liter_llm.rb
- lib/liter_llm_rb.so
- sig/liter_llm.rbs
homepage: https://kreuzberg.dev
licenses:
- MIT
metadata:
  homepage_uri: https://kreuzberg.dev
  source_code_uri: https://github.com/kreuzberg-dev/liter-llm
  changelog_uri: https://github.com/kreuzberg-dev/liter-llm/blob/main/CHANGELOG.md
  bug_tracker_uri: https://github.com/kreuzberg-dev/liter-llm/issues
  rubygems_mfa_required: 'true'
  keywords: llm,llm-client,openai,anthropic,streaming,tool-calling,provider-routing,rust,native-extension
post_install_message:
rdoc_options: []
require_paths:
- lib
required_ruby_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      version: 3.2.0
  - - "<"
    - !ruby/object:Gem::Version
      version: '5.0'
required_rubygems_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      version: '0'
requirements: []
rubygems_version: 3.5.22
signing_key:
specification_version: 4
summary: Universal LLM API client — 142+ providers, streaming, tool calling. Rust-powered.
test_files: []