lex-llm-vertex 0.2.0 → 0.2.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 35a2ffd46ca20c21c1c0d280688794b6702285b996d0f1386c37de565a409b57
-  data.tar.gz: 5ae360df40fe27bd2c43d37cd15ee36449303422df3a65550e596fc51f22ae3e
+  metadata.gz: 278b657e5c2050cb2208929e9ae1e872666cbccc5ccfbf6cda2253550d9aca89
+  data.tar.gz: 25ce6f8fdf008778892ff3ec8ca4940b07e3f53e7c5c847baa2ed622c04e8055
 SHA512:
-  metadata.gz: ac9e04f6a5ffdf57bd83392cbb039220871eac5d006256bf90c0a095bae241722a2aa8a05216ab663751bc0d5f144eec95ffcf6f6b5921f2135309938bd3de77
-  data.tar.gz: 928812748673dd57855e0ad4e718d2de1fa0f331d2d60df190d96b072de49cd4b317f4eb905c59a9eccaaf38a85c5ea2b87879bcaab00d5a7c4446db6295258d
+  metadata.gz: ca1e2604260d332f28565e7d492a9e38ee1b983f8306d78bdfc36c071f7bd00665465c7462d216c44f319164771b3272e5e93a1a4737ccddd58d5feec39df659
+  data.tar.gz: bf2fb3a0389015cda88dd1a934c178e1431fddff9bdc482314737e9823e2fa16d378c103571c2e9fd67214947fc898b93e2a4bd567c55eac370ce6f90eef01f1
@@ -8,8 +8,20 @@ jobs:
   ci:
     uses: LegionIO/.github/.github/workflows/ci.yml@main
 
+  excluded-files:
+    uses: LegionIO/.github/.github/workflows/excluded-files.yml@main
+
+  security:
+    uses: LegionIO/.github/.github/workflows/security-scan.yml@main
+
+  version-changelog:
+    uses: LegionIO/.github/.github/workflows/version-changelog.yml@main
+
+  dependency-review:
+    uses: LegionIO/.github/.github/workflows/dependency-review.yml@main
+
   release:
-    needs: ci
+    needs: [ci, excluded-files, security]
     if: github.event_name == 'push' && github.ref == 'refs/heads/main'
     uses: LegionIO/.github/.github/workflows/release.yml@main
     secrets:
data/CHANGELOG.md CHANGED
@@ -1,5 +1,42 @@
 # Changelog
 
+## 0.2.6 - 2026-05-06
+
+- Load provider-owned fleet actors through the LegionIO subscription base and the canonical Vertex provider root.
+- Keep fleet runners anchored on the provider root namespace so provider constants and instance discovery are always loaded.
+- Preserve configured transport and tier metadata when Vertex builds routing offerings.
+- Remove throwaway unused-argument allocation in provider request methods.
+- Gate release publishing on the shared security workflow.
+
+## 0.2.5 - 2026-05-06
+
+- Use the shared `lex-llm` fleet provider responder helper for provider-owned fleet workers.
+- Remove the runtime `legion-llm` dependency and require `lex-llm >= 0.4.3` for responder-side fleet execution.
+
+## 0.2.4 - 2026-05-06
+
+- Keep clean CI installs on published RubyGems dependency floors while preserving local path overrides for unreleased sibling integration testing.
+- Add a `stream_chat` compatibility alias so Vertex exposes the shared provider streaming surface even when running against older published `lex-llm` versions.
+- Register Vertex configuration options directly when the installed `lex-llm` does not expose `Configuration.register_provider_options`.
+- Make the provider-owned fleet responder bridge load only when the installed `legion-llm` exposes `Legion::LLM::Fleet::ProviderResponder`; fleet actors stay disabled instead of breaking gem load when that helper is unavailable.
+- Refresh README dependency, fleet responder, file-map, license, and development-command guidance.
+
+## 0.2.3 - 2026-05-06
+
+- Remove require-time provider self-registration; `legion-llm` now owns adapter creation and registry writes from loaded provider discovery metadata.
+- Bump dependency floors to `lex-llm >= 0.4.1` and `legion-llm >= 0.9.1`.
+
+## 0.2.2 - 2026-05-06
+
+- Enforce the shared keyword-only `lex-llm` provider contract for chat, embeddings, and token counting.
+- Move Vertex defaults back to `Legion::Extensions::Llm.provider_settings` with credentials/provider metadata under the default instance and instance-level fleet responder settings.
+- Add provider-owned fleet responder actor and runner backed by `legion-llm` fleet policy execution.
+- Bump the transport dependency floor to `legion-transport >= 1.4.14`.
+
+## 0.2.1 - 2026-05-03
+
+- Normalize generic settings keys to Vertex provider config keys during instance discovery.
+
 ## 0.2.0 - 2026-05-01
 
 - Add auto-discovery via CredentialSources and AutoRegistration from lex-llm 0.3.0
data/Gemfile CHANGED
@@ -4,6 +4,8 @@ source 'https://rubygems.org'
 
 group :test do
   llm_base_path = ENV.fetch('LEX_LLM_PATH', File.expand_path('../lex-llm', __dir__))
+  transport_path = ENV.fetch('LEGION_TRANSPORT_PATH', File.expand_path('../../legion-transport', __dir__))
+  gem 'legion-transport', path: transport_path if File.directory?(transport_path)
   gem 'lex-llm', path: llm_base_path if File.directory?(llm_base_path)
 end
 
data/README.md CHANGED
@@ -2,7 +2,7 @@
 
 Google Cloud Vertex AI provider extension for `Legion::Extensions::Llm`.
 
-This gem adds a hosted Vertex AI provider surface for Legion LLM routing without depending on the old `legion-llm` gem. It keeps discovery offline by default, preserves full Vertex publisher model resource names for routing, and exposes project/location instance metadata for multi-region provider fleets. It requires `lex-llm >= 0.1.5` for the shared model offering, alias, readiness, and fleet lane contract.
+This gem adds a hosted Vertex AI provider surface for Legion LLM routing. It keeps discovery offline by default, preserves full Vertex publisher model resource names for routing, and exposes project/location instance metadata for multi-region provider fleets. It installs against the current published `lex-llm` gem, while the `Gemfile` can use local sibling checkouts for unreleased provider-contract testing.
 
 ## Install
 
@@ -32,6 +32,27 @@ Default settings expose `env://` references and keep live discovery disabled:
 Legion::Extensions::Llm::Vertex.default_settings
 ```
 
+## Fleet Responder
+
+Provider instances can opt in to consuming Legion LLM fleet requests. The provider-owned fleet actor only starts when at least one configured instance enables `respond_to_requests`.
+
+Fleet request execution is delegated to `Legion::Extensions::Llm::Fleet::ProviderResponder` from `lex-llm`. Request-side routing and reply orchestration remain owned by `legion-llm`; this provider only needs `lex-llm` and `legion-transport` to consume fleet jobs on a responder node.
+
+```yaml
+extensions:
+  llm:
+    vertex:
+      instances:
+        local:
+          fleet:
+            enabled: true
+            respond_to_requests: true
+            capabilities:
+              - chat
+              - stream_chat
+              - embed
+```
+
 ## Provider Surface
 
 ```ruby
@@ -40,10 +61,10 @@ provider = Legion::Extensions::Llm::Vertex::Provider.new(Legion::Extensions::Llm
 provider.discover_offerings(live: false)
 provider.offering_for(model: 'gemini-2.5-flash')
 provider.health(live: false)
-provider.chat(messages, model: model)
-provider.stream(messages, model: model) { |chunk| chunk.content }
-provider.embed('hello', model: 'gemini-embedding-001')
-provider.count_tokens(messages, model: model)
+provider.chat(messages:, model:)
+provider.stream_chat(messages:, model:) { |chunk| chunk.content }
+provider.embed(text: 'hello', model: 'gemini-embedding-001')
+provider.count_tokens(messages:, model:)
 ```
 
 `discover_offerings(live: false)` returns a conservative static catalog for routing defaults and unit tests. `discover_offerings(live: true)` calls the Vertex publisher models listing endpoint and maps returned model data into `Legion::Extensions::Llm::Routing::ModelOffering` records.
@@ -82,11 +103,9 @@ When transport is available, the `RegistryPublisher` publishes best-effort readi
 |------|---------|
 | `lib/legion/extensions/llm/vertex.rb` | Namespace module, default settings, provider registration |
 | `lib/legion/extensions/llm/vertex/provider.rb` | Vertex AI provider: chat, stream, embed, count_tokens, health, discovery |
-| `lib/legion/extensions/llm/vertex/registry_publisher.rb` | Async best-effort llm.registry event publisher |
-| `lib/legion/extensions/llm/vertex/registry_event_builder.rb` | Builds sanitized registry event envelopes |
+| `lib/legion/extensions/llm/vertex/actors/fleet_worker.rb` | Legion subscription actor for provider-owned fleet request consumption |
+| `lib/legion/extensions/llm/vertex/runners/fleet_worker.rb` | Runner entrypoint that delegates fleet request execution to `lex-llm` |
 | `lib/legion/extensions/llm/vertex/version.rb` | `VERSION` constant |
-| `lib/legion/extensions/llm/vertex/transport/exchanges/llm_registry.rb` | `llm.registry` topic exchange definition |
-| `lib/legion/extensions/llm/vertex/transport/messages/registry_event.rb` | Transport message for registry events |
 
 ## Observability
 
@@ -111,14 +130,13 @@ Provider-specific request bodies are not guessed. Partner raw-predict chat reque
 
 ```bash
 bundle install
-bundle exec rspec       # 0 failures
-bundle exec rubocop -A  # auto-fix
-bundle exec rubocop     # lint check
+bundle exec rspec --format json --out tmp/rspec_results.json --format progress --out tmp/rspec_progress.txt
+bundle exec rubocop -A
 ```
 
 ## License
 
-Apache-2.0
+MIT
 
 ## References
 
lex-llm-vertex.gemspec CHANGED
@@ -26,5 +26,6 @@ Gem::Specification.new do |spec|
   spec.add_dependency 'legion-json', '>= 1.2.1'
   spec.add_dependency 'legion-logging', '>= 1.3.2'
   spec.add_dependency 'legion-settings', '>= 1.3.14'
-  spec.add_dependency 'lex-llm', '>= 0.3.0'
+  spec.add_dependency 'legion-transport', '>= 1.4.14'
+  spec.add_dependency 'lex-llm', '>= 0.4.3'
 end
data/lib/legion/extensions/llm/vertex/actors/fleet_worker.rb ADDED
@@ -0,0 +1,43 @@
+# frozen_string_literal: true
+
+begin
+  require 'legion/extensions/actors/subscription'
+rescue LoadError => e
+  warn(e.message) if $VERBOSE
+end
+
+unless defined?(Legion::Extensions::Actors::Subscription)
+  raise LoadError, 'LegionIO actor runtime is required for Vertex fleet worker'
+end
+
+require 'legion/extensions/llm/vertex'
+require 'legion/extensions/llm/fleet/provider_responder'
+
+module Legion
+  module Extensions
+    module Llm
+      module Vertex
+        module Actor
+          # Subscription actor for Vertex fleet request consumption.
+          class FleetWorker < Legion::Extensions::Actors::Subscription
+            def runner_class
+              'Legion::Extensions::Llm::Vertex::Runners::FleetWorker'
+            end
+
+            def runner_function
+              'handle_fleet_request'
+            end
+
+            def use_runner?
+              false
+            end
+
+            def enabled?
+              Legion::Extensions::Llm::Fleet::ProviderResponder.enabled_for?(Vertex.discover_instances)
+            end
+          end
+        end
+      end
+    end
+  end
+end
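The actor's `enabled?` gate above delegates to `ProviderResponder.enabled_for?` from `lex-llm`. As an illustrative sketch only (the helper's real logic is not shown in this diff), the check amounts to scanning discovered instances for an opted-in fleet block:

```ruby
# Hypothetical stand-in for ProviderResponder.enabled_for?: the actor should
# start only when at least one discovered instance opts in to responding.
# Key names mirror the settings shown elsewhere in this diff; the method
# itself is illustrative, not the shipped implementation.
def fleet_responder_enabled?(instances)
  instances.any? do |_name, config|
    fleet = config[:fleet] || {}
    fleet[:enabled] == true && fleet[:respond_to_requests] == true
  end
end

instances = {
  local: { fleet: { enabled: true, respond_to_requests: true } },
  backup: { fleet: { enabled: true, respond_to_requests: false } }
}

fleet_responder_enabled?(instances)                   # => true
fleet_responder_enabled?(backup: instances[:backup])  # => false
```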
data/lib/legion/extensions/llm/vertex/provider.rb CHANGED
@@ -177,7 +177,16 @@ module Legion
            end
          end
 
-        def chat(messages, model:, temperature: nil, max_tokens: nil, tools: {}, tool_prefs: nil, params: {})
+        def chat(
+          messages:,
+          model:,
+          temperature: nil,
+          max_tokens: nil,
+          tools: {},
+          tool_prefs: nil,
+          params: {},
+          **_provider_options
+        )
           model_id = model_id(model)
           log.info { "chat model=#{model_id} messages=#{messages.size}" }
           @model = model_id
@@ -187,7 +196,8 @@
           parse_chat_response(response, model: model_id)
         end
 
-        def stream(messages, model:, temperature: nil, max_tokens: nil, tools: {}, tool_prefs: nil, params: {})
+        def stream(messages:, model:, temperature: nil, max_tokens: nil, tools: {}, tool_prefs: nil, params: {},
+                   **_provider_options)
           model_id = model_id(model)
           log.info { "stream model=#{model_id} messages=#{messages.size}" }
           @model = model_id
@@ -199,7 +209,16 @@
           parse_chat_response(response, model: model_id)
         end
 
-        def count_tokens(messages, model:, params: {})
+        def stream_chat(messages:, model:, tools: {}, temperature: nil, max_tokens: nil, params: {}, tool_prefs: nil,
+                        **provider_options, &)
+          stream(messages:, model:, temperature:, max_tokens:, tools:, tool_prefs:, params:, **provider_options, &)
+        end
+
+        def count_tokens(
+          messages:,
+          model:,
+          params: {}
+        )
           model_id = model_id(model)
           log.info { "count_tokens model=#{model_id}" }
           unless generate_content_model?(model_id)
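The `stream_chat` alias in the hunk above is a pure keyword-and-block forwarder onto `stream`. A minimal self-contained sketch of the pattern (toy class, not the Vertex provider itself):

```ruby
# Toy illustration of the forwarding alias: stream_chat re-emits all of its
# keywords plus the caller's block to stream, so both names expose the same
# streaming surface. Ruby 3.1+ keyword shorthand is used, as in the diff.
class ToyProvider
  def stream(messages:, model:, **_opts, &block)
    messages.each { |m| block.call("#{model}:#{m}") } if block
  end

  # Compatibility alias: identical surface, delegates everything.
  def stream_chat(messages:, model:, **opts, &block)
    stream(messages:, model:, **opts, &block)
  end
end

chunks = []
ToyProvider.new.stream_chat(messages: %w[a b], model: 'g') { |c| chunks << c }
chunks # => ["g:a", "g:b"]
```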
@@ -216,7 +235,15 @@
           { input_tokens: response.body['totalTokens'], raw: response.body }
         end
 
-        def embed(text, model:, dimensions: nil, task_type: nil, title: nil, params: {})
+        def embed(
+          text:,
+          model:,
+          dimensions: nil,
+          task_type: nil,
+          title: nil,
+          params: {},
+          **_provider_options
+        )
           model_id = model_id(model)
           log.info { "embed model=#{model_id} inputs=#{Array(text).size}" }
           unless Capabilities.embeddings?(model_id)
@@ -236,9 +263,9 @@
           payload[:generationConfig] = Utils.deep_merge(payload[:generationConfig] || {},
                                                         generation_config(temperature, schema, thinking))
           if block_given?
-            stream(messages, model:, temperature:, tools:, tool_prefs:, params: payload, &)
+            stream(messages:, model:, temperature:, tools:, tool_prefs:, params: payload, &)
           else
-            chat(messages, model:, temperature:, tools:, tool_prefs:, params: payload)
+            chat(messages:, model:, temperature:, tools:, tool_prefs:, params: payload)
           end
         end
 
@@ -293,8 +320,8 @@
           Legion::Extensions::Llm::Routing::ModelOffering.new(
             provider_family: :vertex,
             instance_id: instance_id,
-            transport: :http,
-            tier: :frontier,
+            transport: configured_transport(:http),
+            tier: configured_tier(:frontier),
             model: model,
             usage_type: usage_type,
             capabilities: default_capabilities(model, api:),
@@ -310,6 +337,14 @@
           )
         end
 
+        def configured_transport(default)
+          config.respond_to?(:transport) ? config.transport : default
+        end
+
+        def configured_tier(default)
+          config.respond_to?(:tier) ? config.tier : default
+        end
+
         def publisher_parent
           "projects/#{project}/locations/#{location}/publishers/#{DEFAULT_PUBLISHER}/models"
         end
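The two helpers above share one pattern: read an attribute off the config object when it is exposed, otherwise keep the hard-coded default. A generic sketch of that `respond_to?` fallback (the `Config` struct here is illustrative, not the gem's config class):

```ruby
# respond_to? fallback: prefer the configured value when the config object
# exposes the attribute, otherwise fall back to the supplied default.
Config = Struct.new(:transport, :tier, keyword_init: true)

def configured_value(config, attr, default)
  config.respond_to?(attr) ? config.public_send(attr) : default
end

cfg = Config.new(transport: :grpc, tier: :standard)
configured_value(cfg, :transport, :http)        # => :grpc
configured_value(Object.new, :tier, :frontier)  # => :frontier (attribute absent)
```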
data/lib/legion/extensions/llm/vertex/runners/fleet_worker.rb ADDED
@@ -0,0 +1,30 @@
+# frozen_string_literal: true
+
+require 'legion/extensions/llm/fleet/provider_responder'
+require 'legion/extensions/llm/vertex'
+
+module Legion
+  module Extensions
+    module Llm
+      module Vertex
+        module Runners
+          # Runner entrypoint for Vertex fleet request execution.
+          module FleetWorker
+            module_function
+
+            def handle_fleet_request(payload, delivery: nil, properties: nil)
+              Legion::Extensions::Llm::Fleet::ProviderResponder.call(
+                payload: payload,
+                provider_family: Vertex::PROVIDER_FAMILY,
+                provider_class: Vertex::Provider,
+                provider_instances: -> { Vertex.discover_instances },
+                delivery: delivery,
+                properties: properties
+              )
+            end
+          end
+        end
+      end
+    end
+  end
+end
data/lib/legion/extensions/llm/vertex/version.rb CHANGED
@@ -4,7 +4,7 @@ module Legion
   module Extensions
     module Llm
       module Vertex
-        VERSION = '0.2.0'
+        VERSION = '0.2.6'
       end
     end
   end
data/lib/legion/extensions/llm/vertex.rb CHANGED
@@ -16,17 +16,33 @@ module Legion
       PROVIDER_FAMILY = :vertex
 
       def self.default_settings
-        {
-          enabled: false,
-          default_model: nil,
-          project: nil,
-          location: 'us-central1',
-          model_whitelist: [],
-          model_blacklist: [],
-          model_cache_ttl: 3600,
-          tls: { enabled: false, verify: :peer },
-          instances: {}
-        }
+        ::Legion::Extensions::Llm.provider_settings(
+          family: PROVIDER_FAMILY,
+          instance: {
+            endpoint: nil,
+            tier: :frontier,
+            transport: :http,
+            credentials: {
+              access_token: nil,
+              credentials: nil
+            },
+            provider: {
+              project: nil,
+              location: Provider::DEFAULT_LOCATION,
+              model_aliases: {}
+            },
+            usage: { inference: true, embedding: true, image: false },
+            limits: { concurrency: 4 },
+            fleet: {
+              enabled: false,
+              respond_to_requests: false,
+              capabilities: %i[chat stream_chat embed],
+              lanes: [],
+              concurrency: 4,
+              queue_suffix: nil
+            }
+          }
+        )
       end
 
       def self.provider_class
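The defaults above are per-instance; a deployment typically overrides only a few keys. Assuming `provider_settings` layers instance configuration over these defaults hash-by-hash (an assumption — the real merge lives in `lex-llm` and is not shown in this diff), the effect is an ordinary deep merge:

```ruby
# Toy deep merge illustrating how a sparse instance override could layer
# over the fleet defaults above. Not the actual provider_settings logic.
def deep_merge(base, override)
  base.merge(override) do |_key, old_val, new_val|
    old_val.is_a?(Hash) && new_val.is_a?(Hash) ? deep_merge(old_val, new_val) : new_val
  end
end

defaults = { fleet: { enabled: false, respond_to_requests: false, concurrency: 4 } }
override = { fleet: { enabled: true, respond_to_requests: true } }

deep_merge(defaults, override)
# => {fleet: {enabled: true, respond_to_requests: true, concurrency: 4}}
```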
@@ -44,7 +60,7 @@
         cfg = CredentialSources.setting(:extensions, :llm, :vertex)
         return unless cfg.is_a?(Hash) && vertex_credentials_present?(cfg)
 
-        instances[:settings] = cfg.except(:instances, 'instances').merge(tier: :cloud)
+        instances[:settings] = normalize_instance_config(cfg).merge(tier: :cloud)
       end
 
       def self.discover_named_instances(instances)
@@ -57,7 +73,7 @@
         named.each do |name, config|
           next unless config.is_a?(Hash) && vertex_credentials_present?(config)
 
-          instances[name.to_sym] = config.merge(tier: :cloud)
+          instances[name.to_sym] = normalize_instance_config(config).merge(tier: :cloud)
         end
       end
 
@@ -70,14 +86,33 @@
         !(token.nil? && creds.nil?)
       end
 
-      private_class_method :discover_default_instance, :discover_named_instances, :vertex_credentials_present?
+      def self.normalize_instance_config(config)
+        normalized = config.to_h.transform_keys { |key| key.respond_to?(:to_sym) ? key.to_sym : key }
+        normalized[:vertex_project] ||= normalized.delete(:project)
+        normalized[:vertex_location] ||= normalized.delete(:location)
+        normalized[:vertex_api_base] ||= normalized.delete(:base_url)
+        normalized[:vertex_api_base] ||= normalized.delete(:api_base)
+        normalized[:vertex_api_base] ||= normalized.delete(:endpoint)
+        normalized[:vertex_access_token] ||= normalized.delete(:access_token)
+        normalized[:vertex_credentials] ||= normalized.delete(:credentials)
+        normalized[:vertex_model_aliases] ||= normalized.delete(:model_aliases)
+        normalized.compact.except(:instances)
+      end
+
+      def self.register_provider_options
+        configuration = Legion::Extensions::Llm::Configuration
+        if configuration.respond_to?(:register_provider_options)
+          configuration.register_provider_options(Provider.configuration_options)
+        elsif configuration.respond_to?(:option, true)
+          Provider.configuration_options.each { |key| configuration.send(:option, key) }
+        end
+      end
+
+      private_class_method :discover_default_instance, :discover_named_instances, :vertex_credentials_present?,
+                           :normalize_instance_config, :register_provider_options
+
+      register_provider_options
     end
   end
 end
-
-Legion::Extensions::Llm::Configuration.register_provider_options(
-  Legion::Extensions::Llm::Vertex::Provider.configuration_options
-)
-
-Legion::Extensions::Llm::Vertex.register_discovered_instances
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: lex-llm-vertex
 version: !ruby/object:Gem::Version
-  version: 0.2.0
+  version: 0.2.6
 platform: ruby
 authors:
 - LegionIO
@@ -51,20 +51,34 @@ dependencies:
   - - ">="
     - !ruby/object:Gem::Version
       version: 1.3.14
+- !ruby/object:Gem::Dependency
+  name: legion-transport
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: 1.4.14
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: 1.4.14
 - !ruby/object:Gem::Dependency
   name: lex-llm
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - ">="
       - !ruby/object:Gem::Version
-        version: 0.3.0
+        version: 0.4.3
   type: :runtime
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - ">="
       - !ruby/object:Gem::Version
-        version: 0.3.0
+        version: 0.4.3
 description: Google Cloud Vertex AI provider integration for the LegionIO LLM routing
   framework.
 email:
@@ -84,7 +98,9 @@ files:
 - README.md
 - lex-llm-vertex.gemspec
 - lib/legion/extensions/llm/vertex.rb
+- lib/legion/extensions/llm/vertex/actors/fleet_worker.rb
 - lib/legion/extensions/llm/vertex/provider.rb
+- lib/legion/extensions/llm/vertex/runners/fleet_worker.rb
 - lib/legion/extensions/llm/vertex/version.rb
 homepage: https://github.com/LegionIO/lex-llm-vertex
 licenses: