mistral 0.2.0 → 0.3.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: d9d2632f2ddb13e1ebe7e1e286eef43e577598bfe3f918a8961efda0fd471b84
- data.tar.gz: 150463bfd7ec228c3d8bbae83a3e1277d1907324e10649ebe236868486e91c47
+ metadata.gz: a13548c7603e4a353f94fb7c798f3256c8c84fcb24f6b9a35a0d34539026641a
+ data.tar.gz: 12d4ebe2515e9970d00e086c90bd69c10ab86d7e786b8417d534b3d6b30fc091
  SHA512:
- metadata.gz: d10df680e7ad04178dea6a386539404659836ce4df94fa991a45bc3a00837802c251923ea9bb58a5118e96b4dd2b44dfce171c55588a8c704bb886627144b4ee
- data.tar.gz: 709d9ccaf6ed76d8c56d513f5c08ff3610eafe2662430f9f71ff6984f0ec83b3ec37daa0cf7d04efcdd2b8f5ad41e6af17ea6a62a6d593abb07f97542bf64cdd
+ metadata.gz: 29ce8bafc147e843e8d9f8057a9b6fde4ab331ae1bd39d5b19e73fd2cc362940368d84c97c3433b69d80ec516ad710e0314e2eab509e1706cc4b206e77787ad7
+ data.tar.gz: 93013c03b1047734c3a97b3a20bc98b5d3768bc2d2b0ac64a67a4b5d14bb03c85d16fc146b8eb8d87690278f9012eccfe4ca07d67357d1b66310ce19a36fb1e2
data/CHANGELOG.md CHANGED
@@ -5,6 +5,14 @@ All notable changes to this project will be documented in this file.
  The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.1.1/)
  and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).

+ ## [0.3.0] - 2024-05-30
+
+ ### Added
+
+ - Added support for completion requests, which you can use to query Mistral's latest code model,
+ [codestral](https://mistral.ai/news/codestral/).
+ See [this example](https://github.com/wilsonsilva/mistral/blob/0.3.0/examples/code_completion.rb) to get started.
+
  ## [0.2.0] - 2024-05-23

  ### Added
@@ -26,5 +34,6 @@ Ports [mistralai/client-python#86](https://github.com/mistralai/client-python/pu
  - Initial release. Feature parity with `v0.1.8` of the
  [mistralai/client-python](https://github.com/mistralai/client-python)

+ [0.3.0]: https://github.com/wilsonsilva/mistral/compare/v0.2.0...v0.3.0
  [0.2.0]: https://github.com/wilsonsilva/mistral/compare/v0.1.0...v0.2.0
  [0.1.0]: https://github.com/wilsonsilva/mistral/compare/28e7c9...v0.1.0
data/README.md CHANGED
@@ -80,15 +80,17 @@ end
 
  In the [`examples`](https://github.com/wilsonsilva/mistral/tree/main/examples) folder, you will find how to do:
 
- | File Name | Description |
- |--------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------|
- | [`chat_no_streaming.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/chat_no_streaming.rb) | How to use the chat endpoint without streaming |
- | [`chat_with_streaming.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/chat_with_streaming.rb) | How to use the chat endpoint with streaming |
- | [`chatbot_with_streaming.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/chatbot_with_streaming.rb) | A simple interactive chatbot using streaming |
- | [`embeddings.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/embeddings.rb) | How to use the embeddings endpoint |
- | [`function_calling.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/function_calling.rb) | How to call functions using the chat endpoint |
- | [`json_format.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/json_format.rb) | How to request and parse JSON responses from the chat endpoint |
- | [`list_models.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/list_models.rb) | How to list available models |
+ | File Name | Description |
+ |--------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------|
+ | [`chat_no_streaming.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/chat_no_streaming.rb) | How to use the chat endpoint without streaming |
+ | [`chat_with_streaming.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/chat_with_streaming.rb) | How to use the chat endpoint with streaming |
+ | [`chatbot_with_streaming.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/chatbot_with_streaming.rb) | A simple interactive chatbot using streaming |
+ | [`code_completion.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/code_completion.rb) | How to perform a code completion |
+ | [`completion_with_streaming.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/completion_with_streaming.rb) | How to perform a code completion with streaming |
+ | [`embeddings.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/embeddings.rb) | How to use the embeddings endpoint |
+ | [`function_calling.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/function_calling.rb) | How to call functions using the chat endpoint |
+ | [`json_format.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/json_format.rb) | How to request and parse JSON responses from the chat endpoint |
+ | [`list_models.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/list_models.rb) | How to list available models |
 
  ## 🔨 Development
 
data/examples/chatbot_with_streaming.rb CHANGED
@@ -10,11 +10,12 @@ require 'optparse'
  require 'mistral'

  MODEL_LIST = %w[
-   mistral-tiny
-   mistral-small
-   mistral-medium
+   mistral-tiny-latest
+   mistral-small-latest
+   mistral-medium-latest
+   codestral-latest
  ].freeze
- DEFAULT_MODEL = 'mistral-small'
+ DEFAULT_MODEL = 'mistral-small-latest'
  DEFAULT_TEMPERATURE = 0.7
  LOG_FORMAT = '%(asctime)s - %(levelname)s - %(message)s'
  # A hash of all commands and their arguments, used for tab completion.
data/examples/code_completion.rb ADDED
@@ -0,0 +1,24 @@
+ #!/usr/bin/env ruby
+ # frozen_string_literal: true
+
+ require 'bundler/setup'
+ require 'dotenv/load'
+ require 'mistral'
+
+ api_key = ENV.fetch('MISTRAL_API_KEY')
+ client = Mistral::Client.new(api_key: api_key)
+
+ prompt = 'def fibonacci(n: int):'
+ suffix = "n = int(input('Enter a number: '))\nprint(fibonacci(n))"
+
+ response = client.completion(
+   model: 'codestral-latest',
+   prompt: prompt,
+   suffix: suffix
+ )
+
+ print <<~COMPLETION
+   #{prompt}
+   #{response.choices[0].message.content}
+   #{suffix}
+ COMPLETION
data/examples/completion_with_streaming.rb ADDED
@@ -0,0 +1,24 @@
+ #!/usr/bin/env ruby
+ # frozen_string_literal: true
+
+ require 'bundler/setup'
+ require 'dotenv/load'
+ require 'mistral'
+
+ api_key = ENV.fetch('MISTRAL_API_KEY')
+ client = Mistral::Client.new(api_key: api_key)
+
+ prompt = 'def fibonacci(n: int):'
+ suffix = "n = int(input('Enter a number: '))\nprint(fibonacci(n))"
+
+ print(prompt)
+
+ client.completion_stream(
+   model: 'codestral-latest',
+   prompt: prompt,
+   suffix: suffix
+ ).each do |chunk|
+   print(chunk.choices[0].delta.content) unless chunk.choices[0].delta.content.nil?
+ end
+
+ print(suffix)
data/lib/mistral/client.rb CHANGED
@@ -158,6 +158,73 @@ module Mistral
  raise Mistral::Error.new(message: 'No response received')
  end

+ # A completion endpoint that returns a single response.
+ #
+ # @param model [String] the name of the model to get completions with, e.g. codestral-latest
+ # @param prompt [String] the prompt to complete
+ # @param suffix [String, nil] the suffix to append to the prompt for fill-in-the-middle completion
+ # @param temperature [Float, nil] the temperature to use for sampling, e.g. 0.5.
+ # @param max_tokens [Integer, nil] the maximum number of tokens to generate, e.g. 100. Defaults to nil.
+ # @param top_p [Float, nil] the cumulative probability of tokens to generate, e.g. 0.9. Defaults to nil.
+ # @param random_seed [Integer, nil] the random seed to use for sampling, e.g. 42. Defaults to nil.
+ # @param stop [Array<String>, nil] a list of tokens to stop generation at, e.g. ["\n\n"]
+ # @return [ChatCompletionResponse] a response object containing the generated text.
+ #
+ def completion(
+   model:,
+   prompt:,
+   suffix: nil,
+   temperature: nil,
+   max_tokens: nil,
+   top_p: nil,
+   random_seed: nil,
+   stop: nil
+ )
+   request = make_completion_request(
+     prompt:, model:, suffix:, temperature:, max_tokens:, top_p:, random_seed:, stop:
+   )
+   single_response = request('post', 'v1/fim/completions', json: request, stream: false)
+
+   single_response.each do |response|
+     return ChatCompletionResponse.new(**response)
+   end
+
+   raise Error, 'No response received'
+ end
+
+ # A completion endpoint that streams responses as they are generated.
+ #
+ # @param model [String] the name of the model to get completions with, e.g. codestral-latest
+ # @param prompt [String] the prompt to complete
+ # @param suffix [String, nil] the suffix to append to the prompt for fill-in-the-middle completion
+ # @param temperature [Float, nil] the temperature to use for sampling, e.g. 0.5.
+ # @param max_tokens [Integer, nil] the maximum number of tokens to generate, e.g. 100. Defaults to nil.
+ # @param top_p [Float, nil] the cumulative probability of tokens to generate, e.g. 0.9. Defaults to nil.
+ # @param random_seed [Integer, nil] the random seed to use for sampling, e.g. 42. Defaults to nil.
+ # @param stop [Array<String>, nil] a list of tokens to stop generation at, e.g. ["\n\n"]
+ # @return [Enumerator<ChatCompletionStreamResponse>] an enumerator that yields response objects containing the
+ #   generated text.
+ #
+ def completion_stream(
+   model:,
+   prompt:,
+   suffix: nil,
+   temperature: nil,
+   max_tokens: nil,
+   top_p: nil,
+   random_seed: nil,
+   stop: nil
+ )
+   request = make_completion_request(
+     prompt:, model:, suffix:, temperature:, max_tokens:, top_p:, random_seed:, stop:, stream: true
+   )
+   response = request('post', 'v1/fim/completions', json: request, stream: true)
+
+   response.lazy.map do |json_streamed_response|
+     ChatCompletionStreamResponse.new(**json_streamed_response)
+   end
+ end
+
  private

  def request(method, path, json: nil, stream: false, attempt: 1)
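For orientation, here is a minimal sketch (not part of the diff) contrasting the two new call shapes, based on the signatures above and the bundled examples; it assumes a valid `MISTRAL_API_KEY` and network access:

```ruby
require 'mistral'

client = Mistral::Client.new(api_key: ENV.fetch('MISTRAL_API_KEY'))

# Blocking: `completion` returns a single ChatCompletionResponse;
# the generated text lives under choices[0].message.content.
response = client.completion(model: 'codestral-latest', prompt: 'def add(a, b):')
puts response.choices[0].message.content

# Streaming: `completion_stream` returns a lazy Enumerator of
# ChatCompletionStreamResponse chunks; each chunk's text lives under
# choices[0].delta.content (nil on some chunks, hence .to_s).
client.completion_stream(model: 'codestral-latest', prompt: 'def add(a, b):')
      .each { |chunk| print(chunk.choices[0].delta.content.to_s) }
```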
data/lib/mistral/client_base.rb CHANGED
@@ -68,6 +68,57 @@ module Mistral
  parsed_messages
  end

+ def make_completion_request(
+   prompt:,
+   model: nil,
+   suffix: nil,
+   temperature: nil,
+   max_tokens: nil,
+   top_p: nil,
+   random_seed: nil,
+   stop: nil,
+   stream: false
+ )
+   request_data = {
+     'prompt' => prompt,
+     'suffix' => suffix,
+     'model' => model,
+     'stream' => stream
+   }
+
+   request_data['stop'] = stop unless stop.nil?
+
+   if model.nil?
+     raise Error.new(message: 'model must be provided') if @default_model.nil?
+
+     request_data['model'] = @default_model
+   else
+     request_data['model'] = model
+   end
+
+   request_data.merge!(
+     build_sampling_params(
+       temperature: temperature,
+       max_tokens: max_tokens,
+       top_p: top_p,
+       random_seed: random_seed
+     )
+   )
+
+   @logger.debug("Completion request: #{request_data}")
+
+   request_data
+ end
+
+ def build_sampling_params(max_tokens: nil, random_seed: nil, temperature: nil, top_p: nil)
+   params = {}
+   params['temperature'] = temperature unless temperature.nil?
+   params['max_tokens'] = max_tokens unless max_tokens.nil?
+   params['top_p'] = top_p unless top_p.nil?
+   params['random_seed'] = random_seed unless random_seed.nil?
+   params
+ end
+
  def make_chat_request(
    messages:,
    model: nil,
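To make the payload concrete, a hedged sketch (illustrative values, not from the diff) of what `make_completion_request` builds for a typical call. `'prompt'`, `'suffix'`, `'model'`, and `'stream'` are always present; `'stop'` and the sampling parameters are merged in only when non-nil:

```ruby
make_completion_request(
  prompt: 'def fibonacci(n: int):',
  model: 'codestral-latest',
  temperature: 0.7
)
# => {
#      'prompt' => 'def fibonacci(n: int):',
#      'suffix' => nil,
#      'model' => 'codestral-latest',
#      'stream' => false,
#      'temperature' => 0.7
#    }
# max_tokens, top_p, and random_seed are omitted because they were nil;
# with no :model argument, @default_model is used (or an Error is raised).
```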
data/lib/mistral/version.rb CHANGED
@@ -1,5 +1,5 @@
  # frozen_string_literal: true

  module Mistral
-   VERSION = '0.2.0'
+   VERSION = '0.3.0'
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: mistral
  version: !ruby/object:Gem::Version
-   version: 0.2.0
+   version: 0.3.0
  platform: ruby
  authors:
  - Wilson Silva
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2024-05-23 00:00:00.000000000 Z
+ date: 2024-05-30 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: dry-struct
@@ -128,6 +128,8 @@ files:
  - examples/chat_no_streaming.rb
  - examples/chat_with_streaming.rb
  - examples/chatbot_with_streaming.rb
+ - examples/code_completion.rb
+ - examples/completion_with_streaming.rb
  - examples/embeddings.rb
  - examples/function_calling.rb
  - examples/json_format.rb