mistral 0.1.0 → 0.3.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: 01bc1534430ff8a86dafca215d434c999337163910c2b67c5a7198bbc4ade7c3
-   data.tar.gz: d97eda8d0b53118de9998dca31041ec14a1e357b7052a6083d4f172408192ebf
+   metadata.gz: a13548c7603e4a353f94fb7c798f3256c8c84fcb24f6b9a35a0d34539026641a
+   data.tar.gz: 12d4ebe2515e9970d00e086c90bd69c10ab86d7e786b8417d534b3d6b30fc091
  SHA512:
-   metadata.gz: 46f9cb65643ff7b805a2fe94181d7d8af3e27afc42ae001d9a6a6a4ef660ae79461508c793734cf46bcf14e470ec2f2e21c480e12d567f2e0c224c6cd3639f62
-   data.tar.gz: a68462d1daf574b786d325ece5f5519d9b3bada3081b1a150eb088e6bdb036b6f455a46a5e516ed6b4f6a79b294e400c21240c0a210181279c5eab16575756e9
+   metadata.gz: 29ce8bafc147e843e8d9f8057a9b6fde4ab331ae1bd39d5b19e73fd2cc362940368d84c97c3433b69d80ec516ad710e0314e2eab509e1706cc4b206e77787ad7
+   data.tar.gz: 93013c03b1047734c3a97b3a20bc98b5d3768bc2d2b0ac64a67a4b5d14bb03c85d16fc146b8eb8d87690278f9012eccfe4ca07d67357d1b66310ce19a36fb1e2
data/.env.example CHANGED
@@ -1,3 +1,3 @@
  # Get this from https://console.mistral.ai/api-keys/
  MISTRAL_API_KEY=
- LOG_LEVEL=ERROR
+ MISTRAL_LOG_LEVEL=ERROR
data/CHANGELOG.md CHANGED
@@ -5,8 +5,35 @@ All notable changes to this project will be documented in this file.
  The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.1.1/)
  and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).

+ ## [0.3.0] - 2024-05-30
+
+ ### Added
+
+ - Added support for completion requests, which you can use to query Mistral's latest model,
+   [codestral](https://mistral.ai/news/codestral/).
+   See [this example](https://github.com/wilsonsilva/mistral/blob/0.3.0/examples/code_completion.rb) to get started.
+
+ ## [0.2.0] - 2024-05-23
+
+ ### Added
+
+ - We now support `tool_call_id` for tool messages. This will be mandatory in the future, but you can start using it
+   right away to improve the model's performance during function calling (especially with multiple calls).
+   Ports [mistralai/client-python#93](https://github.com/mistralai/client-python/pull/93)
+
+ ### Changed
+
+ - Renamed `LOG_LEVEL` to `MISTRAL_LOG_LEVEL`. This is not a direct port of the Python client because Python has a
+   global logger in the `logging` module, but Ruby doesn't.
+   Ports [mistralai/client-python#86](https://github.com/mistralai/client-python/pull/86)
+ - Read the API key at client initialization. Ports
+   [mistralai/client-python#57](https://github.com/mistralai/client-python/pull/57)
+
  ## [0.1.0] - 2024-05-04

- - Initial release. Feature parity with `v0.1.8` of the [client-python](https://github.com/mistralai/client-python)
+ - Initial release. Feature parity with `v0.1.8` of the
+   [mistralai/client-python](https://github.com/mistralai/client-python)

- [0.1.0]: https://github.com/wilsonsilva/nostr/compare/28e7c9...v0.1.0
+ [0.3.0]: https://github.com/wilsonsilva/mistral/compare/v0.2.0...v0.3.0
+ [0.2.0]: https://github.com/wilsonsilva/mistral/compare/v0.1.0...v0.2.0
+ [0.1.0]: https://github.com/wilsonsilva/mistral/compare/28e7c9...v0.1.0
@@ -175,10 +175,21 @@ This code resides in `lib/http/features/line_iterable_body.rb`.
  ## Testing

  The Ruby gem aims for 1:1 parity with the Python client. As such, it uses `Minitest` (similar to Python's `pytest`).
- However, testing was simplified by using and `webmock` for stubbing requests, instead of implementing 100% test
+ However, testing was simplified by using `webmock` for stubbing requests, instead of implementing 100% test
  coverage and using RSpec, which is usually what I do.

  ## Examples

  The `function_calling.rb` example omits the unnecessary `n_rows = data['transaction_id'].length` line present in
  the Python version.
+
+ ## Logging
+
+ Python has a global logger:
+
+ ```python
+ self._logger = logging.getLogger(__name__)
+ ```
+
+ Ruby doesn't. Thus, to let users customize the logging level, the gem reads the environment variable
+ `MISTRAL_LOG_LEVEL`, which can be set to `DEBUG`, `INFO`, `WARN`, `ERROR` or `FATAL`.
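This matches what `config_logger` does further down in this diff; a minimal sketch of the same pattern with Ruby's standard `Logger`:

```ruby
require 'logger'

# Defaults to ERROR when MISTRAL_LOG_LEVEL is unset, matching the new .env.example.
logger = Logger.new($stdout)
logger.level = ENV.fetch('MISTRAL_LOG_LEVEL', 'ERROR')
```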
data/README.md CHANGED
@@ -80,15 +80,17 @@ end
  In the [`examples`](https://github.com/wilsonsilva/mistral/tree/main/examples) folder, you will find how to do:

- | File Name | Description |
- |--------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------|
- | [`chat_no_streaming.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/chat_no_streaming.rb) | How to use the chat endpoint without streaming |
- | [`chat_with_streaming.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/chat_with_streaming.rb) | How to use the chat endpoint with streaming |
- | [`chatbot_with_streaming.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/chatbot_with_streaming.rb) | A simple interactive chatbot using streaming |
- | [`embeddings.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/embeddings.rb) | How to use the embeddings endpoint |
- | [`function_calling.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/function_calling.rb) | How to call functions using the chat endpoint |
- | [`json_format.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/json_format.rb) | How to request and parse JSON responses from the chat endpoint |
- | [`list_models.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/list_models.rb) | How to list available models |
+ | File Name | Description |
+ |--------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------|
+ | [`chat_no_streaming.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/chat_no_streaming.rb) | How to use the chat endpoint without streaming |
+ | [`chat_with_streaming.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/chat_with_streaming.rb) | How to use the chat endpoint with streaming |
+ | [`chatbot_with_streaming.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/chatbot_with_streaming.rb) | A simple interactive chatbot using streaming |
+ | [`code_completion.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/code_completion.rb) | How to perform a code completion |
+ | [`completion_with_streaming.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/completion_with_streaming.rb) | How to perform a code completion with streaming |
+ | [`embeddings.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/embeddings.rb) | How to use the embeddings endpoint |
+ | [`function_calling.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/function_calling.rb) | How to call functions using the chat endpoint |
+ | [`json_format.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/json_format.rb) | How to request and parse JSON responses from the chat endpoint |
+ | [`list_models.rb`](https://github.com/wilsonsilva/mistral/blob/main/examples/list_models.rb) | How to list available models |

  ## 🔨 Development
@@ -10,11 +10,12 @@ require 'optparse'
  require 'mistral'

  MODEL_LIST = %w[
-   mistral-tiny
-   mistral-small
-   mistral-medium
+   mistral-tiny-latest
+   mistral-small-latest
+   mistral-medium-latest
+   codestral-latest
  ].freeze
- DEFAULT_MODEL = 'mistral-small'
+ DEFAULT_MODEL = 'mistral-small-latest'
  DEFAULT_TEMPERATURE = 0.7
  LOG_FORMAT = '%(asctime)s - %(levelname)s - %(message)s'
  # A hash of all commands and their arguments, used for tab completion.
@@ -0,0 +1,24 @@
+ #!/usr/bin/env ruby
+ # frozen_string_literal: true
+
+ require 'bundler/setup'
+ require 'dotenv/load'
+ require 'mistral'
+
+ api_key = ENV.fetch('MISTRAL_API_KEY')
+ client = Mistral::Client.new(api_key: api_key)
+
+ prompt = 'def fibonacci(n: int):'
+ suffix = "n = int(input('Enter a number: '))\nprint(fibonacci(n))"
+
+ response = client.completion(
+   model: 'codestral-latest',
+   prompt: prompt,
+   suffix: suffix
+ )
+
+ print <<~COMPLETION
+   #{prompt}
+   #{response.choices[0].message.content}
+   #{suffix}
+ COMPLETION
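To try the example (assuming the dotenv workflow implied by the `dotenv/load` require): copy `.env.example` to `.env`, set `MISTRAL_API_KEY`, and run it with `bundle exec ruby examples/code_completion.rb`.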
@@ -0,0 +1,24 @@
+ #!/usr/bin/env ruby
+ # frozen_string_literal: true
+
+ require 'bundler/setup'
+ require 'dotenv/load'
+ require 'mistral'
+
+ api_key = ENV.fetch('MISTRAL_API_KEY')
+ client = Mistral::Client.new(api_key: api_key)
+
+ prompt = 'def fibonacci(n: int):'
+ suffix = "n = int(input('Enter a number: '))\nprint(fibonacci(n))"
+
+ print(prompt)
+
+ client.completion_stream(
+   model: 'codestral-latest',
+   prompt: prompt,
+   suffix: suffix
+ ).each do |chunk|
+   print(chunk.choices[0].delta.content) unless chunk.choices[0].delta.content.nil?
+ end
+
+ print(suffix)
@@ -73,7 +73,7 @@ tools = [
  ]

  api_key = ENV.fetch('MISTRAL_API_KEY')
- model = 'mistral-large-latest'
+ model = 'mistral-small-latest'

  client = Mistral::Client.new(api_key: api_key)

@@ -97,7 +97,9 @@ puts "calling function_name: #{function_name}, with function_params: #{function_
  function_result = names_to_functions[function_name].call(function_params['transaction_id'])

  messages << response.choices[0].message
- messages << Mistral::ChatMessage.new(role: 'tool', name: function_name, content: function_result)
+ messages << Mistral::ChatMessage.new(
+   role: 'tool', name: function_name, content: function_result, tool_call_id: tool_call.id
+ )

  response = client.chat(model: model, messages: messages, tools: tools)

@@ -6,7 +6,7 @@ module Mistral
    # Synchronous wrapper around the async client
    class Client < ClientBase
      def initialize(
-       api_key: ENV['MISTRAL_API_KEY'],
+       api_key: nil,
        endpoint: ENDPOINT,
        max_retries: 5,
        timeout: 120
@@ -158,6 +158,73 @@ module Mistral
        raise Mistral::Error.new(message: 'No response received')
      end

+     # A completion endpoint that returns a single response.
+     #
+     # @param model [String] the name of the model to get the completion with, e.g. codestral-latest
+     # @param prompt [String] the prompt to complete
+     # @param suffix [String, nil] the suffix to append to the prompt for fill-in-the-middle completion
+     # @param temperature [Float, nil] the temperature to use for sampling, e.g. 0.5. Defaults to nil.
+     # @param max_tokens [Integer, nil] the maximum number of tokens to generate, e.g. 100. Defaults to nil.
+     # @param top_p [Float, nil] the cumulative probability of tokens to generate, e.g. 0.9. Defaults to nil.
+     # @param random_seed [Integer, nil] the random seed to use for sampling, e.g. 42. Defaults to nil.
+     # @param stop [Array<String>, nil] a list of tokens to stop generation at, e.g. ["\n\n"]
+     # @return [ChatCompletionResponse] a response object containing the generated text.
+     #
+     def completion(
+       model:,
+       prompt:,
+       suffix: nil,
+       temperature: nil,
+       max_tokens: nil,
+       top_p: nil,
+       random_seed: nil,
+       stop: nil
+     )
+       request = make_completion_request(
+         prompt:, model:, suffix:, temperature:, max_tokens:, top_p:, random_seed:, stop:
+       )
+       single_response = request('post', 'v1/fim/completions', json: request, stream: false)
+
+       single_response.each do |response|
+         return ChatCompletionResponse.new(**response)
+       end
+
+       raise Error, 'No response received'
+     end
+
+     # A completion endpoint that streams responses.
+     #
+     # @param model [String] the name of the model to get completions with, e.g. codestral-latest
+     # @param prompt [String] the prompt to complete
+     # @param suffix [String, nil] the suffix to append to the prompt for fill-in-the-middle completion
+     # @param temperature [Float, nil] the temperature to use for sampling, e.g. 0.5. Defaults to nil.
+     # @param max_tokens [Integer, nil] the maximum number of tokens to generate, e.g. 100. Defaults to nil.
+     # @param top_p [Float, nil] the cumulative probability of tokens to generate, e.g. 0.9. Defaults to nil.
+     # @param random_seed [Integer, nil] the random seed to use for sampling, e.g. 42. Defaults to nil.
+     # @param stop [Array<String>, nil] a list of tokens to stop generation at, e.g. ["\n\n"]
+     # @return [Enumerator<ChatCompletionStreamResponse>] a generator that yields response objects containing the
+     #   generated text
+     #
+     def completion_stream(
+       model:,
+       prompt:,
+       suffix: nil,
+       temperature: nil,
+       max_tokens: nil,
+       top_p: nil,
+       random_seed: nil,
+       stop: nil
+     )
+       request = make_completion_request(
+         prompt:, model:, suffix:, temperature:, max_tokens:, top_p:, random_seed:, stop:, stream: true
+       )
+       response = request('post', 'v1/fim/completions', json: request, stream: true)
+
+       response.lazy.map do |json_streamed_response|
+         ChatCompletionStreamResponse.new(**json_streamed_response)
+       end
+     end
+
      private

      def request(method, path, json: nil, stream: false, attempt: 1)
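A sketch of how the optional sampling parameters combine with the new endpoint (the values and stop sequence are illustrative, not taken from the gem's examples):

```ruby
# Fill-in-the-middle completion with explicit sampling parameters.
response = client.completion(
  model: 'codestral-latest',
  prompt: 'def add(a, b):',
  temperature: 0.2, # keep the generated code fairly deterministic
  max_tokens: 64,
  stop: ["\n\n"] # stop at the first blank line
)
puts response.choices[0].message.content
```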
@@ -5,11 +5,15 @@ module Mistral
    attr_reader :endpoint, :api_key, :max_retries, :timeout

    def initialize(endpoint:, api_key: nil, max_retries: 5, timeout: 120)
-     @endpoint = endpoint
-     @api_key = api_key
      @max_retries = max_retries
      @timeout = timeout

+     api_key = ENV['MISTRAL_API_KEY'] if api_key.nil?
+
+     raise Error, 'API key not provided. Please set MISTRAL_API_KEY environment variable.' if api_key.nil?
+
+     @api_key = api_key
+     @endpoint = endpoint
      @logger = config_logger

      # For azure endpoints, we default to the mistral model
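The practical effect of this reordering: a missing key now fails fast at construction time instead of on the first request. A sketch of the new behaviour, assuming `MISTRAL_API_KEY` is absent from the environment:

```ruby
require 'mistral'

# No api_key argument and no MISTRAL_API_KEY in ENV:
Mistral::Client.new
# => raises Mistral::Error: API key not provided. Please set MISTRAL_API_KEY environment variable.
```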
@@ -64,6 +68,57 @@ module Mistral
        parsed_messages
      end

+     def make_completion_request(
+       prompt:,
+       model: nil,
+       suffix: nil,
+       temperature: nil,
+       max_tokens: nil,
+       top_p: nil,
+       random_seed: nil,
+       stop: nil,
+       stream: false
+     )
+       request_data = {
+         'prompt' => prompt,
+         'suffix' => suffix,
+         'model' => model,
+         'stream' => stream
+       }
+
+       request_data['stop'] = stop unless stop.nil?
+
+       if model.nil?
+         raise Error.new(message: 'model must be provided') if @default_model.nil?
+
+         request_data['model'] = @default_model
+       else
+         request_data['model'] = model
+       end
+
+       request_data.merge!(
+         build_sampling_params(
+           temperature: temperature,
+           max_tokens: max_tokens,
+           top_p: top_p,
+           random_seed: random_seed
+         )
+       )
+
+       @logger.debug("Completion request: #{request_data}")
+
+       request_data
+     end
+
+     def build_sampling_params(max_tokens: nil, random_seed: nil, temperature: nil, top_p: nil)
+       params = {}
+       params['temperature'] = temperature unless temperature.nil?
+       params['max_tokens'] = max_tokens unless max_tokens.nil?
+       params['top_p'] = top_p unless top_p.nil?
+       params['random_seed'] = random_seed unless random_seed.nil?
+       params
+     end
+
      def make_chat_request(
        messages:,
        model: nil,
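For illustration, a call like `make_completion_request(prompt: 'def add(a, b):', model: 'codestral-latest', temperature: 0.2)` builds a payload along these lines before it is posted to `v1/fim/completions` (values hypothetical):

```ruby
{
  'prompt' => 'def add(a, b):',
  'suffix' => nil,
  'model' => 'codestral-latest',
  'stream' => false,
  'temperature' => 0.2
}
```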
@@ -115,7 +170,7 @@ module Mistral

      def config_logger
        Logger.new($stdout).tap do |logger|
-         logger.level = ENV.fetch('LOG_LEVEL', 'ERROR')
+         logger.level = ENV.fetch('MISTRAL_LOG_LEVEL', 'ERROR')

          logger.formatter = proc do |severity, datetime, progname, msg|
            "#{datetime.strftime("%Y-%m-%d %H:%M:%S")} #{severity} #{progname}: #{msg}\n"
@@ -43,6 +43,7 @@ module Mistral
      attribute :content, Types::Strict::Array.of(Types::Strict::String) | Types::Strict::String
      attribute? :name, Types::String.optional
      attribute? :tool_calls, Types::Strict::Array.of(ToolCall).optional
+     attribute? :tool_call_id, Types::String.optional
    end

    class DeltaMessage < Dry::Struct
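Because the new attribute is declared with `attribute?`, tool messages built without an id keep validating; a sketch (the name, content and id values are hypothetical):

```ruby
# tool_call_id is optional for now, but will become mandatory upstream.
Mistral::ChatMessage.new(role: 'tool', name: 'lookup', content: 'ok')
Mistral::ChatMessage.new(role: 'tool', name: 'lookup', content: 'ok', tool_call_id: 'abc123')
```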
@@ -1,5 +1,5 @@
  # frozen_string_literal: true

  module Mistral
-   VERSION = '0.1.0'
+   VERSION = '0.3.0'
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: mistral
  version: !ruby/object:Gem::Version
-   version: 0.1.0
+   version: 0.3.0
  platform: ruby
  authors:
  - Wilson Silva
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2024-05-04 00:00:00.000000000 Z
+ date: 2024-05-30 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: dry-struct
@@ -108,7 +108,8 @@ dependencies:
  - - "~>"
  - !ruby/object:Gem::Version
    version: '3.23'
- description: You can use the Mistral Ruby client to interact with the Mistral AI API.
+ description: A 1:1 Ruby port of the official Mistral Python client, with feature and
+   API parity.
  email:
  - wilson.dsigns@gmail.com
  executables: []
@@ -127,6 +128,8 @@ files:
  - examples/chat_no_streaming.rb
  - examples/chat_with_streaming.rb
  - examples/chatbot_with_streaming.rb
+ - examples/code_completion.rb
+ - examples/completion_with_streaming.rb
  - examples/embeddings.rb
  - examples/function_calling.rb
  - examples/json_format.rb