rspec-llama 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+   metadata.gz: 54cccc6da1341c23e2ffc84c43095c011e9c6c58255854886ba85ccddb274e67
+   data.tar.gz: 43e06cd643a0755eac0dfe3ce2048153cd57fa9c2f57029fe3285e650032fc2a
+ SHA512:
+   metadata.gz: 42b5b576d38336a09aecf115ba4e3fa3a59d17d7f4c5c107c627429f11be0e2079d3f5e9d8523195c8b45d3e351199b4cb0d1826f483c1a59859e0bc3e0b9d6a
+   data.tar.gz: 45ce831f3e2997d6cc4b3953fdd9ea70d3f01fc4bbcfe0d58b5b69d3c16413ae484dfed675caa2e050dfa26feb79448f0c189f925b1279ce608c4386ec90a12c
@@ -0,0 +1,74 @@
+ # Contributor Covenant Code of Conduct
+
+ ## Our Pledge
+
+ In the interest of fostering an open and welcoming environment, we as
+ contributors and maintainers pledge to making participation in our project and
+ our community a harassment-free experience for everyone, regardless of age, body
+ size, disability, ethnicity, gender identity and expression, level of experience,
+ nationality, personal appearance, race, religion, or sexual identity and
+ orientation.
+
+ ## Our Standards
+
+ Examples of behavior that contributes to creating a positive environment
+ include:
+
+ * Using welcoming and inclusive language
+ * Being respectful of differing viewpoints and experiences
+ * Gracefully accepting constructive criticism
+ * Focusing on what is best for the community
+ * Showing empathy towards other community members
+
+ Examples of unacceptable behavior by participants include:
+
+ * The use of sexualized language or imagery and unwelcome sexual attention or
+   advances
+ * Trolling, insulting/derogatory comments, and personal or political attacks
+ * Public or private harassment
+ * Publishing others' private information, such as a physical or electronic
+   address, without explicit permission
+ * Other conduct which could reasonably be considered inappropriate in a
+   professional setting
+
+ ## Our Responsibilities
+
+ Project maintainers are responsible for clarifying the standards of acceptable
+ behavior and are expected to take appropriate and fair corrective action in
+ response to any instances of unacceptable behavior.
+
+ Project maintainers have the right and responsibility to remove, edit, or
+ reject comments, commits, code, wiki edits, issues, and other contributions
+ that are not aligned to this Code of Conduct, or to ban temporarily or
+ permanently any contributor for other behaviors that they deem inappropriate,
+ threatening, offensive, or harmful.
+
+ ## Scope
+
+ This Code of Conduct applies both within project spaces and in public spaces
+ when an individual is representing the project or its community. Examples of
+ representing a project or community include using an official project e-mail
+ address, posting via an official social media account, or acting as an appointed
+ representative at an online or offline event. Representation of a project may be
+ further defined and clarified by project maintainers.
+
+ ## Enforcement
+
+ Instances of abusive, harassing, or otherwise unacceptable behavior may be
+ reported by contacting the project team at vadser1999@gmail.com. All
+ complaints will be reviewed and investigated and will result in a response that
+ is deemed necessary and appropriate to the circumstances. The project team is
+ obligated to maintain confidentiality with regard to the reporter of an incident.
+ Further details of specific enforcement policies may be posted separately.
+
+ Project maintainers who do not follow or enforce the Code of Conduct in good
+ faith may face temporary or permanent repercussions as determined by other
+ members of the project's leadership.
+
+ ## Attribution
+
+ This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
+ available at [https://contributor-covenant.org/version/1/4][version]
+
+ [homepage]: https://contributor-covenant.org
+ [version]: https://contributor-covenant.org/version/1/4/
data/LICENSE ADDED
@@ -0,0 +1,220 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [2024] [AI Foundry]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+
+ Dlab Subcomponents:
+
+ The Dlab project contains subcomponents with separate copyright
+ notices and license terms. Your use of the source code for these
+ subcomponents is subject to the terms and conditions of the following
+ licenses.
+
+ ========================================================================
+ Apache 2.0 licenses
+ ========================================================================
+
+ The following components are provided under the Apache 2.0 License. See project link for details.
+
+ (Apache 2.0 License) Material design icons (https://github.com/google/material-design-icons)
+ (Apache 2.0 License) Google Fonts Files, Open Sans (https://github.com/google/fonts/tree/master/apache/opensans)
data/README.md ADDED
@@ -0,0 +1,350 @@
+ # RSpec::Llama
+
+ [![Gem Version](https://badge.fury.io/rb/rspec-llama.svg)](https://badge.fury.io/rb/rspec-llama)
+ [![Build Status](https://github.com/aifoundry-org/rspec-llama/actions/workflows/rspec-llama.yml/badge.svg)](https://github.com/aifoundry-org/rspec-llama/actions)
+
+ ## Introduction
+
+ **RSpec::Llama** is a versatile testing framework designed to integrate AI model testing seamlessly into the RSpec ecosystem. Whether you're working with OpenAI's GPT models, Llama, or other AI models, RSpec::Llama simplifies the process of configuring, running, and validating your models' outputs.
+
+ ## Features
+
+ - **Model Configurations**: Easily set up and customize configurations for various AI models like OpenAI, Llama, and Ollama. Configure models with parameters such as temperature, token limits, and stop sequences.
+ - **Model Runners**: Seamlessly run AI models using predefined configurations, allowing you to execute prompts and capture their outputs in a simple and consistent way.
+ - **Comprehensive Assertions**: Validate model outputs against expected results using advanced matchers, such as `match`, `match_all`, `match_any`, and `match_none`, to ensure your models behave as expected in different scenarios.
+ - **RSpec Integration**: Fully integrated with RSpec, enabling AI model testing to fit naturally into your existing test suite.
+
+ ## Installation
+
+ Add this line to your application's Gemfile:
+
+ ```ruby
+ gem 'rspec-llama'
+ ```
+
+ And then execute:
+
+ ```
+ $ bundle install
+ ```
+
+ Or install it yourself as:
+
+ ```
+ $ gem install rspec-llama
+ ```
+
+ ## RSpec Helpers
+
+ RSpec::Llama provides a set of helpers to simplify the configuration and interaction with AI models during testing.
+ These helpers allow you to easily define model configurations, runners, and prompts, making it easier to integrate AI
+ models like OpenAI, Llama, and Ollama into your RSpec tests.
+
+ ### `build_model_configuration(configuration_type, **options)`
+
+ The `build_model_configuration` helper is used to create a model configuration object for various AI models,
+ allowing you to specify and customize the parameters that will be used during the model's execution.
+ This helper supports different types of model configurations, such as OpenAI's GPT models and Llama models.
+
+ Parameters:
+
+ `configuration_type`: Symbol representing the type of model configuration. Possible values are `:openai`, `:llama_cpp`, and `:ollama`.
+
+ `options`: Hash of configuration options specific to the selected model type.
+
+ #### OpenAI Model Configuration
+
+ Supported Options:
+
+ `model`: The model to use (e.g., 'gpt-3.5-turbo', 'gpt-4').
+
+ `temperature`: Sampling temperature between 0 and 2. Higher values make output more random.
+
+ `stop`: Up to 4 sequences where the API will stop generating further tokens.
+
+ `seed`: A seed for controlling the randomness in generation. This ensures consistent outputs between runs when the same seed and model configuration are used. Useful for debugging and testing.
+
+ ```ruby
+ config = build_model_configuration(:openai, model: 'gpt-4', temperature: 0.7)
+
+ # Example usage
+ runner = build_model_runner(:openai, access_token: ENV['OPENAI_ACCESS_TOKEN'])
+ prompt = 'What gems does the Rails gem depend on?'
+ result = runner.call(config, prompt)
+
+ expect(result).to match_all(
+   'activesupport', 'activerecord', 'actionpack', 'actionview',
+   'actionmailer', 'actioncable', 'railties'
+ )
+ ```
+
+ #### LlamaCpp Model Configuration
+
+ The LlamaCpp model configuration is used to set parameters for models running with the llama.cpp implementation.
+
+ ```ruby
+ config = build_model_configuration(:llama_cpp, model: '/path/to/model', temperature: 0.5, predict: 500)
+
+ # Example usage
+ runner = build_model_runner(:llama_cpp, cli_path: '/path/to/llama-cli')
+ prompt = 'What are the most popular Ruby frameworks?'
+ result = runner.call(config, prompt)
+
+ expect(result).to match_all('Ruby on Rails', 'Sinatra', 'Hanami')
+ ```
+
+ Supported Options:
+
+ `model`: The path to the model file.
+
+ `temperature`: Sampling temperature between 0 and 2.
+
+ `predict`: The number of tokens to predict.
+
+ `stop`: Regular expression to define where the model should stop generating text.
+
+ #### Ollama Model Configuration
+
+ The Ollama model configuration is similar to the OpenAI and LlamaCpp configurations but tailored to the Ollama models.
+
+ ```ruby
+ config = build_model_configuration(:ollama, model: 'ollama3.1')
+
+ # Example usage
+ runner = build_model_runner(:ollama)
+ prompt = 'Who created the Ruby programming language?'
+ result = runner.call(config, prompt)
+
+ expect(result).to match_all('Yukihiro', 'Matz', 'Matsumoto')
+ ```
+
+ Supported Options:
+
+ `model`: The model to use (e.g., 'ollama3.1').
+
+ ### `build_model_runner(runner_type, **options)`
+
+ The `build_model_runner` helper is used to create a model runner object that interacts with various AI models,
+ allowing you to execute prompts and retrieve results. This helper supports different types of model runners,
+ such as those for OpenAI's GPT models, Llama models, and others.
+
+ Parameters:
+
+ `runner_type`: Symbol representing the type of model runner. Possible values are `:openai`, `:llama_cpp`, and `:ollama`.
+
+ `options`: Hash of options specific to the selected runner type, such as API credentials or executable paths.
+
+ #### OpenAI Model Runner
+
+ The OpenAI model runner interacts with OpenAI's API to execute prompts and retrieve responses from models like GPT-3.5 or GPT-4.
+
+ ```ruby
+ runner = build_model_runner(:openai, access_token: ENV['OPENAI_ACCESS_TOKEN'], organization_id: ENV['OPENAI_ORGANIZATION_ID'])
+
+ # Example usage
+ config = build_model_configuration(:openai, model: 'gpt-4', temperature: 0.7)
+ prompt = 'What is the capital of France?'
+ result = runner.call(config, prompt)
+ puts result.to_s
+ ```
+
+ Supported Options:
+
+ `access_token`: The API key for accessing OpenAI's API.
+
+ `organization_id`: (Optional) The ID of your OpenAI organization.
+
+ `project_id`: (Optional) The ID of your OpenAI project.
+
+ #### LlamaCpp Model Runner
+
+ Coming soon.
+
+ #### Ollama Model Runner
+
+ Coming soon.
+
+ ## RSpec Matchers
+
+ This gem provides RSpec matchers for comparing model outputs, focusing on language models like those
+ from OpenAI and Llama. These matchers help assert the presence or absence of specific strings or patterns
+ in the output, and they are designed to work seamlessly with RSpec's syntax.
+
+ ### `match(expected)`
+
+ The `match` matcher is used to check if a string or pattern is present in the actual output.
+
+ ```ruby
+ expect(result).to match('Ruby on Rails')
+ expect(result).to match(/Rails/)
+ ```
+
+ **Passes if**: The output contains the exact string or matches the given regular expression.
+
+ **Fails if**: The string or pattern is not present in the output.
+
+ ### `match_all(*expected)`
+
+ The `match_all` matcher checks if all the provided strings or patterns are present in the actual output.
+
+ ```ruby
+ expect(result).to match_all('Ruby on Rails', 'Sinatra', 'Hanami')
+ expect(result).to match_all(/Rails/, /Sinatra/)
+ ```
+
+ **Passes if**: All strings or patterns are found in the output.
+
+ **Fails if**: Any one of the strings or patterns is missing from the output.
+
+ ### `match_any(*expected)`
+
+ The `match_any` matcher checks if any of the provided strings or patterns are present in the actual output.
+
+ ```ruby
+ expect(result).to match_any('RoR', 'Ruby on Rails')
+ expect(result).to match_any(/Rails/, /Sinatra/)
+ ```
+
+ **Passes if**: At least one string or pattern is found in the output.
+
+ **Fails if**: None of the strings or patterns are found in the output.
+
+ ### `match_none(*expected)`
+
+ The `match_none` matcher ensures that none of the provided strings or patterns are present in the actual output.
+
+ ```ruby
+ expect(result).to match_none('Django', 'Flask', 'Symfony')
+ expect(result).to match_none(/Django/, /Flask/)
+ ```
+
+ **Passes if**: None of the strings or patterns are found in the output.
+
+ **Fails if**: Any one of the strings or patterns is found in the output.
+
+ ## Full Example
+
+ Here’s a full example that demonstrates how to use helpers and matchers to test various models with different configurations.
+
+ ```ruby
+ require 'rspec/llama'
+
+ RSpec.shared_examples 'application frameworks' do
+   describe 'popular Ruby frameworks' do
+     let(:prompt) { 'What are the most popular Ruby frameworks?' }
+
+     it 'matches Ruby frameworks', :aggregate_failures do
+       result = run_model!
+
+       expect(result).to match_all('Ruby on Rails', 'Sinatra', 'Hanami')
+       expect(result).to match_none('Django', 'Flask', 'Symfony', 'Laravel', 'Yii')
+     end
+   end
+
+   describe 'dependencies for the Rails gem' do
+     let(:prompt) { 'What gems does the Rails gem depend on?' }
+
+     it 'matches Rails dependencies' do
+       result = run_model!
+
+       expect(result).to match_all(
+         'activesupport', 'activerecord', 'actionpack', 'actionview',
+         'actionmailer', 'actioncable', 'railties'
+       )
+     end
+   end
+
+   describe 'popular Python frameworks' do
+     let(:prompt) { 'What are the most popular Python frameworks?' }
+
+     it 'matches Python frameworks', :aggregate_failures do
+       result = run_model!
+
+       expect(result).to match_all('Django', 'Flask')
+       expect(result).to match_none('Ruby on Rails', 'Sinatra', 'Hanami', 'Symfony', 'Laravel', 'Yii')
+     end
+   end
+ end
+
+ RSpec.describe 'Popular Application Frameworks' do
+   subject(:run_model!) { runner.call(config, prompt) }
+
+   context 'with OpenAI model runner' do
+     let(:runner) { build_model_runner(:openai, access_token: ENV.fetch('OPENAI_ACCESS_TOKEN')) }
+     let(:config) { build_model_configuration(:openai, model:, temperature:, seed: RSpec.configuration.seed) }
+     let(:temperature) { 0.5 }
+
+     context 'with gpt-4o-mini model' do
+       let(:model) { 'gpt-4o-mini' }
+
+       include_examples 'application frameworks'
+     end
+
+     context 'with gpt-4o model' do
+       let(:model) { 'gpt-4o' }
+
+       include_examples 'application frameworks'
+     end
+
+     context 'with gpt-4-turbo model' do
+       let(:model) { 'gpt-4-turbo' }
+
+       include_examples 'application frameworks'
+
+       context 'with different temperature' do
+         let(:temperature) { 0.1 }
+
+         include_examples 'application frameworks'
+       end
+     end
+   end
+ end
+ ```
+
+ ## Development
+
+ To contribute to the development of this gem, follow the steps below:
+
+ ### Setting Up the Development Environment
+
+ 1. Clone the repository:
+
+ ```bash
+ git clone https://github.com/aifoundry-org/rspec-llama.git
+ cd rspec-llama
+ ```
+
+ 2. Install dependencies:
+
+ Make sure you have Bundler installed. Then run:
+
+ ```bash
+ bundle install
+ ```
+
+ 3. Run the tests:
+
+ This gem uses RSpec for testing. To run the tests:
+
+ ```bash
+ bundle exec rspec
+ ```
+
+ 4. Run Rubocop for code linting:
+
+ Ensure your code follows the community Ruby style guide by running Rubocop:
+
+ ```bash
+ bundle exec rubocop
+ ```
+
+ ## Contributing
+
+ Bug reports and pull requests are welcome on GitHub at https://github.com/aifoundry-org/rspec-llama. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [code of conduct](https://github.com/aifoundry-org/rspec-llama/blob/master/CODE_OF_CONDUCT.md).
+
+ ## License
+
+ This project is licensed under the Apache License 2.0. See the [LICENSE](LICENSE) file for details.
+
+ ## Code of Conduct
+
+ Everyone interacting in the RSpec::Llama project's codebases, issue trackers, chat rooms, and mailing lists is expected to follow the [code of conduct](https://github.com/aifoundry-org/rspec-llama/blob/master/CODE_OF_CONDUCT.md).
@@ -0,0 +1,39 @@
+ # frozen_string_literal: true
+
+ module RSpec
+   module Llama
+     module Helpers
+       RUNNER_TYPES = {
+         openai: 'OpenaiModelRunner',
+         ollama: 'OllamaModelRunner',
+         llama_cpp: 'LlamaCppModelRunner'
+       }.freeze
+
+       CONFIGURATION_TYPES = {
+         openai: 'OpenaiModelConfiguration',
+         ollama: 'OllamaModelConfiguration',
+         llama_cpp: 'LlamaCppModelConfiguration'
+       }.freeze
+
+       ASSERTION_TYPES = {
+         exclude_all: 'ExcludeAllModelAssertion',
+         include_all: 'IncludeAllModelAssertion',
+         include_any: 'IncludeAnyModelAssertion'
+       }.freeze
+
+       def build_model_configuration(configuration_type, **)
+         configuration_class = CONFIGURATION_TYPES[configuration_type.to_sym]
+         raise "Unsupported model configuration type: #{configuration_type}" unless configuration_class
+
+         RSpec::Llama.const_get(configuration_class).new(**)
+       end
+
+       def build_model_runner(runner_type, **)
+         runner_class = RUNNER_TYPES[runner_type.to_sym]
+         raise "Unsupported model runner type: #{runner_type}" unless runner_class
+
+         RSpec::Llama.const_get(runner_class).new(**)
+       end
+     end
+   end
+ end
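The helpers above dispatch through a frozen lookup table: a symbol maps to a class name, `const_get` resolves it, and the keyword arguments are forwarded to the constructor. A minimal standalone sketch of that pattern, outside the gem (the `Demo` module and `FooRunner` class are hypothetical, introduced only for illustration):

```ruby
# Sketch of registry-based dispatch: map a symbol to a class name,
# resolve it with const_get, and forward keyword args to .new.
# Demo and FooRunner are made-up names for illustration only.
module Demo
  class FooRunner
    attr_reader :opts

    def initialize(**opts)
      @opts = opts
    end
  end

  TYPES = { foo: 'FooRunner' }.freeze

  def self.build(type, **opts)
    klass = TYPES[type.to_sym]
    raise "Unsupported type: #{type}" unless klass

    const_get(klass).new(**opts)
  end
end

runner = Demo.build(:foo, cli_path: '/tmp/llama-cli')
puts runner.opts.inspect
```

Keeping the registry as a frozen constant means adding a new runner is a one-line change, and unknown symbols fail fast with a descriptive error instead of a bare `NameError` from `const_get`.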
@@ -0,0 +1,34 @@
+ # frozen_string_literal: true
+
+ module RSpec
+   module Llama
+     module Matchers
+       class Match < RSpec::Matchers::BuiltIn::Match
+         def matches?(actual)
+           return super unless actual.is_a?(RSpec::Llama::BaseModelRunnerResult)
+
+           @actual = actual
+           @actual.to_s.match?(@expected)
+         end
+
+         def failure_message
+           return super unless @actual.is_a?(RSpec::Llama::BaseModelRunnerResult)
+
+           "expected #{@actual.to_s.inspect} to match #{@expected.inspect}"
+         end
+
+         def failure_message_when_negated
+           return super unless @actual.is_a?(RSpec::Llama::BaseModelRunnerResult)
+
+           "expected #{@actual.to_s.inspect} not to match #{@expected.inspect}"
+         end
+
+         def diffable?
+           return super unless @actual.is_a?(RSpec::Llama::BaseModelRunnerResult)
+
+           false
+         end
+       end
+     end
+   end
+ end
@@ -0,0 +1,33 @@
+ # frozen_string_literal: true
+
+ module RSpec
+   module Llama
+     module Matchers
+       class MatchAll
+         def initialize(*expected)
+           @expected = expected
+         end
+
+         def description
+           "match all of #{@expected.inspect}"
+         end
+
+         def matches?(actual)
+           @actual = actual
+           @unmatched_values = @expected.reject { |exp| @actual.to_s.match?(exp) }
+
+           @unmatched_values.empty?
+         end
+
+         def failure_message
+           "expected #{@actual.to_s.inspect} to match all of #{@expected.inspect}, " \
+             "but it did not match: #{@unmatched_values.inspect}"
+         end
+
+         def failure_message_when_negated
+           "expected #{@actual.to_s.inspect} not to match all of #{@expected.inspect}, but it matched all."
+         end
+       end
+     end
+   end
+ end
@@ -0,0 +1,32 @@
+ # frozen_string_literal: true
+
+ module RSpec
+   module Llama
+     module Matchers
+       class MatchAny
+         def initialize(*expected)
+           @expected = expected
+         end
+
+         def description
+           "match any of #{@expected.inspect}"
+         end
+
+         def matches?(actual)
+           @actual = actual
+           @unmatched_values = @expected.reject { |exp| @actual.to_s.match?(exp) }
+
+           @unmatched_values.size < @expected.size
+         end
+
+         def failure_message
+           "expected #{@actual.to_s.inspect} to match any of #{@expected.inspect}, but none of them matched."
+         end
+
+         def failure_message_when_negated
+           "expected #{@actual.to_s.inspect} not to match any of #{@expected.inspect}, but it matched some."
+         end
+       end
+     end
+   end
+ end
@@ -0,0 +1,33 @@
+ # frozen_string_literal: true
+
+ module RSpec
+   module Llama
+     module Matchers
+       class MatchNone
+         def initialize(*expected)
+           @expected = expected
+         end
+
+         def description
+           "match none of #{@expected.inspect}"
+         end
+
+         def matches?(actual)
+           @actual = actual
+           @matched_values = @expected.select { |exp| @actual.to_s.match?(exp) }
+
+           @matched_values.empty?
+         end
+
+         def failure_message
+           "expected #{@actual.to_s.inspect} to match none of #{@expected.inspect}, " \
+             "but it matched: #{@matched_values.inspect}"
+         end
+
+         def failure_message_when_negated
+           "expected #{@actual.to_s.inspect} to match at least one of #{@expected.inspect}, but none matched."
+         end
+       end
+     end
+   end
+ end
@@ -0,0 +1,30 @@
+ # frozen_string_literal: true
+
+ module RSpec
+   module Llama
+     module Matchers
+       {
+         Match: 'rspec/llama/matchers/match',
+         MatchAll: 'rspec/llama/matchers/match_all',
+         MatchAny: 'rspec/llama/matchers/match_any',
+         MatchNone: 'rspec/llama/matchers/match_none'
+       }.each { |class_name, path| autoload class_name, path }
+
+       def match(expected)
+         Match.new(expected)
+       end
+
+       def match_all(*expected)
+         MatchAll.new(*expected)
+       end
+
+       def match_any(*expected)
+         MatchAny.new(*expected)
+       end
+
+       def match_none(*expected)
+         MatchNone.new(*expected)
+       end
+     end
+   end
+ end
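Because `MatchAll` and `MatchAny` are plain Ruby objects rather than RSpec internals, their semantics can be checked directly outside a spec run. A sketch with the class bodies trimmed to the `matches?` logic defined in this diff:

```ruby
# MatchAll passes only when every pattern matches the stringified actual;
# MatchAny passes when at least one pattern matches.
class MatchAll
  def initialize(*expected)
    @expected = expected
  end

  def matches?(actual)
    @actual = actual
    # Collect patterns that did NOT match; success means the list is empty.
    @unmatched_values = @expected.reject { |exp| @actual.to_s.match?(exp) }
    @unmatched_values.empty?
  end
end

class MatchAny
  def initialize(*expected)
    @expected = expected
  end

  def matches?(actual)
    @actual = actual
    # Success means at least one pattern matched, i.e. fewer misses than patterns.
    @unmatched_values = @expected.reject { |exp| @actual.to_s.match?(exp) }
    @unmatched_values.size < @expected.size
  end
end

answer = 'Paris is the capital of France.'
```

In a spec, the same checks would read `expect(result).to match_all(/Paris/, /France/)` via the `Matchers` helper methods above.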
@@ -0,0 +1,72 @@
+ # frozen_string_literal: true
+
+ module RSpec
+   module Llama
+     class LlamaCppModelConfiguration
+       DEFAULT_TEMPERATURE = 0.5
+       DEFAULT_PREDICT = 500
+       DEFAULT_STOP = /\A<(?:end|user|assistant|endoftext|system)>\z/
+
+       attr_reader :model, :temperature, :predict, :stop, :additional_options
+
+       # Initializes a new configuration for the llama.cpp model.
+       #
+       # @param [String] model The path to the model file that will be used.
+       # @param [Float] temperature The temperature for sampling, between 0 and 2. Higher values
+       #   make the output more random, while lower values make it more focused. Defaults to 0.5.
+       # @param [Integer] predict The number of tokens to predict. Defaults to 500.
+       # @param [Regexp] stop The stop token that signals the end of generation. Defaults to a regular expression
+       #   that matches common end-of-text tokens.
+       # @param [Hash] additional_options Additional configuration options, where keys are the option names and values
+       #   are either true (for flags) or values for key-value pairs. These options are passed directly to the CLI.
+       #
+       # @example Basic usage with a specific model path and default parameters
+       #   config = RSpec::Llama::LlamaCppModelConfiguration.new(
+       #     model: '/path/to/model'
+       #   )
+       #
+       # @example Custom parameters and additional CLI options
+       #   config = RSpec::Llama::LlamaCppModelConfiguration.new(
+       #     model: '/path/to/model',
+       #     temperature: 0.7,
+       #     predict: 300,
+       #     threads: 8,
+       #     verbose: true,
+       #     log_file: '/path/to/logfile'
+       #   )
+       def initialize(
+         model:,
+         temperature: DEFAULT_TEMPERATURE,
+         predict: DEFAULT_PREDICT,
+         stop: DEFAULT_STOP,
+         **additional_options
+       )
+         @model = model
+         @temperature = temperature
+         @predict = predict
+         @stop = stop
+         @additional_options = additional_options
+       end
+
+       # Converts the configuration into an array of CLI options.
+       #
+       # @return [Array<String>] An array of strings representing CLI options, where each key-value
+       #   pair or flag is converted to the appropriate format for passing to the llama.cpp executable.
+       #
+       # @example CLI options for the custom configuration above
+       #   config.to_a
+       #   # => ['--model', '/path/to/model', '--temp', '0.7', '--predict', '300',
+       #   #     '--threads', '8', '--verbose', '--log-file', '/path/to/logfile']
+       def to_a
+         cli_options = ['--model', model, '--temp', temperature.to_s, '--predict', predict.to_s]
+
+         # Add additional options in key-value pair format
+         additional_options.each do |option, value|
+           cli_options << "--#{option}".tr('_', '-')
+           cli_options << value.to_s unless value == true
+         end
+
+         cli_options
+       end
+     end
+   end
+ end
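To illustrate how `to_a` flattens `additional_options` (underscores in keys become dashes, `true` values become bare flags, everything else becomes a key-value pair), here is the option-building logic extracted into a standalone method; `llama_cli_options` is an illustrative name, not the gem's API:

```ruby
# Mirrors LlamaCppModelConfiguration#to_a: the fixed options come first, then
# each additional option is emitted as "--key" (with underscores turned into
# dashes), followed by its stringified value unless the value is literally
# `true`, in which case it is a bare flag.
def llama_cli_options(model:, temperature: 0.5, predict: 500, **additional_options)
  cli_options = ['--model', model, '--temp', temperature.to_s, '--predict', predict.to_s]

  additional_options.each do |option, value|
    cli_options << "--#{option}".tr('_', '-')
    cli_options << value.to_s unless value == true
  end

  cli_options
end

# Values mirror the "Custom parameters" @example above.
options = llama_cli_options(model: '/path/to/model', temperature: 0.7, predict: 300,
                            threads: 8, verbose: true, log_file: '/path/to/logfile')
```

Note that the `stop` pattern is stored on the configuration but not emitted by `to_a`.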
@@ -0,0 +1,15 @@
+ # frozen_string_literal: true
+
+ module RSpec
+   module Llama
+     class OllamaModelConfiguration
+       DEFAULT_MODEL = 'llama3.1'
+
+       attr_reader :model
+
+       def initialize(model: DEFAULT_MODEL)
+         @model = model
+       end
+     end
+   end
+ end
@@ -0,0 +1,48 @@
+ # frozen_string_literal: true
+
+ module RSpec
+   module Llama
+     class OpenaiModelConfiguration
+       DEFAULT_MODEL = 'gpt-3.5-turbo'
+       DEFAULT_TEMPERATURE = 0.5
+
+       attr_reader :model, :temperature, :additional_options
+
+       # Initializes a new configuration for the OpenAI model.
+       #
+       # @param [String] model The model to use. Defaults to 'gpt-3.5-turbo'.
+       # @param [Float] temperature The sampling temperature for the model, between 0 and 2.0.
+       #   Higher values make output more random, while lower values make it more focused. Defaults to 0.5.
+       # @param [Hash] additional_options Additional configuration options to pass to the API, such as
+       #   max_tokens, top_p, stop, and others. This allows flexibility for adding any supported parameters
+       #   without modifying the class. The keys should be symbols.
+       #
+       # @example Basic usage with default model and temperature
+       #   config = RSpec::Llama::OpenaiModelConfiguration.new
+       #
+       # @example Passing custom options
+       #   config = RSpec::Llama::OpenaiModelConfiguration.new(
+       #     model: 'gpt-4',
+       #     temperature: 0.8,
+       #     max_tokens: 150,
+       #     stop: ["\n"]
+       #   )
+       #
+       # @see https://platform.openai.com/docs/api-reference/chat/create
+       def initialize(model: DEFAULT_MODEL, temperature: DEFAULT_TEMPERATURE, **additional_options)
+         @model = model
+         @temperature = temperature
+         @additional_options = additional_options
+       end
+
+       # Converts the configuration into a hash format for making API requests.
+       #
+       # @return [Hash] A hash representation of the model configuration.
+       #   This hash includes the model, temperature, and any additional options provided during initialization.
+       #   Nil values are omitted from the hash.
+       def to_h
+         { model:, temperature:, **additional_options }.compact
+       end
+     end
+   end
+ end
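`to_h` merges the fixed parameters with `additional_options` and drops `nil` values via `Hash#compact`, so optional parameters that were never set don't leak into the API request body. A standalone sketch of the same construction (`openai_request_params` is an illustrative name, not the gem's API):

```ruby
# Mirrors OpenaiModelConfiguration#to_h: merge fixed parameters with any
# additional options, then drop nil-valued entries with Hash#compact.
def openai_request_params(model: 'gpt-3.5-turbo', temperature: 0.5, **additional_options)
  { model: model, temperature: temperature, **additional_options }.compact
end

params = openai_request_params(model: 'gpt-4', temperature: 0.8, max_tokens: 150, stop: ["\n"])
```

The gem itself uses Ruby 3.1+ shorthand hash syntax (`{ model:, temperature:, ... }`); the sketch spells the values out.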
@@ -0,0 +1,17 @@
+ # frozen_string_literal: true
+
+ module RSpec
+   module Llama
+     class BaseModelRunnerResult
+       attr_reader :response
+
+       def initialize(response)
+         @response = response
+       end
+
+       def to_s
+         raise NotImplementedError
+       end
+     end
+   end
+ end
@@ -0,0 +1,11 @@
+ # frozen_string_literal: true
+
+ module RSpec
+   module Llama
+     class LlamaCppModelRunnerResult < BaseModelRunnerResult
+       def to_s
+         response.strip
+       end
+     end
+   end
+ end
@@ -0,0 +1,11 @@
+ # frozen_string_literal: true
+
+ module RSpec
+   module Llama
+     class OllamaModelRunnerResult < BaseModelRunnerResult
+       def to_s
+         response
+       end
+     end
+   end
+ end
@@ -0,0 +1,13 @@
+ # frozen_string_literal: true
+
+ module RSpec
+   module Llama
+     class OpenaiModelRunnerResult < BaseModelRunnerResult
+       MESSAGE_PATH = [:choices, 0, :message, :content].freeze
+
+       def to_s
+         response.dig(*MESSAGE_PATH)
+       end
+     end
+   end
+ end
@@ -0,0 +1,26 @@
+ # frozen_string_literal: true
+
+ module RSpec
+   module Llama
+     class LlamaCppModelRunner
+       DEFAULT_CLI_PATH = 'llama-cli'
+
+       attr_reader :cli_path
+
+       def initialize(cli_path: DEFAULT_CLI_PATH)
+         @cli_path = cli_path
+       end
+
+       # @param [RSpec::Llama::LlamaCppModelConfiguration] configuration
+       # @param [String] prompt
+       # @return [RSpec::Llama::LlamaCppModelRunnerResult]
+       def call(configuration, prompt)
+         command = [cli_path, '--prompt', prompt.to_s] + configuration.to_a
+
+         IO.popen(command, 'r+') do |io|
+           LlamaCppModelRunnerResult.new(io.read)
+         end
+       end
+     end
+   end
+ end
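The command handed to `IO.popen` is just the CLI path, the prompt, and the configuration's option array concatenated in order. A standalone sketch of that assembly step, without spawning a process (`StubConfiguration` and `build_command` are illustrative stand-ins, not part of the gem):

```ruby
# Any object responding to #to_a with an array of CLI options can stand in for
# LlamaCppModelConfiguration here.
StubConfiguration = Struct.new(:options) do
  def to_a
    options
  end
end

# Mirrors the command construction inside LlamaCppModelRunner#call.
def build_command(cli_path, configuration, prompt)
  [cli_path, '--prompt', prompt.to_s] + configuration.to_a
end

command = build_command('llama-cli', StubConfiguration.new(['--model', '/path/to/model']), 'Hello')
```

Passing the command to `IO.popen` as an array (rather than a single string) avoids shell interpolation of the prompt.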
@@ -0,0 +1,8 @@
+ # frozen_string_literal: true
+
+ module RSpec
+   module Llama
+     class OllamaModelRunner
+     end
+   end
+ end
@@ -0,0 +1,57 @@
+ # frozen_string_literal: true
+
+ require 'net/http'
+ require 'json'
+
+ module RSpec
+   module Llama
+     class OpenaiModelRunner
+       Error = Class.new(StandardError)
+
+       DEFAULT_BASE_URL = 'https://api.openai.com/v1'
+       CHAT_COMPLETIONS_PATH = '/chat/completions'
+
+       # @param [String] access_token
+       # @param [String] base_url
+       # @param [String] organization_id
+       # @param [String] project_id
+       def initialize(access_token:, base_url: DEFAULT_BASE_URL, organization_id: nil, project_id: nil)
+         @access_token = access_token
+         @base_url = base_url
+         @organization_id = organization_id
+         @project_id = project_id
+       end
+
+       # @param [RSpec::Llama::OpenaiModelConfiguration] configuration
+       # @param [String] prompt
+       #
+       # @return [RSpec::Llama::OpenaiModelRunnerResult]
+       def call(configuration, prompt)
+         messages = [{ role: 'user', content: prompt.to_s }]
+         response = execute_request(CHAT_COMPLETIONS_PATH, { **configuration.to_h, messages: })
+
+         RSpec::Llama::OpenaiModelRunnerResult.new(response)
+       end
+
+       private
+
+       attr_reader :access_token, :base_url, :organization_id, :project_id
+
+       def request_headers
+         {
+           'Content-Type' => 'application/json',
+           'Authorization' => "Bearer #{access_token}",
+           'OpenAI-Organization' => organization_id,
+           'OpenAI-Project' => project_id
+         }.compact
+       end
+
+       def execute_request(path, params)
+         response = Net::HTTP.post(URI("#{base_url}#{path}"), JSON.dump(params), request_headers)
+         json_response = JSON.parse(response.body, symbolize_names: true)
+         return json_response if response.is_a?(Net::HTTPSuccess)
+
+         raise Error, json_response.dig(:error, :message)
+       end
+     end
+   end
+ end
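`request_headers` relies on `Hash#compact` to include the `OpenAI-Organization` and `OpenAI-Project` headers only when the corresponding ids were supplied. A standalone sketch of that behavior (`openai_headers` is an illustrative helper, not the gem's API; the token value below is a dummy):

```ruby
# Mirrors OpenaiModelRunner#request_headers: the two optional headers carry nil
# when unset, and Hash#compact removes nil-valued entries before the request.
def openai_headers(access_token, organization_id: nil, project_id: nil)
  {
    'Content-Type' => 'application/json',
    'Authorization' => "Bearer #{access_token}",
    'OpenAI-Organization' => organization_id,
    'OpenAI-Project' => project_id
  }.compact
end

headers = openai_headers('sk-test', organization_id: 'org-123')
```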
@@ -0,0 +1,7 @@
+ # frozen_string_literal: true
+
+ module RSpec
+   module Llama
+     VERSION = '0.1.0'
+   end
+ end
@@ -0,0 +1,34 @@
+ # frozen_string_literal: true
+
+ require 'rspec'
+
+ require_relative 'llama/version'
+ require_relative 'llama/helpers'
+ require_relative 'llama/matchers'
+
+ module RSpec
+   module Llama
+     {
+       # Model configurations
+       OpenaiModelConfiguration: 'rspec/llama/model_configurations/openai_model_configuration',
+       LlamaCppModelConfiguration: 'rspec/llama/model_configurations/llama_cpp_model_configuration',
+       OllamaModelConfiguration: 'rspec/llama/model_configurations/ollama_model_configuration',
+
+       # Model runners
+       OpenaiModelRunner: 'rspec/llama/model_runners/openai_model_runner',
+       LlamaCppModelRunner: 'rspec/llama/model_runners/llama_cpp_model_runner',
+       OllamaModelRunner: 'rspec/llama/model_runners/ollama_model_runner',
+
+       # Model runner results
+       BaseModelRunnerResult: 'rspec/llama/model_runner_results/base_model_runner_result',
+       OpenaiModelRunnerResult: 'rspec/llama/model_runner_results/openai_model_runner_result',
+       LlamaCppModelRunnerResult: 'rspec/llama/model_runner_results/llama_cpp_model_runner_result',
+       OllamaModelRunnerResult: 'rspec/llama/model_runner_results/ollama_model_runner_result'
+     }.each { |class_name, path| autoload class_name, path }
+   end
+ end
+
+ RSpec.configure do |config|
+   config.include RSpec::Llama::Helpers
+   config.include RSpec::Llama::Matchers
+ end
@@ -0,0 +1,3 @@
+ # frozen_string_literal: true
+
+ require_relative 'rspec/llama'
metadata ADDED
@@ -0,0 +1,90 @@
+ --- !ruby/object:Gem::Specification
+ name: rspec-llama
+ version: !ruby/object:Gem::Version
+   version: 0.1.0
+ platform: ruby
+ authors:
+ - Vadim S.
+ - Artur A.
+ - Anatoli L.
+ - Sergy S.
+ bindir: exe
+ cert_chain: []
+ date: 2024-10-02 00:00:00.000000000 Z
+ dependencies:
+ - !ruby/object:Gem::Dependency
+   name: rspec
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '3.0'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '3.0'
+ description: |
+   RSpec::Llama is a testing framework that allows developers to easily configure, run, and validate
+   AI models such as OpenAI's GPT models, Llama, and others within the RSpec ecosystem.
+
+   With a focus on simplicity and extensibility, RSpec::Llama provides:
+   - A standardized approach to configuring different AI models with customizable parameters.
+   - Runners to execute model interactions and capture responses seamlessly.
+   - Comprehensive assertion types to validate model outputs against expected patterns.
+
+   Whether you are developing AI-powered applications or simply need a reliable way to test various AI
+   models' outputs, RSpec::Llama offers an all-in-one solution that integrates smoothly into your existing RSpec setup.
+ email:
+ - sergy@cybergizer.com
+ executables: []
+ extensions: []
+ extra_rdoc_files: []
+ files:
+ - CODE_OF_CONDUCT.md
+ - LICENSE
+ - README.md
+ - lib/rspec-llama.rb
+ - lib/rspec/llama.rb
+ - lib/rspec/llama/helpers.rb
+ - lib/rspec/llama/matchers.rb
+ - lib/rspec/llama/matchers/match.rb
+ - lib/rspec/llama/matchers/match_all.rb
+ - lib/rspec/llama/matchers/match_any.rb
+ - lib/rspec/llama/matchers/match_none.rb
+ - lib/rspec/llama/model_configurations/llama_cpp_model_configuration.rb
+ - lib/rspec/llama/model_configurations/ollama_model_configuration.rb
+ - lib/rspec/llama/model_configurations/openai_model_configuration.rb
+ - lib/rspec/llama/model_runner_results/base_model_runner_result.rb
+ - lib/rspec/llama/model_runner_results/llama_cpp_model_runner_result.rb
+ - lib/rspec/llama/model_runner_results/ollama_model_runner_result.rb
+ - lib/rspec/llama/model_runner_results/openai_model_runner_result.rb
+ - lib/rspec/llama/model_runners/llama_cpp_model_runner.rb
+ - lib/rspec/llama/model_runners/ollama_model_runner.rb
+ - lib/rspec/llama/model_runners/openai_model_runner.rb
+ - lib/rspec/llama/version.rb
+ homepage: https://github.com/aifoundry-org/rspec-llama
+ licenses:
+ - Apache-2.0
+ metadata:
+   rubygems_mfa_required: 'true'
+ rdoc_options: []
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '3.2'
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ requirements: []
+ rubygems_version: 3.6.0.dev
+ specification_version: 4
+ summary: A versatile testing framework for different AI model configurations.
+ test_files: []