llama_cpp 0.0.1

checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+   metadata.gz: a6bf717ec1012d78b7d83f3f7a7546f589fbf368c1b2babc69a99fd28a5d9ff3
+   data.tar.gz: 6ab2e2ae4b6410f32890a86b7ac2dbb93ab9e2f43888158b7cbfd9b16f435447
+ SHA512:
+   metadata.gz: cd1ae63e518a422dbe3a281a598b18b9397fdf880867f92bad20e56b5a60756a1a929a62879f7aed0c7c24012b87b85353e175c773aeed4f8d87294ba0422cb1
+   data.tar.gz: 2828321d0589ac16713745b2770844d5c6fed848ff0efed90304370152650a8e0619657a91184f74c402eb9351800ac3517c20f775faf52db91331d95ac1c87d
data/CHANGELOG.md ADDED
@@ -0,0 +1,5 @@
+ ## [Unreleased]
+
+ ## [0.0.1] - 2023-04-02
+
+ - Initial release
data/CODE_OF_CONDUCT.md ADDED
@@ -0,0 +1,84 @@
+ # Contributor Covenant Code of Conduct
+
+ ## Our Pledge
+
+ We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.
+
+ We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.
+
+ ## Our Standards
+
+ Examples of behavior that contributes to a positive environment for our community include:
+
+ * Demonstrating empathy and kindness toward other people
+ * Being respectful of differing opinions, viewpoints, and experiences
+ * Giving and gracefully accepting constructive feedback
+ * Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
+ * Focusing on what is best not just for us as individuals, but for the overall community
+
+ Examples of unacceptable behavior include:
+
+ * The use of sexualized language or imagery, and sexual attention or
+   advances of any kind
+ * Trolling, insulting or derogatory comments, and personal or political attacks
+ * Public or private harassment
+ * Publishing others' private information, such as a physical or email
+   address, without their explicit permission
+ * Other conduct which could reasonably be considered inappropriate in a
+   professional setting
+
+ ## Enforcement Responsibilities
+
+ Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.
+
+ Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.
+
+ ## Scope
+
+ This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
+
+ ## Enforcement
+
+ Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at yoshoku@outlook.com. All complaints will be reviewed and investigated promptly and fairly.
+
+ All community leaders are obligated to respect the privacy and security of the reporter of any incident.
+
+ ## Enforcement Guidelines
+
+ Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:
+
+ ### 1. Correction
+
+ **Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.
+
+ **Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.
+
+ ### 2. Warning
+
+ **Community Impact**: A violation through a single incident or series of actions.
+
+ **Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.
+
+ ### 3. Temporary Ban
+
+ **Community Impact**: A serious violation of community standards, including sustained inappropriate behavior.
+
+ **Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.
+
+ ### 4. Permanent Ban
+
+ **Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.
+
+ **Consequence**: A permanent ban from any sort of public interaction within the community.
+
+ ## Attribution
+
+ This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.0,
+ available at https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
+
+ Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/diversity).
+
+ [homepage]: https://www.contributor-covenant.org
+
+ For answers to common questions about this code of conduct, see the FAQ at
+ https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.
data/LICENSE.txt ADDED
@@ -0,0 +1,21 @@
+ The MIT License (MIT)
+
+ Copyright (c) 2023 Atsushi Tatsuma
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in
+ all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ THE SOFTWARE.
data/README.md ADDED
@@ -0,0 +1,47 @@
+ # llama_cpp.rb
+
+ [![Gem Version](https://badge.fury.io/rb/llama_cpp.svg)](https://badge.fury.io/rb/llama_cpp)
+ [![License](https://img.shields.io/badge/License-MIT-yellowgreen.svg)](https://github.com/yoshoku/llama_cpp.rb/blob/main/LICENSE.txt)
+ [![Documentation](https://img.shields.io/badge/api-reference-blue.svg)](https://yoshoku.github.io/llama_cpp.rb/doc/)
+
+ llama_cpp.rb provides Ruby bindings for [llama.cpp](https://github.com/ggerganov/llama.cpp).
+
+ This gem is still under development and may undergo many changes in the future.
+
+ ## Installation
+
+ Install the gem and add it to the application's Gemfile by executing:
+
+     $ bundle add llama_cpp
+
+ If bundler is not being used to manage dependencies, install the gem by executing:
+
+     $ gem install llama_cpp
+
+ ## Usage
+
+ ```ruby
+ require 'llama_cpp'
+
+ params = LLaMACpp::ContextParams.new
+ params.seed = 123456
+
+ context = LLaMACpp::Context.new(model_path: '/path/to/ggml-model-q4_0.bin', params: params)
+
+ puts LLaMACpp.generate(context, 'Please tell me the largest city in Japan.')
+ ```
+
+ ## Contributing
+
+ Bug reports and pull requests are welcome on GitHub at https://github.com/yoshoku/llama_cpp.rb.
+ This project is intended to be a safe, welcoming space for collaboration,
+ and contributors are expected to adhere to the [code of conduct](https://github.com/yoshoku/llama_cpp.rb/blob/main/CODE_OF_CONDUCT.md).
+
+ ## License
+
+ The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
+
+ ## Code of Conduct
+
+ Everyone interacting in the LlamaCpp project's codebases, issue trackers,
+ chat rooms and mailing lists is expected to follow the [code of conduct](https://github.com/yoshoku/llama_cpp.rb/blob/main/CODE_OF_CONDUCT.md).
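
Beyond the `LLaMACpp.generate` helper shown in the README, the native extension (see `llama_cpp.cpp` below) exposes lower-level methods that can be combined into a hand-rolled generation loop. A minimal sketch; the model path, thread count, loop length, and sampling hyperparameters are placeholders:

```ruby
require 'llama_cpp'

params = LLaMACpp::ContextParams.new
params.seed = 123456
context = LLaMACpp::Context.new(model_path: '/path/to/ggml-model-q4_0.bin', params: params)

# Tokenize the prompt; add_bos prepends the beginning-of-sequence token.
tokens = context.tokenize(text: 'Please tell me the largest city in Japan.', add_bos: true)

# Evaluate the prompt, then sample tokens one at a time.
context.eval(tokens: tokens, n_past: 0, n_threads: 4)
n_past = tokens.size
generated = []
64.times do
  # Sample from the last logits, penalizing recently seen tokens.
  token = context.sample_top_p_top_k(tokens.last(64),
                                     top_k: 40, top_p: 0.95, temp: 0.8, penalty: 1.1)
  break if token == LLaMACpp.token_eos
  generated << context.token_to_str(token)
  tokens << token
  context.eval(tokens: [token], n_past: n_past, n_threads: 4)
  n_past += 1
end
puts generated.join
```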
data/ext/llama_cpp/extconf.rb ADDED
@@ -0,0 +1,13 @@
+ # frozen_string_literal: true
+
+ require 'mkmf'
+
+ abort 'libstdc++ is not found.' unless have_library('stdc++')
+
+ $srcs = %w[ggml.c llama.cpp llama_cpp.cpp]
+ $CFLAGS << ' -w'
+ $CXXFLAGS << ' -std=c++11'
+ $INCFLAGS << ' -I$(srcdir)/src'
+ $VPATH << '$(srcdir)/src'
+
+ create_makefile('llama_cpp/llama_cpp')
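
mkmf turns the settings above into a Makefile that compiles the vendored `ggml.c` and `llama.cpp` together with the binding code. A quick smoke test after the gem builds, assuming installation succeeded, is to load the extension and print the constants and system info it registers (see `Init_llama_cpp` at the end of the next file):

```ruby
require 'llama_cpp'

puts LLaMACpp::LLAMA_FILE_VERSION  # model file format version, as a string
puts LLaMACpp::LLAMA_FILE_MAGIC    # file magic number, as a hex string
puts LLaMACpp.print_system_info    # llama.cpp build / CPU feature summary
```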
data/ext/llama_cpp/llama_cpp.cpp ADDED
@@ -0,0 +1,500 @@
+
+ #include "llama_cpp.h"
+
+ VALUE rb_mLLaMACpp;
+ VALUE rb_cLLaMAContext;
+ VALUE rb_cLLaMAContextParams;
+
+ class LLaMAContextParamsWrapper {
+ public:
+   struct llama_context_params params;
+
+   LLaMAContextParamsWrapper() : params(llama_context_default_params()){};
+
+   ~LLaMAContextParamsWrapper(){};
+ };
+
+ class RbLLaMAContextParams {
+ public:
+   static VALUE llama_context_params_alloc(VALUE self) {
+     LLaMAContextParamsWrapper* ptr = (LLaMAContextParamsWrapper*)ruby_xmalloc(sizeof(LLaMAContextParamsWrapper));
+     new (ptr) LLaMAContextParamsWrapper();
+     return TypedData_Wrap_Struct(self, &llama_context_params_type, ptr);
+   };
+
+   static void llama_context_params_free(void* ptr) {
+     ((LLaMAContextParamsWrapper*)ptr)->~LLaMAContextParamsWrapper();
+     ruby_xfree(ptr);
+   };
+
+   static size_t llama_context_params_size(const void* ptr) {
+     return sizeof(*((LLaMAContextParamsWrapper*)ptr));
+   };
+
+   static LLaMAContextParamsWrapper* get_llama_context_params(VALUE self) {
+     LLaMAContextParamsWrapper* ptr;
+     TypedData_Get_Struct(self, LLaMAContextParamsWrapper, &llama_context_params_type, ptr);
+     return ptr;
+   };
+
+   static void define_class(VALUE outer) {
+     rb_cLLaMAContextParams = rb_define_class_under(outer, "ContextParams", rb_cObject);
+     rb_define_alloc_func(rb_cLLaMAContextParams, llama_context_params_alloc);
+     // rb_define_method(rb_cLLaMAContextParams, "initialize", RUBY_METHOD_FUNC(_llama_context_params_init), 0);
+     rb_define_method(rb_cLLaMAContextParams, "n_ctx=", RUBY_METHOD_FUNC(_llama_context_params_set_n_ctx), 1);
+     rb_define_method(rb_cLLaMAContextParams, "n_ctx", RUBY_METHOD_FUNC(_llama_context_params_get_n_ctx), 0);
+     rb_define_method(rb_cLLaMAContextParams, "n_parts=", RUBY_METHOD_FUNC(_llama_context_params_set_n_parts), 1);
+     rb_define_method(rb_cLLaMAContextParams, "n_parts", RUBY_METHOD_FUNC(_llama_context_params_get_n_parts), 0);
+     rb_define_method(rb_cLLaMAContextParams, "seed=", RUBY_METHOD_FUNC(_llama_context_params_set_seed), 1);
+     rb_define_method(rb_cLLaMAContextParams, "seed", RUBY_METHOD_FUNC(_llama_context_params_get_seed), 0);
+     rb_define_method(rb_cLLaMAContextParams, "f16_kv=", RUBY_METHOD_FUNC(_llama_context_params_set_f16_kv), 1);
+     rb_define_method(rb_cLLaMAContextParams, "f16_kv", RUBY_METHOD_FUNC(_llama_context_params_get_f16_kv), 0);
+     rb_define_method(rb_cLLaMAContextParams, "logits_all=", RUBY_METHOD_FUNC(_llama_context_params_set_logits_all), 1);
+     rb_define_method(rb_cLLaMAContextParams, "logits_all", RUBY_METHOD_FUNC(_llama_context_params_get_logits_all), 0);
+     rb_define_method(rb_cLLaMAContextParams, "vocab_only=", RUBY_METHOD_FUNC(_llama_context_params_set_vocab_only), 1);
+     rb_define_method(rb_cLLaMAContextParams, "vocab_only", RUBY_METHOD_FUNC(_llama_context_params_get_vocab_only), 0);
+     rb_define_method(rb_cLLaMAContextParams, "use_mlock=", RUBY_METHOD_FUNC(_llama_context_params_set_use_mlock), 1);
+     rb_define_method(rb_cLLaMAContextParams, "use_mlock", RUBY_METHOD_FUNC(_llama_context_params_get_use_mlock), 0);
+     rb_define_method(rb_cLLaMAContextParams, "embedding=", RUBY_METHOD_FUNC(_llama_context_params_set_embedding), 1);
+     rb_define_method(rb_cLLaMAContextParams, "embedding", RUBY_METHOD_FUNC(_llama_context_params_get_embedding), 0);
+   };
+
+ private:
+   static const rb_data_type_t llama_context_params_type;
+
+   // static VALUE _llama_context_params_init(VALUE self, VALUE seed) {
+   //   LLaMAContextParamsWrapper* ptr = get_llama_context_params(self);
+   //   new (ptr) LLaMAContextParamsWrapper();
+   //   return self;
+   // };
+
+   // n_ctx
+   static VALUE _llama_context_params_set_n_ctx(VALUE self, VALUE n_ctx) {
+     LLaMAContextParamsWrapper* ptr = get_llama_context_params(self);
+     ptr->params.n_ctx = NUM2INT(n_ctx);
+     return INT2NUM(ptr->params.n_ctx);
+   };
+
+   static VALUE _llama_context_params_get_n_ctx(VALUE self) {
+     LLaMAContextParamsWrapper* ptr = get_llama_context_params(self);
+     return INT2NUM(ptr->params.n_ctx);
+   };
+
+   // n_parts
+   static VALUE _llama_context_params_set_n_parts(VALUE self, VALUE n_parts) {
+     LLaMAContextParamsWrapper* ptr = get_llama_context_params(self);
+     ptr->params.n_parts = NUM2INT(n_parts);
+     return INT2NUM(ptr->params.n_parts);
+   };
+
+   static VALUE _llama_context_params_get_n_parts(VALUE self) {
+     LLaMAContextParamsWrapper* ptr = get_llama_context_params(self);
+     return INT2NUM(ptr->params.n_parts);
+   };
+
+   // seed
+   static VALUE _llama_context_params_set_seed(VALUE self, VALUE seed) {
+     LLaMAContextParamsWrapper* ptr = get_llama_context_params(self);
+     ptr->params.seed = NUM2INT(seed);
+     return INT2NUM(ptr->params.seed);
+   };
+
+   static VALUE _llama_context_params_get_seed(VALUE self) {
+     LLaMAContextParamsWrapper* ptr = get_llama_context_params(self);
+     return INT2NUM(ptr->params.seed);
+   };
+
+   // f16_kv
+   static VALUE _llama_context_params_set_f16_kv(VALUE self, VALUE f16_kv) {
+     LLaMAContextParamsWrapper* ptr = get_llama_context_params(self);
+     ptr->params.f16_kv = f16_kv == Qtrue ? true : false;
+     return ptr->params.f16_kv ? Qtrue : Qfalse;
+   };
+
+   static VALUE _llama_context_params_get_f16_kv(VALUE self) {
+     LLaMAContextParamsWrapper* ptr = get_llama_context_params(self);
+     return ptr->params.f16_kv ? Qtrue : Qfalse;
+   };
+
+   // logits_all
+   static VALUE _llama_context_params_set_logits_all(VALUE self, VALUE logits_all) {
+     LLaMAContextParamsWrapper* ptr = get_llama_context_params(self);
+     ptr->params.logits_all = logits_all == Qtrue ? true : false;
+     return ptr->params.logits_all ? Qtrue : Qfalse;
+   };
+
+   static VALUE _llama_context_params_get_logits_all(VALUE self) {
+     LLaMAContextParamsWrapper* ptr = get_llama_context_params(self);
+     return ptr->params.logits_all ? Qtrue : Qfalse;
+   };
+
+   // vocab_only
+   static VALUE _llama_context_params_set_vocab_only(VALUE self, VALUE vocab_only) {
+     LLaMAContextParamsWrapper* ptr = get_llama_context_params(self);
+     ptr->params.vocab_only = vocab_only == Qtrue ? true : false;
+     return ptr->params.vocab_only ? Qtrue : Qfalse;
+   };
+
+   static VALUE _llama_context_params_get_vocab_only(VALUE self) {
+     LLaMAContextParamsWrapper* ptr = get_llama_context_params(self);
+     return ptr->params.vocab_only ? Qtrue : Qfalse;
+   };
+
+   // use_mlock
+   static VALUE _llama_context_params_set_use_mlock(VALUE self, VALUE use_mlock) {
+     LLaMAContextParamsWrapper* ptr = get_llama_context_params(self);
+     ptr->params.use_mlock = use_mlock == Qtrue ? true : false;
+     return ptr->params.use_mlock ? Qtrue : Qfalse;
+   };
+
+   static VALUE _llama_context_params_get_use_mlock(VALUE self) {
+     LLaMAContextParamsWrapper* ptr = get_llama_context_params(self);
+     return ptr->params.use_mlock ? Qtrue : Qfalse;
+   };
+
+   // embedding
+   static VALUE _llama_context_params_set_embedding(VALUE self, VALUE embedding) {
+     LLaMAContextParamsWrapper* ptr = get_llama_context_params(self);
+     ptr->params.embedding = embedding == Qtrue ? true : false;
+     return ptr->params.embedding ? Qtrue : Qfalse;
+   };
+
+   static VALUE _llama_context_params_get_embedding(VALUE self) {
+     LLaMAContextParamsWrapper* ptr = get_llama_context_params(self);
+     return ptr->params.embedding ? Qtrue : Qfalse;
+   };
+ };
+
+ const rb_data_type_t RbLLaMAContextParams::llama_context_params_type = {
+   "RbLLaMAContextParams",
+   { NULL,
+     RbLLaMAContextParams::llama_context_params_free,
+     RbLLaMAContextParams::llama_context_params_size },
+   NULL,
+   NULL,
+   RUBY_TYPED_FREE_IMMEDIATELY
+ };
+
+ class LLaMAContextWrapper {
+ public:
+   struct llama_context* ctx;
+
+   LLaMAContextWrapper() : ctx(NULL){};
+
+   ~LLaMAContextWrapper() {
+     if (ctx != NULL) {
+       llama_free(ctx);
+     }
+   };
+ };
+
+ class RbLLaMAContext {
+ public:
+   static VALUE llama_context_alloc(VALUE self) {
+     LLaMAContextWrapper* ptr = (LLaMAContextWrapper*)ruby_xmalloc(sizeof(LLaMAContextWrapper));
+     new (ptr) LLaMAContextWrapper();
+     return TypedData_Wrap_Struct(self, &llama_context_type, ptr);
+   };
+
+   static void llama_context_free(void* ptr) {
+     ((LLaMAContextWrapper*)ptr)->~LLaMAContextWrapper();
+     ruby_xfree(ptr);
+   };
+
+   static size_t llama_context_size(const void* ptr) {
+     return sizeof(*((LLaMAContextWrapper*)ptr));
+   };
+
+   static LLaMAContextWrapper* get_llama_context(VALUE self) {
+     LLaMAContextWrapper* ptr;
+     TypedData_Get_Struct(self, LLaMAContextWrapper, &llama_context_type, ptr);
+     return ptr;
+   };
+
+   static void define_class(VALUE outer) {
+     rb_cLLaMAContext = rb_define_class_under(outer, "Context", rb_cObject);
+     rb_define_alloc_func(rb_cLLaMAContext, llama_context_alloc);
+     rb_define_method(rb_cLLaMAContext, "initialize", RUBY_METHOD_FUNC(_llama_context_initialize), -1);
+     rb_define_method(rb_cLLaMAContext, "eval", RUBY_METHOD_FUNC(_llama_context_eval), -1);
+     rb_define_method(rb_cLLaMAContext, "tokenize", RUBY_METHOD_FUNC(_llama_context_tokenize), -1);
+     // rb_define_method(rb_cLLaMAContext, "logits", RUBY_METHOD_FUNC(_llama_context_logits), 0);
+     rb_define_method(rb_cLLaMAContext, "embeddings", RUBY_METHOD_FUNC(_llama_context_embeddings), 0);
+     rb_define_method(rb_cLLaMAContext, "token_to_str", RUBY_METHOD_FUNC(_llama_context_token_to_str), 1);
+     rb_define_method(rb_cLLaMAContext, "sample_top_p_top_k", RUBY_METHOD_FUNC(_llama_context_sample_top_p_top_k), -1);
+     rb_define_method(rb_cLLaMAContext, "n_vocab", RUBY_METHOD_FUNC(_llama_context_n_vocab), 0);
+     rb_define_method(rb_cLLaMAContext, "n_ctx", RUBY_METHOD_FUNC(_llama_context_n_ctx), 0);
+     rb_define_method(rb_cLLaMAContext, "n_embd", RUBY_METHOD_FUNC(_llama_context_n_embd), 0);
+     rb_define_method(rb_cLLaMAContext, "print_timings", RUBY_METHOD_FUNC(_llama_context_print_timings), 0);
+     rb_define_method(rb_cLLaMAContext, "reset_timings", RUBY_METHOD_FUNC(_llama_context_reset_timings), 0);
+   };
+
+ private:
+   static const rb_data_type_t llama_context_type;
+
+   static VALUE _llama_context_initialize(int argc, VALUE* argv, VALUE self) {
+     VALUE kw_args = Qnil;
+     ID kw_table[2] = { rb_intern("model_path"), rb_intern("params") };
+     VALUE kw_values[2] = { Qundef, Qundef };
+     rb_scan_args(argc, argv, ":", &kw_args);
+     rb_get_kwargs(kw_args, kw_table, 2, 0, kw_values);
+
+     if (!RB_TYPE_P(kw_values[0], T_STRING)) {
+       rb_raise(rb_eArgError, "model_path must be a string");
+       return Qnil;
+     }
+     if (!rb_obj_is_kind_of(kw_values[1], rb_cLLaMAContextParams)) {
+       rb_raise(rb_eArgError, "params must be a LLaMAContextParams");
+       return Qnil;
+     }
+
+     VALUE filename = kw_values[0];
+     LLaMAContextParamsWrapper* prms_ptr = RbLLaMAContextParams::get_llama_context_params(kw_values[1]);
+     LLaMAContextWrapper* ctx_ptr = get_llama_context(self);
+     ctx_ptr->ctx = llama_init_from_file(StringValueCStr(filename), prms_ptr->params);
+     if (ctx_ptr->ctx == NULL) {
+       rb_raise(rb_eRuntimeError, "Failed to initialize LLaMA context");
+       return Qnil;
+     }
+
+     RB_GC_GUARD(filename);
+     return self;
+   };
+
+   static VALUE _llama_context_eval(int argc, VALUE* argv, VALUE self) {
+     VALUE kw_args = Qnil;
+     ID kw_table[4] = { rb_intern("tokens"), rb_intern("n_past"), rb_intern("n_tokens"), rb_intern("n_threads") };
+     VALUE kw_values[4] = { Qundef, Qundef, Qundef, Qundef };
+     rb_scan_args(argc, argv, ":", &kw_args);
+     rb_get_kwargs(kw_args, kw_table, 2, 2, kw_values);
+
+     if (!RB_TYPE_P(kw_values[0], T_ARRAY)) {
+       rb_raise(rb_eArgError, "tokens must be an Array");
+       return Qnil;
+     }
+     if (!RB_INTEGER_TYPE_P(kw_values[1])) {
+       rb_raise(rb_eArgError, "n_past must be an integer");
+       return Qnil;
+     }
+     if (kw_values[2] != Qundef && !RB_INTEGER_TYPE_P(kw_values[2])) {
+       rb_raise(rb_eArgError, "n_tokens must be an integer");
+       return Qnil;
+     }
+     if (kw_values[3] != Qundef && !RB_INTEGER_TYPE_P(kw_values[3])) {
+       rb_raise(rb_eArgError, "n_threads must be an integer");
+       return Qnil;
+     }
+
+     const size_t tokens_len = RARRAY_LEN(kw_values[0]);
+     std::vector<llama_token> embd(tokens_len);
+     for (size_t i = 0; i < tokens_len; i++) {
+       VALUE token = rb_ary_entry(kw_values[0], i);
+       if (!RB_INTEGER_TYPE_P(token)) {
+         rb_raise(rb_eArgError, "tokens must be an array of integers");
+         return Qnil;
+       }
+       embd[i] = NUM2INT(token);
+     }
+
+     const int n_tokens = kw_values[2] == Qundef ? (int)tokens_len : NUM2INT(kw_values[2]);
+     const int n_past = NUM2INT(kw_values[1]);
+     const int n_threads = kw_values[3] == Qundef ? 1 : NUM2INT(kw_values[3]);
+
+     LLaMAContextWrapper* ptr = get_llama_context(self);
+     if (llama_eval(ptr->ctx, embd.data(), n_tokens, n_past, n_threads) != 0) {
+       rb_raise(rb_eRuntimeError, "Failed to evaluate");
+       return Qnil;
+     }
+
+     return Qnil;
+   };
+
+   static VALUE _llama_context_tokenize(int argc, VALUE* argv, VALUE self) {
+     VALUE kw_args = Qnil;
+     ID kw_table[3] = { rb_intern("text"), rb_intern("n_max_tokens"), rb_intern("add_bos") };
+     VALUE kw_values[3] = { Qundef, Qundef, Qundef };
+     rb_scan_args(argc, argv, ":", &kw_args);
+     rb_get_kwargs(kw_args, kw_table, 1, 2, kw_values);
+
+     if (!RB_TYPE_P(kw_values[0], T_STRING)) {
+       rb_raise(rb_eArgError, "text must be a String");
+       return Qnil;
+     }
+     if (kw_values[1] != Qundef && !RB_INTEGER_TYPE_P(kw_values[1])) {
+       rb_raise(rb_eArgError, "n_max_tokens must be an integer");
+       return Qnil;
+     }
+     if (kw_values[2] != Qundef && (kw_values[2] != Qtrue && kw_values[2] != Qfalse)) {
+       rb_raise(rb_eArgError, "add_bos must be a boolean");
+       return Qnil;
+     }
+
+     VALUE text_ = kw_values[0];
+     std::string text = StringValueCStr(text_);
+     const bool add_bos = kw_values[2] == Qtrue ? true : false;
+     const int n_max_tokens = kw_values[1] != Qundef ? NUM2INT(kw_values[1]) : text.size() + (add_bos ? 1 : 0);
+
+     std::vector<llama_token> tokens(n_max_tokens);
+     LLaMAContextWrapper* ptr = get_llama_context(self);
+     const int n = llama_tokenize(ptr->ctx, text.c_str(), tokens.data(), n_max_tokens, add_bos);
+     if (n < 0) {
+       rb_raise(rb_eRuntimeError, "Failed to tokenize");
+       return Qnil;
+     }
+
+     VALUE output = rb_ary_new();
+     for (int i = 0; i < n; i++) {
+       rb_ary_push(output, INT2NUM(tokens[i]));
+     }
+
+     RB_GC_GUARD(text_);
+     return output;
+   };
+
+   static VALUE _llama_context_token_to_str(VALUE self, VALUE token_) {
+     LLaMAContextWrapper* ptr = get_llama_context(self);
+     if (ptr->ctx == NULL) {
+       rb_raise(rb_eRuntimeError, "LLaMA context is not initialized");
+       return Qnil;
+     }
+     const llama_token token = NUM2INT(token_);
+     const char* str = llama_token_to_str(ptr->ctx, token);
+     return str != nullptr ? rb_utf8_str_new_cstr(str) : rb_utf8_str_new_cstr("");
+   };
+
+   static VALUE _llama_context_embeddings(VALUE self) {
+     LLaMAContextWrapper* ptr = get_llama_context(self);
+     if (ptr->ctx == NULL) {
+       rb_raise(rb_eRuntimeError, "LLaMA context is not initialized");
+       return Qnil;
+     }
+
+     const int n_embd = llama_n_embd(ptr->ctx);
+     const float* embd = llama_get_embeddings(ptr->ctx);
+     VALUE output = rb_ary_new();
+     for (int i = 0; i < n_embd; i++) {
+       rb_ary_push(output, DBL2NUM((double)(embd[i])));
+     }
+
+     return output;
+   };
+
+   static VALUE _llama_context_sample_top_p_top_k(int argc, VALUE* argv, VALUE self) {
+     VALUE last_n_tokens = Qnil;
+     VALUE kw_args = Qnil;
+     ID kw_table[4] = { rb_intern("top_k"), rb_intern("top_p"), rb_intern("temp"), rb_intern("penalty") };
+     VALUE kw_values[4] = { Qundef, Qundef, Qundef, Qundef };
+     rb_scan_args(argc, argv, "1:", &last_n_tokens, &kw_args);
+     rb_get_kwargs(kw_args, kw_table, 4, 0, kw_values);
+
+     if (!RB_TYPE_P(last_n_tokens, T_ARRAY)) {
+       rb_raise(rb_eArgError, "last_n_tokens must be an Array");
+       return Qnil;
+     }
+
+     const int last_n_tokens_size = RARRAY_LEN(last_n_tokens);
+     const int top_k = NUM2INT(kw_values[0]);
+     const double top_p = NUM2DBL(kw_values[1]);
+     const double temp = NUM2DBL(kw_values[2]);
+     const double penalty = NUM2DBL(kw_values[3]);
+
+     std::vector<llama_token> last_n_tokens_data(last_n_tokens_size);
+     for (int i = 0; i < last_n_tokens_size; i++) {
+       last_n_tokens_data[i] = NUM2INT(rb_ary_entry(last_n_tokens, i));
+     }
+
+     LLaMAContextWrapper* ptr = get_llama_context(self);
+     llama_token token = llama_sample_top_p_top_k(ptr->ctx, last_n_tokens_data.data(), last_n_tokens_size, top_k, top_p, temp, penalty);
+
+     return INT2NUM(token);
+   }
+
+   static VALUE _llama_context_n_vocab(VALUE self) {
+     LLaMAContextWrapper* ptr = get_llama_context(self);
+     if (ptr->ctx == NULL) {
+       rb_raise(rb_eRuntimeError, "LLaMA context is not initialized");
+       return Qnil;
+     }
+     return INT2NUM(llama_n_vocab(ptr->ctx));
+   };
+
+   static VALUE _llama_context_n_ctx(VALUE self) {
+     LLaMAContextWrapper* ptr = get_llama_context(self);
+     if (ptr->ctx == NULL) {
+       rb_raise(rb_eRuntimeError, "LLaMA context is not initialized");
+       return Qnil;
+     }
+     return INT2NUM(llama_n_ctx(ptr->ctx));
+   };
+
+   static VALUE _llama_context_n_embd(VALUE self) {
+     LLaMAContextWrapper* ptr = get_llama_context(self);
+     if (ptr->ctx == NULL) {
+       rb_raise(rb_eRuntimeError, "LLaMA context is not initialized");
+       return Qnil;
+     }
+     return INT2NUM(llama_n_embd(ptr->ctx));
+   };
+
+   static VALUE _llama_context_print_timings(VALUE self) {
+     LLaMAContextWrapper* ptr = get_llama_context(self);
+     if (ptr->ctx == NULL) {
+       rb_raise(rb_eRuntimeError, "LLaMA context is not initialized");
+       return Qnil;
+     }
+     llama_print_timings(ptr->ctx);
+     return Qnil;
+   };
+
+   static VALUE _llama_context_reset_timings(VALUE self) {
+     LLaMAContextWrapper* ptr = get_llama_context(self);
+     if (ptr->ctx == NULL) {
+       rb_raise(rb_eRuntimeError, "LLaMA context is not initialized");
+       return Qnil;
+     }
+     llama_reset_timings(ptr->ctx);
+     return Qnil;
+   };
+ };
+
+ const rb_data_type_t RbLLaMAContext::llama_context_type = {
+   "RbLLaMAContext",
+   { NULL,
+     RbLLaMAContext::llama_context_free,
+     RbLLaMAContext::llama_context_size },
+   NULL,
+   NULL,
+   RUBY_TYPED_FREE_IMMEDIATELY
+ };
+
+ // module functions
+
+ static VALUE rb_llama_token_bos(VALUE self) {
+   return INT2NUM(llama_token_bos());
+ }
+
+ static VALUE rb_llama_token_eos(VALUE self) {
+   return INT2NUM(llama_token_eos());
+ }
+
+ static VALUE rb_llama_print_system_info(VALUE self) {
+   const char* result = llama_print_system_info();
+   return rb_utf8_str_new_cstr(result);
+ }
+
+ extern "C" void Init_llama_cpp(void) {
+   rb_mLLaMACpp = rb_define_module("LLaMACpp");
+   RbLLaMAContext::define_class(rb_mLLaMACpp);
+   RbLLaMAContextParams::define_class(rb_mLLaMACpp);
+
+   rb_define_module_function(rb_mLLaMACpp, "token_bos", rb_llama_token_bos, 0);
+   rb_define_module_function(rb_mLLaMACpp, "token_eos", rb_llama_token_eos, 0);
+   rb_define_module_function(rb_mLLaMACpp, "print_system_info", rb_llama_print_system_info, 0);
+
+   rb_define_const(rb_mLLaMACpp, "LLAMA_FILE_VERSION", rb_str_new2(std::to_string(LLAMA_FILE_VERSION).c_str()));
+   std::stringstream ss_magic;
+   ss_magic << std::showbase << std::hex << LLAMA_FILE_MAGIC;
+   rb_define_const(rb_mLLaMACpp, "LLAMA_FILE_MAGIC", rb_str_new2(ss_magic.str().c_str()));
+   std::stringstream ss_magic_unversioned;
+   ss_magic_unversioned << std::showbase << std::hex << LLAMA_FILE_MAGIC_UNVERSIONED;
+   rb_define_const(rb_mLLaMACpp, "LLAMA_FILE_MAGIC_UNVERSIONED", rb_str_new2(ss_magic_unversioned.str().c_str()));
+ }
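
Tying the binding together end to end, here is a sketch of reading embeddings through `ContextParams#embedding=` and `Context#embeddings` as defined above. The model path is a placeholder, and `embeddings` raises a RuntimeError if the context was not initialized:

```ruby
require 'llama_cpp'

params = LLaMACpp::ContextParams.new
params.n_ctx = 512       # context window size
params.embedding = true  # ask llama.cpp to compute embeddings during eval

context = LLaMACpp::Context.new(model_path: '/path/to/ggml-model-q4_0.bin', params: params)

tokens = context.tokenize(text: 'Hello, world.', add_bos: true)
context.eval(tokens: tokens, n_past: 0)  # n_threads defaults to 1

vec = context.embeddings  # Array of Float, one entry per embedding dimension
puts vec.size == context.n_embd
```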