llama_cpp 0.3.2 → 0.3.4

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: f1fcd28849baae5e90c466665aff4fe5da1d848193ebcf74c3fe333c5674191c
-  data.tar.gz: fcb0c64528d24c5cfad677f17bfd6e1e817a4b8279317ca5b2113302735598b9
+  metadata.gz: 35afb5cc65c290036ae7e45459eadc9b509f34f33a3f7708244cf47f1a38829f
+  data.tar.gz: 3301158526c63d9d2004e22bda0d1cc8025b4343d8d737df96260786531b074d
 SHA512:
-  metadata.gz: c70b5f919feb7a585efbe21b3360254c2f5789504cd73fecee12fd686483c77eeb763ed91a8e7434d5852208555a78f168b358d0895f15b1ea7e774d36d6910a
-  data.tar.gz: f554ad58fc9d68c39b80995b7f424468386b32a5847dbdefbceb1cba53ff7182da35be8599523d82a6daa8fee23667d07e06faedc4c727d52e8fc594d0bc7d3f
+  metadata.gz: b0a50f9f012f44f119a70790d3de07c7fcc64151246791e270e4ff9fc479a85a01c53cf2775945eba3145a3ba89da55a8d14891c6236cfeae16aed5ae455cf0d
+  data.tar.gz: ede388584e115ae93d509b6c15b288303c348f3cfe8ea46879a1b69e6c96be31a321edbb52cfbeb309a8fb456738f3f6b7cc1d3f71ce7addbd05b3a1e73d4755
data/CHANGELOG.md CHANGED
@@ -1,3 +1,40 @@
+## [[0.3.4](https://github.com/yoshoku/llama_cpp.rb/compare/v0.3.3...v0.3.4)] - 2023-07-23
+
+- Bump bundled llama.cpp from master-32c5411 to master-d924522.
+- Add `rope_freq_base` and `rope_freq_scale` options to ContextParams.
+- Add `max_devices` module function to LLaMACpp.
+- Add `n_vocab`, `n_ctx`, and `n_embd` methods to Model.
+- Add `vocab`, `tokenize`, and `token_to_str` methods to Model.
+```ruby
+require 'llama_cpp'
+
+params = LLaMACpp::ContextParams.new
+model = LLaMACpp::Model.new(model_path: '/path/to/model.bin', params: params)
+
+p model.tokenize(text: 'hello, world')
+# => [12199, 29892, 3186]
+
+p model.token_to_str(12199)
+# => "hello"
+```
+
+**Breaking Changes**
+- Fix to automatically call the `backend_free` method when the Ruby script exits.
+- Remove the `smooth_factor` argument from the `sample_classifier_free_guidance` method on Context.
+
+## [[0.3.3](https://github.com/yoshoku/llama_cpp.rb/compare/v0.3.2...v0.3.3)] - 2023-07-15
+
+- Bump bundled llama.cpp from master-481f793 to master-32c5411.
+- Add MPI config options:
+```
+$ gem install llama_cpp -- --with-mpi
+```
+- Add `backend_free` module function to `LLaMACpp`. This method should be called once at the end of the program when the MPI option is enabled.
+- Add `sample_classifier_free_guidance` method to `Context`.
+
+**Breaking Changes**
+- Rename `init_backend` method to `backend_init`. This method is called internally at `require 'llama_cpp'`.
+
 ## [[0.3.2](https://github.com/yoshoku/llama_cpp.rb/compare/v0.3.1...v0.3.2)] - 2023-07-08
 
 - Bump bundled llama.cpp from master-b8c8dda to master-481f793.
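For a sense of how the 0.3.4 additions fit together, here is a minimal Ruby sketch. It assumes the new `rope_freq_base` and `rope_freq_scale` options are exposed as plain accessors on ContextParams, mirroring its existing fields, and that `max_devices` takes no arguments; the model path is a placeholder.

```ruby
require 'llama_cpp'

# 0.3.4: backend_init runs at require time and backend_free is invoked
# automatically when the script exits, so no explicit setup/teardown here.

params = LLaMACpp::ContextParams.new
params.rope_freq_base  = 10_000.0 # assumed accessor for the new RoPE base frequency option
params.rope_freq_scale = 1.0      # assumed accessor for the new RoPE frequency scale option

p LLaMACpp.max_devices # maximum number of devices supported by the bundled llama.cpp

model = LLaMACpp::Model.new(model_path: '/path/to/model.bin', params: params)
p model.n_vocab # vocabulary size
p model.n_ctx   # context length
p model.n_embd  # embedding dimension
```

Note that with the 0.3.4 breaking changes, neither `backend_init` nor `backend_free` needs to be called by hand: the former runs at require time and the latter at process exit.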
data/ext/llama_cpp/extconf.rb CHANGED
@@ -7,6 +7,7 @@ abort 'libstdc++ is not found.' unless have_library('stdc++')
 
 $srcs = %w[ggml.c llama.cpp llama_cpp.cpp]
 $srcs << 'ggml-opencl.cpp' if with_config('clblast')
+$srcs << 'ggml-mpi.c' if with_config('mpi')
 $CFLAGS << ' -w -DNDEBUG'
 $CXXFLAGS << ' -std=c++11 -DNDEBUG'
 $INCFLAGS << ' -I$(srcdir)/src'
@@ -76,6 +77,14 @@ if with_config('clblast')
   end
 end
 
+if with_config('mpi')
+  abort 'libmpi is not found.' unless have_library('mpi')
+  abort 'mpi.h is not found.' unless have_header('mpi.h')
+
+  $CFLAGS << ' -DGGML_USE_MPI -Wno-cast-qual'
+  $CXXFLAGS << ' -DGGML_USE_MPI -Wno-cast-qual'
+end
+
 UNAME_M = RbConfig::CONFIG['build_cpu'] || RbConfig::CONFIG['host_cpu'] || RbConfig::CONFIG['target_cpu']
 
 # rubocop:disable Layout/LineLength
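Read together, the two extconf.rb hunks thread the install-time MPI switch through mkmf: passing `--with-mpi` after `--` at install time makes `with_config('mpi')` truthy, which adds ggml-mpi.c to the source list, verifies the MPI toolchain, and defines GGML_USE_MPI. A condensed sketch of that branch, assuming a standard mkmf setup (the real file compiles more sources and flags):

```ruby
require 'mkmf'

# Flags after `--` in `gem install llama_cpp -- --with-mpi` reach extconf.rb,
# where mkmf's with_config('mpi') reports whether --with-mpi was given.
srcs = %w[ggml.c llama.cpp llama_cpp.cpp]
srcs << 'ggml-mpi.c' if with_config('mpi') # compile the MPI backend only on request

if with_config('mpi')
  # Fail fast when the MPI library or headers are missing.
  abort 'libmpi is not found.' unless have_library('mpi')
  abort 'mpi.h is not found.' unless have_header('mpi.h')

  # Enable ggml's MPI code paths.
  $CFLAGS << ' -DGGML_USE_MPI -Wno-cast-qual'
  $CXXFLAGS << ' -DGGML_USE_MPI -Wno-cast-qual'
end
```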