llama_cpp 0.12.7 → 0.14.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 350a80cc8b804b23ee7b0f4e90604110b09664892d3d7c4217c4cd48c77cf775
- data.tar.gz: 7a127d3b83cb680969589368eb741c6a2ac6a9765adf9f57dd23c0c1b54ca13d
+ metadata.gz: c7d855ccd32ae097f26a671751d6a2178361cf8d8a6c1b99af37859f2c47ca03
+ data.tar.gz: 3b17318424d08c65ad34da3fa14956c86db0a2ea05ac174323a9b8d2b9e69d59
  SHA512:
- metadata.gz: dbf25eb8f0fd60332eb8452ea400294d5b9b2b09127d0f3c5ef347135f30f565b161123d0f76a8553bcabf9e35db9fac3fff6cdd9df407fb830ab124d0d85d47
- data.tar.gz: 2bbefd5b502150f052ab556c372c4f37b9cf2de2e22e34f4b2153a3b7ff93d7fca768eec5572d5514d7c46dc2a9c03121487907adc5ede612ecb6cea72de682d
+ metadata.gz: 2d90bf9fdd8dbaf5e67b7fb8797a9412168ae6ce5fcfc4c6aca34e194d5beb5204184b5bb36d65dc507a7a618ac9e938987e8d8bf5871e4eb6304b5e6de06020
+ data.tar.gz: eab524367ace146eb6e20786bd530cead145e1651bcdb726afbb5364609d04b22ca8a515016bb0c2d154ea97fb62f19222c122bc9bb5efe7fc389a6f259da6f0
data/CHANGELOG.md CHANGED
@@ -1,3 +1,27 @@
+ ## [[0.14.0](https://github.com/yoshoku/llama_cpp.rb/compare/v0.13.0...v0.14.0)] - 2024-03-09
+
+ **Breaking Changes**
+
+ - Bump bundled llama.cpp from b2303 to b2361.
+ - Rename embedding accessor to `embeddings` in `ContextParams`.
+ - Remove `do_pooling` accessor from `ContextParams`.
+ - Add `pooling_type` accessor to `ContextParams`.
+ - Fix the size of array returned by `embedding` method in `Context` from `n_embd` to `n_tokens * n_embd`.
+ - Add `embeddings_seq` method to `Context`.
+
+ ## [[0.13.0](https://github.com/yoshoku/llama_cpp.rb/compare/v0.12.7...v0.13.0)] - 2024-03-02
+
+ **Breaking Changes**
+
+ - Bump bundled llama.cpp from b2143 to b2303.
+ - Remove deprecated methods:
+   - `map_supported?`, `mlock_supported?`, `apply_lora_from_file`, `eval`, `eval_embd`, `sample_classifier_free_guidance`, `sample_temperature`, and `mul_mat_q`.
+ - Rename some constants.
+ - Rename `kv_cache_seq_shift` method to `kv_cache_seq_add`.
+ - Add `defrag_thold` accessor to `ContextParams`.
+ - Add `vocab_type` and `rope_type` methods to `Model`.
+ - Add `kv_cache_seq_pos_max`, `kv_cache_defrag`, and `kv_cache_update` methods to `Context`.
+
  ## [[0.12.7](https://github.com/yoshoku/llama_cpp.rb/compare/v0.12.6...v0.12.7)] - 2024-02-24

  - Bump bundled llama.cpp from b2106 to b2143.
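One practical consequence of the 0.14.0 changelog entry above: `Context#embedding` now returns a flat array of `n_tokens * n_embd` values rather than a single `n_embd`-sized vector, so callers that want per-token vectors must reshape the result. A minimal sketch in plain Ruby of that reshape (the helper name `per_token_embeddings` and the sample data are illustrative, not part of the gem's API):

```ruby
# Hypothetical helper: split the flat array that Context#embedding now
# returns (n_tokens * n_embd floats) into one n_embd-sized vector per token.
def per_token_embeddings(flat, n_embd)
  unless (flat.length % n_embd).zero?
    raise ArgumentError, 'array length must be a multiple of n_embd'
  end
  flat.each_slice(n_embd).to_a
end

# Stand-in data: pretend 3 tokens with n_embd = 4 (not real model output).
n_embd = 4
flat = (0...12).map(&:to_f)
vecs = per_token_embeddings(flat, n_embd)
# vecs is now 3 arrays of 4 floats each
```

Code written against 0.12.x, where the method returned only the final `n_embd` values, would after this change silently read the first token's vector instead of the last one unless it reshapes and indexes explicitly.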