llama-rb 0.2.0 → 0.2.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (4)
  1. checksums.yaml +4 -4
  2. data/README.md +2 -8
  3. data/lib/llama/version.rb +1 -1
  4. metadata +1 -1
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 1371348a7ba9c4fa75ada41ec8afc6461e1d56dae2c3e3dede175d189ecdd7ea
- data.tar.gz: b45a9ed3c28a228a2405ec8874f4cf8239dfcb4cb3132e7a44be806b5c6a2a78
+ metadata.gz: '03801e4f99933be9c0e8d559008626991535c2167af88c8cb31defb31c88d0f6'
+ data.tar.gz: 6f17e50818de906f33de2686cf1b75c0e17aa052f0fba60889bad85df0591f59
  SHA512:
- metadata.gz: 88dd6f7a6f971f60753625dce11b469bbf46f606b4be4c8d308636d1f696666cacd9b174bda65bc5e42d503db413c9f1281c9a7129d838f1dfab3088717f603f
- data.tar.gz: 449673e8950cc869ad899500b85a6108d2a02b7915ca340733bda0f18fa49691df7e839a6efece440d76a0583d037c90a6226f505eacc08ba24a9ae510b840bc
+ metadata.gz: 40602fc8c253087a78fd4e5edf5fbae24f3a4ad0d9a3bb2f6730ef701753f6815e8716303220e8edcb1984484d5ffbd20c6adb7e07690244cd738ec6918c80e8
+ data.tar.gz: 9cbf6bed4fa4359bd007d083f99976a885b1557b0bf01c4d22a55e231515adf7f66e58e951e01bf731e827b893bf6fc278a306f8a566be3e133039f210214bc2
data/README.md CHANGED
@@ -42,21 +42,15 @@ m.predict('hello world')
  ```ruby
  def self.new(
  model, # path to model file, e.g. "models/7B/ggml-model-q4_0.bin"
- n_ctx: 512, # context size
- n_parts: -1, # amount of model parts (-1 = determine from model dimensions)
+ n_predict: 128 # number of tokens to predict
  seed: Time.now.to_i, # RNG seed
- memory_f16: true, # use f16 instead of f32 for memory kv
- use_mlock: false # use mlock to keep model in memory
  )
  ```

  #### Llama::Model#predict

  ```ruby
- def predict(
- prompt, # string used as prompt
- n_predict: 128 # number of tokens to predict
- )
+ def predict(prompt)
  ```

  ## Development
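The README hunk above documents the 0.2.1 API change: `Llama::Model.new` now takes `n_predict:` (the former `n_ctx:`, `n_parts:`, `memory_f16:`, and `use_mlock:` keywords are dropped), and `predict` takes only the prompt. A minimal sketch of the new call shapes, using a hypothetical stub in place of the real llama.cpp-backed model (everything beyond the documented signatures is an assumption, not the gem's implementation):

```ruby
# Stub mirroring the llama-rb 0.2.1 signatures shown in the README diff.
# The real Llama::Model wraps llama.cpp; this stand-in only records its
# arguments so the 0.2.1 call shapes can be exercised without a model file.
module Llama
  class Model
    attr_reader :model, :n_predict, :seed

    def initialize(model, n_predict: 128, seed: Time.now.to_i)
      @model = model         # path such as "models/7B/ggml-model-q4_0.bin"
      @n_predict = n_predict # moved here from #predict in 0.2.1
      @seed = seed           # RNG seed
    end

    # As of 0.2.1, predict takes only the prompt string.
    def predict(prompt)
      "stub output for #{prompt.inspect} (up to #{@n_predict} tokens)"
    end
  end
end

m = Llama::Model.new('models/7B/ggml-model-q4_0.bin', n_predict: 16, seed: 42)
puts m.predict('hello world')
```

Note the shape of the change: `n_predict` is fixed once at construction rather than per `predict` call, so existing callers passing `n_predict:` to `predict` need updating when upgrading to 0.2.1.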
data/lib/llama/version.rb CHANGED
@@ -1,3 +1,3 @@
  module Llama
- VERSION = '0.2.0'.freeze
+ VERSION = '0.2.1'.freeze
  end
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: llama-rb
  version: !ruby/object:Gem::Version
- version: 0.2.0
+ version: 0.2.1
  platform: ruby
  authors:
  - zfletch