llm_memory 0.1.14 → 0.1.15

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 1ddc205359564a6863d027d91243d6da9cc2079a8965a89c3e0dde0fe13c32cb
- data.tar.gz: 23a4980aa97684abcf9893c5598446243ec5b9c198c4c083a11d7cd25884c34c
+ metadata.gz: 54df42766547d29c56e8396eaa9e83182eaee3ec48798ad9ad68a981ba3d6243
+ data.tar.gz: 8082ab64110019e9d2313046029737ed5447004f999e8ad4a90891af960ffbc7
  SHA512:
- metadata.gz: 03df070d8832990b091c5c4814632e23c832ba595eceaaaa769a6cd06823d302783bd5b1865b6b5531cd13d07287abeb9ed7a31296d73efd6a3031738502074b
- data.tar.gz: 9b46d3197283516e44aec1de34799a0ae50b9232f00b6c55f462b8a529c8de9e38d2b6971b0eb9d09056a03cb4f6978787783921deca018863b8437accf37dc2
+ metadata.gz: 500f5f1c58ea2507b8b40bb5aa0b6b3c168559f070d9abfb59e183befeb48c2b5e57de6e85a6b7af7e126296734ca91db042108c2c2b25ab2722ea71809e632f
+ data.tar.gz: f33e32af701b590a507cde6571c2717d0b762025b164ff396032830f2d1cd855d221977dec9eb539021ee2c6e6305598d2423799b21bf74ab3ca6852f8b189b3
data/.ruby-version ADDED
@@ -0,0 +1 @@
+ 3.2.3
data/Gemfile.lock CHANGED
@@ -1,10 +1,10 @@
  PATH
    remote: .
    specs:
- llm_memory (0.1.14)
+ llm_memory (0.1.15)
      redis (~> 4.6.0)
      ruby-openai (~> 3.7.0)
- tokenizers (~> 0.3.3)
+ tokenizers (~> 0.6.1)
 
  GEM
    remote: https://rubygems.org/
@@ -87,8 +87,8 @@ GEM
    standard-performance (1.0.1)
      lint_roller (~> 1.0)
      rubocop-performance (~> 1.16.0)
- tokenizers (0.3.3-arm64-darwin)
- tokenizers (0.3.3-x86_64-linux)
+ tokenizers (0.6.1-arm64-darwin)
+ tokenizers (0.6.1-x86_64-linux)
    unicode-display_width (2.4.2)
    vcr (6.1.0)
    webmock (3.18.1)
@@ -98,6 +98,7 @@ GEM
 
  PLATFORMS
    arm64-darwin-22
+ arm64-darwin-23
    x86_64-linux
 
  DEPENDENCIES
@@ -115,4 +116,4 @@ DEPENDENCIES
    webmock (~> 3.18.1)
 
  BUNDLED WITH
- 2.4.6
+ 2.4.19
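The `tokenizers` bump above keeps the pessimistic (`~>`) constraint style. As a minimal sketch using the standard `Gem::Requirement` API, here is what `~> 0.6.1` actually permits:

```ruby
# "~> 0.6.1" allows >= 0.6.1 and < 0.7.0: patch releases are
# accepted automatically, but the next minor release is not.
req = Gem::Requirement.new("~> 0.6.1")

puts req.satisfied_by?(Gem::Version.new("0.6.1")) # true
puts req.satisfied_by?(Gem::Version.new("0.6.9")) # true
puts req.satisfied_by?(Gem::Version.new("0.7.0")) # false
```

This is why moving from `~> 0.3.3` to `~> 0.6.1` requires a gemspec change rather than just a `bundle update`.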
data/docs/how_to_use_ragas.md ADDED
@@ -0,0 +1,118 @@
+ At the time of writing this README (Nov 20, 2023), we use RAGAS to evaluate RAG. To use ragas from llm_memory, follow these steps.
+
+ #### 1. Prepare test data in the following format
+
+ ```ruby
+ test_data = [
+   {
+     question: "Is it better to stay with my child during lessons, or should they be left alone?",
+     contexts: [],
+     answer: "",
+     ground_truths: ["To improve English proficiency, we recommend that children take lessons by themselves as much as possible. However, if your child is unable to concentrate, we ask that parents support their child. If you need to facilitate smooth progress in the lesson (for instance, by informing us of your child's interests to help with teacher-student conversations), please consult with the Student Relations department."]
+   },
+   # Prepare 10 to 20 question/ground_truths pairs (note that ground_truths is not mandatory)
+ ]
+ ```
+
+ #### 2. Use llm_memory to fill in `contexts` and `answer`
+
+ ```ruby
+ test_data = [...]
+
+ hippocampus = LlmMemory::Hippocampus.new(index_name: "sr_emails")
+
+ prompt = <<-TEMPLATE
+ # YOUR PROMPT
+ <%= query_str %>
+
+ Use the following info to respond
+ ---------------------
+ <% related_docs.each do |doc| %>
+ <%= doc[:content] %>
+ <% end %>
+ ---------------------
+ TEMPLATE
+
+ test_data.each do |td|
+   related_docs = hippocampus.query(td[:question], limit: 10)
+   td[:contexts] = related_docs
+
+   broca = LlmMemory::Broca.new(
+     prompt: prompt,
+     model: "gpt-4",
+     temperature: 0,
+     max_token: 8192
+   )
+   message = broca.respond(query_str: td[:question], related_docs: related_docs)
+   td[:answer] = message
+ end
+ ```
+
+ #### 3. Dump the data to a file such as JSON or YAML
+
+ ```ruby
+ File.open("out.yaml", "w") { |f| f.write(test_data.to_yaml) }
+ system('curl -F "file=@out.yaml" https://file.io') # e.g., run on Heroku to upload the file
+ ```
+
+ #### 4. Install Python/ragas
+
+ Set up a Python environment first if you don't have one yet.
+
+ ```sh
+ $ pip install ragas
+ ```
+
+ #### 5. Create a Python script to run the evaluation
+
+ ```python
+ import yaml
+ from datasets import Dataset
+ from ragas import evaluate
+
+ # Load the YAML file
+ file_path = 'out.yaml'
+ with open(file_path, 'r') as file:
+     yaml_data = yaml.safe_load(file)
+
+ # Parse the data into a dictionary with a list for each column
+ def parse_data(yaml_data):
+     # Initialize a dictionary with keys and empty lists
+     parsed_data = {
+         'question': [],
+         'contexts': [],
+         'answer': [],
+         'ground_truths': []
+     }
+
+     # Populate the dictionary (Ruby symbol keys arrive as ":question" etc.)
+     for item in yaml_data:
+         parsed_data['question'].append(item[':question'])
+         parsed_data['contexts'].append([context[':content'] for context in item[':contexts']])
+         parsed_data['answer'].append(item[':answer'])
+         parsed_data['ground_truths'].append(item[':ground_truths'])
+
+     return parsed_data
+
+ # Convert the YAML data to the required format
+ parsed_yaml_data = parse_data(yaml_data)
+
+ # Create the dataset
+ dataset = Dataset.from_dict(parsed_yaml_data)
+ for row in dataset:
+     print(row)
+
+ # Run the RAGAS evaluation
+ results = evaluate(dataset)
+ print(results)
+ ```
+
+ #### 6. Execute it
+
+ Name the previous script `evaluate.py` and run it with the environment variables set.
+
+ ```sh
+ $ RAGAS_DO_NOT_TRACK=true OPENAI_API_KEY=YOUR_KEY python evaluate.py
+ ...
+ {'answer_relevancy': 0.7575, 'context_precision': 0.5574, 'faithfulness': 0.7167, 'context_recall': 0.1250}
+ ```
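A side note on why the Python script above indexes with `item[':question']` rather than `item['question']`: Ruby's `to_yaml` serializes symbol hash keys with a leading colon, and PyYAML's `safe_load` reads those back as plain strings. A minimal sketch of the Ruby side:

```ruby
require "yaml"

# Symbol keys such as :question are emitted as ":question", so the
# Python consumer must look them up as colon-prefixed strings.
doc = { question: "Hi?", contexts: [] }.to_yaml
puts doc
# ---
# :question: Hi?
# :contexts: []
```

Using string keys in `test_data` (e.g. `"question" => ...`) would let the Python side drop the colons, at the cost of less idiomatic Ruby.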
data/lib/llm_memory/version.rb CHANGED
@@ -1,5 +1,5 @@
  # frozen_string_literal: true
 
  module LlmMemory
- VERSION = "0.1.14"
+ VERSION = "0.1.15"
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: llm_memory
  version: !ruby/object:Gem::Version
- version: 0.1.14
+ version: 0.1.15
  platform: ruby
  authors:
  - Shohei Kameda
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2023-08-01 00:00:00.000000000 Z
+ date: 2025-09-19 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: tokenizers
@@ -16,14 +16,14 @@ dependencies:
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
- version: 0.3.3
+ version: 0.6.1
    type: :runtime
    prerelease: false
    version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
- version: 0.3.3
+ version: 0.6.1
  - !ruby/object:Gem::Dependency
    name: ruby-openai
    requirement: !ruby/object:Gem::Requirement
@@ -63,6 +63,7 @@ extensions: []
  extra_rdoc_files: []
  files:
  - ".rspec"
+ - ".ruby-version"
  - ".standard.yml"
  - ".vscode/settings.json"
  - CHANGELOG.md
@@ -72,6 +73,7 @@ files:
  - LICENSE.txt
  - README.md
  - Rakefile
+ - docs/how_to_use_ragas.md
  - lib/llm_memory.rb
  - lib/llm_memory/broca.rb
  - lib/llm_memory/configuration.rb
@@ -85,7 +87,6 @@ files:
  - lib/llm_memory/stores/redis_store.rb
  - lib/llm_memory/version.rb
  - lib/llm_memory/wernicke.rb
- - llm_memory.gemspec
  - sig/llm_memory.rbs
  homepage: https://github.com/shohey1226/llm_memory
  licenses:
@@ -109,7 +110,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
    version: '0'
  requirements: []
- rubygems_version: 3.4.6
+ rubygems_version: 3.4.19
  signing_key:
  specification_version: 4
  summary: A Ruby Gem for LLMs like ChatGPT to have memory using in-context learning
data/llm_memory.gemspec DELETED
@@ -1,40 +0,0 @@
- # frozen_string_literal: true
-
- require_relative "lib/llm_memory/version"
-
- Gem::Specification.new do |spec|
-   spec.name = "llm_memory"
-   spec.version = LlmMemory::VERSION
-   spec.authors = ["Shohei Kameda"]
-   spec.email = ["shoheik@cpan.org"]
-
-   spec.summary = "A Ruby Gem for LLMs like ChatGPT to have memory using in-context learning"
-   spec.description = "LLM Memory is a Ruby gem designed to provide large language models (LLMs) like ChatGPT with memory using in-context learning. This enables better integration with systems such as Rails and web services while providing a more user-friendly and abstract interface based on brain terms."
-   spec.homepage = "https://github.com/shohey1226/llm_memory"
-   spec.license = "MIT"
-   spec.required_ruby_version = ">= 2.6.0"
-
-   spec.metadata["homepage_uri"] = spec.homepage
-   spec.metadata["source_code_uri"] = "https://github.com/shohey1226/llm_memory"
-   spec.metadata["changelog_uri"] = "https://github.com/shohey1226/llm_memory/CHANGELOG.md"
-
-   # Specify which files should be added to the gem when it is released.
-   # The `git ls-files -z` loads the files in the RubyGem that have been added into git.
-   spec.files = Dir.chdir(__dir__) do
-     `git ls-files -z`.split("\x0").reject do |f|
-       (f == __FILE__) || f.match(%r{\A(?:(?:bin|test|spec|features)/|\.(?:git|circleci)|appveyor)})
-     end
-   end
-   spec.bindir = "exe"
-   spec.executables = spec.files.grep(%r{\Aexe/}) { |f| File.basename(f) }
-   spec.require_paths = ["lib"]
-
-   # Uncomment to register a new dependency of your gem
-   # spec.add_dependency "example-gem", "~> 1.0"
-   spec.add_dependency "tokenizers", "~> 0.3.3"
-   spec.add_dependency "ruby-openai", "~> 3.7.0"
-   spec.add_dependency "redis", "~> 4.6.0"
-
-   # For more information and examples about making a new gem, check out our
-   # guide at: https://bundler.io/guides/creating_gem.html
- end
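The deleted gemspec's `spec.files` block selects git-tracked files and rejects development-only paths; a minimal sketch of what its reject pattern matches (using the same regexp verbatim):

```ruby
# Same regexp as in the gemspec: rejects paths under bin/, test/,
# spec/, features/, plus .git/.circleci metadata and appveyor files.
pattern = %r{\A(?:(?:bin|test|spec|features)/|\.(?:git|circleci)|appveyor)}

puts "spec/llm_memory_spec.rb".match?(pattern)  # true  (excluded from the gem)
puts ".circleci/config.yml".match?(pattern)     # true  (excluded)
puts "lib/llm_memory/broca.rb".match?(pattern)  # false (shipped)
```

Note the anchored `\A`, so only top-level directories with those names are excluded, not e.g. `lib/test_helpers.rb`.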