llm_memory 0.1.0

checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+   metadata.gz: c3f9540b8e3ca30d11eacd2e3100b4aa4ef985d041017b2b09bd5fad52d2f776
+   data.tar.gz: 9998bca0ea546d93b3a051ac7a6c62a9597ae2da9e85d8d5b36aab4212c59641
+ SHA512:
+   metadata.gz: 9923ca576cf858eac89facf8da2f88d6284f7577366b252bb4e3ab197879a78035dfacba62e7890ab097dc193ee922f2b6abf16b321d4f1b09e6f1d30e4e9184
+   data.tar.gz: 44ff089feebead7cc9e1ef8b555212d20a938fcd6a4c72c13cfb55ae3029ebf6699722ea42e977ac108375e5d91c83aad450177ec2b9ad1037f87a83b83eadf7
data/.rspec ADDED
@@ -0,0 +1,3 @@
+ --format documentation
+ --color
+ --require spec_helper
data/.standard.yml ADDED
@@ -0,0 +1,3 @@
+ # For available configuration options, see:
+ # https://github.com/testdouble/standard
+ ruby_version: 2.6
data/.vscode/settings.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "editor.formatOnSave": true,
+   "[ruby]": {
+     "editor.defaultFormatter": "testdouble.vscode-standard-ruby"
+   },
+   "standardRuby.autofix": true
+ }
data/CHANGELOG.md ADDED
@@ -0,0 +1,5 @@
+ ## [Unreleased]
+
+ ## [0.1.0] - 2023-05-01
+
+ - Initial release
data/CODE_OF_CONDUCT.md ADDED
@@ -0,0 +1,84 @@
+ # Contributor Covenant Code of Conduct
+
+ ## Our Pledge
+
+ We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.
+
+ We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.
+
+ ## Our Standards
+
+ Examples of behavior that contributes to a positive environment for our community include:
+
+ * Demonstrating empathy and kindness toward other people
+ * Being respectful of differing opinions, viewpoints, and experiences
+ * Giving and gracefully accepting constructive feedback
+ * Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
+ * Focusing on what is best not just for us as individuals, but for the overall community
+
+ Examples of unacceptable behavior include:
+
+ * The use of sexualized language or imagery, and sexual attention or
+   advances of any kind
+ * Trolling, insulting or derogatory comments, and personal or political attacks
+ * Public or private harassment
+ * Publishing others' private information, such as a physical or email
+   address, without their explicit permission
+ * Other conduct which could reasonably be considered inappropriate in a
+   professional setting
+
+ ## Enforcement Responsibilities
+
+ Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.
+
+ Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.
+
+ ## Scope
+
+ This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
+
+ ## Enforcement
+
+ Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at shoheik@cpan.org. All complaints will be reviewed and investigated promptly and fairly.
+
+ All community leaders are obligated to respect the privacy and security of the reporter of any incident.
+
+ ## Enforcement Guidelines
+
+ Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:
+
+ ### 1. Correction
+
+ **Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.
+
+ **Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.
+
+ ### 2. Warning
+
+ **Community Impact**: A violation through a single incident or series of actions.
+
+ **Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.
+
+ ### 3. Temporary Ban
+
+ **Community Impact**: A serious violation of community standards, including sustained inappropriate behavior.
+
+ **Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.
+
+ ### 4. Permanent Ban
+
+ **Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.
+
+ **Consequence**: A permanent ban from any sort of public interaction within the community.
+
+ ## Attribution
+
+ This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.0,
+ available at https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
+
+ Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/diversity).
+
+ [homepage]: https://www.contributor-covenant.org
+
+ For answers to common questions about this code of conduct, see the FAQ at
+ https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.
data/Gemfile ADDED
@@ -0,0 +1,15 @@
+ # frozen_string_literal: true
+
+ source "https://rubygems.org"
+
+ # Specify your gem's dependencies in llm_memory.gemspec
+ gemspec
+
+ gem "rake", "~> 13.0"
+ gem "rspec", "~> 3.0"
+ gem "standard", "~> 1.3"
+ gem "vcr", "~> 6.1.0"
+ gem "webmock", "~> 3.18.1"
+ gem "ruby-openai"
+ gem "tiktoken_ruby"
+ gem "redis"
data/Gemfile.lock ADDED
@@ -0,0 +1,107 @@
+ PATH
+   remote: .
+   specs:
+     llm_memory (0.1.0)
+
+ GEM
+   remote: https://rubygems.org/
+   specs:
+     addressable (2.8.4)
+       public_suffix (>= 2.0.2, < 6.0)
+     ast (2.4.2)
+     connection_pool (2.4.0)
+     crack (0.4.5)
+       rexml
+     diff-lcs (1.5.0)
+     faraday (2.7.4)
+       faraday-net_http (>= 2.0, < 3.1)
+       ruby2_keywords (>= 0.0.4)
+     faraday-multipart (1.0.4)
+       multipart-post (~> 2)
+     faraday-net_http (3.0.2)
+     hashdiff (1.0.1)
+     json (2.6.3)
+     language_server-protocol (3.17.0.3)
+     lint_roller (1.0.0)
+     multipart-post (2.3.0)
+     parallel (1.23.0)
+     parser (3.2.2.1)
+       ast (~> 2.4.1)
+     public_suffix (5.0.1)
+     rainbow (3.1.1)
+     rake (13.0.6)
+     redis (5.0.6)
+       redis-client (>= 0.9.0)
+     redis-client (0.14.1)
+       connection_pool
+     regexp_parser (2.8.0)
+     rexml (3.2.5)
+     rspec (3.12.0)
+       rspec-core (~> 3.12.0)
+       rspec-expectations (~> 3.12.0)
+       rspec-mocks (~> 3.12.0)
+     rspec-core (3.12.2)
+       rspec-support (~> 3.12.0)
+     rspec-expectations (3.12.3)
+       diff-lcs (>= 1.2.0, < 2.0)
+       rspec-support (~> 3.12.0)
+     rspec-mocks (3.12.5)
+       diff-lcs (>= 1.2.0, < 2.0)
+       rspec-support (~> 3.12.0)
+     rspec-support (3.12.0)
+     rubocop (1.50.2)
+       json (~> 2.3)
+       parallel (~> 1.10)
+       parser (>= 3.2.0.0)
+       rainbow (>= 2.2.2, < 4.0)
+       regexp_parser (>= 1.8, < 3.0)
+       rexml (>= 3.2.5, < 4.0)
+       rubocop-ast (>= 1.28.0, < 2.0)
+       ruby-progressbar (~> 1.7)
+       unicode-display_width (>= 2.4.0, < 3.0)
+     rubocop-ast (1.28.0)
+       parser (>= 3.2.1.0)
+     rubocop-performance (1.16.0)
+       rubocop (>= 1.7.0, < 2.0)
+       rubocop-ast (>= 0.4.0)
+     ruby-openai (4.0.0)
+       faraday (>= 1)
+       faraday-multipart (>= 1)
+     ruby-progressbar (1.13.0)
+     ruby2_keywords (0.0.5)
+     standard (1.28.0)
+       language_server-protocol (~> 3.17.0.2)
+       lint_roller (~> 1.0)
+       rubocop (~> 1.50.2)
+       standard-custom (~> 1.0.0)
+       standard-performance (~> 1.0.1)
+     standard-custom (1.0.0)
+       lint_roller (~> 1.0)
+     standard-performance (1.0.1)
+       lint_roller (~> 1.0)
+       rubocop-performance (~> 1.16.0)
+     tiktoken_ruby (0.0.4-arm64-darwin)
+     unicode-display_width (2.4.2)
+     vcr (6.1.0)
+     webmock (3.18.1)
+       addressable (>= 2.8.0)
+       crack (>= 0.3.2)
+       hashdiff (>= 0.4.0, < 2.0.0)
+
+ PLATFORMS
+   arm64-darwin-22
+   x86_64-linux
+
+ DEPENDENCIES
+   llm_memory!
+   rake (~> 13.0)
+   redis
+   rspec (~> 3.0)
+   ruby-openai
+   standard (~> 1.3)
+   tiktoken_ruby
+   vcr (~> 6.1.0)
+   webmock (~> 3.18.1)
+
+ BUNDLED WITH
+    2.4.6
data/LICENSE.txt ADDED
@@ -0,0 +1,21 @@
+ The MIT License (MIT)
+
+ Copyright (c) 2023 Shohei Kameda
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in
+ all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ THE SOFTWARE.
data/README.md ADDED
@@ -0,0 +1,124 @@
+ # 🧠 LLM Memory 🌊🐴
+
+ LLM Memory is a Ruby gem designed to provide large language models (LLMs) like ChatGPT with memory using in-context learning.
+ This enables better integration with systems such as Rails and web services while providing a more user-friendly and abstract interface based on brain terms.
+
+ ## Key Features
+
+ - In-context learning through input prompt context insertion
+ - Data connectors for various data sources
+ - Inspired by the Python library, [LlamaIndex](https://github.com/jerryjliu/llama_index)
+ - Focus on integration with existing systems
+ - Easy-to-understand abstraction using brain-related terms
+ - Plugin architecture for custom loader creation and extending LLM support
+
+ ## LLM Memory Components
+
+ 1. LlmMemory::Wernicke: Responsible for loading external data (currently from files). More loader types are planned for future development.
+
+ > Wernicke's area in the brain is involved in the comprehension of written and spoken language.
+
+ 2. LlmMemory::Hippocampus: Handles interaction with vector databases to retrieve information relevant to a query. Currently, Redis with the RediSearch module is used as the vector database. (Note that RediSearch is a proprietary module, available on Redis Cloud, which offers a free plan.) Redis was chosen because it is commonly used and easy to integrate with web services.
+
+ > The hippocampus in the brain plays an important role in the consolidation of information from short-term memory to long-term memory.
+
+ 3. LlmMemory::Broca: Responds to queries using memories provided by the Hippocampus component. ERB is used for prompt templates, and a variety of templates can be found online (e.g., in [LangChain Hub](https://github.com/hwchase17/langchain-hub#-prompts)).
+
+ > Broca's area in the brain is also known as the motor speech area.
+
+ ## Installation
31
+ Install the gem and add it to the application's Gemfile by executing:
+
+     $ bundle add llm_memory
+
+ If bundler is not being used to manage dependencies, install the gem by executing:
+
+     $ gem install llm_memory
+
+ ### Setup
+
+ Set the environment variables `OPENAI_ACCESS_TOKEN` and `REDISCLOUD_URL`,
+ or set them in an initializer:
+
+ ```ruby
+ LlmMemory.configure do |c|
+   c.openai_access_token = "xxxxx"
+   c.redis_url = "redis://xxxx:6379"
+ end
+ ```
+
53
+ ## Usage
+
+ To use LLM Memory, follow these steps:
+
+ 1. Install the gem: `gem install llm_memory`
+ 2. Set up Redis with the RediSearch module enabled
+ 3. Configure LLM Memory to connect to your Redis instance
+ 4. Use LlmMemory::Wernicke to load data from your external sources
+ 5. Use LlmMemory::Hippocampus to search for relevant information based on user queries
+ 6. Create and use ERB templates with LlmMemory::Broca to generate responses based on the information retrieved
+
+ ```ruby
+ docs = LlmMemory::Wernicke.load(:file, "/tmp/a_directory")
+ # docs is just an array of hashes.
+ # You don't have to use the load method;
+ # you can create your own hashes with content and optional metadata:
+ # docs = [{
+ #   content: "Hi there",
+ #   metadata: {
+ #     file_name: "a.txt"
+ #   }
+ # }, ...]
+
+ hippocampus = LlmMemory::Hippocampus.new
+ hippocampus.memorize(docs)
+
+ query_str = "What is my name?"
+ related_docs = hippocampus.query(query_str, limit: 3)
+ # [{
+ #   vector_score: "0.192698478699",
+ #   content: "My name is Mike",
+ #   metadata: { ... }
+ # }, ...]
+
+ # ERB
+ template = <<-TEMPLATE
+ Context information is below.
+ ---------------------
+ <% related_docs.each do |doc| %>
+ <%= doc[:content] %>
+ file: <%= doc[:metadata][:file_name] %>
+
+ <% end %>
+ ---------------------
+ Given the context information and not prior knowledge,
+ answer the question: <%= query_str %>
+ TEMPLATE
+
+ broca = LlmMemory::Broca.new(prompt: template, model: "gpt-3.5-turbo")
+ message = broca.respond(query_str: query_str, related_docs: related_docs)
+
+ # ...
+ query_str2 = "How are you?"
+ related_docs = hippocampus.query(query_str2, limit: 3)
+ message2 = broca.respond(query_str: query_str2, related_docs: related_docs)
+ ```
109
+
+ ## Development
+
+ After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
+
+ To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and the created tag, and push the `.gem` file to [rubygems.org](https://rubygems.org).
+
+ ## Contributing
+
+ Bug reports and pull requests are welcome on GitHub at https://github.com/shohey1226/llm_memory. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [code of conduct](https://github.com/shohey1226/llm_memory/blob/master/CODE_OF_CONDUCT.md).
+
+ ## License
+
+ The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
+
+ ## Code of Conduct
+
+ Everyone interacting in the LlmMemory project's codebases, issue trackers, chat rooms and mailing lists is expected to follow the [code of conduct](https://github.com/shohey1226/llm_memory/blob/master/CODE_OF_CONDUCT.md).
data/Rakefile ADDED
@@ -0,0 +1,10 @@
+ # frozen_string_literal: true
+
+ require "bundler/gem_tasks"
+ require "rspec/core/rake_task"
+
+ RSpec::Core::RakeTask.new(:spec)
+
+ require "standard/rake"
+
+ task default: %i[spec standard]
data/lib/llm_memory/broca.rb ADDED
@@ -0,0 +1,66 @@
+ require "erb"
+ require "tiktoken_ruby"
+ require_relative "llms/openai"
+
+ module LlmMemory
+   class Broca
+     include Llms::Openai
+     attr_accessor :messages
+
+     def initialize(
+       prompt:,
+       model: "gpt-3.5-turbo",
+       temperature: 0.7,
+       max_token: 4096
+     )
+       @prompt = prompt
+       @model = model
+       @messages = []
+       @temperature = temperature
+       @max_token = max_token
+     end
+
+     def respond(*args)
+       final_prompt = generate_prompt(*args)
+       @messages.push({role: "user", content: final_prompt})
+       adjust_token_count
+       response = client.chat(
+         parameters: {
+           model: @model,
+           messages: @messages,
+           temperature: @temperature
+         }
+       )
+       response_content = response.dig("choices", 0, "message", "content")
+       @messages.push({role: "assistant", content: response_content})
+       response_content
+     end
+
+     def generate_prompt(*args)
+       merged_args = args.reduce(:merge)
+       erb = ERB.new(@prompt)
+       erb.result_with_hash(merged_args)
+     end
+
+     # Keep only the most recent messages whose combined token count
+     # fits within @max_token.
+     def adjust_token_count
+       count = 0
+       new_messages = []
+       @messages.reverse_each do |message|
+         encoded = tokenizer.encode(message[:content])
+         if count < @max_token
+           count += encoded.length
+           new_messages.push(message)
+         else
+           break
+         end
+       end
+       @messages = new_messages.reverse
+     end
+
+     def tokenizer
+       @tokenizer ||= Tiktoken.encoding_for_model("gpt-4")
+     end
+   end
+ end
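A minimal usage sketch of Broca's conversational memory. The prompt template and question names below are illustrative, and `OPENAI_ACCESS_TOKEN` must be configured:

```ruby
require "llm_memory"

broca = LlmMemory::Broca.new(prompt: "Answer briefly: <%= question %>")

# respond renders the ERB template with the given hash, appends the result
# to broca.messages, and trims the history to the trailing window that fits
# under max_token, so follow-up calls still see earlier turns.
puts broca.respond(question: "What is Redis?")
puts broca.respond(question: "What is it commonly used for?")
```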
data/lib/llm_memory/configuration.rb ADDED
@@ -0,0 +1,11 @@
+ module LlmMemory
+   class Configuration
+     attr_accessor :openai_access_token, :openai_organization_id, :redis_url
+
+     def initialize
+       @openai_access_token = ENV["OPENAI_ACCESS_TOKEN"]
+       @openai_organization_id = nil
+       @redis_url = ENV["REDISCLOUD_URL"] || "redis://localhost:6379"
+     end
+   end
+ end
data/lib/llm_memory/embedding.rb ADDED
@@ -0,0 +1,30 @@
+ # lib/llm_memory/embedding.rb
+ module LlmMemory
+   module Embedding
+     def self.included(base)
+       base.extend(ClassMethods)
+     end
+
+     module ClassMethods
+       def register_embedding(name)
+         LlmMemory::EmbeddingManager.register_embedding(name, self)
+       end
+     end
+
+     def embed_document(text)
+       raise NotImplementedError, "Each Embedding must implement the 'embed_document' method."
+     end
+   end
+
+   class EmbeddingManager
+     @embeddings = {}
+
+     def self.register_embedding(name, klass)
+       @embeddings[name] = klass
+     end
+
+     def self.embeddings
+       @embeddings
+     end
+   end
+ end
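The `register_embedding` hook above is what makes embeddings pluggable: any class that includes `LlmMemory::Embedding` and registers a name can be selected with `Hippocampus.new(embedding_name: ...)`. A minimal sketch with a hypothetical `:dummy` embedding (the class and its fixed vector are illustrative only, not part of the gem):

```ruby
require "llm_memory"

module LlmMemory
  module Embeddings
    # Hypothetical embedding that returns a fixed zero vector;
    # a real plugin would call an embedding model here.
    class Dummy
      include LlmMemory::Embedding

      register_embedding :dummy

      def embed_document(text)
        Array.new(1536, 0.0) # match the index's DIM (1536 for ada-002)
      end
    end
  end
end

# hippocampus = LlmMemory::Hippocampus.new(embedding_name: :dummy)
```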
data/lib/llm_memory/embeddings/openai.rb ADDED
@@ -0,0 +1,37 @@
+ require_relative "../embedding"
+ require_relative "../llms/openai"
+
+ module LlmMemory
+   module Embeddings
+     class Openai
+       include LlmMemory::Embedding
+       include Llms::Openai
+
+       register_embedding :openai
+
+       def embed_documents(texts, model: "text-embedding-ada-002")
+         embedding_list = []
+         texts.each do |txt|
+           res = client.embeddings(
+             parameters: {
+               model: model,
+               input: txt
+             }
+           )
+           embedding_list.push(res["data"][0]["embedding"])
+         end
+         embedding_list
+       end
+
+       def embed_document(text, model: "text-embedding-ada-002")
+         res = client.embeddings(
+           parameters: {
+             model: model,
+             input: text
+           }
+         )
+         res["data"][0]["embedding"]
+       end
+     end
+   end
+ end
data/lib/llm_memory/hippocampus.rb ADDED
@@ -0,0 +1,96 @@
+ require "json"
+ require_relative "store"
+ require_relative "stores/redis_store"
+
+ require_relative "embedding"
+ require_relative "embeddings/openai"
+
+ module LlmMemory
+   class Hippocampus
+     def initialize(
+       embedding_name: :openai,
+       chunk_size: 1024,
+       chunk_overlap: 50,
+       store_name: :redis,
+       index_name: "llm_memory"
+     )
+       embedding_class = EmbeddingManager.embeddings[embedding_name]
+       raise "Embedding '#{embedding_name}' not found." unless embedding_class
+       @embedding_instance = embedding_class.new
+
+       store_class = StoreManager.stores[store_name]
+       raise "Store '#{store_name}' not found." unless store_class
+       @store = store_class.new(index_name: index_name)
+
+       # word count, not character count
+       @chunk_size = chunk_size
+       @chunk_overlap = chunk_overlap
+     end
+
+     def memorize(docs)
+       docs = make_chunks(docs)
+       docs = add_vectors(docs)
+       @store.create_index unless @store.index_exists?
+       @store.add(data: docs)
+     end
+
+     def query(query_str, limit: 3)
+       vector = @embedding_instance.embed_document(query_str)
+       response_list = @store.search(query: vector, k: limit)
+       response_list.shift # the first element is the result count
+       # now: [redis_key1, [field, value, ...], redis_key2, ...]
+       result = response_list.each_slice(2).to_h.values.map { |v|
+         v.each_slice(2).to_h.transform_keys(&:to_sym)
+       }
+       result.each do |item|
+         item[:metadata] = JSON.parse(item[:metadata])
+       end
+       result
+     end
+
+     def forgot_all
+       @store.drop_index
+     end
+
+     def add_vectors(docs)
+       # embed each document and attach its vector
+       result = []
+       docs.each do |doc|
+         content = doc[:content]
+         metadata = doc[:metadata]
+         vector = @embedding_instance.embed_document(content)
+         result.push({
+           content: content,
+           metadata: metadata,
+           vector: vector
+         })
+       end
+       result
+     end
+
+     def make_chunks(docs)
+       result = []
+       docs.each do |item|
+         content = item[:content]
+         metadata = item[:metadata]
+         words = content.split
+
+         if words.length > @chunk_size
+           start_index = 0
+
+           while start_index < words.length
+             end_index = [start_index + @chunk_size, words.length].min
+             chunk_words = words[start_index...end_index]
+             chunk = chunk_words.join(" ")
+             result << {content: chunk, metadata: metadata}
+
+             start_index += @chunk_size - @chunk_overlap # advance by the stride to create an overlap
+           end
+         else
+           result << {content: content, metadata: metadata}
+         end
+       end
+       result
+     end
+   end
+ end
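`make_chunks` splits on whitespace, so `chunk_size` and `chunk_overlap` count words: each new chunk starts `chunk_size - chunk_overlap` words after the previous one. A small standalone illustration of the same loop, with toy values chosen for readability:

```ruby
# With chunk_size 5 and chunk_overlap 2 the stride is 3 words, so a
# 10-word document yields chunks starting at words 0, 3, 6, and 9.
words = (1..10).map { |i| "w#{i}" }
chunk_size = 5
chunk_overlap = 2

chunks = []
start_index = 0
while start_index < words.length
  chunks << words[start_index, chunk_size].join(" ")
  start_index += chunk_size - chunk_overlap
end

p chunks
# => ["w1 w2 w3 w4 w5", "w4 w5 w6 w7 w8", "w7 w8 w9 w10", "w10"]
```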
data/lib/llm_memory/llms/openai.rb ADDED
@@ -0,0 +1,14 @@
+ require "openai"
+ require "llm_memory"
+
+ module LlmMemory
+   module Llms
+     module Openai
+       def client
+         @client ||= OpenAI::Client.new(
+           access_token: LlmMemory.configuration.openai_access_token
+         )
+       end
+     end
+   end
+ end
data/lib/llm_memory/loader.rb ADDED
@@ -0,0 +1,30 @@
+ # lib/llm_memory/loader.rb
+ module LlmMemory
+   module Loader
+     def self.included(base)
+       base.extend(ClassMethods)
+     end
+
+     module ClassMethods
+       def register_loader(name)
+         LlmMemory::LoaderManager.register_loader(name, self)
+       end
+     end
+
+     def load
+       raise NotImplementedError, "Each loader must implement the 'load' method."
+     end
+   end
+
+   class LoaderManager
+     @loaders = {}
+
+     def self.register_loader(name, klass)
+       @loaders[name] = klass
+     end
+
+     def self.loaders
+       @loaders
+     end
+   end
+ end
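Loaders use the same registration pattern: include `LlmMemory::Loader`, call `register_loader`, and the loader becomes reachable through `LlmMemory::Wernicke.load`. A sketch with a hypothetical `:string` loader (not part of the gem):

```ruby
require "llm_memory"

module LlmMemory
  # Hypothetical loader that wraps plain strings in the
  # {content:, metadata:} document shape Wernicke returns.
  class StringLoader
    include Loader

    register_loader :string

    def load(strings)
      strings.map { |s| {content: s, metadata: {}} }
    end
  end
end

docs = LlmMemory::Wernicke.load(:string, ["Hi there"])
# => [{content: "Hi there", metadata: {}}]
```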
data/lib/llm_memory/loaders/file_loader.rb ADDED
@@ -0,0 +1,29 @@
+ require "find"
+ require_relative "../loader"
+
+ module LlmMemory
+   class FileLoader
+     include Loader
+
+     register_loader :file
+
+     def load(directory_path)
+       files_array = []
+       Find.find(directory_path) do |file_path|
+         next if File.directory?(file_path)
+
+         file_name = File.basename(file_path)
+         file_content = File.read(file_path)
+
+         files_array << {
+           content: file_content,
+           metadata: {
+             file_name: file_name
+           }
+         }
+       end
+
+       files_array
+     end
+   end
+ end
data/lib/llm_memory/store.rb ADDED
@@ -0,0 +1,42 @@
+ # lib/llm_memory/store.rb
+ module LlmMemory
+   module Store
+     def self.included(base)
+       base.extend(ClassMethods)
+     end
+
+     module ClassMethods
+       def register_store(name)
+         LlmMemory::StoreManager.register_store(name, self)
+       end
+     end
+
+     def create_index
+       raise NotImplementedError, "Each store must implement the 'create_index' method."
+     end
+
+     def drop_index
+       raise NotImplementedError, "Each store must implement the 'drop_index' method."
+     end
+
+     def add
+       raise NotImplementedError, "Each store must implement the 'add' method."
+     end
+
+     def search
+       raise NotImplementedError, "Each store must implement the 'search' method."
+     end
+   end
+
+   class StoreManager
+     @stores = {}
+
+     def self.register_store(name, klass)
+       @stores[name] = klass
+     end
+
+     def self.stores
+       @stores
+     end
+   end
+ end
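Stores follow the same contract: a registered store is expected to provide `create_index`, `drop_index`, `add`, and `search` (plus `index_exists?`, which `Hippocampus#memorize` calls). A skeleton of a hypothetical alternative store, illustrative only; note that `Hippocampus#query` currently parses the raw RediSearch reply, so a drop-in replacement would need to return the same shape:

```ruby
require "llm_memory"

module LlmMemory
  # Hypothetical in-memory store skeleton (not part of the gem).
  class MemoryStore
    include Store

    register_store :memory

    def initialize(index_name: "llm_memory")
      @index_name = index_name
      @rows = {}
    end

    def index_exists?
      true
    end

    def create_index
    end

    def drop_index
      @rows.clear
    end

    def add(data: [])
      data.each_with_index { |d, i| @rows["#{@index_name}:#{i}"] = d }
    end

    def search(query: [], k: 3)
      # A real implementation would rank @rows by vector similarity and
      # return a RediSearch-shaped reply: [count, key, [field, value, ...]].
      raise NotImplementedError
    end
  end
end
```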
data/lib/llm_memory/stores/redis_store.rb ADDED
@@ -0,0 +1,124 @@
+ require "redis"
+ require "json"
+ require "csv"
+ require "securerandom"
+ require_relative "../store"
+
+ module LlmMemory
+   class RedisStore
+     include Store
+
+     register_store :redis
+
+     def initialize(
+       index_name: "llm_memory",
+       content_key: "content",
+       vector_key: "vector",
+       metadata_key: "metadata"
+     )
+       @index_name = index_name
+       @content_key = content_key
+       @vector_key = vector_key
+       @metadata_key = metadata_key
+       @client = Redis.new(url: LlmMemory.configuration.redis_url)
+     end
+
+     def info
+       @client.call(["INFO"])
+     end
+
+     def load_data(file_path)
+       CSV.read(file_path)
+     end
+
+     def list_indexes
+       @client.call("FT._LIST")
+     end
+
+     def index_exists?
+       begin
+         @client.call(["FT.INFO", @index_name])
+       rescue
+         return false
+       end
+       true
+     end
+
+     def drop_index
+       # DD deletes all document hashes
+       @client.call(["FT.DROPINDEX", @index_name, "DD"])
+     end
+
+     # dimension: 1536 for ada-002
+     def create_index(dim: 1536, distance_metric: "COSINE")
+       # Index schema mirrors LangChain's Redis vector store:
+       # content and metadata as TEXT, plus a FLAT FLOAT32 vector field.
+       command = [
+         "FT.CREATE", @index_name, "ON", "HASH",
+         "PREFIX", "1", "#{@index_name}:",
+         "SCHEMA",
+         @content_key, "TEXT",
+         @metadata_key, "TEXT",
+         @vector_key, "VECTOR", "FLAT", 6, "TYPE", "FLOAT32", "DIM", dim, "DISTANCE_METRIC", distance_metric
+       ]
+       @client.call(command)
+     end
+
+     # data = [{ content: "", vector: [], metadata: {} }]
+     def add(data: [])
+       result = {}
+       @client.pipelined do |pipeline|
+         data.each do |d|
+           key = "#{@index_name}:#{SecureRandom.uuid.delete("-")}"
+           meta_json = d[:metadata].nil? ? "" : d[:metadata].to_json # serialize
+           vector_value = d[:vector].map(&:to_f).pack("f*")
+           pipeline.hset(
+             key,
+             {
+               @content_key => d[:content],
+               @vector_key => vector_value,
+               @metadata_key => meta_json
+             }
+           )
+           result[key] = d[:content]
+         end
+       end
+       result
+     rescue => e
+       puts "Unexpected Error: #{e.message}"
+     end
+
+     def delete
+     end
+
+     def update
+     end
+
+     def search(query: [], k: 3)
+       packed_query = query.map(&:to_f).pack("f*")
+       command = [
+         "FT.SEARCH",
+         @index_name,
+         "*=>[KNN #{k} @vector $blob AS vector_score]",
+         "PARAMS",
+         2,
+         "blob",
+         packed_query,
+         "SORTBY",
+         "vector_score",
+         "ASC",
+         "LIMIT",
+         0,
+         k,
+         "RETURN",
+         3,
+         "vector_score",
+         @content_key,
+         @metadata_key,
+         "DIALECT",
+         2
+       ]
+       @client.call(command)
+     end
+   end
+ end
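`search` returns the raw RediSearch reply, which `Hippocampus#query` reshapes into an array of hashes. A sketch of that parsing against a hand-written reply (the key and values are illustrative):

```ruby
# FT.SEARCH replies with the result count followed by alternating
# document keys and [field, value, ...] arrays.
raw = [
  1,
  "llm_memory:abc123",
  ["vector_score", "0.19", "content", "My name is Mike", "metadata", "{}"]
]

raw.shift # drop the count, as Hippocampus#query does
docs = raw.each_slice(2).to_h.values.map do |fields|
  fields.each_slice(2).to_h.transform_keys(&:to_sym)
end

p docs
# => [{vector_score: "0.19", content: "My name is Mike", metadata: "{}"}]
```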
data/lib/llm_memory/version.rb ADDED
@@ -0,0 +1,5 @@
+ # frozen_string_literal: true
+
+ module LlmMemory
+   VERSION = "0.1.0"
+ end
data/lib/llm_memory/wernicke.rb ADDED
@@ -0,0 +1,14 @@
+ # loader
+ require_relative "loader"
+ require_relative "loaders/file_loader"
+
+ module LlmMemory
+   class Wernicke
+     def self.load(loader_name, *args)
+       loader_class = LoaderManager.loaders[loader_name]
+       raise "Loader '#{loader_name}' not found." unless loader_class
+       loader_instance = loader_class.new
+       loader_instance.load(*args)
+     end
+   end
+ end
data/lib/llm_memory.rb ADDED
@@ -0,0 +1,24 @@
+ # frozen_string_literal: true
+
+ # config
+ require_relative "llm_memory/configuration"
+
+ require_relative "llm_memory/hippocampus"
+ require_relative "llm_memory/broca"
+ require_relative "llm_memory/wernicke"
+
+ require_relative "llm_memory/version"
+
+ module LlmMemory
+   class Error < StandardError; end
+
+   class << self
+     attr_accessor :configuration
+   end
+
+   def self.configure
+     self.configuration ||= Configuration.new
+     yield(configuration) if block_given?
+   end
+   configure # init for default values
+ end
data/llm_memory.gemspec ADDED
@@ -0,0 +1,37 @@
+ # frozen_string_literal: true
+
+ require_relative "lib/llm_memory/version"
+
+ Gem::Specification.new do |spec|
+   spec.name = "llm_memory"
+   spec.version = LlmMemory::VERSION
+   spec.authors = ["Shohei Kameda"]
+   spec.email = ["shoheik@cpan.org"]
+
+   spec.summary = "A Ruby Gem for LLMs like ChatGPT to have memory using in-context learning"
+   spec.description = "LLM Memory is a Ruby gem designed to provide large language models (LLMs) like ChatGPT with memory using in-context learning. This enables better integration with systems such as Rails and web services while providing a more user-friendly and abstract interface based on brain terms."
+   spec.homepage = "https://github.com/shohey1226/llm_memory"
+   spec.license = "MIT"
+   spec.required_ruby_version = ">= 2.6.0"
+
+   spec.metadata["homepage_uri"] = spec.homepage
+   spec.metadata["source_code_uri"] = "https://github.com/shohey1226/llm_memory"
+   spec.metadata["changelog_uri"] = "https://github.com/shohey1226/llm_memory/CHANGELOG.md"
+
+   # Specify which files should be added to the gem when it is released.
+   # The `git ls-files -z` loads the files in the RubyGem that have been added into git.
+   spec.files = Dir.chdir(__dir__) do
+     `git ls-files -z`.split("\x0").reject do |f|
+       (f == __FILE__) || f.match(%r{\A(?:(?:bin|test|spec|features)/|\.(?:git|circleci)|appveyor)})
+     end
+   end
+   spec.bindir = "exe"
+   spec.executables = spec.files.grep(%r{\Aexe/}) { |f| File.basename(f) }
+   spec.require_paths = ["lib"]
+
+   # Uncomment to register a new dependency of your gem
+   # spec.add_dependency "example-gem", "~> 1.0"
+
+   # For more information and examples about making a new gem, check out our
+   # guide at: https://bundler.io/guides/creating_gem.html
+ end
data/sig/llm_memory.rbs ADDED
@@ -0,0 +1,4 @@
+ module LlmMemory
+   VERSION: String
+   # See the writing guide of rbs: https://github.com/ruby/rbs#guides
+ end
metadata ADDED
@@ -0,0 +1,74 @@
+ --- !ruby/object:Gem::Specification
+ name: llm_memory
+ version: !ruby/object:Gem::Version
+   version: 0.1.0
+ platform: ruby
+ authors:
+ - Shohei Kameda
+ autorequire:
+ bindir: exe
+ cert_chain: []
+ date: 2023-05-04 00:00:00.000000000 Z
+ dependencies: []
+ description: LLM Memory is a Ruby gem designed to provide large language models (LLMs)
+   like ChatGPT with memory using in-context learning. This enables better integration
+   with systems such as Rails and web services while providing a more user-friendly
+   and abstract interface based on brain terms.
+ email:
+ - shoheik@cpan.org
+ executables: []
+ extensions: []
+ extra_rdoc_files: []
+ files:
+ - ".rspec"
+ - ".standard.yml"
+ - ".vscode/settings.json"
+ - CHANGELOG.md
+ - CODE_OF_CONDUCT.md
+ - Gemfile
+ - Gemfile.lock
+ - LICENSE.txt
+ - README.md
+ - Rakefile
+ - lib/llm_memory.rb
+ - lib/llm_memory/broca.rb
+ - lib/llm_memory/configuration.rb
+ - lib/llm_memory/embedding.rb
+ - lib/llm_memory/embeddings/openai.rb
+ - lib/llm_memory/hippocampus.rb
+ - lib/llm_memory/llms/openai.rb
+ - lib/llm_memory/loader.rb
+ - lib/llm_memory/loaders/file_loader.rb
+ - lib/llm_memory/store.rb
+ - lib/llm_memory/stores/redis_store.rb
+ - lib/llm_memory/version.rb
+ - lib/llm_memory/wernicke.rb
+ - llm_memory.gemspec
+ - sig/llm_memory.rbs
+ homepage: https://github.com/shohey1226/llm_memory
+ licenses:
+ - MIT
+ metadata:
+   homepage_uri: https://github.com/shohey1226/llm_memory
+   source_code_uri: https://github.com/shohey1226/llm_memory
+   changelog_uri: https://github.com/shohey1226/llm_memory/CHANGELOG.md
+ post_install_message:
+ rdoc_options: []
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: 2.6.0
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ requirements: []
+ rubygems_version: 3.4.6
+ signing_key:
+ specification_version: 4
+ summary: A Ruby Gem for LLMs like ChatGPT to have memory using in-context learning
+ test_files: []