aoororachain 0.1.3 → 0.1.4

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: 55f23c15305124c7290243efeb8d0af00c2db4d39a86a53f33b385ac7f5c0eea
-   data.tar.gz: 69554775d50528239ef2405fb79f0349ae7ec56bd915992ae088daf1f33ca080
+   metadata.gz: 8fd799856d56a712154c7c6adf5c20e73e34ce7740aa70490dc84c37f7dd7905
+   data.tar.gz: e4d4a828187141c420cecff71b38c17fdd9556b9f486a21c336ee83a1b6b20d4
  SHA512:
-   metadata.gz: a126b07255d4b2b4ebd06017d351e0ab7ef220713503bcdd8b897dc02ebac1bf0810fffd52250a63cfd5396397b1c5711d25a8779da825e437eb677dca498249
-   data.tar.gz: 80624eb9468969821af3fecf082b5f161216e7140f9b23a7ae170b33569b5c1618c8b256c41a1febe2c9ef5a9dba62e937cd025f4dcd579f9edd78e574090521
+   metadata.gz: a157218dc07395105e4f6bda26f2a97df95d882b514a800401fb7a86596d9fe8583575395f4d16797f91800913c9b4fd639826001d5d1360c104fa0896da48e7
+   data.tar.gz: 554a6f6c5bbac2d094a0d35fa296feb6d5bd7fc999d6579d3d7d44c8161f52acc282b098bdcc6d387a3e5bcd74d2ae29a7602ce04509aa49948308a618d4052e
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
  PATH
    remote: .
    specs:
-     aoororachain (0.1.3)
+     aoororachain (0.1.4)
        chroma-db (~> 0.6.0)
        llm_client (~> 0.1.2)
        pdf-reader (~> 2.11)
data/README.md CHANGED
@@ -1,26 +1,243 @@
  # Aoororachain
 
- Aoororachain is Ruby chain tool to work with LLMs
+ Aoororachain is a Ruby chain tool to work with LLMs.
 
  ## Installation
 
- Install the gem and add to the application's Gemfile by executing:
+ Install the gem and add it to the application's Gemfile by executing:
 
- $ bundle add aoororachain
+ ```bash
+ $ bundle add aoororachain
+ ```
 
  If bundler is not being used to manage dependencies, install the gem by executing:
 
- $ gem install aoororachain
+ ```bash
+ $ gem install aoororachain
+ ```
+ 
+ ## Requisites
+ 
+ Aoororachain was primarily created to work locally with private data and Open Source LLMs. If you are looking for a tool to integrate OpenAI or any other service, there are a handful of tools for that in Ruby, Python, or JavaScript on GitHub.
+ 
+ With this in mind, a few requisites are needed before you start working on a chain.
+ 
+ * Llama.cpp. First, you need to set up [llama.cpp](https://github.com/ggerganov/llama.cpp), an inference tool for the Llama model.
+ * LLM Server. [LLM Server](https://github.com/mariochavez/llm_server) is a Ruby server that exposes *llama.cpp* via an API interface.
+ * Open Source LLM model. Refer to *llama.cpp* or *LLM Server* for options to download an Open Source model. Llama, Open Llama, or Vicuna models are good models to start with.
+ * Chroma DB. [Chroma DB](https://www.trychroma.com/) is an Open Source vector database for document information retrieval.
+ * Python environment. Aoororachain uses Open Source embedding models. By default it uses any of `hkunlp/instructor-large`, `hkunlp/instructor-xl`, and `sentence-transformers/all-mpnet-base-v2`.
30
+
31
+ ### Python environment and Open Source embedding models.
32
+
33
+ You can install a Python environment using [miniconda](https://docs.conda.io/en/latest/miniconda.html). Here are the instructions for using it and installing additional dependencies and the Embedding models.
34
+
35
+ ```bash
36
+ # This assumes installing miniconda on MacOS with Homebrew. If you use a different OS, follow the instructions on miniconda website.
37
+ $ brew install miniconda
38
+ # Initialize miniconda with your shell. Restart your shell for this to take effect.
39
+ $ conda init zsh
40
+ # After the shell restarts, create an environment and set Python version.
41
+ $ conda create -n llm python=3.9
42
+ # Now activate your new environment
43
+ $ conda activate llm
44
+ # Install Embedding models dependencies
45
+ $ pip -q install langchain sentence_transformers InstructorEmbedding
46
+ ```
+ 
+ The next step is to install the embedding model or models you want to use. Here are the links to each model.
+ 
+ * [hkunlp/instructor-xl](https://huggingface.co/hkunlp/instructor-xl). 5 GB.
+ * [hkunlp/instructor-large](https://huggingface.co/hkunlp/instructor-large). 1.34 GB.
+ * [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). 438 MB.
+ 
+ To install any of the models, execute the following code in a Python REPL. Replace *MODEL* with the name of the model. _Be aware that this will download the model from the Internet._
+ 
+ ```python
+ from InstructorEmbedding import INSTRUCTOR
+ from langchain.embeddings import HuggingFaceInstructEmbeddings
+ 
+ instructor_embeddings = HuggingFaceInstructEmbeddings(model_name="MODEL")
+ 
+ instructor_embeddings.embed_documents(["Hello Ruby!"])
+ ```
+ 
+ You can skip this step, but Aoororachain will download the specified model on the first run.
 
  ## Usage
 
- TODO: Write usage instructions here
+ Aoororachain is currently focused on QA Retrieval for your own documents. Hence, let's start with how to create embeddings for a set of documents.
+ 
+ ### Document embeddings
+ 
+ Being able to QA your documents requires texts to be converted to numbers. These numbers are organized in vectors; they capture the word features and correlations in sentences. This is helpful when a question is asked: through the vectors, a program can find texts that are similar to the question.
+ 
+ The similar texts can then be sent to a Large Language Model (LLM) to make sense of them and produce a response in natural language.
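Finding "similar texts" usually means comparing vectors with a similarity measure such as cosine similarity. A toy sketch of the idea (the 3-dimensional vectors below are made up for illustration; real embedding models produce hundreds of dimensions):

```ruby
# Toy illustration of retrieval by vector similarity. The vectors are
# invented for the example; they are NOT real embedding model output.
def cosine_similarity(a, b)
  dot = a.zip(b).sum { |x, y| x * y }
  dot / (Math.sqrt(a.sum { |x| x * x }) * Math.sqrt(b.sum { |x| x * x }))
end

question = [0.9, 0.1, 0.0]
doc_a = [0.8, 0.2, 0.1] # close in meaning to the question
doc_b = [0.0, 0.1, 0.9] # unrelated

# The related document scores higher, so it would be the one retrieved.
puts cosine_similarity(question, doc_a) > cosine_similarity(question, doc_b) # => true
```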
+ 
+ Due to the context size limit of LLMs, you cannot feed them a huge document for QA Retrieval; you need to chunk large texts into meaningful blocks. This process is part of the embedding creation process.
+ 
+ The process looks like the following:
+ 
+ 1. Load documents—in this example, Ruby 3.2 documentation from 9,747 text files.
+ 
+ This is an example of one of the 9,747 text files:
+ 
+ ```text
+ Object Array
+ Method collect
+ Method type instance_method
+ Call sequence ["array.map {|element| ... } -> new_array\narray.map -> new_enumerator"]
+ Source code 3.2:ruby-3.2.0/array.c:3825
+ 
+ Calls the block, if given, with each element of self; returns a new Array whose elements are the return values from the block:
+ 
+ a = [:foo, 'bar', 2]
+ a1 = a.map {|element| element.class }
+ a1 # => [Symbol, String, Integer]
+ 
+ Returns a new Enumerator if no block given:
+ a = [:foo, 'bar', 2]
+ a1 = a.map
+ a1 # => #
+ 
+ Array#collect is an alias for Array#map.
+ Examples static VALUE
+ rb_ary_collect(VALUE ary)
+ {
+     long i;
+     VALUE collect;
+ 
+     RETURN_SIZED_ENUMERATOR(ary, 0, 0, ary_enum_length);
+     collect = rb_ary_new2(RARRAY_LEN(ary));
+     for (i = 0; i < RARRAY_LEN(ary); i++) {
+         rb_ary_push(collect, rb_yield(RARRAY_AREF(ary, i)));
+     }
+     return collect;
+ }
+ ```
+ 
+ 2. Chunk texts into meaningful blocks.
+ 3. Create embeddings for texts.
+ 4. Store embeddings in a vector database.
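The chunking in step 2 can be sketched in plain Ruby. This is not Aoororachain's RecursiveTextSplitter, which splits on meaningful boundaries; it is a minimal fixed-size splitter that only shows how size and overlap interact:

```ruby
# Minimal fixed-size chunker with overlap. Only a sketch of the idea
# behind step 2, not Aoororachain's actual RecursiveTextSplitter.
def chunk_text(text, size:, overlap: 0)
  step = size - overlap
  chunks = []
  pos = 0
  while pos < text.length
    chunks << text[pos, size]
    pos += step
  end
  chunks
end

chunks = chunk_text("a" * 1200, size: 512, overlap: 64)
puts chunks.length                 # => 3
puts chunks.map(&:length).inspect  # => [512, 512, 304]
```

With an overlap, neighboring chunks share a margin of text, so a sentence cut at a chunk boundary still appears whole in one of them.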
+ 
+ Aoororachain uses the Chroma vector database to store and query embeddings.
+ 
+ Here is an example of loading documents and creating the embeddings.
+ 
+ ```ruby
+ require "aoororachain"
+ 
+ # Set up the logger.
+ Aoororachain.logger = Logger.new($stdout)
+ Aoororachain.log_level = Aoororachain::LEVEL_DEBUG
+ 
+ chroma_host = "http://localhost:8000"
+ collection_name = "ruby-documentation"
+ 
+ # You can define a custom parser to clean data and maybe extract metadata.
+ # Here is the code of RubyDocParser, which does exactly that.
+ class RubyDocParser
+   def self.parse(text)
+     name_match = text.match(/Name (\w+)/)
+     constant_match = text.match(/Constant (\w+)/)
+ 
+     object_match = text.match(/Object (\w+)/)
+     method_match = text.match(/Method ([\w\[\]\+\=\-\*\%\/]+)/)
+ 
+     metadata = {}
+     metadata[:name] = name_match[1] if name_match
+     metadata[:constant] = constant_match[1] if constant_match
+     metadata[:object] = object_match[1] if object_match
+     metadata[:method] = method_match[1] if method_match
+     metadata[:lang] = :ruby
+     metadata[:version] = "3.2"
+ 
+     # Non-bang gsub/strip: the bang variants return nil when nothing
+     # changes, which would break the chained call.
+     text = text.gsub(/\s+/, " ").strip
+     [text, metadata]
+   end
+ end
+ 
+ # A DirectoryLoader points to a path and sets the glob for the files you want to load.
+ # A loader is also specified. FileLoader just opens and reads the file content.
+ # The RubyDocParser is set as well. This is optional, in case your data is already clean and needs no pre-processing.
+ directory_loader = Aoororachain::Loaders::DirectoryLoader.new(path: "./ruby-docs", glob: "**/*.txt", loader: Aoororachain::Loaders::FileLoader, parser: RubyDocParser)
+ files = directory_loader.load
+ 
+ # With your data clean and ready, now it is time to chunk it. The chunk size depends on the context size of the LLMs that you want to use.
+ # 512 is a good number to start with; don't go lower than that. An overlap can also be specified.
+ text_splitter = Aoororachain::RecursiveTextSplitter.new(size: 512, overlap: 0)
+ 
+ texts = []
+ files.each do |file|
+   texts.concat(text_splitter.split_documents(file))
+ end
+ 
+ # The final step is to create and store the embeddings.
+ # First, select an embedding model.
+ model = Aoororachain::Embeddings::LocalPythonEmbedding::MODEL_INSTRUCTOR_L
+ # Create an instance of the embedder. device is optional. Possible options are:
+ # - cuda, if you have an external GPU.
+ # - mps, if you have an Apple Silicon chip (M1 or M2).
+ # - cpu or empty; it will use the CPU by default.
+ embedder = Aoororachain::Embeddings::LocalPythonEmbedding.new(model:, device: "mps")
+ # Configure your vector database.
+ vector_database = Aoororachain::VectorStores::Chroma.new(embedder: embedder, options: {host: chroma_host})
+ 
+ # Embed your files. This can take from a few minutes up to hours, depending on the size of your documents and the model used.
+ vector_database.from_documents(texts, index: collection_name)
+ ```
+ 
+ With embeddings loaded in the database, you can use a tool like Chroma UI (**not yet released**) to query documents.
+ ![chroma-ui](https://github.com/mariochavez/aoororachain/assets/59967/d65dea13-c6ef-452a-9774-8cf3b47c048f)
+ 
+ But it is more useful to query with Aoororachain.
+ 
+ ```ruby
+ # Define a retriever for the vector database.
+ retriever = Aoororachain::VectorStores::Retriever.new(vector_database)
+ 
+ # Query documents. The number of results defaults to 3.
+ documents = retriever.search("how can I use the Data class?", results: 4)
+ 
+ # Print the retrieved documents and their similarity distance from the question.
+ puts documents.map(&:document).join(" ")
+ puts documents.map(&:distance)
+ ```
+ 
+ ### Query LLM with context.
+ 
+ With embeddings ready, it is time to create a _chain_ to perform QA Retrieval using the embedded documents as context.
+ 
+ ```ruby
+ require "aoororachain"
+ 
+ # Set up the logger.
+ Aoororachain.logger = Logger.new($stdout)
+ Aoororachain.log_level = Aoororachain::LEVEL_DEBUG
 
- ## Development
+ llm_host = "http://localhost:9292"
+ chroma_host = "http://localhost:8000"
+ collection_name = "ruby-documentation"
 
- After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake test` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
+ model = Aoororachain::Embeddings::LocalPythonEmbedding::MODEL_INSTRUCTOR_L
+ embedder = Aoororachain::Embeddings::LocalPythonEmbedding.new(model:, device: "mps")
+ vector_database = Aoororachain::VectorStores::Chroma.new(embedder: embedder, options: {host: chroma_host, log_level: Chroma::LEVEL_DEBUG})
+ vector_database.from_index(collection_name)
 
- To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and the created tag, and push the `.gem` file to [rubygems.org](https://rubygems.org).
+ retriever = Aoororachain::VectorStores::Retriever.new(vector_database)
+ 
+ # Configure the LLM Server.
+ llm = Aoororachain::Llms::LlamaServer.new(llm_host)
+ 
+ # Create the chain to connect the vector database retriever with the LLM.
+ chain = Aoororachain::Chains::RetrievalQA.new(llm, retriever)
+ 
+ # Create a template for the LLM. Aoororachain does not include any templates because they are model-specific. The following template is for the Vicuna model.
+ template = "A conversation between a human and an AI assistant. The assistant responds to a question using the context. Context: ===%{context}===. Question: %{prompt}"
+ 
+ response = chain.complete(prompt: "how can I use the Data class to define a new class?", prompt_template: template)
+ ```
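The template above relies on Ruby's built-in `String#%`, which accepts a hash when the format string uses `%{name}` placeholders; this is how the chain fills in the retrieved context and the question. A standalone sketch, with a made-up context string:

```ruby
# Standalone illustration of the %{name} template interpolation used by
# the chain. The context value is invented for the example.
template = "Context: ===%{context}===. Question: %{prompt}"

filled = template % {context: "Array#map returns a new Array.", prompt: "What does map do?"}
puts filled
# => Context: ===Array#map returns a new Array.===. Question: What does map do?
```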
 
  ## Contributing
 
@@ -9,15 +9,12 @@ module Aoororachain
      @type = type
    end
 
-   def complete(prompt:)
+   def complete(prompt:, prompt_template:)
      context = @retriever.search(prompt)
 
-     system_prompt = "Una conversación entre un humano y un asistente de inteligencia artificial. El asistente response usando el contexto la pregunta. Si no sabes la respuesta, simplemente di que no sabes, no trates de inventar una."
-     context_prompt = "Contexto: #{context.map(&:document).join(" ").tr("\n", " ")}"
-     question_prompt = "Pregunta: #{prompt}"
+     stuff_prompt = prompt_template % {context: context.map(&:document).join(" ").tr("\n", " "), prompt:}
 
-     stuff_prompt = [system_prompt, context_prompt, question_prompt]
-     success, response = @llm.complete(prompt: stuff_prompt.join(". "))
+     success, response = @llm.complete(prompt: stuff_prompt)
 
      if success
        completion = {
@@ -12,9 +12,7 @@ module Aoororachain
      @device = options.delete(:device) || "cpu"
 
      Aoororachain::Util.log_info("Using", data: {model: @model, device: @device})
-     Aoororachain::Util.log_info("This embedding calls Python code using system call. First time initialization might take long due to Python dependencies installation.")
- 
-     install_python_dependencies
+     Aoororachain::Util.log_info("This embedding calls Python code using system call.")
    end
 
    def embed_documents(documents, include_metadata: false)
@@ -151,18 +149,6 @@ module Aoororachain
 
      file_path
    end
- 
-   def install_python_dependencies
-     stdout_data, stderr_data, exit_code = run_system("pip -q install langchain sentence_transformers InstructorEmbedding")
- 
-     if exit_code != 0
-       Aoororachain.log_error("Failed to install Python dependencies: #{stderr_data}")
-       return false
-     end
- 
-     Aoororachain::Util.log_debug("Python installed dependencies: #{stdout_data}")
-     true
-   end
    end
  end
end
@@ -14,7 +14,7 @@ module Aoororachain
    def complete(prompt:)
      result = LlmClient.completion(prompt)
 
-     [result.success?, result.success? ? result.success.body["response"].gsub(/Usuario:.*Asistente:/, "") : result.failure.message]
+     [result.success?, result.success? ? result.success.body["response"].gsub(/Usuario:.*Asistente:/, "") : result.failure.body]
    end
  end
end
@@ -1,5 +1,5 @@
  # frozen_string_literal: true
 
  module Aoororachain
-   VERSION = "0.1.3"
+   VERSION = "0.1.4"
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: aoororachain
  version: !ruby/object:Gem::Version
-   version: 0.1.3
+   version: 0.1.4
  platform: ruby
  authors:
  - Mario Alberto Chávez
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2023-06-27 00:00:00.000000000 Z
+ date: 2023-07-04 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: chroma-db
@@ -106,7 +106,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
    version: '0'
  requirements: []
- rubygems_version: 3.4.14
+ rubygems_version: 3.4.15
  signing_key:
  specification_version: 4
  summary: Aoororachain for working with LLMs