sublayer 0.0.4 → 0.0.6
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: c0e73536f0855136d66096a10b35fe12acc7860e92b7c61a6408450716a7610d
-  data.tar.gz: 7de63a9367edf5e27424e15200132024e6fbde702d76f36156a0a95cb1928194
+  metadata.gz: 7a49a0cdd72c4a8171f246646ac8021f72bcdf35a0591931614cbbf7599c3bad
+  data.tar.gz: 3e0da0784fc6ace719556110c47a4145acb01d11ff90555bc25fd7ecb7c6f43f
 SHA512:
-  metadata.gz: bf794d00c4704c474586a5c3943a704109516af3dec85d431c8b610fc38d90f574a95f7c72e8d3e15daaa6214ec236d362e2350c435073807ec92c25d6cf1215
-  data.tar.gz: e5ec8d95d8966006830213a8d4058fbbf8679ae33353b41c25e5e3b620167226e760380961ae17f4adf4779d0920b7b5f60c7880c87800c1c908a3b99f763779
+  metadata.gz: 9fc567c2e6f84886b525ccac9a842b16e6e519b17d46d6eb86cec502eeafb29c110936ad8a11978124b478f8a3271415c8f96473aaebbd3d28f4678a8736340f
+  data.tar.gz: 17a53b8f40146f09f2380f4c25ad86526d6b78dac8562ab1a06f047ab05e16646cdac4bc1453533f313e6c9bdb5a6e1537a7f8efb2ab515286608eec990ee26e
data/README.md CHANGED
@@ -1,16 +1,22 @@
 # Sublayer
 
-A model-agnostic Ruby Generative AI DSL and framework. Provides base classes for
+A model-agnostic Ruby AI Agent framework. Provides base classes for
 building Generators, Actions, Tasks, and Agents that can be used to build AI
 powered applications in Ruby.
 
+For more detailed documentation, visit our documentation site: [https://docs.sublayer.com](https://docs.sublayer.com).
+
 ## Installation
 
 Install the gem by running the following commands:
 
-    $ bundle
-    $ gem build sublayer.gemspec
-    $ gem install sublayer-0.0.1.gem
+    $ gem install sublayer
+
+Or add this line to your application's Gemfile:
+
+```ruby
+gem 'sublayer'
+```
 
 ## Choose your AI Model
 
@@ -65,26 +71,60 @@ Sublayer.configuration.ai_provider = Sublayer::Providers::Groq
 Sublayer.configuration.ai_model = "mixtral-8x7b-32768"
 ```
 
-### Local (Experimental with Hermes2)
+### Local
 
-*Tested against the latest [Hermes 2 Mistral Model](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF)*
+If you've never run a local model before, see the [Local Model Quickstart](#local-model-quickstart) below. Note that local models take several GB of disk space.
 
-Support for running a local model and serving an API on https://localhost:8080
+The model you use must expose the ChatML-formatted v1/chat/completions endpoint to work with Sublayer (many models do by default).
 
-The simplest way to do this is to download [llamafile](https://github.com/Mozilla-Ocho/llamafile) and download one of the server llamafiles they provide.
-
-We've also tested with [Hermes 2 Mistral](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF) from [Nous Research](https://nousresearch.com/).
+Usage:
 
+Run your local model on http://localhost:8080 and then set:
 ```ruby
 Sublayer.configuration.ai_provider = Sublayer::Providers::Local
 Sublayer.configuration.ai_model = "LLaMA_CPP"
 ```
 
+#### Local Model Quickstart
+
+Instructions to run a local model:
+
+1. Setting up llamafile
+
+```bash
+cd where/you/keep/your/projects
+git clone git@github.com:Mozilla-Ocho/llamafile.git
+cd llamafile
+```
+
+Download make: https://cosmo.zip/pub/cosmos/bin/make (Windows users also need: https://justine.lol/cosmo3/)
+
+```bash
+# within the llamafile directory
+chmod +x path/to/the/downloaded/make
+path/to/the/downloaded/make -j8
+sudo path/to/the/downloaded/make install PREFIX=/usr/local
+```
+
+You can now run llamafile.
+
+2. Downloading a model
+
+Click [here](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF/resolve/main/Hermes-2-Pro-Mistral-7B.Q5_K_M.gguf?download=true) to download Hermes-2-Pro-Mistral-7B.Q5_K_M (5.13 GB).
+
+3. Running llamafile with a model
+
+```bash
+llamafile -ngl 9999 -m path/to/the/downloaded/Hermes-2-Pro-Mistral-7B.Q5_K_M.gguf --host 0.0.0.0 -c 4096
+```
+
+You are now running a local model on http://localhost:8080.
+
+#### Recommended settings for Apple M1 users
+
+```bash
+llamafile -ngl 9999 -m Hermes-2-Pro-Mistral-7B.Q5_K_M.gguf --host 0.0.0.0 --nobrowser -c 2048 --gpu APPLE -t 12
+```
+
+Run `sysctl -n hw.logicalcpu` to see what number to give the `-t` threads option.
+
 ## Concepts
 
 ### Generators
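The README changes above select a provider and model through `Sublayer.configuration`. As a minimal sketch of that pattern (using an `OpenStruct` stand-in rather than the real Sublayer configuration object, so it runs without the gem installed):

```ruby
# Sketch of the provider-selection pattern from the README above.
# OpenStruct stands in for Sublayer's real configuration object; the
# values mirror the Local provider example.
require "ostruct"

configuration = OpenStruct.new
configuration.ai_provider = "Sublayer::Providers::Local" # stand-in for the class constant
configuration.ai_model = "LLaMA_CPP"

puts "provider=#{configuration.ai_provider} model=#{configuration.ai_model}"
```

In the real gem these two assignments are all that is needed to switch providers; the rest of the code reads them from `Sublayer.configuration` at call time.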
data/lib/sublayer/agents/base.rb ADDED
@@ -0,0 +1,6 @@
+module Sublayer
+  module Agents
+    class Base
+    end
+  end
+end
data/lib/sublayer/providers/claude.rb CHANGED
@@ -44,10 +44,9 @@ module Sublayer
         raise "Error generating with Claude, error: #{response.body}" unless response.code == 200
 
         text_containing_xml = JSON.parse(response.body).dig("content", 0, "text")
-        xml = text_containing_xml.match(/\<response\>(.*?)\<\/response\>/m).to_s
-        response_xml = ::Nokogiri::XML(xml)
-        function_output = response_xml.at_xpath("//response/function_calls/invoke/parameters/#{output_adapter.name}").children.to_s
+        function_output = Nokogiri::HTML.parse(text_containing_xml.match(/\<#{output_adapter.name}\>(.*?)\<\/#{output_adapter.name}\>/m)[1]).text
 
+        raise "Claude did not format response, error: #{response.body}" unless function_output
         return function_output
       end
     end
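All of the rewritten providers share this extraction step: a regex pulls the inner text of the `<output_adapter.name>` tag out of the model's reply, then entities are decoded. A standalone sketch of that technique, where `generated_text` is a hypothetical adapter name and `CGI.unescapeHTML` stands in for `Nokogiri::HTML.parse(...).text` so the snippet needs no gems:

```ruby
# Standalone sketch of the tag-extraction step shared by the updated providers.
# "generated_text" is a hypothetical output_adapter.name; CGI.unescapeHTML
# replaces the Nokogiri call so this runs with only the standard library.
require "cgi"

adapter_name = "generated_text"

model_reply = <<~TEXT
  <response>
    <tool_calls>
      <generated_text>Hello &amp; welcome</generated_text>
    </tool_calls>
  </response>
TEXT

# Same regex shape as the provider code: non-greedy, multiline match
match = model_reply.match(/\<#{adapter_name}\>(.*?)\<\/#{adapter_name}\>/m)
output = match && CGI.unescapeHTML(match[1])

puts output  # => Hello & welcome
```

Note that `match` is `nil` when the model ignores the format, which is what the new `raise ... unless` guards in each provider are checking for.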
data/lib/sublayer/providers/gemini.rb CHANGED
@@ -5,21 +5,49 @@ module Sublayer
   module Providers
     class Gemini
       def self.call(prompt:, output_adapter:)
+        system_prompt = <<-PROMPT
+          You have access to a set of tools to answer the prompt.
+
+          You may call tools like this:
+          <tool_calls>
+            <tool_call>
+              <tool_name>$TOOL_NAME</tool_name>
+              <parameters>
+                <$PARAMETER_NAME>$VALUE</$PARAMETER_NAME>
+                ...
+              </parameters>
+            </tool_call>
+          </tool_calls>
+
+          Here are the tools available:
+          <tools>
+            <tool>
+              #{output_adapter.to_xml}
+            </tool>
+          </tools>
+
+          Respond only with valid xml.
+          The entire response should be wrapped in a <response> tag.
+          Your response should call a tool inside a <tool_calls> tag.
+        PROMPT
+
         response = HTTParty.post(
           "https://generativelanguage.googleapis.com/v1beta/models/#{Sublayer.configuration.ai_model}:generateContent?key=#{ENV['GEMINI_API_KEY']}",
           body: {
-            tools: { function_declarations: [output_adapter.to_hash] },
-            contents: { role: "user", parts: { text: prompt } }
+            contents: { role: "user", parts: { text: "#{system_prompt}\n#{prompt}" } }
           }.to_json,
           headers: {
             "Content-Type" => "application/json"
-          })
+          }
+        )
+
+        raise "Error generating with Gemini, error: #{response.body}" unless response.success?
 
-        part = response.dig('candidates', 0, 'content', 'parts', 0)
-        raise "No function called" unless part['functionCall']
+        text_containing_xml = response.dig('candidates', 0, 'content', 'parts', 0, 'text')
+        tool_output = Nokogiri::HTML.parse(text_containing_xml.match(/\<#{output_adapter.name}\>(.*?)\<\/#{output_adapter.name}\>/m)[1]).text
 
-        args = part['functionCall']['args']
-        args[output_adapter.name]
+        raise "Gemini did not format response, error: #{response.body}" unless tool_output
+        return tool_output
       end
     end
   end
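The Gemini change above replaces native function declarations with an XML system prompt built by heredoc interpolation of the adapter's XML description. A sketch of that construction, with a hypothetical stand-in adapter (the real Sublayer output adapter's `to_xml` produces a richer description):

```ruby
# Sketch of the heredoc system-prompt construction used by the new providers.
# Adapter is a hypothetical stand-in for Sublayer's output adapter API.
Adapter = Struct.new(:name) do
  def to_xml
    "<tool_description><tool_name>#{name}</tool_name></tool_description>"
  end
end

adapter = Adapter.new("generated_text")

# <<-PROMPT allows the closing delimiter to be indented; interpolation
# embeds the adapter's XML into the prompt body.
system_prompt = <<-PROMPT
  You have access to a set of tools to answer the prompt.

  Here are the tools available:
  <tools>
    <tool>
      #{adapter.to_xml}
    </tool>
  </tools>

  Respond only with valid xml.
PROMPT

puts system_prompt
```

Because the tool description is inlined into plain text, the same prompt works against any endpoint that accepts free-form messages, which is what makes the Gemini, Groq, and Local providers converge on one format.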
data/lib/sublayer/providers/groq.rb CHANGED
@@ -6,27 +6,27 @@ module Sublayer
     class Groq
       def self.call(prompt:, output_adapter:)
         system_prompt = <<-PROMPT
-          In this environment you have access to a set of tools you can use to answer the user's question.
+          You have access to a set of tools to answer the prompt.
 
-          You may call them like this:
-          <function_calls>
-            <invoke>
-              <tool_name>$TOOL_NAME</tool_name>
-              <parameters>
-                <#{output_adapter.name}>value</#{output_adapter.name}>
-                ...
-              </parameters>
-            </invoke>
-          </function_calls>
+          You may call tools like this:
+          <tool_calls>
+            <tool_call>
+              <tool_name>$TOOL_NAME</tool_name>
+              <parameters>
+                <#{output_adapter.name}>$VALUE</#{output_adapter.name}>
+                ...
+              </parameters>
+            </tool_call>
+          </tool_calls>
 
           Here are the tools available:
           <tools>
-          #{output_adapter.to_xml}
+            #{output_adapter.to_xml}
           </tools>
 
           Respond only with valid xml.
           The entire response should be wrapped in a <response> tag.
-          Any additional information not inside a tool call should go in a <scratch> tag.
+          Your response should call a tool inside a <tool_calls> tag.
         PROMPT
 
         response = HTTParty.post(
@@ -41,12 +41,11 @@ module Sublayer
           }.to_json
         )
 
-        text_containing_xml = JSON.parse(response.body).dig("choices", 0, "message", "content")
-        xml = text_containing_xml.match(/\<response\>(.*?)\<\/response\>/m).to_s
-        response_xml = ::Nokogiri::XML(xml)
-        function_output = response_xml.at_xpath("//response/function_calls/invoke/parameters/command").children.to_s
+        text_containing_xml = response.dig("choices", 0, "message", "content")
+        tool_output = Nokogiri::HTML.parse(text_containing_xml.match(/\<#{output_adapter.name}\>(.*?)\<\/#{output_adapter.name}\>/m)[1]).text
+        raise "Groq did not format response correctly, error: #{response.body}" unless tool_output
 
-        return function_output
+        return tool_output
       end
     end
   end
data/lib/sublayer/providers/local.rb CHANGED
@@ -6,23 +6,23 @@ module Sublayer
     class Local
       def self.call(prompt:, output_adapter:)
         system_prompt = <<-PROMPT
-          You are a function calling AI agent
-          You can call only one function at a time
-          You are provided with function signatures within <tools></tools> XML tags.
+          You have access to a set of tools to respond to the prompt.
 
-          Please call a function and wait for results to be provided to you in the next iteration.
-          Don't make assumptions about what values to plug into function arguments.
+          You may call a tool with xml like this:
+          <parameters>
+            <#{output_adapter.name}>$VALUE</#{output_adapter.name}>
+            ...
+          </parameters>
 
-          Here are the available tools:
+          Here are descriptions of the available tools:
           <tools>
-            #{output_adapter.to_hash.to_json}
+            <tool>
+              #{output_adapter.to_xml}
+            </tool>
           </tools>
 
-          For the function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
-
-          <tool_call>
-            {"arguments": <args-dict>, "name": <function-name>}
-          </tool_call>
+          Respond only with valid xml.
+          Your response should call a tool with xml inside a <parameters> tag.
         PROMPT
 
         response = HTTParty.post(
@@ -34,17 +34,16 @@ module Sublayer
           body: {
            "model": Sublayer.configuration.ai_model,
            "messages": [
-              { "role": "system", "content": system_prompt },
-              { "role": "user", "content": prompt }
+              { "role": "user", "content": "#{system_prompt}\n#{prompt}}" }
            ]
          }.to_json
        )
 
-        text_containing_xml = JSON.parse(response.body).dig("choices", 0, "message", "content")
-        results = JSON.parse(::Nokogiri::XML(text_containing_xml).at_xpath("//tool_call").children.to_s.strip)
-        function_output = results["arguments"][output_adapter.name]
+        text_containing_xml = response.dig("choices", 0, "message", "content")
+        tool_output = Nokogiri::HTML.parse(text_containing_xml.match(/\<#{output_adapter.name}\>(.*?)\<\/#{output_adapter.name}\>/m)[1]).text
+        raise "The response was not formatted correctly: #{response.body}" unless tool_output
 
-        return function_output
+        return tool_output
      end
    end
  end
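The Local provider above now folds the system prompt into a single user message and posts it to a llamafile server's v1/chat/completions endpoint. A sketch of the request body it builds (no HTTP call is made here; the model name and prompt are illustrative stand-ins):

```ruby
# Sketch of the chat-completions request body the Local provider sends.
# No network call; we only build and inspect the JSON payload.
require "json"

model         = "LLaMA_CPP"
system_prompt = "You have access to a set of tools to respond to the prompt."
prompt        = "Say hello"  # hypothetical user prompt

body = {
  "model"    => model,
  # System and user text are concatenated into one user message,
  # matching the combined-message approach in the diff above.
  "messages" => [
    { "role" => "user", "content" => "#{system_prompt}\n#{prompt}" }
  ]
}.to_json

puts body
```

In the real provider this JSON is handed to `HTTParty.post` against `http://localhost:8080/v1/chat/completions`, which is why any ChatML-compatible local server can be swapped in.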
data/lib/sublayer/tasks/base.rb ADDED
@@ -0,0 +1,6 @@
+module Sublayer
+  module Tasks
+    class Base
+    end
+  end
+end
data/lib/sublayer/version.rb CHANGED
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module Sublayer
-  VERSION = "0.0.4"
+  VERSION = "0.0.6"
 end
data/sublayer.gemspec CHANGED
@@ -11,10 +11,12 @@ Gem::Specification.new do |spec|
 
   spec.summary = "A model-agnostic Ruby GenerativeAI DSL and Framework"
   spec.description = "A DSL and framework for building AI powered applications through the use of Generators, Actions, Tasks, and Agents"
-  spec.homepage = "https://www.sublayer.com"
+  spec.homepage = "https://docs.sublayer.com"
   spec.required_ruby_version = ">= 2.6.0"
 
-  spec.metadata["homepage_uri"] = spec.homepage
+  spec.metadata["homepage_uri"] = "https://docs.sublayer.com"
+  spec.metadata["documentation_uri"] = "https://docs.sublayer.com"
+  spec.metadata["bug_tracker_uri"] = "https://github.com/sublayerapp/sublayer/issues"
 
   # Specify which files should be added to the gem when it is released.
   # The `git ls-files -z` loads the files in the RubyGem that have been added into git.
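The gemspec hunk above adds `documentation_uri` and `bug_tracker_uri` metadata entries. These are plain hash assignments on `Gem::Specification#metadata`, as this runnable sketch shows (fields mirror the diff; the specification object is built in memory, not packaged):

```ruby
# Sketch of the metadata entries added in this release's gemspec.
# Builds a Gem::Specification in memory only; nothing is packaged.
require "rubygems"

spec = Gem::Specification.new do |s|
  s.name     = "sublayer"
  s.version  = "0.0.6"
  s.summary  = "A model-agnostic Ruby GenerativeAI DSL and Framework"
  s.authors  = ["Scott Werner"]
  s.homepage = "https://docs.sublayer.com"
  # metadata is a plain String-keyed hash; RubyGems surfaces these
  # keys as links on the gem's rubygems.org page
  s.metadata["homepage_uri"]      = "https://docs.sublayer.com"
  s.metadata["documentation_uri"] = "https://docs.sublayer.com"
  s.metadata["bug_tracker_uri"]   = "https://github.com/sublayerapp/sublayer/issues"
end

puts spec.metadata["documentation_uri"]
```

rubygems.org renders these metadata URIs as the "Documentation" and "Bug Tracker" links in the gem sidebar, which is the practical effect of this change.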
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: sublayer
 version: !ruby/object:Gem::Version
-  version: 0.0.4
+  version: 0.0.6
 platform: ruby
 authors:
 - Scott Werner
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2024-04-04 00:00:00.000000000 Z
+date: 2024-04-14 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: ruby-openai
@@ -153,6 +153,7 @@ files:
 - examples/invalid_to_valid_json_generator.rb
 - lib/sublayer.rb
 - lib/sublayer/actions/base.rb
+- lib/sublayer/agents/base.rb
 - lib/sublayer/components/output_adapters.rb
 - lib/sublayer/components/output_adapters/single_string.rb
 - lib/sublayer/generators/base.rb
@@ -161,13 +162,16 @@ files:
 - lib/sublayer/providers/groq.rb
 - lib/sublayer/providers/local.rb
 - lib/sublayer/providers/open_ai.rb
+- lib/sublayer/tasks/base.rb
 - lib/sublayer/version.rb
 - sublayer.gemspec
-homepage: https://www.sublayer.com
+homepage: https://docs.sublayer.com
 licenses:
 - MIT
 metadata:
-  homepage_uri: https://www.sublayer.com
+  homepage_uri: https://docs.sublayer.com
+  documentation_uri: https://docs.sublayer.com
+  bug_tracker_uri: https://github.com/sublayerapp/sublayer/issues
 post_install_message:
 rdoc_options: []
 require_paths: