sublayer 0.0.7 → 0.1.0.pre.alpha

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 07ead9b290805b3d7e173f8e613dd4670bdd304a094cb7a00505ac9d94285b9b
-  data.tar.gz: 89eba4a05eebb012013a68e06f4d44e8a1710db0ea8a808ed03557241f4baf74
+  metadata.gz: 5a723acaec8ce50c64470d067dc29769fb8efe9415e4a867507272ef8961e687
+  data.tar.gz: 76508d4f490e5d9601d3827e35191b20ba7887e2086696bb610567e0f49639bd
 SHA512:
-  metadata.gz: a6f07414c4a7a2798f2b93133bd921cae8ffe380104f813564109ab16b985dd58c416f6b176f33d75910898b56d47ea98262a1f1035071cb29dddf2f53277b82
-  data.tar.gz: 45953c8a3a63c526bd310f4928a1570210e54e6e1be5718ff50b28041a8e9e93ca175d8d984ca6abd8aa729b87f5c84bf922e547c6a8f22ef62272bd240eade2
+  metadata.gz: 983e31c6d8b85a6116accf2b7eb395b257e79de967e79f4046ddfacf046522bc274335e931e8665f67864704fd2af7b331cd93cc294f50d9533f44bf99d94f39
+  data.tar.gz: 3a18adf3d93a074897855874780f7ae0205ed5de3541923f5711a238ab0823fadedb3a0a82fe32c87cbca5da918656d0fa9d982bb2a3ef1651a5a606f5647d81
data/README.md CHANGED
@@ -6,6 +6,20 @@ powered applications in Ruby.
 
 For more detailed documentation visit our documentation site: [https://docs.sublayer.com](https://docs.sublayer.com).
 
+## Note on Versioning
+
+Pre-1.0, we anticipate many breaking changes to the API. Our current plan is to
+keep breaking changes to minor (0.x) releases; patch releases (0.x.y) will be
+used for new features and bug fixes.
+
+To maintain stability in your application, we recommend pinning the version of
+Sublayer in your Gemfile to a specific minor version. For example, to pin to
+version 0.1.x, you would add the following line to your Gemfile:
+
+```ruby
+gem 'sublayer', '~> 0.1'
+```
+
 ## Installation
 
 Install the gem by running the following commands:
@@ -15,7 +29,7 @@ Install the gem by running the following commands:
 Or add this line to your application's Gemfile:
 
 ```ruby
-gem 'sublayer'
+gem 'sublayer', '~> 0.1'
 ```
 
 ## Choose your AI Model
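One caveat worth knowing about the pessimistic operator used in the pin above: `'~> 0.1'` allows any release below 1.0 (including 0.2.0), so under the versioning policy stated in the README it does not actually shield you from breaking 0.x releases; `'~> 0.1.0'` is what locks you to 0.1.x. A runnable check using RubyGems' own `Gem::Requirement`:

```ruby
require "rubygems"

# "~> 0.1" bumps the last segment: it means ">= 0.1, < 1.0".
minor_pin = Gem::Requirement.new("~> 0.1")
# "~> 0.1.0" means ">= 0.1.0, < 0.2" -- locked to 0.1.x patch releases.
patch_pin = Gem::Requirement.new("~> 0.1.0")

puts minor_pin.satisfied_by?(Gem::Version.new("0.2.0")) # true
puts patch_pin.satisfied_by?(Gem::Version.new("0.2.0")) # false
puts patch_pin.satisfied_by?(Gem::Version.new("0.1.9")) # true
```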
@@ -59,72 +73,6 @@ Sublayer.configuration.ai_provider = Sublayer::Providers::Claude
 Sublayer.configuration.ai_model ="claude-3-opus-20240229"
 ```
 
-### Groq
-
-Expects you to have a Groq API key set in the `GROQ_API_KEY` environment variable.
-
-Visit [Groq Console](https://console.groq.com/) to get an API key.
-
-Usage:
-```ruby
-Sublayer.configuration.ai_provider = Sublayer::Providers::Groq
-Sublayer.configuration.ai_model = "mixtral-8x7b-32768"
-```
-
-### Local
-
-If you've never run a local model before see the [Local Model Quickstart](#local-model-quickstart) below. Know that local models take several GB of space.
-
-The model you use must have the ChatML formatted v1/chat/completions endpoint to work with sublayer (many models do by default)
-
-Usage:
-
-Run your local model on http://localhost:8080 and then set:
-```ruby
-Sublayer.configuration.ai_provider = Sublayer::Providers::Local
-Sublayer.configuration.ai_model = "LLaMA_CPP"
-```
-
-#### Local Model Quickstart:
-
-Instructions to run a local model
-
-1. Setting up Llamafile
-
-```bash
-cd where/you/keep/your/projects
-git clone git@github.com:Mozilla-Ocho/llamafile.git
-cd llamafile
-```
-
-Download: https://cosmo.zip/pub/cosmos/bin/make (windows users need this too: https://justine.lol/cosmo3/)
-
-```bash
-# within llamafile directory
-chmod +x path/to/the/downloaded/make
-path/to/the/downloaded/make -j8
-sudo path/to/the/downloaded/make install PREFIX=/usr/local
-```
-You can now run llamfile
-
-2. Downloading Model
-
-click [here](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF/resolve/main/Hermes-2-Pro-Mistral-7B.Q5_K_M.gguf?download=true) to download Mistral_7b.Q5_K_M (5.13 GB)
-
-3. Running Llamafile with a model
-
-```bash
-llamafile -ngl 9999 -m path/to/the/downloaded/Hermes-2-Pro-Mistral-7B.Q5_K_M.gguf --host 0.0.0.0 -c 4096
-```
-
-You are now running a local model on http://localhost:8080
-
-#### Recommended Settings for Apple M1 users:
-```bash
-llamafile -ngl 9999 -m Hermes-2-Pro-Mistral-7B.Q5_K_M.gguf --host 0.0.0.0 --nobrowser -c 2048 --gpu APPLE -t 12
-```
-run `sysctl -n hw.logicalcpu` to see what number to give the `-t` threads option
-
 ## Concepts
 
 ### Generators
@@ -189,11 +137,3 @@ base for generating new code.
 
 - [Clag](https://github.com/sublayerapp/clag) - A ruby gem that generates
 command line commands from a simple description right in your terminal.
-
-## Development
-
-TBD
-
-## Contributing
-
-TBD
data/lib/sublayer/components/output_adapters/single_string.rb CHANGED
@@ -2,41 +2,15 @@ module Sublayer
   module Components
     module OutputAdapters
       class SingleString
-        attr_reader :name
+        attr_reader :name, :description
 
         def initialize(options)
           @name = options[:name]
           @description = options[:description]
         end
 
-        def to_hash
-          {
-            name: @name,
-            description: @description,
-            parameters: {
-              type: "object",
-              properties: {
-                @name => {
-                  type: "string",
-                  description: @description
-                }
-              }
-            }
-          }
-        end
-
-        def to_xml
-          <<-XML
-            <tool_description>
-              <tool_name>#{@name}</tool_name>
-              <tool_description>#{@description}</tool_description>
-              <parameters>
-                <name>#{@name}</name>
-                <type>string</type>
-                <description>#{@description}</description>
-              </parameters>
-            </tool_description>
-          XML
+        def properties
+          [OpenStruct.new(name: @name, type: 'string', description: @description, required: true)]
         end
       end
     end
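The effect of this change: instead of each adapter carrying every provider's wire format (`to_hash` for OpenAI, `to_xml` for the XML-prompted providers), an adapter now just exposes a `properties` list and each provider serializes it itself. A minimal sketch exercising the new interface (class body reproduced from the diff; `require "ostruct"` is assumed to be handled elsewhere in the gem):

```ruby
require "ostruct"

# Reproduced from the diff: the adapter describes its output as a list
# of property structs; providers format them into their own schemas.
class SingleString
  attr_reader :name, :description

  def initialize(options)
    @name = options[:name]
    @description = options[:description]
  end

  def properties
    [OpenStruct.new(name: @name, type: "string", description: @description, required: true)]
  end
end

adapter = SingleString.new(name: "summary", description: "A one-line summary")
prop = adapter.properties.first
puts prop.name     # summary
puts prop.type     # string
puts prop.required # true
```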
data/lib/sublayer/components/output_adapters/string_selection_from_list.rb ADDED
@@ -0,0 +1,30 @@
+module Sublayer
+  module Components
+    module OutputAdapters
+      class StringSelectionFromList
+        attr_reader :name, :description, :options
+
+        def initialize(options)
+          @name = options[:name]
+          @description = options[:description]
+          @list = options[:options]
+        end
+
+        def properties
+          [OpenStruct.new(name: @name, type: 'string', description: @description, required: true, enum: @list)]
+        end
+
+        def load_instance_data(generator)
+          case @list
+          when Proc
+            @list = generator.instance_exec(&@list)
+          when Symbol
+            @list = generator.send(@list)
+          else
+            @list
+          end
+        end
+      end
+    end
+  end
+end
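The point of interest in the new adapter is `load_instance_data`: the `options` value may be a literal Array, a Symbol naming a method on the generator, or a Proc evaluated in the generator's context via `instance_exec` (so it can read the generator's instance variables). A sketch with the class body reproduced from the diff and a hypothetical generator stand-in:

```ruby
require "ostruct"

# Class body reproduced from the diff.
class StringSelectionFromList
  attr_reader :name, :description, :options

  def initialize(options)
    @name = options[:name]
    @description = options[:description]
    @list = options[:options]
  end

  def properties
    [OpenStruct.new(name: @name, type: "string", description: @description, required: true, enum: @list)]
  end

  def load_instance_data(generator)
    case @list
    when Proc
      @list = generator.instance_exec(&@list)
    when Symbol
      @list = generator.send(@list)
    else
      @list
    end
  end
end

# Hypothetical generator stand-in with an instance variable the Proc reads:
class FakeGenerator
  def initialize
    @choices = ["red", "green", "blue"]
  end
end

adapter = StringSelectionFromList.new(name: "color",
                                      description: "Pick a color",
                                      options: -> { @choices })
adapter.load_instance_data(FakeGenerator.new)
puts adapter.properties.first.enum.inspect # ["red", "green", "blue"]
```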
data/lib/sublayer/generators/base.rb CHANGED
@@ -9,6 +9,7 @@ module Sublayer
       end
 
       def generate
+        self.class::OUTPUT_ADAPTER.load_instance_data(self) if self.class::OUTPUT_ADAPTER.respond_to?(:load_instance_data)
         @results = Sublayer.configuration.ai_provider.call(prompt: prompt, output_adapter: self.class::OUTPUT_ADAPTER)
       end
     end
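Only the guard line inside `generate` is taken from the diff; the classes below are hypothetical stand-ins, not Sublayer classes. The sketch shows the call order the guard establishes: adapters that implement `load_instance_data` get a chance to resolve per-instance data before the provider is called, while adapters without it are left alone.

```ruby
# Hypothetical adapter stand-in that resolves data from the generator.
class FakeAdapter
  attr_reader :resolved

  def load_instance_data(generator)
    @resolved = generator.send(:available_routes)
  end
end

class FakeGenerator
  OUTPUT_ADAPTER = FakeAdapter.new

  def generate
    # Guard taken verbatim from the diff:
    self.class::OUTPUT_ADAPTER.load_instance_data(self) if self.class::OUTPUT_ADAPTER.respond_to?(:load_instance_data)
    # A real generator would now call the configured AI provider with
    # the prompt and OUTPUT_ADAPTER; here we just return the resolved data.
    self.class::OUTPUT_ADAPTER.resolved
  end

  private

  def available_routes
    ["GET /", "GET /users"]
  end
end

puts FakeGenerator.new.generate.inspect # ["GET /", "GET /users"]
```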
data/lib/sublayer/generators/examples/route_selection_from_user_intent_generator.rb ADDED
@@ -0,0 +1,29 @@
+class RouteSelectionFromUserIntentGenerator < Sublayer::Generators::Base
+  llm_output_adapter type: :string_selection_from_list,
+    name: "route",
+    description: "A route selected from the list",
+    options: :available_routes
+
+  def initialize(user_intent:)
+    @user_intent = user_intent
+  end
+
+  def generate
+    super
+  end
+
+  def available_routes
+    ["GET /", "GET /users", "GET /users/:id", "POST /users", "PUT /users/:id", "DELETE /users/:id"]
+  end
+
+  def prompt
+    <<-PROMPT
+      You are skilled at selecting routes based on user intent.
+
+      Your task is to choose a route based on the following intent:
+
+      The user's intent is:
+      #{@user_intent}
+    PROMPT
+  end
+end
data/lib/sublayer/generators/examples/sentiment_from_text_generator.rb ADDED
@@ -0,0 +1,26 @@
+class SentimentFromTextGenerator < Sublayer::Generators::Base
+  llm_output_adapter type: :string_selection_from_list,
+    name: "sentiment_value",
+    description: "A sentiment value from the list",
+    options: -> { @sentiment_options }
+
+  def initialize(text:, sentiment_options:)
+    @text = text
+    @sentiment_options = sentiment_options
+  end
+
+  def generate
+    super
+  end
+
+  def prompt
+    <<-PROMPT
+      You are an expert at determining sentiment from text.
+
+      You are tasked with analyzing the following text and determining its sentiment value.
+
+      The text is:
+      #{@text}
+    PROMPT
+  end
+end
data/lib/sublayer/providers/claude.rb CHANGED
@@ -5,49 +5,49 @@ module Sublayer
   module Providers
     class Claude
      def self.call(prompt:, output_adapter:)
-        system_prompt = <<-PROMPT
-          In this environment you have access to a set of tools you can use to answer the user's question.
-
-          You may call them like this:
-          <function_calls>
-            <invoke>
-              <tool_name>$TOOL_NAME</tool_name>
-              <parameters>
-                <$PARAMETER_NAME>$PARAMETER_VALUE</$PARAMETER_NAME>
-                ...
-              </parameters>
-            </invoke>
-          </function_calls>
-
-          Here are the tools available:
-          <tools>
-            #{output_adapter.to_xml}
-          </tools>
-
-          Respond only with valid xml. The entire response should be wrapped in a <response> tag. Any additional information not inside a tool call should go in a <scratch> tag.
-        PROMPT
-
         response = HTTParty.post(
           "https://api.anthropic.com/v1/messages",
           headers: {
             "x-api-key": ENV.fetch("ANTHROPIC_API_KEY"),
             "anthropic-version": "2023-06-01",
-            "content-type": "application/json"
+            "content-type": "application/json",
+            "anthropic-beta": "tools-2024-04-04"
           },
           body: {
             model: Sublayer.configuration.ai_model,
             max_tokens: 4096,
-            system: system_prompt,
-            messages: [ { "role": "user", "content": prompt }]
+            tools: [
+              {
+                name: output_adapter.name,
+                description: output_adapter.description,
+                input_schema: {
+                  type: "object",
+                  properties: format_properties(output_adapter),
+                  required: output_adapter.properties.select(&:required).map(&:name)
+                }
+              }
+            ],
+            messages: [{ "role": "user", "content": prompt }]
           }.to_json
         )
         raise "Error generating with Claude, error: #{response.body}" unless response.code == 200
 
-        text_containing_xml = JSON.parse(response.body).dig("content", 0, "text")
-        function_output = Nokogiri::HTML.parse(text_containing_xml.match(/\<#{output_adapter.name}\>(.*?)\<\/#{output_adapter.name}\>/m)[1]).text
+        function_input = JSON.parse(response.body).dig("content").find {|content| content['type'] == 'tool_use'}.dig("input")
+        function_input[output_adapter.name]
+      end
+
+      private
+      def self.format_properties(output_adapter)
+        output_adapter.properties.each_with_object({}) do |property, hash|
+          hash[property.name] = {
+            type: property.type,
+            description: property.description
+          }
 
-        raise "Claude did not format response, error: #{response.body}" unless function_output
-        return function_output
+          if property.enum
+            hash[property.name][:enum] = property.enum
+          end
+        end
       end
     end
   end
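The Claude provider thus drops the hand-rolled XML prompt for the native tool-use API, and `format_properties` builds the JSON-schema `properties` object from the adapter's structs. A sketch of what that helper produces, with `format_properties` reproduced from the diff and a hypothetical adapter stand-in:

```ruby
require "ostruct"

# format_properties reproduced from the diff.
def format_properties(output_adapter)
  output_adapter.properties.each_with_object({}) do |property, hash|
    hash[property.name] = {
      type: property.type,
      description: property.description
    }
    hash[property.name][:enum] = property.enum if property.enum
  end
end

# Hypothetical stand-in for a StringSelectionFromList-style adapter:
adapter = OpenStruct.new(
  name: "sentiment_value",
  properties: [
    OpenStruct.new(name: "sentiment_value", type: "string",
                   description: "A sentiment value from the list",
                   required: true, enum: ["positive", "negative"])
  ]
)

schema = format_properties(adapter)
puts schema["sentiment_value"][:enum].inspect # ["positive", "negative"]
puts adapter.properties.select(&:required).map(&:name).inspect # ["sentiment_value"]
```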
data/lib/sublayer/providers/gemini.rb CHANGED
@@ -21,9 +21,13 @@ module Sublayer
 
           Here are the tools available:
           <tools>
-            <tool>
-              #{output_adapter.to_xml}
-            </tool>
+            <tool_description>
+              <tool_name>#{output_adapter.name}</tool_name>
+              <tool_description>#{output_adapter.description}</tool_description>
+              <parameters>
+                #{format_properties(output_adapter)}
+              </parameters>
+            </tool_description>
           </tools>
 
           Respond only with valid xml.
@@ -49,6 +53,17 @@ module Sublayer
         raise "Gemini did not format response, error: #{response.body}" unless tool_output
         return tool_output
       end
+
+      private
+      def self.format_properties(output_adapter)
+        output_adapter.properties.each_with_object("") do |property, xml|
+          xml << "<name>#{property.name}</name>"
+          xml << "<type>#{property.type}</type>"
+          xml << "<description>#{property.description}</description>"
+          xml << "<required>#{property.required}</required>"
+          xml << "<enum>#{property.enum}</enum>" if property.enum
+        end
+      end
     end
   end
 end
data/lib/sublayer/providers/open_ai.rb CHANGED
@@ -16,18 +16,44 @@ module Sublayer
             "content": prompt
           }
         ],
-        function_call: { name: output_adapter.name },
-        functions: [
-          output_adapter.to_hash
+        tool_choice: { type: "function", function: { name: output_adapter.name }},
+        tools: [
+          {
+            type: "function",
+            function: {
+              name: output_adapter.name,
+              description: output_adapter.description,
+              parameters: {
+                type: "object",
+                properties: OpenAI.format_properties(output_adapter)
+              },
+              required: [output_adapter.properties.select(&:required).map(&:name)]
+            }
+          }
         ]
+
       })
 
       message = response.dig("choices", 0, "message")
-      raise "No function called" unless message["function_call"]
 
-      function_name = message.dig("function_call", output_adapter.name)
-      args_from_llm = message.dig("function_call", "arguments")
-      JSON.parse(args_from_llm)[output_adapter.name]
+      raise "No function called" unless message["tool_calls"].length > 0
+
+      function_body = message.dig("tool_calls", 0, "function", "arguments")
+      JSON.parse(function_body)[output_adapter.name]
+    end
+
+    private
+    def self.format_properties(output_adapter)
+      output_adapter.properties.each_with_object({}) do |property, hash|
+        hash[property.name] = {
+          type: property.type,
+          description: property.description
+        }
+
+        if property.enum
+          hash[property.name][:enum] = property.enum
+        end
+      end
     end
   end
 end
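The OpenAI provider moves from the deprecated `function_call`/`functions` request fields to `tool_choice`/`tools`, and reads the result from `message["tool_calls"]` instead of `message["function_call"]`. The new parsing path can be run against a canned response hash shaped like the payload the diff reads (field names from the diff; the values here are made up):

```ruby
require "json"

# Canned chat.completions-style response containing one tool call:
response = {
  "choices" => [
    {
      "message" => {
        "tool_calls" => [
          {
            "function" => {
              "name" => "route",
              "arguments" => '{"route":"GET /users"}'
            }
          }
        ]
      }
    }
  ]
}

# Parsing logic as in the diff:
message = response.dig("choices", 0, "message")
raise "No function called" unless message["tool_calls"].length > 0

function_body = message.dig("tool_calls", 0, "function", "arguments")
puts JSON.parse(function_body)["route"] # GET /users
```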
data/lib/sublayer/version.rb CHANGED
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module Sublayer
-  VERSION = "0.0.7"
+  VERSION = "0.1.0-alpha"
 end
data/sublayer.gemspec CHANGED
@@ -34,7 +34,7 @@ Gem::Specification.new do |spec|
   spec.add_dependency "colorize"
   spec.add_dependency "activesupport"
   spec.add_dependency "zeitwerk"
-  spec.add_dependency "nokogiri"
+  spec.add_dependency "nokogiri", "~> 1.16.5"
   spec.add_dependency "httparty"
 
   spec.add_development_dependency "rspec", "~> 3.12"
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: sublayer
 version: !ruby/object:Gem::Version
-  version: 0.0.7
+  version: 0.1.0.pre.alpha
 platform: ruby
 authors:
 - Scott Werner
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2024-04-22 00:00:00.000000000 Z
+date: 2024-05-16 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: ruby-openai
@@ -70,16 +70,16 @@ dependencies:
   name: nokogiri
   requirement: !ruby/object:Gem::Requirement
     requirements:
-    - - ">="
+    - - "~>"
       - !ruby/object:Gem::Version
-        version: '0'
+        version: 1.16.5
   type: :runtime
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
-    - - ">="
+    - - "~>"
       - !ruby/object:Gem::Version
-        version: '0'
+        version: 1.16.5
 - !ruby/object:Gem::Dependency
   name: httparty
   requirement: !ruby/object:Gem::Requirement
@@ -180,15 +180,16 @@ files:
 - lib/sublayer/agents/base.rb
 - lib/sublayer/components/output_adapters.rb
 - lib/sublayer/components/output_adapters/single_string.rb
+- lib/sublayer/components/output_adapters/string_selection_from_list.rb
 - lib/sublayer/generators/base.rb
 - lib/sublayer/generators/examples/code_from_blueprint_generator.rb
 - lib/sublayer/generators/examples/code_from_description_generator.rb
 - lib/sublayer/generators/examples/description_from_code_generator.rb
 - lib/sublayer/generators/examples/invalid_to_valid_json_generator.rb
+- lib/sublayer/generators/examples/route_selection_from_user_intent_generator.rb
+- lib/sublayer/generators/examples/sentiment_from_text_generator.rb
 - lib/sublayer/providers/claude.rb
 - lib/sublayer/providers/gemini.rb
-- lib/sublayer/providers/groq.rb
-- lib/sublayer/providers/local.rb
 - lib/sublayer/providers/open_ai.rb
 - lib/sublayer/tasks/base.rb
 - lib/sublayer/version.rb
@@ -212,9 +213,9 @@ required_ruby_version: !ruby/object:Gem::Requirement
     version: 2.6.0
 required_rubygems_version: !ruby/object:Gem::Requirement
   requirements:
-  - - ">="
+  - - ">"
     - !ruby/object:Gem::Version
-      version: '0'
+      version: 1.3.1
 requirements: []
 rubygems_version: 3.3.26
 signing_key:
data/lib/sublayer/providers/groq.rb DELETED
@@ -1,52 +0,0 @@
-# Sublayer.configuration.ai_provider = Sublayer::Providers::Groq
-# Sublayer.configuration.ai_model = "mixtral-8x7b-32768"
-
-module Sublayer
-  module Providers
-    class Groq
-      def self.call(prompt:, output_adapter:)
-        system_prompt = <<-PROMPT
-          You have access to a set of tools to answer the prompt.
-
-          You may call tools like this:
-          <tool_calls>
-            <tool_call>
-              <tool_name>$TOOL_NAME</tool_name>
-              <parameters>
-                <#{output_adapter.name}>$VALUE</#{output_adapter.name}>
-                ...
-              </parameters>
-            </tool_call>
-          </tool_calls>
-
-          Here are the tools available:
-          <tools>
-            #{output_adapter.to_xml}
-          </tools>
-
-          Respond only with valid xml.
-          The entire response should be wrapped in a <response> tag.
-          Your response should call a tool inside a <tool_calls> tag.
-        PROMPT
-
-        response = HTTParty.post(
-          "https://api.groq.com/openai/v1/chat/completions",
-          headers: {
-            "Authorization": "Bearer #{ENV["GROQ_API_KEY"]}",
-            "Content-Type": "application/json"
-          },
-          body: {
-            "messages": [{"role": "user", "content": "#{system_prompt}\n#{prompt}"}],
-            "model": Sublayer.configuration.ai_model
-          }.to_json
-        )
-
-        text_containing_xml = response.dig("choices", 0, "message", "content")
-        tool_output = Nokogiri::HTML.parse(text_containing_xml.match(/\<#{output_adapter.name}\>(.*?)\<\/#{output_adapter.name}\>/m)[1]).text
-        raise "Groq did not format response correctly, error: #{response.body}" unless tool_output
-
-        return tool_output
-      end
-    end
-  end
-end
data/lib/sublayer/providers/local.rb DELETED
@@ -1,50 +0,0 @@
-# Sublayer.configuration.ai_provider = Sublayer::Providers::Local
-# Sublayer.configuration.ai_model = "LLaMA_CPP"
-
-module Sublayer
-  module Providers
-    class Local
-      def self.call(prompt:, output_adapter:)
-        system_prompt = <<-PROMPT
-          You have access to a set of tools to respond to the prompt.
-
-          You may call a tool with xml like this:
-          <parameters>
-            <#{output_adapter.name}>$VALUE</#{output_adapter.name}>
-            ...
-          </parameters>
-
-          Here are descriptions of the available tools:
-          <tools>
-            <tool>
-              #{output_adapter.to_xml}
-            </tool>
-          </tools>
-
-          Respond only with valid xml.
-          Your response should call a tool with xml inside a <parameters> tag.
-        PROMPT
-
-        response = HTTParty.post(
-          "http://localhost:8080/v1/chat/completions",
-          headers: {
-            "Authorization": "Bearer no-key",
-            "Content-Type": "application/json"
-          },
-          body: {
-            "model": Sublayer.configuration.ai_model,
-            "messages": [
-              { "role": "user", "content": "#{system_prompt}\n#{prompt}}" }
-            ]
-          }.to_json
-        )
-
-        text_containing_xml = response.dig("choices", 0, "message", "content")
-        tool_output = Nokogiri::HTML.parse(text_containing_xml.match(/\<#{output_adapter.name}\>(.*?)\<\/#{output_adapter.name}\>/m)[1]).text
-        raise "The response was not formatted correctly: #{response.body}" unless tool_output
-
-        return tool_output
-      end
-    end
-  end
-end