sublayer 0.0.9 → 0.1.0.pre.alpha
- checksums.yaml +4 -4
- data/README.md +3 -77
- data/lib/sublayer/components/output_adapters/single_string.rb +3 -29
- data/lib/sublayer/components/output_adapters/string_selection_from_list.rb +30 -0
- data/lib/sublayer/generators/base.rb +1 -0
- data/lib/sublayer/generators/examples/code_from_blueprint_generator.rb +30 -0
- data/lib/sublayer/generators/examples/code_from_description_generator.rb +26 -0
- data/lib/sublayer/generators/examples/description_from_code_generator.rb +23 -0
- data/lib/sublayer/generators/examples/invalid_to_valid_json_generator.rb +23 -0
- data/lib/sublayer/generators/examples/route_selection_from_user_intent_generator.rb +29 -0
- data/lib/sublayer/generators/examples/sentiment_from_text_generator.rb +26 -0
- data/lib/sublayer/providers/claude.rb +29 -29
- data/lib/sublayer/providers/gemini.rb +18 -3
- data/lib/sublayer/providers/open_ai.rb +33 -7
- data/lib/sublayer/version.rb +1 -1
- metadata +12 -7
- data/lib/sublayer/providers/groq.rb +0 -52
- data/lib/sublayer/providers/local.rb +0 -50
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 5a723acaec8ce50c64470d067dc29769fb8efe9415e4a867507272ef8961e687
+  data.tar.gz: 76508d4f490e5d9601d3827e35191b20ba7887e2086696bb610567e0f49639bd
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 983e31c6d8b85a6116accf2b7eb395b257e79de967e79f4046ddfacf046522bc274335e931e8665f67864704fd2af7b331cd93cc294f50d9533f44bf99d94f39
+  data.tar.gz: 3a18adf3d93a074897855874780f7ae0205ed5de3541923f5711a238ab0823fadedb3a0a82fe32c87cbca5da918656d0fa9d982bb2a3ef1651a5a606f5647d81
data/README.md
CHANGED
@@ -14,10 +14,10 @@ for new features and bug fixes.
 
 To maintain stability in your application, we recommend pinning the version of
 Sublayer in your Gemfile to a specific minor version. For example, to pin to
-version 0.
+version 0.1.x, you would add the following line to your Gemfile:
 
 ```ruby
-gem 'sublayer', '~> 0.
+gem 'sublayer', '~> 0.1'
 ```
 
 ## Installation
@@ -29,7 +29,7 @@ Install the gem by running the following commands:
 Or add this line to your application's Gemfile:
 
 ```ruby
-gem 'sublayer', '~> 0.
+gem 'sublayer', '~> 0.1'
 ```
 
 ## Choose your AI Model
@@ -73,72 +73,6 @@ Sublayer.configuration.ai_provider = Sublayer::Providers::Claude
 Sublayer.configuration.ai_model ="claude-3-opus-20240229"
 ```
 
-### Groq
-
-Expects you to have a Groq API key set in the `GROQ_API_KEY` environment variable.
-
-Visit [Groq Console](https://console.groq.com/) to get an API key.
-
-Usage:
-```ruby
-Sublayer.configuration.ai_provider = Sublayer::Providers::Groq
-Sublayer.configuration.ai_model = "mixtral-8x7b-32768"
-```
-
-### Local
-
-If you've never run a local model before see the [Local Model Quickstart](#local-model-quickstart) below. Know that local models take several GB of space.
-
-The model you use must have the ChatML formatted v1/chat/completions endpoint to work with sublayer (many models do by default)
-
-Usage:
-
-Run your local model on http://localhost:8080 and then set:
-```ruby
-Sublayer.configuration.ai_provider = Sublayer::Providers::Local
-Sublayer.configuration.ai_model = "LLaMA_CPP"
-```
-
-#### Local Model Quickstart:
-
-Instructions to run a local model
-
-1. Setting up Llamafile
-
-```bash
-cd where/you/keep/your/projects
-git clone git@github.com:Mozilla-Ocho/llamafile.git
-cd llamafile
-```
-
-Download: https://cosmo.zip/pub/cosmos/bin/make (windows users need this too: https://justine.lol/cosmo3/)
-
-```bash
-# within llamafile directory
-chmod +x path/to/the/downloaded/make
-path/to/the/downloaded/make -j8
-sudo path/to/the/downloaded/make install PREFIX=/usr/local
-```
-You can now run llamfile
-
-2. Downloading Model
-
-click [here](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF/resolve/main/Hermes-2-Pro-Mistral-7B.Q5_K_M.gguf?download=true) to download Mistral_7b.Q5_K_M (5.13 GB)
-
-3. Running Llamafile with a model
-
-```bash
-llamafile -ngl 9999 -m path/to/the/downloaded/Hermes-2-Pro-Mistral-7B.Q5_K_M.gguf --host 0.0.0.0 -c 4096
-```
-
-You are now running a local model on http://localhost:8080
-
-#### Recommended Settings for Apple M1 users:
-```bash
-llamafile -ngl 9999 -m Hermes-2-Pro-Mistral-7B.Q5_K_M.gguf --host 0.0.0.0 --nobrowser -c 2048 --gpu APPLE -t 12
-```
-run `sysctl -n hw.logicalcpu` to see what number to give the `-t` threads option
-
 ## Concepts
 
 ### Generators
@@ -203,11 +137,3 @@ base for generating new code.
 
 - [Clag](https://github.com/sublayerapp/clag) - A ruby gem that generates
 command line commands from a simple description right in your terminal.
-
-## Development
-
-TBD
-
-## Contributing
-
-TBD
data/lib/sublayer/components/output_adapters/single_string.rb
CHANGED
@@ -2,41 +2,15 @@ module Sublayer
   module Components
     module OutputAdapters
       class SingleString
-        attr_reader :name
+        attr_reader :name, :description
 
         def initialize(options)
           @name = options[:name]
           @description = options[:description]
         end
 
-        def
-
-            name: @name,
-            description: @description,
-            parameters: {
-              type: "object",
-              properties: {
-                @name => {
-                  type: "string",
-                  description: @description
-                }
-              }
-            }
-          }
-        end
-
-        def to_xml
-          <<-XML
-          <tool_description>
-            <tool_name>#{@name}</tool_name>
-            <tool_description>#{@description}</tool_description>
-            <parameters>
-              <name>#{@name}</name>
-              <type>string</type>
-              <description>#{@description}</description>
-            </parameters>
-          </tool_description>
-          XML
+        def properties
+          [OpenStruct.new(name: @name, type: 'string', description: @description, required: true)]
         end
       end
     end
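The rewrite above replaces the adapter's hand-rolled `to_xml` serialization with a provider-neutral `properties` list that each provider formats for itself. A minimal runnable sketch of that interface (`SingleStringSketch` is a hypothetical stand-in for the real class under `Sublayer::Components::OutputAdapters`):

```ruby
require 'ostruct'

# Stand-in mirroring the new SingleString adapter from this diff:
# it exposes one required string property named after the adapter.
class SingleStringSketch
  attr_reader :name, :description

  def initialize(options)
    @name = options[:name]
    @description = options[:description]
  end

  def properties
    [OpenStruct.new(name: @name, type: 'string', description: @description, required: true)]
  end
end

adapter = SingleStringSketch.new(name: "generated_code", description: "The generated code")
prop = adapter.properties.first
puts prop.name     # => generated_code
puts prop.required # => true
```

Each provider (Claude, Gemini, OpenAI) then walks this array in its own `format_properties` helper instead of receiving pre-baked XML.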
data/lib/sublayer/components/output_adapters/string_selection_from_list.rb
ADDED
@@ -0,0 +1,30 @@
+module Sublayer
+  module Components
+    module OutputAdapters
+      class StringSelectionFromList
+        attr_reader :name, :description, :options
+
+        def initialize(options)
+          @name = options[:name]
+          @description = options[:description]
+          @list = options[:options]
+        end
+
+        def properties
+          [OpenStruct.new(name: @name, type: 'string', description: @description, required: true, enum: @list)]
+        end
+
+        def load_instance_data(generator)
+          case @list
+          when Proc
+            @list = generator.instance_exec(&@list)
+          when Symbol
+            @list = generator.send(@list)
+          else
+            @list
+          end
+        end
+      end
+    end
+  end
+end
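The interesting part of the new adapter is `load_instance_data`: the `:options` value may be a literal Array, a Symbol naming a generator method, or a Proc evaluated in the generator's context, so the list can be resolved lazily per instance. A self-contained sketch of that resolution logic (`ListSketch` and `FakeGenerator` are hypothetical stand-ins):

```ruby
# Sketch of the list-resolution behavior added by StringSelectionFromList.
class ListSketch
  attr_reader :list

  def initialize(options)
    @list = options[:options]
  end

  def load_instance_data(generator)
    case @list
    when Proc
      # Evaluate the proc with the generator's instance variables in scope.
      @list = generator.instance_exec(&@list)
    when Symbol
      # Call a method defined on the generator.
      @list = generator.send(@list)
    end
  end
end

class FakeGenerator
  def initialize
    @sentiment_options = ["positive", "negative"]
  end

  def available_routes
    ["GET /", "POST /users"]
  end
end

gen = FakeGenerator.new

by_symbol = ListSketch.new(options: :available_routes)
by_symbol.load_instance_data(gen)
# by_symbol.list => ["GET /", "POST /users"]

by_proc = ListSketch.new(options: -> { @sentiment_options })
by_proc.load_instance_data(gen)
# by_proc.list => ["positive", "negative"]
```

The route-selection example below uses the Symbol form and the sentiment example uses the Proc form.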
data/lib/sublayer/generators/base.rb
CHANGED
@@ -9,6 +9,7 @@ module Sublayer
     end
 
     def generate
+      self.class::OUTPUT_ADAPTER.load_instance_data(self) if self.class::OUTPUT_ADAPTER.respond_to?(:load_instance_data)
       @results = Sublayer.configuration.ai_provider.call(prompt: prompt, output_adapter: self.class::OUTPUT_ADAPTER)
     end
   end
data/lib/sublayer/generators/examples/code_from_blueprint_generator.rb
ADDED
@@ -0,0 +1,30 @@
+class CodeFromBlueprintGenerator < Sublayer::Generators::Base
+  llm_output_adapter type: :single_string,
+    name: "generated_code",
+    description: "The generated code for the description"
+
+  def initialize(blueprint_description:, blueprint_code:, description:)
+    @blueprint_description = blueprint_description
+    @blueprint_code = blueprint_code
+    @description = description
+  end
+
+  def generate
+    super
+  end
+
+  def prompt
+    <<-PROMPT
+    You are an expert programmer and are great at looking at and understanding existing patterns and applying them to new situations.
+
+    The blueprint we're working with is: #{@blueprint_description}.
+    The code for that blueprint is:
+    #{@blueprint_code}
+
+    You need to use the blueprint above and modify it so that it satisfied the following description:
+    #{@description}
+
+    Take a deep breath and think step by step before you start coding.
+    PROMPT
+  end
+end
data/lib/sublayer/generators/examples/code_from_description_generator.rb
ADDED
@@ -0,0 +1,26 @@
+class CodeFromDescriptionGenerator < Sublayer::Generators::Base
+  llm_output_adapter type: :single_string,
+    name: "generated_code",
+    description: "The generated code in the requested language"
+
+  def initialize(description:, technologies:)
+    @description = description
+    @technologies = technologies
+  end
+
+  def generate
+    super
+  end
+
+  def prompt
+    <<-PROMPT
+    You are an expert programmer in #{@technologies.join(", ")}.
+
+    You are tasked with writing code using the following technologies: #{@technologies.join(", ")}.
+
+    The description of the task is #{@description}
+
+    Take a deep breath and think step by step before you start coding.
+    PROMPT
+  end
+end
data/lib/sublayer/generators/examples/description_from_code_generator.rb
ADDED
@@ -0,0 +1,23 @@
+class DescriptionFromCodeGenerator < Sublayer::Generators::Base
+  llm_output_adapter type: :single_string,
+    name: "code_description",
+    description: "A description of what the code in the file does"
+
+  def initialize(code:)
+    @code = code
+  end
+
+  def generate
+    super
+  end
+
+  def prompt
+    <<-PROMPT
+    You are an experienced software engineer. Below is a chunk of code:
+
+    #{@code}
+
+    Please read the code carefully and provide a high-level description of what this code does, including its purpose, functionalities, and any noteworthy details.
+    PROMPT
+  end
+end
data/lib/sublayer/generators/examples/invalid_to_valid_json_generator.rb
ADDED
@@ -0,0 +1,23 @@
+class InvalidToValidJsonGenerator < Sublayer::Generators::Base
+  llm_output_adapter type: :single_string,
+    name: "valid_json",
+    description: "The valid JSON string"
+
+  def initialize(invalid_json:)
+    @invalid_json = invalid_json
+  end
+
+  def generate
+    super
+  end
+
+  def prompt
+    <<-PROMPT
+    You are an expert in JSON parsing.
+
+    The given string is not a valid JSON: #{@invalid_json}
+
+    Please fix this and produce a valid JSON.
+    PROMPT
+  end
+end
data/lib/sublayer/generators/examples/route_selection_from_user_intent_generator.rb
ADDED
@@ -0,0 +1,29 @@
+class RouteSelectionFromUserIntentGenerator < Sublayer::Generators::Base
+  llm_output_adapter type: :string_selection_from_list,
+    name: "route",
+    description: "A route selected from the list",
+    options: :available_routes
+
+  def initialize(user_intent:)
+    @user_intent = user_intent
+  end
+
+  def generate
+    super
+  end
+
+  def available_routes
+    ["GET /", "GET /users", "GET /users/:id", "POST /users", "PUT /users/:id", "DELETE /users/:id"]
+  end
+
+  def prompt
+    <<-PROMPT
+    You are skilled at selecting routes based on user intent.
+
+    Your task is to choose a route based on the following intent:
+
+    The user's intent is:
+    #{@user_intent}
+    PROMPT
+  end
+end
data/lib/sublayer/generators/examples/sentiment_from_text_generator.rb
ADDED
@@ -0,0 +1,26 @@
+class SentimentFromTextGenerator < Sublayer::Generators::Base
+  llm_output_adapter type: :string_selection_from_list,
+    name: "sentiment_value",
+    description: "A sentiment value from the list",
+    options: -> { @sentiment_options }
+
+  def initialize(text:, sentiment_options:)
+    @text = text
+    @sentiment_options = sentiment_options
+  end
+
+  def generate
+    super
+  end
+
+  def prompt
+    <<-PROMPT
+    You are an expert at determining sentiment from text.
+
+    You are tasked with analyzing the following text and determining its sentiment value.
+
+    The text is:
+    #{@text}
+    PROMPT
+  end
+end
data/lib/sublayer/providers/claude.rb
CHANGED
@@ -5,49 +5,49 @@ module Sublayer
   module Providers
     class Claude
      def self.call(prompt:, output_adapter:)
-        system_prompt = <<-PROMPT
-          In this environment you have access to a set of tools you can use to answer the user's question.
-
-          You may call them like this:
-          <function_calls>
-          <invoke>
-          <tool_name>$TOOL_NAME</tool_name>
-          <parameters>
-          <$PARAMETER_NAME>$PARAMETER_VALUE</$PARAMETER_NAME>
-          ...
-          </parameters>
-          </invoke>
-          </function_calls>
-
-          Here are the tools available:
-          <tools>
-          #{output_adapter.to_xml}
-          </tools>
-
-          Respond only with valid xml. The entire response should be wrapped in a <response> tag. Any additional information not inside a tool call should go in a <scratch> tag.
-        PROMPT
-
        response = HTTParty.post(
          "https://api.anthropic.com/v1/messages",
          headers: {
            "x-api-key": ENV.fetch("ANTHROPIC_API_KEY"),
            "anthropic-version": "2023-06-01",
-            "content-type": "application/json"
+            "content-type": "application/json",
+            "anthropic-beta": "tools-2024-04-04"
          },
          body: {
            model: Sublayer.configuration.ai_model,
            max_tokens: 4096,
-
-
+            tools: [
+              {
+                name: output_adapter.name,
+                description: output_adapter.description,
+                input_schema: {
+                  type: "object",
+                  properties: format_properties(output_adapter),
+                  required: output_adapter.properties.select(&:required).map(&:name)
+                }
+              }
+            ],
+            messages: [{ "role": "user", "content": prompt }]
          }.to_json
        )
        raise "Error generating with Claude, error: #{response.body}" unless response.code == 200
 
-
-
+        function_input = JSON.parse(response.body).dig("content").find {|content| content['type'] == 'tool_use'}.dig("input")
+        function_input[output_adapter.name]
+      end
+
+      private
+      def self.format_properties(output_adapter)
+        output_adapter.properties.each_with_object({}) do |property, hash|
+          hash[property.name] = {
+            type: property.type,
+            description: property.description
+          }
 
-
-
+          if property.enum
+            hash[property.name][:enum] = property.enum
+          end
+        end
      end
    end
  end
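With this change the Claude provider moves from hand-rolled XML prompting to Anthropic's native tools API. The schema shape its new `format_properties` helper produces can be sketched in isolation (the adapter is faked with `OpenStruct`; the helper here mirrors the diff, not the gem's internals):

```ruby
require 'ostruct'
require 'json'

# Mirrors the format_properties helper added to the Claude provider:
# each adapter property becomes a JSON-Schema entry, with :enum included
# only when the property defines one.
def format_properties(properties)
  properties.each_with_object({}) do |property, hash|
    hash[property.name] = {
      type: property.type,
      description: property.description
    }
    hash[property.name][:enum] = property.enum if property.enum
  end
end

props = [OpenStruct.new(name: "route", type: "string",
                        description: "A route selected from the list",
                        required: true, enum: ["GET /", "POST /users"])]

input_schema = {
  type: "object",
  properties: format_properties(props),
  required: props.select(&:required).map(&:name)
}

puts JSON.generate(input_schema)
```

A `StringSelectionFromList` adapter thus becomes a tool whose single parameter carries an `enum`, which is how the model is constrained to the list.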
data/lib/sublayer/providers/gemini.rb
CHANGED
@@ -21,9 +21,13 @@ module Sublayer
 
          Here are the tools available:
          <tools>
-          <
-
-
+            <tool_description>
+              <tool_name>#{output_adapter.name}</tool_name>
+              <tool_description>#{output_adapter.description}</tool_description>
+              <parameters>
+                #{format_properties(output_adapter)}
+              </parameters>
+            </tool_description>
          </tools>
 
          Respond only with valid xml.
@@ -49,6 +53,17 @@ module Sublayer
        raise "Gemini did not format response, error: #{response.body}" unless tool_output
        return tool_output
      end
+
+      private
+      def self.format_properties(output_adapter)
+        output_adapter.properties.each_with_object("") do |property, xml|
+          xml << "<name>#{property.name}</name>"
+          xml << "<type>#{property.type}</type>"
+          xml << "<description>#{property.description}</description>"
+          xml << "<required>#{property.required}</required>"
+          xml << "<enum>#{property.enum}</enum>" if property.enum
+        end
+      end
    end
  end
end
data/lib/sublayer/providers/open_ai.rb
CHANGED
@@ -16,18 +16,44 @@ module Sublayer
            "content": prompt
          }
        ],
-
-
-
+        tool_choice: { type: "function", function: { name: output_adapter.name }},
+        tools: [
+          {
+            type: "function",
+            function: {
+              name: output_adapter.name,
+              description: output_adapter.description,
+              parameters: {
+                type: "object",
+                properties: OpenAI.format_properties(output_adapter)
+              },
+              required: [output_adapter.properties.select(&:required).map(&:name)]
+            }
+          }
        ]
+
      })
 
      message = response.dig("choices", 0, "message")
-      raise "No function called" unless message["function_call"]
 
-
-
-
+      raise "No function called" unless message["tool_calls"].length > 0
+
+      function_body = message.dig("tool_calls", 0, "function", "arguments")
+      JSON.parse(function_body)[output_adapter.name]
+    end
+
+    private
+    def self.format_properties(output_adapter)
+      output_adapter.properties.each_with_object({}) do |property, hash|
+        hash[property.name] = {
+          type: property.type,
+          description: property.description
+        }
+
+        if property.enum
+          hash[property.name][:enum] = property.enum
+        end
+      end
    end
  end
end
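The OpenAI provider now reads its result from the `tool_calls` array rather than the older `function_call` field, and parses the tool's `arguments` JSON string keyed by the adapter name. A self-contained sketch of that extraction path, using a canned message hash in place of a live API response:

```ruby
require 'json'

# Canned hash standing in for response.dig("choices", 0, "message");
# "arguments" is a JSON string keyed by the adapter's name, as in the
# parsing code this version introduces.
message = {
  "tool_calls" => [
    {
      "function" => {
        "name" => "valid_json",
        "arguments" => '{"valid_json":"{\"ok\":true}"}'
      }
    }
  ]
}

raise "No function called" unless message["tool_calls"].length > 0

function_body = message.dig("tool_calls", 0, "function", "arguments")
result = JSON.parse(function_body)["valid_json"]
puts result # => {"ok":true}
```

Note that only the first tool call is consulted, which matches `tool_choice` forcing exactly one function.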
data/lib/sublayer/version.rb
CHANGED
metadata
CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: sublayer
 version: !ruby/object:Gem::Version
-  version: 0.0.
+  version: 0.1.0.pre.alpha
 platform: ruby
 authors:
 - Scott Werner
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2024-05-
+date: 2024-05-16 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: ruby-openai
@@ -180,11 +180,16 @@ files:
 - lib/sublayer/agents/base.rb
 - lib/sublayer/components/output_adapters.rb
 - lib/sublayer/components/output_adapters/single_string.rb
+- lib/sublayer/components/output_adapters/string_selection_from_list.rb
 - lib/sublayer/generators/base.rb
+- lib/sublayer/generators/examples/code_from_blueprint_generator.rb
+- lib/sublayer/generators/examples/code_from_description_generator.rb
+- lib/sublayer/generators/examples/description_from_code_generator.rb
+- lib/sublayer/generators/examples/invalid_to_valid_json_generator.rb
+- lib/sublayer/generators/examples/route_selection_from_user_intent_generator.rb
+- lib/sublayer/generators/examples/sentiment_from_text_generator.rb
 - lib/sublayer/providers/claude.rb
 - lib/sublayer/providers/gemini.rb
-- lib/sublayer/providers/groq.rb
-- lib/sublayer/providers/local.rb
 - lib/sublayer/providers/open_ai.rb
 - lib/sublayer/tasks/base.rb
 - lib/sublayer/version.rb
@@ -208,11 +213,11 @@ required_ruby_version: !ruby/object:Gem::Requirement
     version: 2.6.0
 required_rubygems_version: !ruby/object:Gem::Requirement
   requirements:
-  - - "
+  - - ">"
   - !ruby/object:Gem::Version
-    version:
+    version: 1.3.1
 requirements: []
-rubygems_version: 3.
+rubygems_version: 3.3.26
 signing_key:
 specification_version: 4
 summary: A model-agnostic Ruby GenerativeAI DSL and Framework
data/lib/sublayer/providers/groq.rb
DELETED
@@ -1,52 +0,0 @@
-# Sublayer.configuration.ai_provider = Sublayer::Providers::Groq
-# Sublayer.configuration.ai_model = "mixtral-8x7b-32768"
-
-module Sublayer
-  module Providers
-    class Groq
-      def self.call(prompt:, output_adapter:)
-        system_prompt = <<-PROMPT
-          You have access to a set of tools to answer the prompt.
-
-          You may call tools like this:
-          <tool_calls>
-          <tool_call>
-          <tool_name>$TOOL_NAME</tool_name>
-          <parameters>
-          <#{output_adapter.name}>$VALUE</#{output_adapter.name}>
-          ...
-          </parameters>
-          </tool_call>
-          </tool_calls>
-
-          Here are the tools available:
-          <tools>
-          #{output_adapter.to_xml}
-          </tools>
-
-          Respond only with valid xml.
-          The entire response should be wrapped in a <response> tag.
-          Your response should call a tool inside a <tool_calls> tag.
-        PROMPT
-
-        response = HTTParty.post(
-          "https://api.groq.com/openai/v1/chat/completions",
-          headers: {
-            "Authorization": "Bearer #{ENV["GROQ_API_KEY"]}",
-            "Content-Type": "application/json"
-          },
-          body: {
-            "messages": [{"role": "user", "content": "#{system_prompt}\n#{prompt}"}],
-            "model": Sublayer.configuration.ai_model
-          }.to_json
-        )
-
-        text_containing_xml = response.dig("choices", 0, "message", "content")
-        tool_output = Nokogiri::HTML.parse(text_containing_xml.match(/\<#{output_adapter.name}\>(.*?)\<\/#{output_adapter.name}\>/m)[1]).text
-        raise "Groq did not format response correctly, error: #{response.body}" unless tool_output
-
-        return tool_output
-      end
-    end
-  end
-end
data/lib/sublayer/providers/local.rb
DELETED
@@ -1,50 +0,0 @@
-# Sublayer.configuration.ai_provider = Sublayer::Providers::Local
-# Sublayer.configuration.ai_model = "LLaMA_CPP"
-
-module Sublayer
-  module Providers
-    class Local
-      def self.call(prompt:, output_adapter:)
-        system_prompt = <<-PROMPT
-          You have access to a set of tools to respond to the prompt.
-
-          You may call a tool with xml like this:
-          <parameters>
-          <#{output_adapter.name}>$VALUE</#{output_adapter.name}>
-          ...
-          </parameters>
-
-          Here are descriptions of the available tools:
-          <tools>
-          <tool>
-          #{output_adapter.to_xml}
-          </tool>
-          </tools>
-
-          Respond only with valid xml.
-          Your response should call a tool with xml inside a <parameters> tag.
-        PROMPT
-
-        response = HTTParty.post(
-          "http://localhost:8080/v1/chat/completions",
-          headers: {
-            "Authorization": "Bearer no-key",
-            "Content-Type": "application/json"
-          },
-          body: {
-            "model": Sublayer.configuration.ai_model,
-            "messages": [
-              { "role": "user", "content": "#{system_prompt}\n#{prompt}}" }
-            ]
-          }.to_json
-        )
-
-        text_containing_xml = response.dig("choices", 0, "message", "content")
-        tool_output = Nokogiri::HTML.parse(text_containing_xml.match(/\<#{output_adapter.name}\>(.*?)\<\/#{output_adapter.name}\>/m)[1]).text
-        raise "The response was not formatted correctly: #{response.body}" unless tool_output
-
-        return tool_output
-      end
-    end
-  end
-end