chatgpt-ruby 1.0.0 → 2.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 6a246b757af45ec2b58fd7fad4cda501d6408f273f62c697705a4b85b824c07e
- data.tar.gz: b314a13e482bf33e88d2dd7adec626850612cb1da247ed0a247492dc40bd09af
+ metadata.gz: adc49f5bb5b5d4862838cbb111ecd00bccf7a4adc3b7edb23b8f4e5157da2a8b
+ data.tar.gz: f4ec243a9aeeb0ee03c82b58241d9df097a2985ca7ba15a77707645b3922ff13
  SHA512:
- metadata.gz: 2cd02878303337251add7d71e82bd9d29504313217dfeb6bcc2228a9d61f14abe0306523e55374465d431e65f438106b8054936b22d9d2a6de9aaa7b1d1e8a29
- data.tar.gz: b30e1470736ccfd7ebd26c380792eb1759707bb630bbb3d4ad2dc446968d6e414c3ace6ac3164e7681ff57f07445a50d9025e72b5c497f77e6b3de949a537b79
+ metadata.gz: 8b7edafba1fdf4d46adada02994ea92ced8c510b11bfe03fadcf4445e39813b169a594f4baae6ae5cddb44d2146d8111827a328f5e424174826ab4eecbb2d0c0
+ data.tar.gz: d5c1d54f84fe5c478bbffc2d85571e9f139072a460272152088dc85e84bbbfa8b24a5b53df3030602ba22de0cbcdad019fef0f7a5ef99f718c8d1e89d18bfbf5
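For readers who want to confirm these digests locally, here is a minimal Ruby sketch; it assumes the 2.0.0 gem has already been downloaded to the working directory under its conventional filename (for example via `gem fetch chatgpt-ruby -v 2.0.0`), which is not part of this diff.

```ruby
require 'rubygems/package'
require 'digest'

# A .gem file is a plain tar archive whose entries include metadata.gz and
# data.tar.gz -- the two files whose SHA256 values are listed in checksums.yaml.
File.open('chatgpt-ruby-2.0.0.gem', 'rb') do |gem_file|
  Gem::Package::TarReader.new(gem_file) do |tar|
    tar.each do |entry|
      next unless %w[metadata.gz data.tar.gz].include?(entry.full_name)
      puts "#{entry.full_name}: #{Digest::SHA256.hexdigest(entry.read)}"
    end
  end
end
```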
data/CHANGELOG.md CHANGED
@@ -1,4 +1,18 @@
- ## [Unreleased]
+ ## [2.0.0] - 2023-07-08
+
+ ### Added
+
+ - Added support for chat interactions with the GPT-3.5-turbo model.
+ - Introduced a new `chat` method to the `ChatGPT::Client` class, which sends an array of chat messages to the API and receives the generated messages in return.
+ - Updated the README to provide usage instructions for the new `chat` method.
+
+ ## [1.0.0] - 2023-04-30
+
+ ### Added
+
+ - Initial release of the `chatgpt-ruby` gem.
+ - Implements ChatGPT Client with methods for completions, search, classification, summary, and answer generation.
+ - Includes unit tests for each method.

  ## [0.1.0] - 2023-03-22

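The headline change in 2.0.0 is the `chat` method noted above. As a quick orientation before the full README and implementation diffs below, here is a minimal sketch of a call; the require path and API key are placeholders/assumptions, and the response shape mirrors the README examples shown later in this diff.

```ruby
require 'chatgpt/client' # require path assumed from lib/chatgpt/client.rb; check the gem's docs

client = ChatGPT::Client.new('your-api-key') # placeholder key

# chat takes an array of { role:, content: } hashes and returns the parsed API response hash
response = client.chat([{ role: 'user', content: 'Hello!' }])
puts response['choices'][0]['message']['content']
```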
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
  PATH
  remote: .
  specs:
- chatgpt-ruby (0.6.0)
+ chatgpt-ruby (1.0.0)
  rest-client

  GEM
data/README.md CHANGED
@@ -2,7 +2,7 @@

  [![Gem Version](https://badge.fury.io/rb/chatgpt-ruby.svg)](https://badge.fury.io/rb/chatgpt-ruby) [![License](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Maintainability](https://api.codeclimate.com/v1/badges/08c7e7b58e9fbe7156eb/maintainability)](https://codeclimate.com/github/nagstler/chatgpt-ruby/maintainability) [![Test Coverage](https://api.codeclimate.com/v1/badges/08c7e7b58e9fbe7156eb/test_coverage)](https://codeclimate.com/github/nagstler/chatgpt-ruby/test_coverage) [![CI](https://github.com/nagstler/chatgpt-ruby/actions/workflows/ci.yml/badge.svg?branch=main)](https://github.com/nagstler/chatgpt-ruby/actions/workflows/ci.yml) [![GitHub contributors](https://img.shields.io/github/contributors/nagstler/chatgpt-ruby)](https://github.com/nagstler/chatgpt-ruby/graphs/contributors)

- The `chatgpt-ruby` is a Ruby SDK for the OpenAI API, including methods for generating text, completing prompts, and more ❤️
+ The `chatgpt-ruby` is a Ruby SDK for the OpenAI API, providing methods for generating text and completing prompts using the ChatGPT model.

  ## Installation

@@ -12,16 +12,12 @@ Add this line to your application's Gemfile:
  gem 'chatgpt-ruby'
  ```

-
-
  And then execute:

  ```ruby
  $ bundle install
  ```

-
-
  Or install it yourself as:

  ```ruby
@@ -41,8 +37,7 @@ api_key = 'your-api-key'
  client = ChatGPT::Client.new(api_key)
  ```

-
- ### Completions
+ ## Completions

  To generate completions given a prompt, you can use the `completions` method:

@@ -53,8 +48,6 @@ completions = client.completions(prompt)
  # Output: an array of completion strings
  ```

-
-
  You can customize the generation process by passing in additional parameters as a hash:

  ```ruby
@@ -65,99 +58,59 @@ params = {
  }
  completions = client.completions(prompt, params)

+ puts completions["choices"].map { |c| c["text"] }
  # Output: an array of completion strings
  ```

+ ## Chat

- ### Search
-
- To perform a search query given a set of documents and a search query, you can use the `search` method:
+ The `chat` method allows for a dynamic conversation with the GPT model. It requires an array of messages where each message is a hash with two properties: `role` and `content`.

- ```ruby
- documents = ['Document 1', 'Document 2', 'Document 3']
- query = 'Search query'
- results = client.search(documents, query)
-
- # Output: an array of search result objects
- ```
+ `role` can be:
+ - `'system'`: Used for instructions that guide the conversation.
+ - `'user'`: Represents the user's input.
+ - `'assistant'`: Represents the model's output.

+ `content` contains the text message from the corresponding role.

-
- You can customize the search process by passing in additional parameters as a hash:
+ Here is how you would start a chat:

  ```ruby
- params = {
- engine: 'ada',
- max_rerank: 100
- }
- results = client.search(documents, query, params)

- # Output: an array of search result objects
+ # Define the conversation messages
+ messages = [
+ {
+ role: "system",
+ content: "You are a helpful assistant."
+ },
+ {
+ role: "user",
+ content: "Who won the world series in 2020?"
+ }
+ ]
+
+ # Start a chat
+ response = client.chat(messages)
  ```

-
- ### Classify
-
- To classify a given text, you can use the `classify` method:
+ The response will be a hash containing the model's message(s). You can extract the assistant's message like this:

  ```ruby
- text = 'This is a sample text'
- label = client.classify(text)

- # Output: a string representing the classified label
+ puts response['choices'][0]['message']['content'] # Outputs the assistant's message
  ```

-
-
- You can customize the classification process by passing in additional parameters as a hash:
+ The conversation can be continued by extending the `messages` array and calling the `chat` method again:

  ```ruby
- params = {
- model: 'text-davinci-002'
- }
- label = client.classify(text, params)

- # Output: a string representing the classified label
- ```
+ messages << {role: "user", content: "Tell me more about it."}

-
- ### Generate Summaries
-
- To generate summaries given a set of documents, you can use the `generate_summaries` method:
-
- ```ruby
- documents = ['Document 1', 'Document 2', 'Document 3']
- summary = client.generate_summaries(documents)
-
- # Output: a string representing the generated summary
- ```
-
-
-
- You can customize the summary generation process by passing in additional parameters as a hash:
-
- ```ruby
- params = {
- model: 'text-davinci-002',
- max_tokens: 100
- }
- summary = client.generate_summaries(documents, params)
-
- # Output: a string representing the generated summary
+ response = client.chat(messages)
+ puts response['choices'][0]['message']['content'] # Outputs the assistant's new message
  ```

-
- ### Generate Answers
-
- To generate answers given a prompt and a set of documents, you can use the `generate_answers` method:
-
- ```ruby
- prompt = 'What is the capital of France?'
- documents = ['France is a country in Europe', 'Paris is the capital of France']
- answer = client.generate_answers(prompt, documents)
-
- # Output
- ```
+ With this method, you can build an ongoing conversation with the model.

  ## Development

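One usage note on the `## Chat` section above before the implementation diff: the README appends only the next user message when continuing a conversation; to keep the model aware of its own earlier replies, the assistant's message can be appended as well. A minimal sketch of that loop, assuming the same `client` and response shape as in the README:

```ruby
messages = [{ role: 'system', content: 'You are a helpful assistant.' }]

['Who won the world series in 2020?', 'Tell me more about it.'].each do |question|
  messages << { role: 'user', content: question }
  response = client.chat(messages)
  reply = response['choices'][0]['message']['content']
  # Feed the assistant's reply back in so the next turn sees the full history
  messages << { role: 'assistant', content: reply }
  puts reply
end
```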
data/lib/chatgpt/client.rb CHANGED
@@ -1,16 +1,20 @@
- # lib/chatgpt/client.rb
-
  require 'rest-client'
  require 'json'

  module ChatGPT
  class Client
+ # Initialize the client with the API key
+ #
+ # @param api_key [String] The API key for the GPT-3 service
  def initialize(api_key)
  @api_key = api_key
+ # Base endpoint for the OpenAI API
  @endpoint = 'https://api.openai.com/v1'
  end

- # Helper method to prepare headers
+ # Prepare headers for the API request
+ #
+ # @return [Hash] The headers for the API request
  def headers
  {
  'Content-Type' => 'application/json',
@@ -18,15 +22,24 @@ module ChatGPT
  }
  end

- # Completion-related methods
+ # Generate completions based on a given prompt
+ #
+ # @param prompt [String] The prompt to be completed
+ # @param params [Hash] Additional parameters for the completion request
+ #
+ # @return [Hash] The completion results from the API
  def completions(prompt, params = {})
+ # Set default parameters
  engine = params[:engine] || 'text-davinci-002'
  max_tokens = params[:max_tokens] || 16
  temperature = params[:temperature] || 0.5
  top_p = params[:top_p] || 1.0
  n = params[:n] || 1

+ # Construct the URL for the completion request
  url = "#{@endpoint}/engines/#{engine}/completions"
+
+ # Prepare the data for the request
  data = {
  prompt: prompt,
  max_tokens: max_tokens,
@@ -34,86 +47,65 @@ module ChatGPT
  top_p: top_p,
  n: n
  }
+
+ # Make the request to the API
  request_api(url, data)
  end

- # Search-related methods
- def search(documents, query, params = {})
- engine = params[:engine] || 'ada'
- max_rerank = params[:max_rerank] || 200

- url = "#{@endpoint}/engines/#{engine}/search"
- data = {
- documents: documents,
- query: query,
- max_rerank: max_rerank
- }
- response = request_api(url, data)
- response['data']
- end
+ # This method sends a chat message to the API
+ #
+ # @param messages [Array<Hash>] The array of messages for the conversation.
+ # Each message is a hash with a `role` and `content` key. The `role` key can be 'system', 'user', or 'assistant',
+ # and the `content` key contains the text of the message.
+ #
+ # @param params [Hash] Optional parameters for the chat request. This can include the 'model' key to specify
+ # the model to be used for the chat. If no 'model' key is provided, 'gpt-3.5-turbo' is used by default.
+ #
+ # @return [Hash] The response from the API.
+ def chat(messages, params = {})
+ # Set default parameters
+ model = params[:model] || 'gpt-3.5-turbo'

- # Classification-related methods
- def classify(text, params = {})
- model = params[:model] || 'text-davinci-002'
+ # Construct the URL for the chat request
+ url = "#{@endpoint}/chat/completions"

- url = "#{@endpoint}/classifications/#{model}"
+ # Prepare the data for the request. The data is a hash with 'model' and 'messages' keys.
  data = {
  model: model,
- input: text
+ messages: messages
  }
- response = request_api(url, data)
- response['data'][0]['label']
- end
-
- # Summary-related methods
- def generate_summaries(documents, params = {})
- model = params[:model] || 'text-davinci-002'
- max_tokens = params[:max_tokens] || 60
- temperature = params[:temperature] || 0.5
- top_p = params[:top_p] || 1.0
- frequency_penalty = params[:frequency_penalty] || 0.0
- presence_penalty = params[:presence_penalty] || 0.0
-
- url = "#{@endpoint}/engines/#{model}/generate"
- data = {
- prompt: '',
- max_tokens: max_tokens,
- temperature: temperature,
- top_p: top_p,
- frequency_penalty: frequency_penalty,
- presence_penalty: presence_penalty,
- documents: documents
- }
- response = request_api(url, data)
- response['choices'][0]['text']
+
+ # Make the API request and return the response.
+ request_api(url, data)
  end

- # Answer-generation-related methods
- def generate_answers(prompt, documents, params = {})
- model = params[:model] || 'text-davinci-002'
- max_tokens = params[:max_tokens] || 5
-
- url = "#{@endpoint}/engines/#{model}/answers"
- data = {
- prompt: prompt,
- documents: documents,
- max_tokens: max_tokens
- }
- response = request_api(url, data)
- response['data'][0]['answer']
- end

  private
-
- # Helper method to make API requests
- def request_api(url, data)
+ # Make a request to the API
+ #
+ # @param url [String] The URL for the request
+ # @param data [Hash] The data to be sent in the request
+ # @param method [Symbol] The HTTP method for the request (:post by default)
+ #
+ # @return [Hash] The response from the API
+ #
+ # @raise [RestClient::ExceptionWithResponse] If the API request fails
+ def request_api(url, data, method = :post)
  begin
- response = RestClient.post(url, data.to_json, headers)
+ # Execute the request
+ response = RestClient::Request.execute(method: method, url: url, payload: data.to_json, headers: headers)
+
+ # Parse and return the response body
  JSON.parse(response.body)
  rescue RestClient::ExceptionWithResponse => e
+ # Parse the error message from the API response
  error_msg = JSON.parse(e.response.body)['error']['message']
+
+ # Raise an exception with the API error message
  raise RestClient::ExceptionWithResponse.new("#{e.message}: #{error_msg} (#{e.http_code})"), nil, e.backtrace
  end
  end
+
  end
  end
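Since `request_api` above re-raises `RestClient::ExceptionWithResponse` with the API's error message and HTTP status folded into the exception message, callers can handle failures with a plain rescue. A minimal sketch of the calling side, assuming a `client` built as in the README:

```ruby
begin
  response = client.chat([{ role: 'user', content: 'Hello!' }])
  puts response['choices'][0]['message']['content']
rescue RestClient::ExceptionWithResponse => e
  # e.message follows the "<original message>: <API error message> (<HTTP status>)"
  # format constructed in request_api above
  warn "OpenAI request failed: #{e.message}"
end
```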
@@ -2,6 +2,6 @@

  module Chatgpt
  module Ruby
- VERSION = "1.0.0"
+ VERSION = "2.0.0"
  end
  end
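After upgrading, the bumped constant shown above is the quickest sanity check; this sketch assumes the gem is already loaded (for example through Bundler), since the exact require path for the version file is not shown in this diff.

```ruby
# With the gem loaded, the constant from the diff above reports the new release
puts Chatgpt::Ruby::VERSION
# => "2.0.0"
```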
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: chatgpt-ruby
  version: !ruby/object:Gem::Version
- version: 1.0.0
+ version: 2.0.0
  platform: ruby
  authors:
  - Nagendra Dhanakeerthi
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2023-04-30 00:00:00.000000000 Z
+ date: 2023-07-08 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: rest-client