ai_refactor 0.5.4 → 0.6.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: ca299458e2af2bc7ba495ce6473e1e6b1824e679893815616a209697178aefbd
-  data.tar.gz: b13da64462ef3f5572a9afebb0020ecdb5adc538b92e7c80b3b6bbd756473f9d
+  metadata.gz: 8bd5fdbe52b59e22f97921422925d317e01e05355bc4af0baa9ae17952e5f4d1
+  data.tar.gz: ee025ff41eaf40421288c66b737336f61d24a9951f80f48d968a72b0c4f99554
 SHA512:
-  metadata.gz: 5714f5535a9a6436beba5a6b33a79a0eb02c095f4ca93a5cdfd794fc231cdcab6341a87565fc8c9f32ef24e1edf35957d5268b48f259c2046f867d722520760a
-  data.tar.gz: d986828235c4f26ba78a443ca267455be12d2b9647b672f7cb8a4f4ce53eed5f02c426f626531a9d567058ad8f3e4a34f0c43072529820ca803719542ea0c9e0
+  metadata.gz: fe1321b14189e4043ceea64457126cdf0f39a387a3568aca51a9ff0e8a5e76837ae551b20f83363ad97699b69e5c5a0823b71a0d40aedcceec7ef19a9cf2b5bf
+  data.tar.gz: 910dba4e7bccf116a1122d0e06a1b2d92406288c6200dac5f3bdbd943c66ae5696b73e88abb75ce02ce46eaad06d5d0529015a6042e28ac63af26525b5b2ac9f
data/CHANGELOG.md CHANGED
@@ -1,5 +1,22 @@
 # AI Refactor Changelog
 
+## [Unreleased]
+
+### Changes
+
+
+## [0.6.0] - 2024-06-19
+
+### Added
+
+- Now supports Anthropic AI models. Eg pass `-m claude-3-opus-20240229` to use the current Claude Opus model.
+
+### Changes
+- Default openAI model is now `gpt-4-turbo`
+
+### Fixed
+
+- example test run should use `bundle exec` to ensure the correct version of the gem is used.
 
 ## [0.5.4] - 2024-02-07
 
data/README.md CHANGED
@@ -2,20 +2,25 @@
 
 __The goal for AIRefactor is to use LLMs to apply repetitive refactoring tasks to code.__
 
-First the human decides what refactoring is needed and builds up a prompt to describe the task, or uses one of AIRefactors provided prompts.
+## The workflow
 
-AIRefactor then helps to apply the refactoring to one or more files.
+1) the human decides what refactoring is needed
+2) the human selects an existing built-in refactoring command, and/or builds up a prompt to describe the task
+3) the human selects some source files to act as context (eg examples of the code post-refactor, or related classes etc)
+4) the human runs the tool with the command, source files and context files
+5) the AI generates the refactored code and outputs it either to a file or stdout.
+6) In some cases, the tool can then check the generated code by running tests and comparing test outputs.
 
-In some cases, the tool can then check the generated code by running tests and comparing test outputs.
+AIRefactor can apply the refactoring to multiple files, allowing batch processing.
 
 #### Notes
 
 AI Refactor is an experimental tool and under active development as I explore the idea myself. It may not work as expected, or
 change in ways that break existing functionality.
 
-The focus of the tool is work with the Ruby programming language ecosystem, but it can be used with any language.
+The focus of the tool is work with the **Ruby programming language ecosystem**, but it can be used with any language.
 
-AI Refactor currently uses [OpenAI's ChatGPT](https://platform.openai.com/).
+AI Refactor currently uses [OpenAI's ChatGPT](https://platform.openai.com/) or [Anthropic Claude](https://docs.anthropic.com/en/docs/about-claude/models) to generate code.
 
 ## Examples
 
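To make the new workflow section concrete, a typical run of a built-in command might look like the following sketch. The spec path is hypothetical; `rails/minitest/rspec_to_minitest` is a command name taken from the gem's own examples, and the model switch is the one introduced in this release:

```shell
ai_refactor rails/minitest/rspec_to_minitest spec/models/user_spec.rb -m claude-3-opus-20240229
```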
@@ -64,7 +69,9 @@ Use a pre-built prompt:
 
 ### User supplied prompts, eg `custom`, `ruby/write_ruby` and `ruby/refactor_ruby`
 
-Applies the refactor specified by prompting the AI with the user supplied prompt. You must supply a prompt file with the `-p` option.
+You can use these commands in conjunction with a user supplied prompt.
+
+You must supply a prompt file with the `-p` option.
 
 The output is written to `stdout`, or to a file with the `--output` option.
 
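A sketch of a custom-prompt run, following the text above (the prompt and input file names are hypothetical; per the README, output goes to `stdout` unless `--output` is given):

```shell
ai_refactor custom lib/my_class.rb -p prompts/my_refactor_prompt.txt
```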
@@ -141,7 +148,7 @@ Where REFACTOR_TYPE_OR_COMMAND_FILE is either the path to a command YML file, or
     -p, --prompt PROMPT_FILE         Specify path to a text file that contains the ChatGPT 'system' prompt.
     -f, --diffs                      Request AI generate diffs of changes rather than writing out the whole file.
     -C, --continue [MAX_MESSAGES]    If ChatGPT stops generating due to the maximum token count being reached, continue to generate more messages, until a stop condition or MAX_MESSAGES. MAX_MESSAGES defaults to 3
-    -m, --model MODEL_NAME           Specify a ChatGPT model to use (default gpt-4-turbo-preview).
+    -m, --model MODEL_NAME           Specify a ChatGPT model to use (default gpt-4-turbo).
    --temperature TEMP               Specify the temperature parameter for ChatGPT (default 0.7).
     --max-tokens MAX_TOKENS          Specify the max number of tokens of output ChatGPT can generate. Max will depend on the size of the prompt (default 1500)
     -t, --timeout SECONDS            Specify the max wait time for ChatGPT response.
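Combining the generation switches from this help text into one (hypothetical) invocation, with all flags as documented above:

```shell
ai_refactor ruby/refactor_ruby lib/thing.rb -p prompts/extract_service.txt \
  --temperature 0.5 --max-tokens 2000 -t 120
```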
@@ -178,19 +185,26 @@ output_file_path: output file or directory
 output_template_path: output file template (see docs)
 prompt_file_path: path
 prompt: |
-  A custom prompt to send to ChatGPT if the command needs it (otherwise read from file)
+  A custom prompt to send to AI if the command needs it (otherwise read from file)
 context_file_paths:
   - file1.rb
   - file2.rb
+context_file_paths_from_gems:
+  gem_name:
+    - path/from/gem_root/file1.rb
+    - lib/gem_name/file2.rb
+  gem_name2:
+    - lib/gem_name2/file1.rb
+    - app/controllers/file2.rb
 # Other configuration options:
 context_text: |
   Some extra info to prepend to the prompt
 diff: true/false (default false)
 ai_max_attempts: max times to generate more if AI does not complete generating (default 3)
-ai_model: ChatGPT model name (default gpt-4-turbo-preview)
-ai_temperature: ChatGPT temperature (default 0.7)
-ai_max_tokens: ChatGPT max tokens (default 1500)
-ai_timeout: ChatGPT timeout (default 60)
+ai_model: AI model name, OpenAI GPT or Anthropic Claude (default gpt-4-turbo)
+ai_temperature: AI temperature (default 0.7)
+ai_max_tokens: AI max tokens (default 1500)
+ai_timeout: AI timeout (default 60)
 overwrite: y/n/a (default a)
 verbose: true/false (default false)
 debug: true/false (default false)
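The diff itself does not show how the new `context_file_paths_from_gems` entries are resolved to files on disk. One plausible sketch, using only standard RubyGems APIs, is to look up each gem's installed root and join the relative paths; this is an assumption about the mechanism, not the gem's actual code:

```ruby
require "rubygems"

# Hypothetical sketch: turn the config mapping into absolute file paths.
# Assumes each named gem is installed and visible to RubyGems.
def resolve_gem_context_paths(paths_by_gem)
  paths_by_gem.flat_map do |gem_name, relative_paths|
    root = Gem::Specification.find_by_name(gem_name).full_gem_path
    relative_paths.map { |rel| File.join(root, rel) }
  end
end

resolve_gem_context_paths("gem_name" => ["lib/gem_name/file2.rb"])
```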
@@ -254,12 +268,6 @@ This file provides default CLI switches to add to any `ai_refactor` command.
 
 The tool keeps a history of commands run in the `.ai_refactor_history` file in the current working directory.
 
-## Note on performance and ChatGPT version
-
-_The quality of results depend very much on the version of ChatGPT being used._
-
-I have tested with both 3.5 and 4 and see **significantly** better performance with version 4.
-
 ## Development
 
 After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake test` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
data/ai_refactor.gemspec CHANGED
@@ -12,7 +12,7 @@ Gem::Specification.new do |spec|
   spec.description = "Use OpenAI's ChatGPT to automate converting Rails RSpec tests to minitest (ActiveSupport::TestCase)."
   spec.homepage = "https://github.com/stevegeek/ai_refactor"
   spec.license = "MIT"
-  spec.required_ruby_version = ">= 2.7.0"
+  spec.required_ruby_version = ">= 3.3.0"
 
   spec.metadata["homepage_uri"] = spec.homepage
   spec.metadata["source_code_uri"] = "https://github.com/stevegeek/ai_refactor"
@@ -21,7 +21,7 @@ Gem::Specification.new do |spec|
   # The `git ls-files -z` loads the files in the RubyGem that have been added into git.
   spec.files = Dir.chdir(__dir__) do
     `git ls-files -z`.split("\x0").reject do |f|
-      (File.expand_path(f) == __FILE__) || f.start_with?(*%w[bin/ test/ spec/ features/ .git .circleci appveyor])
+      (File.expand_path(f) == __FILE__) || f.start_with?(*%w[bin/ test/ spec/ features/ examples/ .github/ .git .circleci appveyor])
     end
   end
   spec.bindir = "exe"
@@ -32,5 +32,6 @@ Gem::Specification.new do |spec|
   spec.add_dependency "colorize", "< 2.0"
   spec.add_dependency "open3", "< 2.0"
   spec.add_dependency "ruby-openai", ">= 3.4.0", "< 6.0"
+  spec.add_dependency "anthropic", ">= 0.1.0", "< 1.0"
   spec.add_dependency "zeitwerk", "~> 2.6"
 end
data/exe/ai_refactor CHANGED
@@ -3,6 +3,7 @@
 require "optparse"
 require "colorize"
 require "openai"
+require "anthropic"
 require "shellwords"
 require_relative "../lib/ai_refactor"
 
@@ -37,11 +38,11 @@ option_parser = OptionParser.new do |parser|
     run_config.context_text = c
   end
 
-  parser.on("-r", "--review-prompt", "Show the prompt that will be sent to ChatGPT but do not actually call ChatGPT or make changes to files.") do
+  parser.on("-r", "--review-prompt", "Show the prompt that will be sent to the AI but do not actually call the AI or make changes to files.") do
     run_config.review_prompt = true
   end
 
-  parser.on("-p", "--prompt PROMPT_FILE", String, "Specify path to a text file that contains the ChatGPT 'system' prompt.") do |f|
+  parser.on("-p", "--prompt PROMPT_FILE", String, "Specify path to a text file that contains the AI 'system' prompt.") do |f|
     run_config.prompt_file_path = f
   end
 
@@ -49,23 +50,23 @@ option_parser = OptionParser.new do |parser|
     run_config.diff = true
   end
 
-  parser.on("-C", "--continue [MAX_MESSAGES]", Integer, "If ChatGPT stops generating due to the maximum token count being reached, continue to generate more messages, until a stop condition or MAX_MESSAGES. MAX_MESSAGES defaults to 3") do |c|
+  parser.on("-C", "--continue [MAX_MESSAGES]", Integer, "If AI stops generating due to the maximum token count being reached, continue to generate more messages, until a stop condition or MAX_MESSAGES. MAX_MESSAGES defaults to 3") do |c|
     run_config.ai_max_attempts = c
   end
 
-  parser.on("-m", "--model MODEL_NAME", String, "Specify a ChatGPT model to use (default gpt-4-turbo-preview).") do |m|
+  parser.on("-m", "--model MODEL_NAME", String, "Specify a AI model to use (default 'gpt-4-turbo'). OpenAI and Anthropic models supported (eg 'gpt-4o', 'claude-3-opus-20240229')") do |m|
     run_config.ai_model = m
   end
 
-  parser.on("--temperature TEMP", Float, "Specify the temperature parameter for ChatGPT (default 0.7).") do |p|
+  parser.on("--temperature TEMP", Float, "Specify the temperature parameter for generation (default 0.7).") do |p|
     run_config.ai_temperature = p
   end
 
-  parser.on("--max-tokens MAX_TOKENS", Integer, "Specify the max number of tokens of output ChatGPT can generate. Max will depend on the size of the prompt (default 1500)") do |m|
+  parser.on("--max-tokens MAX_TOKENS", Integer, "Specify the max number of tokens of output the AI can generate. Max will depend on the size of the prompt (default 1500)") do |m|
     run_config.ai_max_tokens = m
   end
 
-  parser.on("-t", "--timeout SECONDS", Integer, "Specify the max wait time for ChatGPT response.") do |m|
+  parser.on("-t", "--timeout SECONDS", Integer, "Specify the max wait time for an AI response.") do |m|
     run_config.ai_timeout = m
   end
 
data/lib/ai_refactor/ai_client.rb ADDED
@@ -0,0 +1,86 @@
+# frozen_string_literal: true
+
+module AIRefactor
+  class AIClient
+    def initialize(platform: "openai", model: "gpt-4-turbo", temperature: 0.7, max_tokens: 1500, timeout: 60, verbose: false)
+      @platform = platform
+      @model = model
+      @temperature = temperature
+      @max_tokens = max_tokens
+      @timeout = timeout
+      @verbose = verbose
+      @client = configure
+    end
+
+    def generate!(messages)
+      finished_reason, content, response = case @platform
+      when "openai"
+        openai_parse_response(
+          @client.chat(
+            parameters: {
+              messages: messages,
+              model: @model,
+              temperature: @temperature,
+              max_tokens: @max_tokens
+            }
+          )
+        )
+      when "anthropic"
+        anthropic_parse_response(
+          @client.messages(
+            parameters: {
+              system: messages.find { |m| m[:role] == "system" }&.fetch(:content, nil),
+              messages: messages.select { |m| m[:role] != "system" },
+              model: @model,
+              max_tokens: @max_tokens
+            }
+          )
+        )
+      else
+        raise "Invalid platform: #{@platform}"
+      end
+      yield finished_reason, content, response
+    end
+
+    private
+
+    def configure
+      case @platform
+      when "openai"
+        ::OpenAI::Client.new(
+          access_token: ENV.fetch("OPENAI_API_KEY"),
+          organization_id: ENV.fetch("OPENAI_ORGANIZATION_ID", nil),
+          request_timeout: @timeout,
+          log_errors: @verbose
+        )
+      when "anthropic"
+        ::Anthropic::Client.new(
+          access_token: ENV.fetch("ANTHROPIC_API_KEY"),
+          request_timeout: @timeout
+        )
+      else
+        raise "Invalid platform: #{@platform}"
+      end
+    end
+
+    def openai_parse_response(response)
+      if response["error"]
+        raise StandardError.new("OpenAI error: #{response["error"]["type"]}: #{response["error"]["message"]} (#{response["error"]["code"]})")
+      end
+
+      content = response.dig("choices", 0, "message", "content")
+      finished_reason = response.dig("choices", 0, "finish_reason")
+      [finished_reason, content, response]
+    end
+
+    def anthropic_parse_response(response)
+      if response["error"]
+        raise StandardError.new("Anthropic error: #{response["error"]["type"]}: #{response["error"]["message"]}")
+      end
+
+      content = response.dig("content", 0, "text")
+      finished_reason = response["stop_reason"] == "max_tokens" ? "length" : response["stop_reason"]
+      [finished_reason, content, response]
+    end
+  end
+end
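Reading the new class above, driving it directly would look like this minimal sketch (the messages and model are illustrative; the API-key environment variables are the ones the code fetches):

```ruby
# Requires OPENAI_API_KEY or ANTHROPIC_API_KEY in the environment,
# depending on the platform selected.
client = AIRefactor::AIClient.new(platform: "anthropic", model: "claude-3-opus-20240229")

messages = [
  {role: "system", content: "You are a careful Ruby refactoring assistant."},
  {role: "user", content: "Convert this RSpec test to minitest: ..."}
]

# generate! normalizes both providers to (finished_reason, content, raw_response)
client.generate!(messages) do |finished_reason, content, _response|
  warn "stopped early (max tokens)" if finished_reason == "length"
  puts content
end
```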
@@ -63,6 +63,17 @@ module AIRefactor
       configuration.input_file_paths
     end
 
+    def ai_client
+      @ai_client ||= AIRefactor::AIClient.new(
+        platform: configuration.ai_platform,
+        model: configuration.ai_model,
+        temperature: configuration.ai_temperature,
+        max_tokens: configuration.ai_max_tokens,
+        timeout: configuration.ai_timeout,
+        verbose: configuration.verbose
+      )
+    end
+
     def valid?
       return false unless refactorer
       inputs_valid = refactorer.takes_input_files? ? !(inputs.nil? || inputs.empty?) : true
@@ -72,12 +83,6 @@ module AIRefactor
     def run
       return false unless valid?
 
-      OpenAI.configure do |config|
-        config.access_token = ENV.fetch("OPENAI_API_KEY")
-        config.organization_id = ENV.fetch("OPENAI_ORGANIZATION_ID", nil)
-        config.request_timeout = configuration.ai_timeout || 240
-      end
-
       if refactorer.takes_input_files?
         expanded_inputs = inputs.map do |path|
           File.exist?(path) ? path : Dir.glob(path)
@@ -92,7 +97,7 @@ module AIRefactor
       return_values = expanded_inputs.map do |file|
         logger.info "Processing #{file}..."
 
-        refactor = refactorer.new(file, configuration, logger)
+        refactor = refactorer.new(ai_client, file, configuration, logger)
         refactor_returned = refactor.run
         failed = refactor_returned == false
         if failed
@@ -118,7 +123,7 @@ module AIRefactor
       name = refactorer.refactor_name
       logger.info "AI Refactor - #{name} refactor\n"
       logger.info "====================\n"
-      refactor = refactorer.new(nil, configuration, logger)
+      refactor = refactorer.new(ai_client, nil, configuration, logger)
       refactor_returned = refactor.run
       failed = refactor_returned == false
       if failed
@@ -60,35 +60,21 @@ module AIRefactor
       logger.debug "Options: #{options.inspect}"
       logger.debug "Messages: #{messages.inspect}"
 
-      response = @ai_client.chat(
-        parameters: {
-          model: options[:ai_model] || "gpt-4-turbo-preview",
-          messages: messages,
-          temperature: options[:ai_temperature] || 0.7,
-          max_tokens: options[:ai_max_tokens] || 1500
-        }
-      )
-
-      if response["error"]
-        raise StandardError.new("OpenAI error: #{response["error"]["type"]}: #{response["error"]["message"]} (#{response["error"]["code"]})")
-      end
-
-      content = response.dig("choices", 0, "message", "content")
-      finished_reason = response.dig("choices", 0, "finish_reason")
-
-      if finished_reason == "length" && attempts_left > 0
-        generate_next_message(messages + [
-          {role: "assistant", content: content},
-          {role: "user", content: "Continue"}
-        ], options, attempts_left - 1)
-      else
-        previous_messages = messages.filter { |m| m[:role] == "assistant" }.map { |m| m[:content] }.join
-        content = if previous_messages.length > 0
-          content ? previous_messages + content : previous_messages
+      @ai_client.generate!(messages) do |finished_reason, content, response|
+        if finished_reason == "length" && attempts_left > 0
+          generate_next_message(messages + [
+            {role: "assistant", content: content},
+            {role: "user", content: "Continue"}
+          ], options, attempts_left - 1)
         else
-          content
+          previous_messages = messages.filter { |m| m[:role] == "assistant" }.map { |m| m[:content] }.join
+          content = if previous_messages.length > 0
+            content ? previous_messages + content : previous_messages
+          else
+            content
+          end
+          [content, finished_reason, response["usage"]]
         end
-        [content, finished_reason, response["usage"]]
       end
     end
 
@@ -17,11 +17,12 @@ module AIRefactor
       true
     end
 
-    attr_reader :input_file, :options, :logger
+    attr_reader :ai_client, :input_file, :options, :logger
     attr_accessor :input_content
     attr_writer :failed_message
 
-    def initialize(input_file, options, logger)
+    def initialize(ai_client, input_file, options, logger)
+      @ai_client = ai_client
       @input_file = input_file
       @options = options
       @logger = logger
@@ -79,8 +80,11 @@ module AIRefactor
         output_content
       rescue => e
         logger.error "Request to AI failed: #{e.message}"
+        if e.respond_to?(:response) && e.response
+          logger.error "Response: #{e.response[:body]}"
+        end
         logger.warn "Skipping #{input_file}..."
-        self.failed_message = "Request to OpenAI failed"
+        self.failed_message = "Request to AI API failed"
         raise e
       end
     end
@@ -175,10 +179,6 @@ module AIRefactor
       path
     end
 
-    def ai_client
-      @ai_client ||= OpenAI::Client.new
-    end
-
 
     def refactor_name
       self.class.refactor_name
@@ -18,11 +18,6 @@ module AIRefactor
       :review_prompt,
       :prompt,
       :prompt_file_path,
-      :ai_max_attempts,
-      :ai_model,
-      :ai_temperature,
-      :ai_max_tokens,
-      :ai_timeout,
       :overwrite,
       :diff,
       :verbose,
@@ -97,30 +92,54 @@ module AIRefactor
     attr_writer :rspec_run_command
     attr_writer :minitest_run_command
 
+    def ai_max_attempts
+      @ai_max_attempts || 3
+    end
+
     def ai_max_attempts=(value)
-      @ai_max_attempts = value || 3
+      @ai_max_attempts = value
+    end
+
+    def ai_model
+      @ai_model || "gpt-4-turbo"
     end
 
     def ai_model=(value)
-      @ai_model = value || "gpt-4-turbo-preview"
+      @ai_model = value
     end
 
-    def ai_temperature=(value)
-      @ai_temperature = value || 0.7
+    def ai_platform
+      if ai_model&.start_with?("claude")
+        "anthropic"
+      else
+        "openai"
+      end
     end
 
-    def ai_max_tokens=(value)
-      @ai_max_tokens = value || 1500
+    def ai_temperature
+      @ai_temperature || 0.7
     end
 
-    def ai_timeout=(value)
-      @ai_timeout = value || 60
+    attr_writer :ai_temperature
+
+    def ai_max_tokens
+      @ai_max_tokens || 1500
     end
 
-    def overwrite=(value)
-      @overwrite = value || "a"
+    attr_writer :ai_max_tokens
+
+    def ai_timeout
+      @ai_timeout || 60
     end
 
+    attr_writer :ai_timeout
+
+    def overwrite
+      @overwrite || "a"
+    end
+
+    attr_writer :overwrite
+
     attr_writer :diff
 
     attr_writer :verbose
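The net effect of this rework: defaults move from the writers into the readers, so assigning `nil` no longer bakes a default into the instance variable, and the platform is inferred from the model name prefix. Illustratively (the `RunConfiguration` class name and no-argument constructor are assumptions based on this file's role):

```ruby
config = AIRefactor::RunConfiguration.new
config.ai_model     # => "gpt-4-turbo" (reader supplies the default)
config.ai_platform  # => "openai"

config.ai_model = "claude-3-opus-20240229"
config.ai_platform  # => "anthropic" (model name starts with "claude")
config.ai_timeout   # => 60 (default applied lazily in the reader)
```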
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module AIRefactor
-  VERSION = "0.5.4"
+  VERSION = "0.6.0"
 end
data/lib/ai_refactor.rb CHANGED
@@ -4,6 +4,7 @@ require "zeitwerk"
 loader = Zeitwerk::Loader.for_gem
 loader.inflector.inflect(
   "ai_refactor" => "AIRefactor",
+  "ai_client" => "AIClient",
   "rspec_runner" => "RSpecRunner"
 )
 loader.setup # ready!
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: ai_refactor
 version: !ruby/object:Gem::Version
-  version: 0.5.4
+  version: 0.6.0
 platform: ruby
 authors:
 - Stephen Ierodiaconou
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2024-02-07 00:00:00.000000000 Z
+date: 2024-06-19 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: colorize
@@ -58,6 +58,26 @@ dependencies:
     - - "<"
       - !ruby/object:Gem::Version
         version: '6.0'
+- !ruby/object:Gem::Dependency
+  name: anthropic
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: 0.1.0
+    - - "<"
+      - !ruby/object:Gem::Version
+        version: '1.0'
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: 0.1.0
+    - - "<"
+      - !ruby/object:Gem::Version
+        version: '1.0'
 - !ruby/object:Gem::Dependency
   name: zeitwerk
   requirement: !ruby/object:Gem::Requirement
@@ -90,14 +110,9 @@ files:
 - Steepfile
 - ai_refactor.gemspec
 - commands/quickdraw/0.1.0/convert_minitest.yml
-- examples/ex1_convert_a_rspec_test_to_minitest.yml
-- examples/ex1_input_spec.rb
-- examples/ex2_input.rb
-- examples/ex2_write_rbs.yml
-- examples/rails_helper.rb
-- examples/test_helper.rb
 - exe/ai_refactor
 - lib/ai_refactor.rb
+- lib/ai_refactor/ai_client.rb
 - lib/ai_refactor/cli.rb
 - lib/ai_refactor/command_file_parser.rb
 - lib/ai_refactor/commands.rb
@@ -146,14 +161,14 @@ required_ruby_version: !ruby/object:Gem::Requirement
   requirements:
   - - ">="
     - !ruby/object:Gem::Version
-      version: 2.7.0
+      version: 3.3.0
 required_rubygems_version: !ruby/object:Gem::Requirement
   requirements:
   - - ">="
     - !ruby/object:Gem::Version
      version: '0'
 requirements: []
-rubygems_version: 3.4.20
+rubygems_version: 3.5.3
 signing_key:
 specification_version: 4
 summary: Use AI to convert a Rails RSpec test suite to minitest.
data/examples/ex1_convert_a_rspec_test_to_minitest.yml DELETED
@@ -1,8 +0,0 @@
-refactor: rails/minitest/rspec_to_minitest
-input_file_paths:
-  - examples/ex1_input_spec.rb
-# We need to add context here as otherwise to tell the AI to require our local test_helper.rb file so that we can run the tests after
-context_text: "In the output test use `require_relative '../test_helper'` to include 'test_helper'."
-# By default, ai_refactor runs "bundle exec rails test" but this isn't going to work here as we are not actually in a Rails app context in the examples
-minitest_run_command: ruby __FILE__
-output_file_path: examples/outputs/ex1_input_test.rb
data/examples/ex1_input_spec.rb DELETED
@@ -1,32 +0,0 @@
-require_relative "rails_helper"
-
-RSpec.describe MyModel, type: :model do
-  subject(:model) { described_class.new }
-
-  it { is_expected.to validate_presence_of(:name) }
-
-  it "should allow integer values for age" do
-    model.age = 1
-    expect(model.age).to eq 1
-  end
-
-  it "should allow string values for name" do
-    model.name = "test"
-    expect(model.name).to eq "test"
-  end
-
-  it "should be invalid with invalid name" do
-    model.name = nil
-    expect(model).to be_invalid
-  end
-
-  it "should convert integer values for name" do
-    model.name = 1
-    expect(model.name).to eq "1"
-  end
-
-  it "should not allow string values for age" do
-    model.age = "test"
-    expect(model.age).to eq 0
-  end
-end
data/examples/ex2_input.rb DELETED
@@ -1,17 +0,0 @@
-# example from https://blog.kiprosh.com/type-checking-in-ruby-3-using-rbs/
-# basic_math.rb
-
-class BasicMath
-  def initialize(num1, num2)
-    @num1 = num1
-    @num2 = num2
-  end
-
-  def first_less_than_second?
-    @num1 < @num2
-  end
-
-  def add
-    @num1 + @num2
-  end
-end
data/examples/ex2_write_rbs.yml DELETED
@@ -1,7 +0,0 @@
-refactor: ruby/write_rbs
-input_file_paths:
-  - examples/ex2_input.rb
-# We need to add context here as our class doesnt actually give any context.
-context_text: "Assume that the inputs can be any numeric type."
-# By default this refactor writes to `sig/...` but here we just put the result in `examples/...`
-output_file_path: examples/outputs/ex2_input.rbs
data/examples/rails_helper.rb DELETED
@@ -1,21 +0,0 @@
-require "rails/all"
-require "shoulda-matchers"
-
-Shoulda::Matchers.configure do |config|
-  config.integrate do |with|
-    with.test_framework :rspec
-    with.library :rails
-  end
-end
-
-class MyModel
-  include ActiveModel::Model
-  include ActiveModel::Attributes
-  include ActiveModel::Validations
-  include ActiveModel::Validations::Callbacks
-
-  validates :name, presence: true
-
-  attribute :name, :string
-  attribute :age, :integer
-end
data/examples/test_helper.rb DELETED
@@ -1,14 +0,0 @@
-require "rails/all"
-require "active_support/testing/autorun"
-
-class MyModel
-  include ActiveModel::Model
-  include ActiveModel::Attributes
-  include ActiveModel::Validations
-  include ActiveModel::Validations::Callbacks
-
-  validates :name, presence: true
-
-  attribute :name, :string
-  attribute :age, :integer
-end