ai_refactor 0.5.3 → 0.6.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 5f916993ff53967b44cbbc81138c87d518518618d3edc932cefa6aea9f0a4c4c
- data.tar.gz: 5d3bd3b9cea6d7bd39e0028caa5ccf24527dd03fcd98d643498a65743a2cac5e
+ metadata.gz: 8bd5fdbe52b59e22f97921422925d317e01e05355bc4af0baa9ae17952e5f4d1
+ data.tar.gz: ee025ff41eaf40421288c66b737336f61d24a9951f80f48d968a72b0c4f99554
  SHA512:
- metadata.gz: 41cb1f31ceb6e503cdbeeb972f17a91e1826eccb4fb66e8a73b3b54bb3899ff9572d9cb4783c67fe7e7fee8999f7f25f2b19ae7ab9fde6c766e895e4fb4a1b32
- data.tar.gz: 57626f38b266521d820bb411dbf517dd3273ed6e4bd44412d4c4e4b0306a3347252a1b87d0214c148f5f5e35c20cd81c6325648f878dd34d67fe58226156118d
+ metadata.gz: fe1321b14189e4043ceea64457126cdf0f39a387a3568aca51a9ff0e8a5e76837ae551b20f83363ad97699b69e5c5a0823b71a0d40aedcceec7ef19a9cf2b5bf
+ data.tar.gz: 910dba4e7bccf116a1122d0e06a1b2d92406288c6200dac5f3bdbd943c66ae5696b73e88abb75ce02ce46eaad06d5d0529015a6042e28ac63af26525b5b2ac9f
data/CHANGELOG.md CHANGED
@@ -1,5 +1,52 @@
  # AI Refactor Changelog

+ ## [Unreleased]
+
+ ### Changes
+
+
+ ## [0.6.0] - 2024-06-19
+
+ ### Added
+
+ - Now supports Anthropic AI models. E.g. pass `-m claude-3-opus-20240229` to use the current Claude Opus model.
+
+ ### Changes
+ - Default OpenAI model is now `gpt-4-turbo`
+
+ ### Fixed
+
+ - Example test runs now use `bundle exec` to ensure the correct version of the gem is used.
+
+ ## [0.5.4] - 2024-02-07
+
+ ### Added
+ - Support for built-in command YML files to make it easy to add new refactors
+ - Support for specifying context files from gems with the `context_file_paths_from_gems:` key in command templates
+ - Command to convert Minitest tests to Quickdraw tests
+
+ ### Changes
+ - Default OpenAI model is now `gpt-4-turbo-preview`
+
+ ## [0.5.3] - 2024-02-06
+
+ ### Added
+ - Add runner to run Steep on inputs after generating RBS
+ - Add refactor to write RBS
+
+ ### Fixed
+ - Removed dependency on the `dotenv` gems
+ - Updated the `ruby-openai` dependency
+ - Improved prompt handling so commands can supply custom text prompts that append to built-in prompt templates
+ - The custom refactor now allows the prompt to come from the prompt text option
+
+ ## [0.5.2] - 2023-09-21
+
+ ### Added
+
+ ### Fixed
+ - Removed a stray `puts`
+
  ## [0.5.1] - 2023-09-21

  ### Added
data/README.md CHANGED
@@ -2,20 +2,25 @@

  __The goal for AIRefactor is to use LLMs to apply repetitive refactoring tasks to code.__

- First the human decides what refactoring is needed and builds up a prompt to describe the task, or uses one of AIRefactors provided prompts.
+ ## The workflow

- AIRefactor then helps to apply the refactoring to one or more files.
+ 1) The human decides what refactoring is needed
+ 2) The human selects an existing built-in refactoring command, and/or builds up a prompt to describe the task
+ 3) The human selects some source files to act as context (e.g. examples of the post-refactor code, or related classes)
+ 4) The human runs the tool with the command, source files and context files
+ 5) The AI generates the refactored code and outputs it either to a file or stdout.
+ 6) In some cases, the tool can then check the generated code by running tests and comparing test outputs.

- In some cases, the tool can then check the generated code by running tests and comparing test outputs.
+ AIRefactor can apply the refactoring to multiple files, allowing batch processing.

  #### Notes

  AI Refactor is an experimental tool and under active development as I explore the idea myself. It may not work as expected, or
  change in ways that break existing functionality.

- The focus of the tool is work with the Ruby programming language ecosystem, but it can be used with any language.
+ The focus of the tool is working with the **Ruby programming language ecosystem**, but it can be used with any language.

- AI Refactor currently uses [OpenAI's ChatGPT](https://platform.openai.com/).
+ AI Refactor currently uses [OpenAI's ChatGPT](https://platform.openai.com/) or [Anthropic Claude](https://docs.anthropic.com/en/docs/about-claude/models) to generate code.

  ## Examples

@@ -49,8 +54,7 @@ And find the file `examples/ex1_input_test.rb` has been created. Note the proces

  If you see an error, then try to run it again, or use a different GPT model.

-
- ## Available refactors
+ ## Available refactors & commands

  Write your own prompt:

@@ -65,7 +69,9 @@ Use a pre-built prompt:

  ### User supplied prompts, eg `custom`, `ruby/write_ruby` and `ruby/refactor_ruby`

- Applies the refactor specified by prompting the AI with the user supplied prompt. You must supply a prompt file with the `-p` option.
+ You can use these commands in conjunction with a user-supplied prompt.
+
+ You must supply a prompt file with the `-p` option.

  The output is written to `stdout`, or to a file with the `--output` option.

@@ -107,6 +113,14 @@ For example, if the input file is `app/stuff/my_thing.rb` the output will be wri
  This refactor can benefit from being passed related files as context, for example, if the class under test inherits from another class,
  then context can be used to provide the parent class.

+ ### `quickdraw/0.1.0/convert_minitest`
+
+ Convert Minitest or Test::Unit test suite files to [Quickdraw](https://github.com/joeldrapper/quickdraw) test suite files.
+
+ Files, by default, are output to the same directory as the input file, but with a `.test.rb` extension (and `_test` removed from the name).
+
+ Note: Quickdraw is still missing some features, so some Minitest methods are not converted; for example, Quickdraw does not yet support setup/teardown.
+
  ## Installation

  Install the gem and add to the application's Gemfile by executing:
@@ -134,7 +148,7 @@ Where REFACTOR_TYPE_OR_COMMAND_FILE is either the path to a command YML file, or
  -p, --prompt PROMPT_FILE Specify path to a text file that contains the ChatGPT 'system' prompt.
  -f, --diffs Request AI generate diffs of changes rather than writing out the whole file.
  -C, --continue [MAX_MESSAGES] If ChatGPT stops generating due to the maximum token count being reached, continue to generate more messages, until a stop condition or MAX_MESSAGES. MAX_MESSAGES defaults to 3
- -m, --model MODEL_NAME Specify a ChatGPT model to use (default gpt-4).
+ -m, --model MODEL_NAME Specify a ChatGPT model to use (default gpt-4-turbo).
  --temperature TEMP Specify the temperature parameter for ChatGPT (default 0.7).
  --max-tokens MAX_TOKENS Specify the max number of tokens of output ChatGPT can generate. Max will depend on the size of the prompt (default 1500)
  -t, --timeout SECONDS Specify the max wait time for ChatGPT response.
@@ -171,19 +185,26 @@ output_file_path: output file or directory
  output_template_path: output file template (see docs)
  prompt_file_path: path
  prompt: |
- A custom prompt to send to ChatGPT if the command needs it (otherwise read from file)
+ A custom prompt to send to the AI if the command needs it (otherwise read from file)
  context_file_paths:
  - file1.rb
  - file2.rb
+ context_file_paths_from_gems:
+ gem_name:
+ - path/from/gem_root/file1.rb
+ - lib/gem_name/file2.rb
+ gem_name2:
+ - lib/gem_name2/file1.rb
+ - app/controllers/file2.rb
  # Other configuration options:
  context_text: |
  Some extra info to prepend to the prompt
  diff: true/false (default false)
  ai_max_attempts: max times to generate more if AI does not complete generating (default 3)
- ai_model: ChatGPT model name (default gpt-4)
- ai_temperature: ChatGPT temperature (default 0.7)
- ai_max_tokens: ChatGPT max tokens (default 1500)
- ai_timeout: ChatGPT timeout (default 60)
+ ai_model: AI model name, OpenAI GPT or Anthropic Claude (default gpt-4-turbo)
+ ai_temperature: AI temperature (default 0.7)
+ ai_max_tokens: AI max tokens (default 1500)
+ ai_timeout: AI timeout (default 60)
  overwrite: y/n/a (default a)
  verbose: true/false (default false)
  debug: true/false (default false)
@@ -247,12 +268,6 @@ This file provides default CLI switches to add to any `ai_refactor` command.

  The tool keeps a history of commands run in the `.ai_refactor_history` file in the current working directory.

- ## Note on performance and ChatGPT version
-
- _The quality of results depend very much on the version of ChatGPT being used._
-
- I have tested with both 3.5 and 4 and see **significantly** better performance with version 4.
-
  ## Development

  After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake test` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
data/ai_refactor.gemspec CHANGED
@@ -12,7 +12,7 @@ Gem::Specification.new do |spec|
  spec.description = "Use OpenAI's ChatGPT to automate converting Rails RSpec tests to minitest (ActiveSupport::TestCase)."
  spec.homepage = "https://github.com/stevegeek/ai_refactor"
  spec.license = "MIT"
- spec.required_ruby_version = ">= 2.7.0"
+ spec.required_ruby_version = ">= 3.3.0"

  spec.metadata["homepage_uri"] = spec.homepage
  spec.metadata["source_code_uri"] = "https://github.com/stevegeek/ai_refactor"
@@ -21,7 +21,7 @@ Gem::Specification.new do |spec|
  # The `git ls-files -z` loads the files in the RubyGem that have been added into git.
  spec.files = Dir.chdir(__dir__) do
  `git ls-files -z`.split("\x0").reject do |f|
- (File.expand_path(f) == __FILE__) || f.start_with?(*%w[bin/ test/ spec/ features/ .git .circleci appveyor])
+ (File.expand_path(f) == __FILE__) || f.start_with?(*%w[bin/ test/ spec/ features/ examples/ .github/ .git .circleci appveyor])
  end
  end
  spec.bindir = "exe"
@@ -32,5 +32,6 @@ Gem::Specification.new do |spec|
  spec.add_dependency "colorize", "< 2.0"
  spec.add_dependency "open3", "< 2.0"
  spec.add_dependency "ruby-openai", ">= 3.4.0", "< 6.0"
+ spec.add_dependency "anthropic", ">= 0.1.0", "< 1.0"
  spec.add_dependency "zeitwerk", "~> 2.6"
  end
commands/quickdraw/0.1.0/convert_minitest.yml ADDED
@@ -0,0 +1,194 @@
+ # Convert Minitest or Test::Unit test suite files to Quickdraw test suite files.
+ #
+ # Files are output to the same directory as the input file, but with a .test.rb extension (and _test removed).
+ # Quickdraw is still missing some features, so some minitest methods are not converted. Also
+ # Quickdraw does not support setup/teardown just yet.
+
+ refactor: ruby/refactor_ruby
+ description: Convert Minitest or Test::Unit test suite files to Quickdraw test suite files.
+ output_template_path: "[DIR]/[NAME|_test|].test[EXT]"
+ context_file_paths_from_gems:
+ quickdraw:
+ - "lib/quickdraw/matchers/boolean.rb"
+ - "lib/quickdraw/matchers/case_equality.rb"
+ - "lib/quickdraw/matchers/change.rb"
+ - "lib/quickdraw/matchers/equality.rb"
+ - "lib/quickdraw/matchers/include.rb"
+ - "lib/quickdraw/matchers/predicate.rb"
+ - "lib/quickdraw/matchers/respond_to.rb"
+ - "lib/quickdraw/matchers/to_be_a.rb"
+ - "lib/quickdraw/matchers/to_have_attributes.rb"
+ prompt: |
+ You are an expert Ruby senior software developer. You convert minitest or Test::Unit test suite files to Quickdraw test suite files.
+
+ Quickdraw is a new test framework for Ruby:
+
+ - Spec-like DSL, but with just five methods: `describe`, `test`, `expect`, `assert` and `refute`. No `context`, `let`, `subject`, `to`, `it`, `is_expected` or `specify`, and you’ll never need to guess whether the next symbol should be a space, a colon, a dot or an underscore.
+ - No chaining on matchers. Rather than chain, the matcher can yield if it wants to allow for more complex matching.
+ - Auto-loaded configuration, so you never need to `require "test_helper"`.
+ - Scoped execution, so you can define methods and constants at the top level without worrying about collisions.
+ - You can define your own matchers, which can be scoped to a specific type of object and they can be overloaded for different types.
+ - Designed to take advantage of all your CPU cores — by default it runs one process per CPU core and two threads per process.
+ - Optional test names — sometimes the code is so clear, you don’t need names.
+ - Make as many expectations as you want in a test. You’ll get a dot for each one to make you feel good about yourself.
+
+ > [!TIP]
+ > Your test files are executed in an anonymous class, so you can define methods and constants at the top level without worrying about collisions. If you’re testing something that references `Class#name`, you may have to define those classes as fixtures somewhere else.
+
+ ### `.test`
+ Use the `test` method to define a test. The description is optional — sometimes you don’t need it.
+
+ ```ruby
+ test { assert true }
+ ```
+
+ You can pass `skip: true` to skip the test. Skipped tests are still run; they pass if they fail and fail if they pass.
+
+ ```ruby
+ test(skip: true) { assert false }
+ ```
+
+ ### `.describe`
+ You can optionally wrap tests in any number of `describe` blocks, which can take a description as a string or module/class.
+
+ ```ruby
+ describe Thing do
+ # your Thing tests here
+ end
+ ```
+
+ ### `#assert`
+ `assert` takes a value and passes if it’s truthy.
+
+ ```ruby
+ test "something" do
+ assert true
+ end
+ ```
+
+ You can pass a custom failure message as a block. Using blocks for the failure messages means we don’t waste time constructing them unless the test fails. You don’t need to worry about expensive failure messages slowing down your tests.
+
+ ```ruby
+ test "something" do
+ assert(false) { "This is a custom failure message" }
+ end
+ ```
+
+ ### `#refute`
+ `refute` is just like `assert`, but it passes if the value is falsy.
+
+ ```ruby
+ test "something" do
+ refute false
+ end
+ ```
+
+ ### `expect` matchers
+ `expect` takes either a value or a block and returns an expectation object, which you can call matchers on.
+
+ #### `==` and `!=`
+
+ ```ruby
+ test "equality" do
+ expect(Thing.foo) == "foo"
+ expect(Thing.bar) != "foo"
+ end
+ ```
+
+ #### `to_raise`
+
+ ```ruby
+ test "raises" do
+ expect { Thing.bar! }.to_raise(ArgumentError) do |error|
+ expect(error.message) == "Foo bar"
+ end
+ end
+ ```
+
+ #### `to_receive`
+
+ ```ruby
+ test "mocks and spies" do
+ expect(Thing).to_receive(:foo) do |a, b, c|
+ # The block receives arguments and can make assertions about them.
+ expect(a) == 1
+ expect(b) != 1
+ assert(c)
+
+ # Either return a mock response or call the original via `@super`
+ @super.call
+ end
+
+ Thing.foo(1, 2, 3)
+ end
+ ```
+
+ ### Mappings of minitest assertions/expectations to quickdraw
+
+ The minitest test class (which inherits from Test::Unit or Minitest::Test) should be removed from the output, as the
+ quickdraw test class is anonymous and implicit.
+ eg
+ ```ruby
+ class MyTest < Minitest::Test
+ def test_something
+ assert true
+ end
+ end
+ ```
+ becomes
+ ```ruby
+ test "something" do
+ assert true
+ end
+ ```
+
+ `should` in Test::Unit is the same as `describe` in Quickdraw.
+
+ minitest "assert" and "refute" methods are mapped to quickdraw `assert` and `refute` methods.
+
+ minitest "expect" methods are mapped to quickdraw `expect` methods.
+
+ below are the mappings of minitest methods to quickdraw methods:
+
+ `_(x).must_be`, `expect(x).must_be :==, 0` or `assert_operator x, :==, 0` becomes `expect(x) == 0`
+ `_(x).must_be`, `expect(x).must_be :>, 0` or `assert_operator x, :>, 0` becomes `assert(x > 0)`
+ `_(x).must_be`, `expect(x).must_be :empty?` `expect(x).must_be_empty` `assert_empty` becomes `assert(x.empty?)`
+ `_(x).must_equal`, `expect(x).must_equal b` or `assert_equal b, x` becomes `expect(x) == b`
+ `_(x).must_be_close_to`, `expect(x).must_be_close_to 2.99999`, `assert_in_epsilon` or `assert_in_delta` becomes `raise "Not implemented in Quickdraw yet"`
+ `_(x).must_be_same_as`, `expect(x).must_be_same_as b` and `assert_same` becomes `expect(x).to_equal(b)`
+ `_(x).must_include`, `expect(x).must_include needle`, `assert_includes x, needle` becomes `expect(x).to_include(needle)`
+ `_(x).must_be_kind_of`, `expect(x).must_be_kind_of Enumerable` or `assert_kind_of Enumerable, x` becomes `assert(x.kind_of? Enumerable)`
+ `_(x).must_be_instance_of`, `expect(x).must_be_instance_of Array` or `assert_instance_of Array, x` becomes `assert(x.instance_of? Array)`
+ `_(x).must_be_nil`, `expect(x).must_be_nil` or `assert_nil` becomes `assert(x == nil)`
+ `_(x).must_match`, `expect(x).must_match /regex/` , `assert_match x, /regex/` becomes `assert(/regex/ === x)`
+ `_(x).must_respond_to`, `expect(x).must_respond_to msg` or `assert_respond_to x, msg` becomes `expect(x).to_respond_to(msg)`
+ `_(x).wont_respond_to`, `expect(x).wont_respond_to msg` or `refute_respond_to x, msg` becomes `expect(x).not_to_respond_to(msg)`
+ `proc { "no stdout or stderr" }.must_output` or `assert_output {}`, `proc { "no stdout or stderr" }.must_be_silent` or `assert_silent {}` becomes `raise "Not implemented in Quickdraw yet"`
+ `proc { ... }.must_raise exception` or `assert_raises(exp) {}` becomes `expect {}.to_raise(exp)`
+ `proc { ... }.must_throw sym` or `assert_throws(sym) {}` becomes `raise "Not implemented in Quickdraw yet"`
+
+ note: there are also `refute_*` methods in minitest, which are mapped to either `refute(...)` or `#not_to_*` methods in quickdraw.
+
+ Converting `MiniTest::Spec` to `Quickdraw` as follows (like converting from Spec syntax to Test syntax):
+
+ `subject {}` becomes
+ ```ruby
+ def subject
+ @subject ||= Thing.new
+ end
+ ```
+
+ `let(:x) { 1 }` becomes
+ ```ruby
+ def x
+ @x ||= 1
+ end
+ ```
+
+ If any modules are included in the minitest class, then take the contents of the module and add it to the output, but remove the wrapping `module`.
+ Also remove the `include Module` statement from the output.
+ Also remove the `def self.included(base)` method from the output.
+
+ Only show me the test file code. Do NOT provide any other description of your work. Always enclose the output code in triple backticks (```).
+
+ The minitest test to convert is as follows:
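The `output_template_path` above decides where converted files are written. The gem has its own template handling (see the README's note on output templates); purely as an illustration of what the `[DIR]`, `[NAME|_test|]` and `[EXT]` placeholders appear to mean here, a hypothetical expansion in Ruby might look like this (not the gem's actual implementation):

```ruby
# Hypothetical sketch only: assumes "[DIR]" is the input file's directory,
# "[NAME|_test|]" is the basename with "_test" stripped, and "[EXT]" is the
# file extension, as implied by the comment at the top of the command file.
def expand_output_template(input_path)
  dir  = File.dirname(input_path)
  ext  = File.extname(input_path)
  name = File.basename(input_path, ext).sub("_test", "")
  File.join(dir, "#{name}.test#{ext}")
end

expand_output_template("test/models/user_test.rb")
# => "test/models/user.test.rb"
```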
data/exe/ai_refactor CHANGED
@@ -3,11 +3,13 @@
  require "optparse"
  require "colorize"
  require "openai"
+ require "anthropic"
  require "shellwords"
  require_relative "../lib/ai_refactor"

- supported_refactors = AIRefactor::Refactors.all
- refactors_descriptions = AIRefactor::Refactors.descriptions
+ supported_refactors = AIRefactor::Refactors.all.merge(AIRefactor::Commands.all)
+ supported_refactors_names = supported_refactors.keys
+ refactors_descriptions = AIRefactor::Refactors.descriptions.merge(AIRefactor::Commands.descriptions)

  arguments = ARGV.dup

@@ -36,11 +38,11 @@ option_parser = OptionParser.new do |parser|
  run_config.context_text = c
  end

- parser.on("-r", "--review-prompt", "Show the prompt that will be sent to ChatGPT but do not actually call ChatGPT or make changes to files.") do
+ parser.on("-r", "--review-prompt", "Show the prompt that will be sent to the AI but do not actually call the AI or make changes to files.") do
  run_config.review_prompt = true
  end

- parser.on("-p", "--prompt PROMPT_FILE", String, "Specify path to a text file that contains the ChatGPT 'system' prompt.") do |f|
+ parser.on("-p", "--prompt PROMPT_FILE", String, "Specify path to a text file that contains the AI 'system' prompt.") do |f|
  run_config.prompt_file_path = f
  end

@@ -48,23 +50,23 @@ option_parser = OptionParser.new do |parser|
  run_config.diff = true
  end

- parser.on("-C", "--continue [MAX_MESSAGES]", Integer, "If ChatGPT stops generating due to the maximum token count being reached, continue to generate more messages, until a stop condition or MAX_MESSAGES. MAX_MESSAGES defaults to 3") do |c|
+ parser.on("-C", "--continue [MAX_MESSAGES]", Integer, "If the AI stops generating due to the maximum token count being reached, continue to generate more messages, until a stop condition or MAX_MESSAGES. MAX_MESSAGES defaults to 3") do |c|
  run_config.ai_max_attempts = c
  end

- parser.on("-m", "--model MODEL_NAME", String, "Specify a ChatGPT model to use (default gpt-4).") do |m|
+ parser.on("-m", "--model MODEL_NAME", String, "Specify an AI model to use (default 'gpt-4-turbo'). OpenAI and Anthropic models supported (e.g. 'gpt-4o', 'claude-3-opus-20240229')") do |m|
  run_config.ai_model = m
  end

- parser.on("--temperature TEMP", Float, "Specify the temperature parameter for ChatGPT (default 0.7).") do |p|
+ parser.on("--temperature TEMP", Float, "Specify the temperature parameter for generation (default 0.7).") do |p|
  run_config.ai_temperature = p
  end

- parser.on("--max-tokens MAX_TOKENS", Integer, "Specify the max number of tokens of output ChatGPT can generate. Max will depend on the size of the prompt (default 1500)") do |m|
+ parser.on("--max-tokens MAX_TOKENS", Integer, "Specify the max number of tokens of output the AI can generate. Max will depend on the size of the prompt (default 1500)") do |m|
  run_config.ai_max_tokens = m
  end

- parser.on("-t", "--timeout SECONDS", Integer, "Specify the max wait time for ChatGPT response.") do |m|
+ parser.on("-t", "--timeout SECONDS", Integer, "Specify the max wait time for an AI response.") do |m|
  run_config.ai_timeout = m
  end

@@ -137,10 +139,9 @@ if arguments.empty? || arguments.all? { |arg| arg.start_with?("-") && !(arg == "
  # For each option that is required but not provided, prompt for it
  # Put the option in arguments to parse with option_parser
  interactive_log.info "Interactive mode started. You can use tab to autocomplete:"
- predefined_commands = AIRefactor::Refactors.names

- interactive_log.info "Available refactors: #{predefined_commands.join(", ")}\n"
- command = AIRefactor::Cli.request_input_with_autocomplete("Enter refactor name: ", predefined_commands)
+ interactive_log.info "Available refactors: #{supported_refactors_names.join(", ")}\n"
+ command = AIRefactor::Cli.request_input_with_autocomplete("Enter refactor name: ", supported_refactors_names)
  exit_with_option_error("No refactor name provided.", option_parser) if command.nil? || command.empty?
  initial = [command]

@@ -180,10 +181,12 @@ logger = AIRefactor::Logger.new(verbose: run_config.verbose, debug: run_config.d
  logger.info "Also loaded options from '.ai_refactor' file..." if options_from_config_file&.size&.positive?

  command_or_file = arguments.shift
- if AIRefactor::CommandFileParser.command_file?(command_or_file)
- logger.info "Loading refactor command file '#{command_or_file}'..."
+ is_built_in_command = AIRefactor::Commands.supported?(command_or_file)
+ if is_built_in_command || AIRefactor::CommandFileParser.command_file?(command_or_file)
+ logger.info "Loading #{is_built_in_command ? "built-in" : "custom"} refactor command file '#{command_or_file}'..."
  begin
- run_config.set!(AIRefactor::CommandFileParser.new(command_or_file).parse)
+ command_file_path = is_built_in_command ? Commands.get(command_name).path : command_or_file
+ run_config.set!(AIRefactor::CommandFileParser.new(command_file_path).parse)
  rescue => e
  exit_with_option_error(e.message, option_parser, logger)
  end
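The `-C/--continue` option defined above retries generation when the model stops because the token limit was reached. A simplified, hypothetical illustration of that behaviour, built on the `AIClient#generate!` interface added in this release (the gem's actual logic lives in its file processor and is recursive; names here are placeholders):

```ruby
# Hypothetical sketch of the "--continue" behaviour, assuming an `ai_client`
# like AIRefactor::AIClient and a starting `messages` array.
def generate_with_continuation(ai_client, messages, max_attempts: 3)
  parts = []
  attempts_left = max_attempts

  loop do
    reason = nil
    ai_client.generate!(messages) do |finished_reason, content, _response|
      reason = finished_reason
      parts << content.to_s
      # When output was cut off by the token limit, ask the model to continue
      # from where it stopped by replaying its partial answer.
      messages += [
        {role: "assistant", content: content},
        {role: "user", content: "Continue"}
      ]
    end
    attempts_left -= 1
    break unless reason == "length" && attempts_left > 0
  end

  parts.join
end
```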
lib/ai_refactor/ai_client.rb ADDED
@@ -0,0 +1,86 @@
+ # frozen_string_literal: true
+
+ module AIRefactor
+ class AIClient
+ def initialize(platform: "openai", model: "gpt-4-turbo", temperature: 0.7, max_tokens: 1500, timeout: 60, verbose: false)
+ @platform = platform
+ @model = model
+ @temperature = temperature
+ @max_tokens = max_tokens
+ @timeout = timeout
+ @verbose = verbose
+ @client = configure
+ end
+
+ def generate!(messages)
+ finished_reason, content, response = case @platform
+ when "openai"
+ openai_parse_response(
+ @client.chat(
+ parameters: {
+ messages: messages,
+ model: @model,
+ temperature: @temperature,
+ max_tokens: @max_tokens
+ }
+ )
+ )
+ when "anthropic"
+ anthropic_parse_response(
+ @client.messages(
+ parameters: {
+ system: messages.find { |m| m[:role] == "system" }&.fetch(:content, nil),
+ messages: messages.select { |m| m[:role] != "system" },
+ model: @model,
+ max_tokens: @max_tokens
+ }
+ )
+ )
+ else
+ raise "Invalid platform: #{@platform}"
+ end
+ yield finished_reason, content, response
+ end
+
+ private
+
+ def configure
+ case @platform
+ when "openai"
+ ::OpenAI::Client.new(
+ access_token: ENV.fetch("OPENAI_API_KEY"),
+ organization_id: ENV.fetch("OPENAI_ORGANIZATION_ID", nil),
+ request_timeout: @timeout,
+ log_errors: @verbose
+ )
+ when "anthropic"
+ ::Anthropic::Client.new(
+ access_token: ENV.fetch("ANTHROPIC_API_KEY"),
+ request_timeout: @timeout
+ )
+ else
+ raise "Invalid platform: #{@platform}"
+ end
+ end
+
+ def openai_parse_response(response)
+ if response["error"]
+ raise StandardError.new("OpenAI error: #{response["error"]["type"]}: #{response["error"]["message"]} (#{response["error"]["code"]})")
+ end
+
+ content = response.dig("choices", 0, "message", "content")
+ finished_reason = response.dig("choices", 0, "finish_reason")
+ [finished_reason, content, response]
+ end
+
+ def anthropic_parse_response(response)
+ if response["error"]
+ raise StandardError.new("Anthropic error: #{response["error"]["type"]}: #{response["error"]["message"]}")
+ end
+
+ content = response.dig("content", 0, "text")
+ finished_reason = response["stop_reason"] == "max_tokens" ? "length" : response["stop_reason"]
+ [finished_reason, content, response]
+ end
+ end
+ end
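As a rough illustration of the new client, here is a minimal sketch of how `AIRefactor::AIClient` might be driven directly (assumes the gem is installed, the relevant API key environment variable is set, and uses placeholder messages; the platform is normally derived from the model name elsewhere in the gem):

```ruby
require "openai"
require "anthropic"
require "ai_refactor"

# Hypothetical direct use of the client added in this release. Requires
# OPENAI_API_KEY (or ANTHROPIC_API_KEY when platform: "anthropic") to be set.
client = AIRefactor::AIClient.new(platform: "openai", model: "gpt-4-turbo", max_tokens: 500)

messages = [
  {role: "system", content: "You are a Ruby refactoring assistant."},
  {role: "user", content: "Rename the variable `x` to `count` in: x = x + 1"}
]

# generate! yields the finish reason, the generated text and the raw response.
client.generate!(messages) do |finished_reason, content, _response|
  puts "finished: #{finished_reason}"
  puts content
end
```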
@@ -63,6 +63,17 @@ module AIRefactor
  configuration.input_file_paths
  end

+ def ai_client
+ @ai_client ||= AIRefactor::AIClient.new(
+ platform: configuration.ai_platform,
+ model: configuration.ai_model,
+ temperature: configuration.ai_temperature,
+ max_tokens: configuration.ai_max_tokens,
+ timeout: configuration.ai_timeout,
+ verbose: configuration.verbose
+ )
+ end
+
  def valid?
  return false unless refactorer
  inputs_valid = refactorer.takes_input_files? ? !(inputs.nil? || inputs.empty?) : true
@@ -72,12 +83,6 @@ module AIRefactor
  def run
  return false unless valid?

- OpenAI.configure do |config|
- config.access_token = ENV.fetch("OPENAI_API_KEY")
- config.organization_id = ENV.fetch("OPENAI_ORGANIZATION_ID", nil)
- config.request_timeout = configuration.ai_timeout || 240
- end
-
  if refactorer.takes_input_files?
  expanded_inputs = inputs.map do |path|
  File.exist?(path) ? path : Dir.glob(path)
@@ -85,11 +90,14 @@ module AIRefactor

  logger.info "AI Refactor #{expanded_inputs.size} files(s)/dir(s) '#{expanded_inputs}' with #{refactorer.refactor_name} refactor\n"
  logger.info "====================\n"
+ if configuration.description
+ logger.info "Description: #{configuration.description}\n"
+ end

  return_values = expanded_inputs.map do |file|
  logger.info "Processing #{file}..."

- refactor = refactorer.new(file, configuration, logger)
+ refactor = refactorer.new(ai_client, file, configuration, logger)
  refactor_returned = refactor.run
  failed = refactor_returned == false
  if failed
@@ -115,7 +123,7 @@ module AIRefactor
  name = refactorer.refactor_name
  logger.info "AI Refactor - #{name} refactor\n"
  logger.info "====================\n"
- refactor = refactorer.new(nil, configuration, logger)
+ refactor = refactorer.new(ai_client, nil, configuration, logger)
  refactor_returned = refactor.run
  failed = refactor_returned == false
  if failed
lib/ai_refactor/commands.rb ADDED
@@ -0,0 +1,41 @@
+ # frozen_string_literal: true
+
+ module AIRefactor
+ module Commands
+ # TODO: support command_line_options
+ BuiltInCommand = Data.define(:name, :description, :path, :command_line_options, :config)
+
+ def get(name)
+ all[name]
+ end
+ module_function :get
+
+ def names
+ all.keys
+ end
+ module_function :names
+
+ def descriptions
+ names.map { |n| "\"#{n}\"" }.zip(all.values.map(&:description)).to_h
+ end
+ module_function :descriptions
+
+ def supported?(name)
+ names.include?(name)
+ end
+ module_function :supported?
+
+ def all
+ @all ||= begin
+ commands = Dir.glob(File.join(__dir__, "../../commands", "**/*.yml")).map do |path|
+ path_to_commands = File.join(__dir__, "../../commands/")
+ name = File.join(File.dirname(path.gsub(path_to_commands, "")), File.basename(path, ".yml")).to_sym
+ config = YAML.safe_load_file(path, permitted_classes: [Symbol], symbolize_names: true, aliases: true)
+ BuiltInCommand.new(name: name, path: path, description: config[:description], config: config, command_line_options: [])
+ end
+ commands.map { |c| [c.name, c] }.to_h
+ end
+ end
+ module_function :all
+ end
+ end
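For orientation, a small sketch of how the new built-in command registry might be queried from a console. Names are derived from the YML files shipped under `commands/` (such as the Quickdraw converter added above); the exact key types follow the implementation shown here, so the code avoids hard-coding a name:

```ruby
require "yaml" # used by Commands when parsing the bundled YML templates
require "ai_refactor"

# List the built-in commands discovered from the bundled commands/ directory,
# together with their descriptions (taken from each YML file's description key).
AIRefactor::Commands.descriptions.each do |name, description|
  puts "#{name}: #{description}"
end

# Look up a single command and find the YML file that defines it.
command = AIRefactor::Commands.get(AIRefactor::Commands.names.first)
puts command.path   # absolute path to the command's YML template
puts command.config # parsed YAML as a Hash with symbolized keys
```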
@@ -60,35 +60,21 @@ module AIRefactor
  logger.debug "Options: #{options.inspect}"
  logger.debug "Messages: #{messages.inspect}"

- response = @ai_client.chat(
- parameters: {
- model: options[:ai_model] || "gpt-4",
- messages: messages,
- temperature: options[:ai_temperature] || 0.7,
- max_tokens: options[:ai_max_tokens] || 1500
- }
- )
-
- if response["error"]
- raise StandardError.new("OpenAI error: #{response["error"]["type"]}: #{response["error"]["message"]} (#{response["error"]["code"]})")
- end
-
- content = response.dig("choices", 0, "message", "content")
- finished_reason = response.dig("choices", 0, "finish_reason")
-
- if finished_reason == "length" && attempts_left > 0
- generate_next_message(messages + [
- {role: "assistant", content: content},
- {role: "user", content: "Continue"}
- ], options, attempts_left - 1)
- else
- previous_messages = messages.filter { |m| m[:role] == "assistant" }.map { |m| m[:content] }.join
- content = if previous_messages.length > 0
- content ? previous_messages + content : previous_messages
+ @ai_client.generate!(messages) do |finished_reason, content, response|
+ if finished_reason == "length" && attempts_left > 0
+ generate_next_message(messages + [
+ {role: "assistant", content: content},
+ {role: "user", content: "Continue"}
+ ], options, attempts_left - 1)
  else
- content
+ previous_messages = messages.filter { |m| m[:role] == "assistant" }.map { |m| m[:content] }.join
+ content = if previous_messages.length > 0
+ content ? previous_messages + content : previous_messages
+ else
+ content
+ end
+ [content, finished_reason, response["usage"]]
  end
- [content, finished_reason, response["usage"]]
  end
  end

@@ -17,11 +17,12 @@ module AIRefactor
  true
  end

- attr_reader :input_file, :options, :logger
+ attr_reader :ai_client, :input_file, :options, :logger
  attr_accessor :input_content
  attr_writer :failed_message

- def initialize(input_file, options, logger)
+ def initialize(ai_client, input_file, options, logger)
+ @ai_client = ai_client
  @input_file = input_file
  @options = options
  @logger = logger
@@ -79,8 +80,11 @@ module AIRefactor
  output_content
  rescue => e
  logger.error "Request to AI failed: #{e.message}"
+ if e.respond_to?(:response) && e.response
+ logger.error "Response: #{e.response[:body]}"
+ end
  logger.warn "Skipping #{input_file}..."
- self.failed_message = "Request to OpenAI failed"
+ self.failed_message = "Request to AI API failed"
  raise e
  end
  end
@@ -175,10 +179,6 @@ module AIRefactor
  path
  end

- def ai_client
- @ai_client ||= OpenAI::Client.new
- end
-
  def refactor_name
  self.class.refactor_name
  end
@@ -8,19 +8,16 @@ module AIRefactor
  end

  attr_reader :refactor,
+ :description,
  :input_file_paths,
  :output_file_path,
  :output_template_path,
  :context_file_paths,
+ :context_file_paths_from_gems,
  :context_text,
  :review_prompt,
  :prompt,
  :prompt_file_path,
- :ai_max_attempts,
- :ai_model,
- :ai_temperature,
- :ai_max_tokens,
- :ai_timeout,
  :overwrite,
  :diff,
  :verbose,
@@ -33,7 +30,7 @@ module AIRefactor
  end
  end

- attr_writer :refactor
+ attr_writer :refactor, :description

  # @deprecated
  def [](key)
@@ -56,6 +53,25 @@ module AIRefactor
  @context_file_paths.concat(paths)
  end

+ # A hash is passed in, where the keys are gem names that should be in the bundle and the path is a path inside the gem
+ # install location. We resolve the absolute path of each and then add to @context_file_paths
+ def context_file_paths_from_gems=(paths)
+ @context_file_paths ||= []
+ @context_file_paths_from_gems ||= {}
+ @context_file_paths_from_gems.merge!(paths)
+
+ paths.each do |gem_name, paths|
+ paths = [paths] unless paths.is_a?(Array)
+ paths.each do |path|
+ gem_spec = Gem::Specification.find_by_name(gem_name.to_s)
+ raise "Gem #{gem_name} not found" unless gem_spec
+ gem_path = gem_spec.gem_dir
+ full_path = File.join(gem_path, path)
+ @context_file_paths << full_path
+ end
+ end
+ end
+
  def context_text=(text)
  @context_text ||= ""
  @context_text += text
@@ -76,30 +92,54 @@ module AIRefactor
  attr_writer :rspec_run_command
  attr_writer :minitest_run_command

+ def ai_max_attempts
+ @ai_max_attempts || 3
+ end
+
  def ai_max_attempts=(value)
- @ai_max_attempts = value || 3
+ @ai_max_attempts = value
+ end
+
+ def ai_model
+ @ai_model || "gpt-4-turbo"
  end

  def ai_model=(value)
- @ai_model = value || "gpt-4"
+ @ai_model = value
  end

- def ai_temperature=(value)
- @ai_temperature = value || 0.7
+ def ai_platform
+ if ai_model&.start_with?("claude")
+ "anthropic"
+ else
+ "openai"
+ end
  end

- def ai_max_tokens=(value)
- @ai_max_tokens = value || 1500
+ def ai_temperature
+ @ai_temperature || 0.7
  end

- def ai_timeout=(value)
- @ai_timeout = value || 60
+ attr_writer :ai_temperature
+
+ def ai_max_tokens
+ @ai_max_tokens || 1500
  end

- def overwrite=(value)
- @overwrite = value || "a"
+ attr_writer :ai_max_tokens
+
+ def ai_timeout
+ @ai_timeout || 60
  end

+ attr_writer :ai_timeout
+
+ def overwrite
+ @overwrite || "a"
+ end
+
+ attr_writer :overwrite
+
  attr_writer :diff

  attr_writer :verbose
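Since the platform is now inferred from the model name, a quick illustrative sketch of the rule implemented by the `ai_platform` accessor above (a hypothetical standalone helper, not part of the gem):

```ruby
# Hypothetical helper mirroring the ai_platform logic shown above: any model
# whose name starts with "claude" is routed to Anthropic; everything else
# (including the default "gpt-4-turbo") goes to OpenAI.
def platform_for(model)
  model&.start_with?("claude") ? "anthropic" : "openai"
end

platform_for("gpt-4-turbo")            # => "openai"
platform_for("gpt-4o")                 # => "openai"
platform_for("claude-3-opus-20240229") # => "anthropic"
platform_for(nil)                      # => "openai" (no model set, default platform)
```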
@@ -1,5 +1,5 @@
  # frozen_string_literal: true

  module AIRefactor
- VERSION = "0.5.3"
+ VERSION = "0.6.0"
  end
data/lib/ai_refactor.rb CHANGED
@@ -4,6 +4,7 @@ require "zeitwerk"
  loader = Zeitwerk::Loader.for_gem
  loader.inflector.inflect(
  "ai_refactor" => "AIRefactor",
+ "ai_client" => "AIClient",
  "rspec_runner" => "RSpecRunner"
  )
  loader.setup # ready!
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: ai_refactor
  version: !ruby/object:Gem::Version
- version: 0.5.3
+ version: 0.6.0
  platform: ruby
  authors:
  - Stephen Ierodiaconou
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2024-02-06 00:00:00.000000000 Z
+ date: 2024-06-19 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: colorize
@@ -58,6 +58,26 @@ dependencies:
  - - "<"
  - !ruby/object:Gem::Version
  version: '6.0'
+ - !ruby/object:Gem::Dependency
+ name: anthropic
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: 0.1.0
+ - - "<"
+ - !ruby/object:Gem::Version
+ version: '1.0'
+ type: :runtime
+ prerelease: false
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: 0.1.0
+ - - "<"
+ - !ruby/object:Gem::Version
+ version: '1.0'
  - !ruby/object:Gem::Dependency
  name: zeitwerk
  requirement: !ruby/object:Gem::Requirement
@@ -89,16 +109,13 @@ files:
  - Rakefile
  - Steepfile
  - ai_refactor.gemspec
- - examples/ex1_convert_a_rspec_test_to_minitest.yml
- - examples/ex1_input_spec.rb
- - examples/ex2_input.rb
- - examples/ex2_write_rbs.yml
- - examples/rails_helper.rb
- - examples/test_helper.rb
+ - commands/quickdraw/0.1.0/convert_minitest.yml
  - exe/ai_refactor
  - lib/ai_refactor.rb
+ - lib/ai_refactor/ai_client.rb
  - lib/ai_refactor/cli.rb
  - lib/ai_refactor/command_file_parser.rb
+ - lib/ai_refactor/commands.rb
  - lib/ai_refactor/context.rb
  - lib/ai_refactor/file_processor.rb
  - lib/ai_refactor/logger.rb
@@ -144,14 +161,14 @@ required_ruby_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
- version: 2.7.0
+ version: 3.3.0
  required_rubygems_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
  version: '0'
  requirements: []
- rubygems_version: 3.4.20
+ rubygems_version: 3.5.3
  signing_key:
  specification_version: 4
  summary: Use AI to convert a Rails RSpec test suite to minitest.
examples/ex1_convert_a_rspec_test_to_minitest.yml DELETED
@@ -1,8 +0,0 @@
- refactor: rails/minitest/rspec_to_minitest
- input_file_paths:
- - examples/ex1_input_spec.rb
- # We need to add context here as otherwise to tell the AI to require our local test_helper.rb file so that we can run the tests after
- context_text: "In the output test use `require_relative '../test_helper'` to include 'test_helper'."
- # By default, ai_refactor runs "bundle exec rails test" but this isn't going to work here as we are not actually in a Rails app context in the examples
- minitest_run_command: ruby __FILE__
- output_file_path: examples/outputs/ex1_input_test.rb
examples/ex1_input_spec.rb DELETED
@@ -1,32 +0,0 @@
- require_relative "rails_helper"
-
- RSpec.describe MyModel, type: :model do
- subject(:model) { described_class.new }
-
- it { is_expected.to validate_presence_of(:name) }
-
- it "should allow integer values for age" do
- model.age = 1
- expect(model.age).to eq 1
- end
-
- it "should allow string values for name" do
- model.name = "test"
- expect(model.name).to eq "test"
- end
-
- it "should be invalid with invalid name" do
- model.name = nil
- expect(model).to be_invalid
- end
-
- it "should convert integer values for name" do
- model.name = 1
- expect(model.name).to eq "1"
- end
-
- it "should not allow string values for age" do
- model.age = "test"
- expect(model.age).to eq 0
- end
- end
examples/ex2_input.rb DELETED
@@ -1,17 +0,0 @@
- # example from https://blog.kiprosh.com/type-checking-in-ruby-3-using-rbs/
- # basic_math.rb
-
- class BasicMath
- def initialize(num1, num2)
- @num1 = num1
- @num2 = num2
- end
-
- def first_less_than_second?
- @num1 < @num2
- end
-
- def add
- @num1 + @num2
- end
- end
examples/ex2_write_rbs.yml DELETED
@@ -1,7 +0,0 @@
- refactor: ruby/write_rbs
- input_file_paths:
- - examples/ex2_input.rb
- # We need to add context here as our class doesnt actually give any context.
- context_text: "Assume that the inputs can be any numeric type."
- # By default this refactor writes to `sig/...` but here we just put the result in `examples/...`
- output_file_path: examples/outputs/ex2_input.rbs
examples/rails_helper.rb DELETED
@@ -1,21 +0,0 @@
- require "rails/all"
- require "shoulda-matchers"
-
- Shoulda::Matchers.configure do |config|
- config.integrate do |with|
- with.test_framework :rspec
- with.library :rails
- end
- end
-
- class MyModel
- include ActiveModel::Model
- include ActiveModel::Attributes
- include ActiveModel::Validations
- include ActiveModel::Validations::Callbacks
-
- validates :name, presence: true
-
- attribute :name, :string
- attribute :age, :integer
- end
examples/test_helper.rb DELETED
@@ -1,14 +0,0 @@
- require "rails/all"
- require "active_support/testing/autorun"
-
- class MyModel
- include ActiveModel::Model
- include ActiveModel::Attributes
- include ActiveModel::Validations
- include ActiveModel::Validations::Callbacks
-
- validates :name, presence: true
-
- attribute :name, :string
- attribute :age, :integer
- end