mock_openai 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+   metadata.gz: a1f8417e225e11b9232ee5da333fcfc9e430f00f9e624a11c39b2edf19aecc5c
+   data.tar.gz: '03789f11bd18f2365e79d7059935aa36460bd7335d0709487143a8533c3003dc'
+ SHA512:
+   metadata.gz: 83536158b83e6c698a5bf6f4024c43f7268f98252ac59817346533cf8e8ff13d89adfd1bf5fda3d4ab101dcec37f8ed3c4d6caf27cae1bc585de03fca5497e16
+   data.tar.gz: d6ad10f230b13b1fedc0c1f01cb24654b9ce5f0a957dbe8fe920a9290676ad41968b0e65819ef497ba688cb2eeeda198cef955dee72be1c1157cbab9f36cb1cb
data/README.md ADDED
@@ -0,0 +1,126 @@
+ <p align="center">
+   <img src="docs/assets/img/mockopenai_logo_medium.png" alt="MockOpenAI logo" />
+ </p>
+
+ # MockOpenAI
+
+ [![Docs](https://img.shields.io/badge/docs-GitHub%20Pages-blue)](https://grymoire7.github.io/mockopenai)
+ ![Tests](https://github.com/grymoire7/mockopenai/actions/workflows/ruby.yml/badge.svg?branch=main)
+ ![Ruby Version](https://img.shields.io/badge/Ruby-%3E%3D%203.0-green?logo=Ruby&logoColor=red&label=Ruby%20version&color=green)
+ [![License](https://img.shields.io/badge/license-MIT-green.svg)](https://github.com/grymoire7/mockopenai/blob/main/LICENSE)
+
+ A local mock server for OpenAI-compatible and Anthropic APIs with deterministic responses
+ and per-request failure simulation for any Ruby application.
+
+ MockOpenAI lets you test any Ruby app that calls an LLM **without hitting real
+ APIs**, spending real money, or waiting on rate limits. It supports the OpenAI
+ Chat Completions API (`POST /v1/chat/completions`) and the Anthropic Messages
+ API (`POST /v1/messages`). Works with Rails, Sinatra, CLI tools, background
+ jobs, or plain Ruby scripts.
+
+ ---
+
+ ## Why MockOpenAI?
+
+ - **No API keys needed**: zero token costs, zero network calls in CI
+ - **Deterministic**: control exactly what the LLM "says" for each request
+ - **Per-request failure modes**: simulate timeouts, rate limits, malformed JSON, and more
+ - **No app changes**: no monkey-patching, no client wrapping, no test doubles
+ - **Fast CI**: tests run at local speed, not API speed
+ - **OpenAI + Anthropic**: supports `POST /v1/chat/completions` and `POST /v1/messages`
+
+ Not sure if MockOpenAI is right for your project? See [When not to use MockOpenAI](https://grymoire7.github.io/mockopenai/when-not-to-use/).
+
+ ---
+
+ ## Installation
+
+ ```ruby
+ # Gemfile
+ group :test do
+   gem "mock_openai"
+ end
+ ```
+
+ ```sh
+ bundle install
+ ```
+
+ ---
+
+ ## Quick Start
+
+ **RSpec:**
+
+ ```ruby
+ # spec/rails_helper.rb
+ require "mock_openai/rspec"
+
+ # If your code makes real HTTP connections to the LLM API (CLI tools,
+ # integration tests, background jobs), start the server once here:
+ MockOpenAI.start_test_server!
+
+ RubyLLM.configure do |config|
+   config.anthropic_api_base = MockOpenAI.server_url
+ end
+ ```
+
+ ```ruby
+ it "returns a canned response", :mock_openai do
+   MockOpenAI.set_responses([{ match: "Hello", response: "Hi!" }])
+   expect(MyService.call_openai("Hello")).to eq("Hi!")
+ end
+ ```
+
+ The `:mock_openai` tag wires everything up and resets state between tests automatically. `start_test_server!` is idempotent and blocks until the server is ready.
+
+ **Minitest:**
+
+ ```ruby
+ # test/test_helper.rb
+ require "mock_openai/minitest"
+
+ # If your code makes real HTTP connections to the LLM API (CLI tools,
+ # integration tests, background jobs), start the server once here:
+ MockOpenAI.start_test_server!
+
+ RubyLLM.configure do |config|
+   config.anthropic_api_base = MockOpenAI.server_url
+ end
+ ```
+
+ ```ruby
+ class MyChatTest < Minitest::Test
+   include MockOpenAI::Minitest
+
+   def test_returns_canned_response
+     MockOpenAI.set_responses([{ match: "Hello", response: "Hi!" }])
+     assert_equal "Hi!", MyService.call_openai("Hello")
+   end
+ end
+ ```
+
+ `MockOpenAI::Minitest` hooks into `before_setup` and `after_teardown` to reset state automatically. `start_test_server!` is idempotent and blocks until the server is ready.
+
+ ---
+
+ ## Documentation
+
+ Full documentation is available at **[grymoire7.github.io/mockopenai](https://grymoire7.github.io/mockopenai)**:
+
+ - [Getting Started](https://grymoire7.github.io/mockopenai/getting-started/): installation, setup, first test
+ - [Usage](https://grymoire7.github.io/mockopenai/usage/): in-process vs. standalone server modes
+ - [Examples](https://grymoire7.github.io/mockopenai/examples/): multi-step conversations, failure modes, templates
+ - [Reference](https://grymoire7.github.io/mockopenai/reference/): full API, RSpec tags, CLI, and configuration
+
+ ---
+
+ ## Contributing
+
+ PRs welcome. Open an issue to discuss new failure modes, matchers, or integrations.
+
+ ---
+
+ ## License
+
+ MIT
data/bin/mock-openai ADDED
@@ -0,0 +1,6 @@
+ #!/usr/bin/env ruby
+ # frozen_string_literal: true
+
+ require_relative "../lib/mock_openai"
+
+ MockOpenAI::CLI.run(ARGV)
@@ -0,0 +1,20 @@
+ # frozen_string_literal: true
+
+ require "securerandom"
+
+ module MockOpenAI
+   class AnthropicResponseBuilder
+     def self.build(content:, model: "mock-claude-3")
+       {
+         "id" => "mock-msg-#{SecureRandom.hex(8)}",
+         "type" => "message",
+         "role" => "assistant",
+         "content" => [{"type" => "text", "text" => content}],
+         "model" => model,
+         "stop_reason" => "end_turn",
+         "stop_sequence" => nil,
+         "usage" => {"input_tokens" => 0, "output_tokens" => 0}
+       }
+     end
+   end
+ end
@@ -0,0 +1,54 @@
+ # lib/mock_openai/cli.rb
+ # frozen_string_literal: true
+
+ require "optparse"
+
+ module MockOpenAI
+   class CLI
+     SAMPLE_CONFIG = <<~YAML
+       # MockOpenAI configuration
+       # port: 4000
+       # timeout_seconds: 5
+       # default_response: "Mock response from MockOpenAI"
+     YAML
+
+     def self.run(argv = ARGV)
+       command = argv.first
+       case command
+       when "start"
+         port = 4000
+         OptionParser.new do |opts|
+           opts.on("--port=PORT", Integer) { |p| port = p }
+         end.parse!(argv[1..])
+         Server.start(port: port)
+       when "init"
+         path = config_file_path
+         if File.exist?(path)
+           puts "mock_openai.yml already exists at #{path}"
+         else
+           File.write(path, SAMPLE_CONFIG)
+           puts "Created #{path}"
+         end
+       when "check"
+         cfg = MockOpenAI.config
+         puts "MockOpenAI configuration:"
+         puts "  port: #{cfg.port}"
+         puts "  timeout_seconds: #{cfg.timeout_seconds}"
+         puts "  default_response: #{cfg.default_response}"
+         puts "  state_file: #{cfg.state_file}"
+       else
+         puts "Usage: mock-openai <command> [options]"
+         puts ""
+         puts "Commands:"
+         puts "  start [--port=N]   Start the mock server (default port: 4000)"
+         puts "  init               Create a sample mock_openai.yml"
+         puts "  check              Show resolved configuration"
+         exit(1)
+       end
+     end
+
+     def self.config_file_path
+       "mock_openai.yml"
+     end
+   end
+ end
@@ -0,0 +1,30 @@
+ # frozen_string_literal: true
+
+ require "yaml"
+
+ module MockOpenAI
+   class Config
+     DEFAULTS = {
+       "port" => 4000,
+       "timeout_seconds" => 5,
+       "default_response" => "Mock response from MockOpenAI",
+       "state_file" => "tmp/mock_openai_state.json"
+     }.freeze
+
+     attr_reader :port, :timeout_seconds, :default_response, :state_file
+
+     def self.load(path = "mock_openai.yml")
+       file_config = File.exist?(path) ? YAML.safe_load_file(path) || {} : {}
+       merged = DEFAULTS.merge(file_config.transform_keys(&:to_s))
+       new(**merged.transform_keys(&:to_sym))
+     end
+
+     def initialize(port: DEFAULTS["port"], timeout_seconds: DEFAULTS["timeout_seconds"],
+       default_response: DEFAULTS["default_response"], state_file: DEFAULTS["state_file"])
+       @port = port
+       @timeout_seconds = timeout_seconds
+       @default_response = default_response
+       @state_file = state_file
+     end
+   end
+ end
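`Config.load` above merges file values over `DEFAULTS` after normalizing keys to strings, so a config file only needs to name the settings it overrides. As a standalone sketch (the YAML string here is illustrative, not from the gem):

```ruby
require "yaml"

# Defaults mirror MockOpenAI::Config::DEFAULTS from the file above.
DEFAULTS = {
  "port" => 4000,
  "timeout_seconds" => 5,
  "default_response" => "Mock response from MockOpenAI"
}.freeze

# A user config overrides only the keys it names; keys are
# normalized to strings before merging, as in Config.load.
file_config = YAML.safe_load("port: 5001\ndefault_response: hi") || {}
merged = DEFAULTS.merge(file_config.transform_keys(&:to_s))
```

Unnamed keys fall through to the defaults, so a partial `mock_openai.yml` is always safe.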
@@ -0,0 +1,19 @@
+ # frozen_string_literal: true
+
+ module MockOpenAI
+   module FailureModes
+     REGISTRY = {} # populated by each subclass on require
+
+     def self.apply(mode, request:, response:)
+       key = mode.to_s
+       klass = REGISTRY.fetch(key) { raise ArgumentError, "Unknown failure mode: #{mode}" }
+       klass.new.apply(request: request, response: response)
+     end
+
+     class Base
+       def apply(request:, response:)
+         raise NotImplementedError, "#{self.class} must implement #apply"
+       end
+     end
+   end
+ end
@@ -0,0 +1,19 @@
+ # frozen_string_literal: true
+
+ module MockOpenAI
+   module FailureModes
+     class InternalError < Base
+       def apply(request:, response:)
+         body = {
+           "error" => {
+             "type" => "server_error",
+             "message" => "Internal server error"
+           }
+         }
+         [500, {"Content-Type" => "application/json"}, [body.to_json]]
+       end
+     end
+
+     REGISTRY["internal_error"] = InternalError
+   end
+ end
@@ -0,0 +1,13 @@
+ # frozen_string_literal: true
+
+ module MockOpenAI
+   module FailureModes
+     class MalformedJson < Base
+       def apply(request:, response:)
+         [200, {"Content-Type" => "application/json"}, ['{ "choices": [ ']]
+       end
+     end
+
+     REGISTRY["malformed_json"] = MalformedJson
+   end
+ end
@@ -0,0 +1,20 @@
+ # frozen_string_literal: true
+
+ module MockOpenAI
+   module FailureModes
+     class RateLimit < Base
+       def apply(request:, response:)
+         body = {
+           "error" => {
+             "type" => "rate_limit_error",
+             "message" => "Rate limit exceeded",
+             "code" => "rate_limit_exceeded"
+           }
+         }
+         [429, {"Content-Type" => "application/json"}, [body.to_json]]
+       end
+     end
+
+     REGISTRY["rate_limit"] = RateLimit
+   end
+ end
@@ -0,0 +1,13 @@
+ # frozen_string_literal: true
+
+ module MockOpenAI
+   module FailureModes
+     class Timeout < Base
+       def apply(request:, response:)
+         :timeout
+       end
+     end
+
+     REGISTRY["timeout"] = Timeout
+   end
+ end
@@ -0,0 +1,13 @@
+ # frozen_string_literal: true
+
+ module MockOpenAI
+   module FailureModes
+     class TruncatedStream < Base
+       def apply(request:, response:)
+         :stream_truncated
+       end
+     end
+
+     REGISTRY["truncated_stream"] = TruncatedStream
+   end
+ end
@@ -0,0 +1,100 @@
+ # frozen_string_literal: true
+
+ module MockOpenAI
+   module Handlers
+     class Base
+       JSON_HEADERS = {"Content-Type" => "application/json"}.freeze
+
+       def json_headers
+         JSON_HEADERS.dup
+       end
+
+       def call(env)
+         request = Rack::Request.new(env)
+         parsed = parse_json_body(request.body.read)
+         return error_response(400, "invalid_request_error", "Request body must be valid JSON") unless parsed
+
+         request_context = parse_request(parsed)
+         state = State.read
+         rule = Matcher.match(state["rules"] || [], request_context[:last_user_message].to_s)
+
+         result = resolve_content(rule, state, request_context)
+         log_request(env, rule ? (state["rules"] || []).index(rule) : nil, rule&.dig("failure_mode"))
+         result
+       end
+
+       private
+
+       # Template methods: subclasses must implement all three
+
+       def parse_request(body)
+         raise NotImplementedError, "#{self.class} must implement #parse_request"
+       end
+
+       def build_success_response(content, model)
+         raise NotImplementedError, "#{self.class} must implement #build_success_response"
+       end
+
+       def apply_failure_mode(mode, request_context)
+         raise NotImplementedError, "#{self.class} must implement #apply_failure_mode"
+       end
+
+       protected
+
+       def extract_text_content(content)
+         if content.is_a?(Array)
+           content.select { |b| b["type"] == "text" }.map { |b| b["text"] }.join
+         else
+           content.to_s
+         end
+       end
+
+       private
+
+       def resolve_content(rule, state, request_context)
+         model = request_context[:model]
+
+         if rule
+           if rule["failure_mode"]
+             return apply_failure_mode(rule["failure_mode"], request_context)
+           elsif rule["response"]
+             return build_success_response(rule["response"], model)
+           elsif rule["template"]
+             return build_success_response(
+               TemplateRenderer.render(rule["template"], request_context), model
+             )
+           end
+         end
+
+         if state["response_template"]
+           return build_success_response(
+             TemplateRenderer.render(state["response_template"], request_context), model
+           )
+         end
+
+         fallback = state["default_response"] || MockOpenAI.config.default_response
+         build_success_response(fallback, model)
+       end
+
+       def parse_json_body(body)
+         JSON.parse(body)
+       rescue JSON::ParserError
+         nil
+       end
+
+       def error_response(status, type, message)
+         body = {"error" => {"type" => type, "message" => message}}
+         [status, json_headers, [body.to_json]]
+       end
+
+       def log_request(env, rule_index, failure_mode)
+         return unless MockOpenAI.verbose?
+         method = env["REQUEST_METHOD"]
+         path = env["PATH_INFO"]
+         rule_label = rule_index.nil? ? "none" : rule_index.to_s
+         mode_label = failure_mode.nil? ? "none" : failure_mode.to_s
+         puts "[MockOpenAI] #{method} #{path} | rule=#{rule_label} | mode=#{mode_label}"
+       end
+     end
+   end
+ end
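`resolve_content` applies a fixed precedence: a matched rule's `failure_mode` wins, then the rule's `response`, then the rule's `template`, then the state-wide `response_template`, and finally the default response. That ordering can be sketched in isolation (the hash shapes mirror the state structure; the `resolution` helper is illustrative, not part of the gem):

```ruby
# Returns a label for the branch resolve_content would take,
# given a matched rule (or nil) and the current state hash.
def resolution(rule, state)
  if rule
    return :failure_mode  if rule["failure_mode"]
    return :rule_response if rule["response"]
    return :rule_template if rule["template"]
  end
  return :state_template if state["response_template"]
  :default
end
```

A rule's failure mode therefore always overrides its canned response, and a state-wide template only applies when no rule matched (or the matched rule specified nothing).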
@@ -0,0 +1,45 @@
+ # frozen_string_literal: true
+
+ module MockOpenAI
+   module Handlers
+     class ChatCompletions < Base
+       private
+
+       def parse_request(body)
+         messages = body["messages"] || []
+         last_user = messages.reverse.find { |m| m["role"] == "user" }
+         system_msg = messages.find { |m| m["role"] == "system" }
+         {
+           last_user_message: extract_text_content(last_user&.dig("content")),
+           system_message: extract_text_content(system_msg&.dig("content")),
+           model: body["model"] || "mock-gpt-4"
+         }
+       end
+
+       def build_success_response(content, model)
+         response = ResponseBuilder.build(content: content, model: model)
+         [200, json_headers, [response.to_json]]
+       end
+
+       def apply_failure_mode(mode, request_context)
+         result = FailureModes.apply(mode, request: {}, response: {})
+         return handle_symbol_result(result) if result.is_a?(Symbol)
+         result
+       end
+
+       def handle_symbol_result(symbol)
+         case symbol
+         when :timeout
+           sleep(MockOpenAI.config.timeout_seconds)
+           [200, json_headers, [{"choices" => []}.to_json]]
+         when :stream_truncated
+           chunks = [
+             "data: {\"choices\":[{\"delta\":{\"content\":\"Hello\"}}]}\n\n",
+             "data: {\"choices\":[{\"delta\":{\"content\":\" world\"}}]}\n\n"
+           ]
+           [200, {"Content-Type" => "text/event-stream"}, chunks]
+         end
+       end
+     end
+   end
+ end
@@ -0,0 +1,45 @@
+ # frozen_string_literal: true
+
+ module MockOpenAI
+   module Handlers
+     class Messages < Base
+       private
+
+       def parse_request(body)
+         messages = body["messages"] || []
+         last_user = messages.reverse.find { |m| m["role"] == "user" }
+         {
+           last_user_message: extract_text_content(last_user&.dig("content")),
+           system_message: body["system"].to_s,
+           model: body["model"] || "mock-claude-3"
+         }
+       end
+
+       def build_success_response(content, model)
+         response = AnthropicResponseBuilder.build(content: content, model: model)
+         [200, json_headers, [response.to_json]]
+       end
+
+       def apply_failure_mode(mode, request_context)
+         result = FailureModes.apply(mode, request: {}, response: {})
+         return handle_symbol_result(result, request_context) if result.is_a?(Symbol)
+         result
+       end
+
+       def handle_symbol_result(symbol, request_context)
+         case symbol
+         when :timeout
+           sleep(MockOpenAI.config.timeout_seconds)
+           error_body = {"type" => "error", "error" => {"type" => "overloaded_error", "message" => "Overloaded"}}
+           [200, json_headers, [error_body.to_json]]
+         when :stream_truncated
+           chunks = [
+             "data: {\"type\":\"content_block_delta\",\"delta\":{\"type\":\"text_delta\",\"text\":\"Hello\"}}\n\n",
+             "data: {\"type\":\"content_block_delta\",\"delta\":{\"type\":\"text_delta\",\"text\":\" world\"}}\n\n"
+           ]
+           [200, {"Content-Type" => "text/event-stream"}, chunks]
+         end
+       end
+     end
+   end
+ end
@@ -0,0 +1,40 @@
+ # frozen_string_literal: true
+
+ module MockOpenAI
+   class Matcher
+     def self.match(rules, last_user_message)
+       rules.each do |rule|
+         return rule if matches?(rule["match"], last_user_message)
+       end
+       nil
+     end
+
+     def self.matches?(pattern, text)
+       return false if pattern.nil?
+
+       # 1. Try regex match if pattern looks like a regex
+       if regex_like?(pattern)
+         begin
+           return true if Regexp.new(pattern).match?(text)
+         rescue RegexpError
+           puts "[MockOpenAI] Warning: invalid regex '#{pattern}', falling back to substring match"
+           # Fall through to substring match
+         end
+       end
+
+       # 2. Exact match
+       return true if pattern == text
+
+       # 3. Substring match
+       text.include?(pattern)
+     end
+
+     def self.regex_like?(pattern)
+       pattern.start_with?("^") ||
+         pattern.end_with?("$") ||
+         pattern.include?(".*") ||
+         pattern.include?("(") ||
+         pattern.include?("[")
+     end
+   end
+ end
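Extracted standalone, the matching order above (regex when the pattern looks like one, then exact match, then substring) behaves like this sketch (same logic, minus the warning output):

```ruby
# Heuristic from Matcher.regex_like?: treat patterns with regex
# metacharacters as regexes.
def regex_like?(pattern)
  pattern.start_with?("^") || pattern.end_with?("$") ||
    pattern.include?(".*") || pattern.include?("(") || pattern.include?("[")
end

# Mirror of Matcher.matches?: regex attempt, then exact, then substring.
def matches?(pattern, text)
  return false if pattern.nil?
  if regex_like?(pattern)
    begin
      return true if Regexp.new(pattern).match?(text)
    rescue RegexpError
      # invalid regex: fall through to substring match
    end
  end
  return true if pattern == text
  text.include?(pattern)
end
```

Note the fallback: a pattern like `"[invalid"` looks regex-like but fails to compile, so it is matched as a plain substring instead of raising.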
@@ -0,0 +1,18 @@
+ # lib/mock_openai/minitest.rb
+ # frozen_string_literal: true
+
+ require "mock_openai"
+
+ module MockOpenAI
+   module Minitest
+     def before_setup
+       super
+       MockOpenAI.reset!
+     end
+
+     def after_teardown
+       MockOpenAI.reset!
+       super
+     end
+   end
+ end
@@ -0,0 +1,24 @@
+ # frozen_string_literal: true
+
+ require "securerandom"
+
+ module MockOpenAI
+   class ResponseBuilder
+     def self.build(content:, model: "mock-gpt-4")
+       {
+         "id" => "mock-chatcmpl-#{SecureRandom.hex(8)}",
+         "object" => "chat.completion",
+         "created" => Time.now.to_i,
+         "model" => model,
+         "choices" => [
+           {
+             "index" => 0,
+             "message" => {"role" => "assistant", "content" => content},
+             "finish_reason" => "stop"
+           }
+         ],
+         "usage" => {"prompt_tokens" => 0, "completion_tokens" => 0, "total_tokens" => 0}
+       }
+     end
+   end
+ end
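Responses built this way have the OpenAI chat-completion envelope, so a client reads the text at `choices[0].message.content`. A standalone copy of `build` (duplicated here only for illustration) shows the shape a test client would see:

```ruby
require "securerandom"

# Standalone copy of ResponseBuilder.build from the file above.
def build(content:, model: "mock-gpt-4")
  {
    "id" => "mock-chatcmpl-#{SecureRandom.hex(8)}",
    "object" => "chat.completion",
    "created" => Time.now.to_i,
    "model" => model,
    "choices" => [
      {
        "index" => 0,
        "message" => {"role" => "assistant", "content" => content},
        "finish_reason" => "stop"
      }
    ],
    "usage" => {"prompt_tokens" => 0, "completion_tokens" => 0, "total_tokens" => 0}
  }
end

response = build(content: "Hi!")
# Clients typically extract the reply like this:
reply = response.dig("choices", 0, "message", "content")
```

Token counts are always zero, which keeps assertions on usage deterministic.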
@@ -0,0 +1,22 @@
+ # frozen_string_literal: true
+
+ module MockOpenAI
+   class Router
+     NOT_FOUND_BODY = {
+       "error" => {"type" => "invalid_request_error", "message" => "Not found"}
+     }.to_json.freeze
+
+     def call(env)
+       request = Rack::Request.new(env)
+
+       case [request.request_method, request.path_info]
+       in ["POST", "/v1/chat/completions"]
+         Handlers::ChatCompletions.new.call(env)
+       in ["POST", "/v1/messages"]
+         Handlers::Messages.new.call(env)
+       else
+         [404, {"Content-Type" => "application/json"}, [NOT_FOUND_BODY]]
+       end
+     end
+   end
+ end
@@ -0,0 +1,38 @@
+ # lib/mock_openai/rspec/metadata.rb
+ # frozen_string_literal: true
+
+ module MockOpenAI
+   module RSpec
+     module Metadata
+       FAILURE_MODE_TAGS = {
+         mock_openai_timeout: :timeout,
+         mock_openai_rate_limit: :rate_limit,
+         mock_openai_malformed_json: :malformed_json,
+         mock_openai_internal_error: :internal_error,
+         mock_openai_truncated_stream: :truncated_stream
+       }.freeze
+     end
+   end
+ end
+
+ ::RSpec.configure do |config|
+   # Generic tag: reset state before test
+   config.before(:each, :mock_openai) do
+     MockOpenAI.reset!
+   end
+
+   # Shortcut failure mode tags: reset then set mode
+   MockOpenAI::RSpec::Metadata::FAILURE_MODE_TAGS.each do |tag, mode|
+     config.before(:each, tag) do
+       MockOpenAI.reset!
+       MockOpenAI.set_failure_mode(mode)
+     end
+   end
+
+   # Reset after any mock_openai-tagged test
+   config.after(:each) do |example|
+     if example.metadata.keys.any? { |k| k.to_s.start_with?("mock_openai") }
+       MockOpenAI.reset!
+     end
+   end
+ end
@@ -0,0 +1,5 @@
+ # lib/mock_openai/rspec.rb
+ # frozen_string_literal: true
+
+ require "mock_openai"
+ require "mock_openai/rspec/metadata"
@@ -0,0 +1,50 @@
+ # frozen_string_literal: true
+
+ require "fileutils"
+ require "logger"
+ require "socket"
+
+ module MockOpenAI
+   class Server
+     def self.start(port: MockOpenAI.config.port)
+       state_file = MockOpenAI.config.state_file
+       FileUtils.mkdir_p(File.dirname(state_file))
+       State.reset! unless File.exist?(state_file)
+
+       if MockOpenAI.verbose?
+         puts "MockOpenAI v#{VERSION} started"
+         puts "  Listening on: http://localhost:#{port}"
+         puts "  State file:   #{state_file}"
+
+         config_status = File.exist?("mock_openai.yml") ? "mock_openai.yml" : "mock_openai.yml (not found, using defaults)"
+         puts "  Config:       #{config_status}"
+       end
+
+       run_rack_server(port: port)
+     end
+
+     def self.run_rack_server(port: MockOpenAI.config.port)
+       require "rackup"
+       logger = MockOpenAI.verbose? ? Logger.new($stdout) : Logger.new(IO::NULL)
+       Rackup::Server.start(
+         app: Router.new,
+         Port: port,
+         Host: "127.0.0.1",
+         server: :webrick,
+         Logger: logger,
+         AccessLog: []
+       )
+     end
+
+     def self.wait_until_ready(timeout: 5)
+       deadline = Time.now + timeout
+       loop do
+         TCPSocket.new("127.0.0.1", MockOpenAI.config.port).close
+         return
+       rescue Errno::ECONNREFUSED
+         raise "MockOpenAI server did not start within #{timeout}s" if Time.now > deadline
+         sleep 0.05
+       end
+     end
+   end
+ end
@@ -0,0 +1,43 @@
+ # frozen_string_literal: true
+
+ require "fileutils"
+ require "json"
+
+ module MockOpenAI
+   class State
+     EMPTY = {
+       "rules" => [],
+       "response_template" => nil,
+       "default_response" => nil,
+       "metadata" => {}
+     }.freeze
+
+     def self.state_file
+       MockOpenAI.config.state_file
+     end
+
+     def self.write(rules:, response_template: nil, default_response: nil)
+       FileUtils.mkdir_p(File.dirname(state_file))
+       File.write(state_file, {
+         "rules" => rules,
+         "response_template" => response_template,
+         "default_response" => default_response,
+         "metadata" => {}
+       }.to_json)
+     end
+
+     def self.read
+       return EMPTY unless File.exist?(state_file)
+
+       JSON.parse(File.read(state_file))
+     rescue JSON::ParserError
+       puts "[MockOpenAI] Warning: state file is corrupt, using empty state"
+       EMPTY
+     end
+
+     def self.reset!
+       FileUtils.mkdir_p(File.dirname(state_file))
+       File.write(state_file, EMPTY.to_json)
+     end
+   end
+ end
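State round-trips through a JSON file on disk, which is what lets a standalone server process and an in-process test harness share rules. In isolation, the write/read cycle with the corrupt-file fallback looks like this sketch (`read_state` is an illustrative stand-in for `State.read`, using a temp directory instead of the configured state file):

```ruby
require "json"
require "tmpdir"

EMPTY = {"rules" => []}.freeze

# Mirror of State.read: parse the file, fall back to empty state
# if the file is missing or corrupt.
def read_state(path)
  return EMPTY unless File.exist?(path)
  JSON.parse(File.read(path))
rescue JSON::ParserError
  EMPTY
end

roundtrip = nil
fallback = nil
Dir.mktmpdir do |dir|
  path = File.join(dir, "state.json")

  # A valid state file round-trips through JSON unchanged.
  File.write(path, {"rules" => [{"match" => "Hello", "response" => "Hi!"}]}.to_json)
  roundtrip = read_state(path)

  # A corrupt file falls back to the empty state instead of raising.
  File.write(path, "{ not json")
  fallback = read_state(path)
end
```

The fallback matters in practice: a half-written state file from a crashed test run degrades to default responses rather than breaking every subsequent request.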
@@ -0,0 +1,17 @@
+ # frozen_string_literal: true
+
+ module MockOpenAI
+   class TemplateRenderer
+     VARIABLES = {
+       "{{last_user_message}}" => ->(ctx) { ctx[:last_user_message].to_s },
+       "{{system_message}}" => ->(ctx) { ctx[:system_message].to_s },
+       "{{model}}" => ->(ctx) { ctx[:model].to_s }
+     }.freeze
+
+     def self.render(template, context)
+       VARIABLES.reduce(template) do |result, (placeholder, extractor)|
+         result.gsub(placeholder, extractor.call(context))
+       end
+     end
+   end
+ end
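Extracted standalone, rendering substitutes each of the three known placeholders and leaves everything else untouched; a placeholder whose context value is missing becomes an empty string. The template string in this sketch is illustrative:

```ruby
# Standalone sketch of the placeholder substitution above.
VARIABLES = {
  "{{last_user_message}}" => ->(ctx) { ctx[:last_user_message].to_s },
  "{{system_message}}" => ->(ctx) { ctx[:system_message].to_s },
  "{{model}}" => ->(ctx) { ctx[:model].to_s }
}.freeze

def render(template, context)
  VARIABLES.reduce(template) do |result, (placeholder, extractor)|
    result.gsub(placeholder, extractor.call(context))
  end
end

rendered = render("You said: {{last_user_message}} ({{model}})",
  last_user_message: "Hello", model: "mock-gpt-4")
```

Because substitution is a plain `gsub` over a fixed map, templates are deterministic and need no escaping rules beyond avoiding the three placeholder strings in literal text.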
@@ -0,0 +1,5 @@
+ # frozen_string_literal: true
+
+ module MockOpenAI
+   VERSION = "0.1.0"
+ end
@@ -0,0 +1,75 @@
+ # frozen_string_literal: true
+
+ require "json"
+ require "rack"
+ require "socket"
+ require_relative "mock_openai/version"
+ require_relative "mock_openai/config"
+ require_relative "mock_openai/state"
+ require_relative "mock_openai/matcher"
+ require_relative "mock_openai/response_builder"
+ require_relative "mock_openai/anthropic_response_builder"
+ require_relative "mock_openai/template_renderer"
+ require_relative "mock_openai/failure_modes/base"
+ require_relative "mock_openai/failure_modes/timeout"
+ require_relative "mock_openai/failure_modes/rate_limit"
+ require_relative "mock_openai/failure_modes/malformed_json"
+ require_relative "mock_openai/failure_modes/internal_error"
+ require_relative "mock_openai/failure_modes/truncated_stream"
+ require_relative "mock_openai/handlers/base"
+ require_relative "mock_openai/handlers/chat_completions"
+ require_relative "mock_openai/handlers/messages"
+ require_relative "mock_openai/router"
+ require_relative "mock_openai/server"
+ require_relative "mock_openai/cli"
+
+ module MockOpenAI
+   @verbose = true
+
+   class << self
+     def config
+       @config ||= Config.load
+     end
+
+     def verbose?
+       @verbose
+     end
+
+     def start_test_server!
+       return if server_reachable?
+       @verbose = false
+       Thread.new { Server.start }
+       Server.wait_until_ready
+     end
+
+     def server_url
+       "http://127.0.0.1:#{config.port}"
+     end
+
+     def set_responses(rules)
+       State.write(rules: rules.map { |r| r.transform_keys(&:to_s) })
+     end
+
+     def set_failure_mode(mode)
+       set_responses([{"match" => ".*", "failure_mode" => mode.to_s}])
+     end
+
+     def reset!
+       State.reset!
+     end
+
+     def current_failure_mode
+       state = State.read
+       catch_all = state["rules"].find { |r| r["match"] == ".*" && r["failure_mode"] }
+       catch_all&.dig("failure_mode")&.to_sym
+     end
+
+     private
+
+     def server_reachable?
+       TCPSocket.new("127.0.0.1", config.port).close
+       true
+     rescue Errno::ECONNREFUSED
+       false
+     end
+   end
+ end
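`set_responses` normalizes symbol keys to strings before writing state, which is why the README's `{ match: "Hello", response: "Hi!" }` form and an explicit string-keyed hash both work. The normalization on its own:

```ruby
# Key normalization from MockOpenAI.set_responses: symbol-keyed
# rules are converted to the string keys the state file uses.
rules = [{match: "Hello", response: "Hi!"}]
normalized = rules.map { |r| r.transform_keys(&:to_s) }
```

`set_failure_mode` builds on the same path, writing a single catch-all rule (`"match" => ".*"`) carrying the failure mode.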
metadata ADDED
@@ -0,0 +1,165 @@
+ --- !ruby/object:Gem::Specification
+ name: mock_openai
+ version: !ruby/object:Gem::Version
+   version: 0.1.0
+ platform: ruby
+ authors:
+ - Tracy Atteberry
+ bindir: bin
+ cert_chain: []
+ date: 1980-01-02 00:00:00.000000000 Z
+ dependencies:
+ - !ruby/object:Gem::Dependency
+   name: rack
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '3.0'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '3.0'
+ - !ruby/object:Gem::Dependency
+   name: rackup
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '2.0'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '2.0'
+ - !ruby/object:Gem::Dependency
+   name: logger
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '1.0'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '1.0'
+ - !ruby/object:Gem::Dependency
+   name: webrick
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '1.8'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '1.8'
+ - !ruby/object:Gem::Dependency
+   name: rspec
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '3.0'
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '3.0'
+ - !ruby/object:Gem::Dependency
+   name: rack-test
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '2.0'
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '2.0'
+ - !ruby/object:Gem::Dependency
+   name: standard
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '1.0'
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '1.0'
+ description: Drop-in mock server for testing Rails apps that use OpenAI-compatible
+   APIs. Provides deterministic responses and per-request failure simulation.
+ executables:
+ - mock-openai
+ extensions: []
+ extra_rdoc_files: []
+ files:
+ - README.md
+ - bin/mock-openai
+ - lib/mock_openai.rb
+ - lib/mock_openai/anthropic_response_builder.rb
+ - lib/mock_openai/cli.rb
+ - lib/mock_openai/config.rb
+ - lib/mock_openai/failure_modes/base.rb
+ - lib/mock_openai/failure_modes/internal_error.rb
+ - lib/mock_openai/failure_modes/malformed_json.rb
+ - lib/mock_openai/failure_modes/rate_limit.rb
+ - lib/mock_openai/failure_modes/timeout.rb
+ - lib/mock_openai/failure_modes/truncated_stream.rb
+ - lib/mock_openai/handlers/base.rb
+ - lib/mock_openai/handlers/chat_completions.rb
+ - lib/mock_openai/handlers/messages.rb
+ - lib/mock_openai/matcher.rb
+ - lib/mock_openai/minitest.rb
+ - lib/mock_openai/response_builder.rb
+ - lib/mock_openai/router.rb
+ - lib/mock_openai/rspec.rb
+ - lib/mock_openai/rspec/metadata.rb
+ - lib/mock_openai/server.rb
+ - lib/mock_openai/state.rb
+ - lib/mock_openai/template_renderer.rb
+ - lib/mock_openai/version.rb
+ homepage: https://github.com/grymoire7/mockopenai
+ licenses:
+ - MIT
+ metadata:
+   documentation_uri: https://grymoire7.github.io/mockopenai
+   homepage_uri: https://github.com/grymoire7/mockopenai
+ rdoc_options: []
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '3.0'
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ requirements: []
+ rubygems_version: 4.0.4
+ specification_version: 4
+ summary: A local mock server for OpenAI-compatible APIs
+ test_files: []