hyrum 0.0.2 → 0.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 2c973fd6cbfc5ee63d43e581024c7fc9720501d6e544aa90f524fc6ce8e3b641
- data.tar.gz: a080770b267980f238d35e0d7ea86613ad37b306075a887d8ec83eaf52cd19db
+ metadata.gz: 51ba8be674113de6da2792d5d7489a08a03d3ef0e61976c19d2c90e51d9ad6d7
+ data.tar.gz: 52c4782d0863fab72e35c09e5514da71aa6186a5d5b64aa3223a2644b4da5142
  SHA512:
- metadata.gz: 95833489cffa0a13282d7c294e43346d6aff1aa762629f03a49b2b65dc663f3e9ec722861d4f2b5325fb5351ee3ad7444995f3981a43ee541fb9583f003579a8
- data.tar.gz: 7b198b8dce4536f38f1e37b07fa7dad5c83bbcd4121e5a9ff76d16029aba45f5b60ba012b8cb153b3c5e9601834b6853daf6beee109321a4c67a6efeb41ffde2
+ metadata.gz: 8d3aaed1301a484a1820a880d4178167710c684607f92f6ed5761ec8853baf7f46214c0c9c837f6db05aff2fc2882c3a048bfcc6298c396b09bb58f3a00c3ad4
+ data.tar.gz: '0160106939c1b07d0e8d8880d0f140c79aab70fbac80a2566aa22a24b8624e42ce99205570f17b9931e11ed1f2d9c58c5ad5b5a1ebf285f1f19e60d595ab002d'
data/CHANGELOG.md CHANGED
@@ -5,12 +5,94 @@ All notable changes to this project will be documented in this file.
  The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
  and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

- ## [0.0.1] - 2024-12-01
+ ## [Unreleased]
+
+ ### Breaking Changes
+
+ - **Dependency Migration**: Switched from `ruby-openai` gem to `ruby_llm` gem
+ - **Environment Variable Changes**:
+ - `OPENAI_ACCESS_TOKEN` → `OPENAI_API_KEY` (renamed)
+ - `OLLAMA_URL` → `OLLAMA_API_BASE` (renamed, must include `/v1` suffix)
+ - **Removed Features**:
+ - OpenAI organization ID configuration (can be added back if needed)

  ### Added

- - This CHANGELOG file
- - Initial version of the project
+ - Support for 10+ AI providers through ruby_llm:
+ - Anthropic Claude (anthropic)
+ - Google Gemini (gemini)
+ - Mistral AI (mistral)
+ - DeepSeek (deepseek)
+ - Perplexity (perplexity)
+ - OpenRouter (openrouter)
+ - Google Vertex AI (vertexai)
+ - AWS Bedrock (bedrock)
+ - GPUStack (gpustack)
+ - Improved error messages with provider-specific configuration guidance
+ - Structured JSON output via schemas for more reliable message parsing
+ - Better model defaults for each provider
+
+ ### Changed
+
+ - Renamed `OpenaiGenerator` to `AiGenerator` for implementation-agnostic naming
+ - Simplified generator implementation using ruby_llm's unified API
+ - Test suite now uses RubyLLM-level mocking instead of VCR HTTP cassettes
+ - **Optimized default models for cost-efficiency**:
+ - Anthropic: `claude-sonnet-4` → `claude-haiku-20250514` (~10x cheaper, perfect for simple text tasks)
+ - Bedrock: Sonnet → Haiku version (same cost savings)
+ - For simple message generation, cheaper models provide excellent results
+ - **Code Quality Improvements**:
+ - Extracted fake messages data to external JSON file (improved maintainability)
+ - Refactored API key environment variable mapping to use constant hash
+ - Reduced FakeGenerator from 298 to 36 lines
+ - Reduced AiGenerator cyclomatic complexity from 12 to 1
+ - All RuboCop warnings resolved
+
+ ### Removed
+
+ - VCR test cassettes (replaced with direct RubyLLM mocking)
+ - `OpenaiGenerator` class (replaced by `AiGenerator`)
+ - **Dependencies**: `vcr` and `webmock` gems (no longer needed with RubyLLM-level mocking)
+
+ ### Migration Guide
+
+ 1. Update environment variables:
+ ```bash
+ # Old
+ export OPENAI_ACCESS_TOKEN=sk-...
+ export OLLAMA_URL=http://localhost:11434
+
+ # New
+ export OPENAI_API_KEY=sk-...
+ export OLLAMA_API_BASE=http://localhost:11434/v1
+ ```
+
+ 2. Add new provider API keys as needed:
+ ```bash
+ export ANTHROPIC_API_KEY=sk-ant-...
+ export GEMINI_API_KEY=...
+ ```
+
+ 3. Update your Gemfile and run `bundle install`:
+ ```ruby
+ gem 'hyrum' # latest version
+ ```
+
+ For detailed migration instructions, see the [README](README.md#configuration).
+
+ ## [0.1.0] - 2024-12-16
+
+ ### Fixed
+
+ - Minor bug fixes and spec updates
+ - Option defaults added in help where appropriate
+
+ ### Added
+
+ - New -n | --number option to specify number of messages to produce
+ - New ScriptOption specs
+ - New MessageGenerator specs
+ - New FakeGenerator specs

  ## [0.0.2] - 2024-12-06

@@ -22,3 +104,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
  ### Added

  - New OpenAI generator specs
+
+ ## [0.0.1] - 2024-12-01
+
+ ### Added
+
+ - This CHANGELOG file
+ - Initial version of the project
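The changelog above mentions refactoring the API key environment variable mapping into a constant hash. A minimal sketch of that pattern, using the variable names documented in the README; the module, constant, and method names are assumptions, not the gem's actual identifiers:

```ruby
# Hypothetical illustration of a service-to-env-var constant hash.
module ApiKeyLookup
  API_KEY_ENV_VARS = {
    openai:     'OPENAI_API_KEY',
    anthropic:  'ANTHROPIC_API_KEY',
    gemini:     'GEMINI_API_KEY',
    mistral:    'MISTRAL_API_KEY',
    deepseek:   'DEEPSEEK_API_KEY',
    perplexity: 'PERPLEXITY_API_KEY',
    openrouter: 'OPENROUTER_API_KEY'
  }.freeze

  # Fail early with a provider-specific hint instead of branching per provider.
  def self.fetch!(service)
    env_var = API_KEY_ENV_VARS.fetch(service) { raise ArgumentError, "Unknown service: #{service}" }
    ENV.fetch(env_var) { raise KeyError, "Set #{env_var} to use the #{service} service" }
  end
end
```

A flat lookup table like this replaces per-provider conditionals, which is consistent with the changelog's note about the drop in cyclomatic complexity.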
data/README.md CHANGED
@@ -26,11 +26,26 @@ no longer static, improving your api design.

  ## Example

+ Generate coffee brewing error messages with OpenAI:
+
+ ```bash
+ hyrum --service openai \
+ --message "The server refuses the attempt to brew coffee with a teapot" \
+ --key e418 \
+ --format ruby
+ ```
+
+ Or use Anthropic's Claude:
+
  ```bash
- hyrum --service openai --key e418 --format ruby \
- --message "The server refuses the attempt to brew coffee with a teapot"
+ hyrum --service anthropic \
+ --message "The server refuses the attempt to brew coffee with a teapot" \
+ --key e418 \
+ --format ruby
  ```

+ Output:
+
  ```ruby
  # frozen_string_literal: true

@@ -58,19 +73,83 @@ end
  ## Usage

  ```
- ❯ hyrum --help # OR from the repo as `./exe/hyrum --help`
+ ❯ hyrum --help # OR from the repo as `./bin/hyrum --help`
  Usage: hyrum [options]
  -v, --[no-]verbose Run verbosely
  -f, --format FORMAT Output format. Supported formats are:
  ruby, javascript, python, java, text, json
- -m, --message MESSAGE Status message
- -k, --key KEY Message key
- -s, --service SERVICE AI service: one of openai, ollama, fake
+ (default: text)
+ -m, --message MESSAGE Status message (required unless fake)
+ -k, --key KEY Message key (default: status)
+ -n, --number NUMBER Number of messages to generate (default: 5)
+ -s, --service SERVICE AI service: one of openai, anthropic, gemini, ollama,
+ mistral, deepseek, perplexity, openrouter, vertexai,
+ bedrock, gpustack, fake (default: fake)
  -d, --model MODEL AI model: must be a valid model for the selected service
+ --validate Enable quality validation (default: off)
+ --min-quality SCORE Minimum quality score 0-100 (default: 70)
+ --strict Fail on quality issues instead of warning (default: false)
+ --show-scores Include quality metrics in output (default: false)
  -h, --help Show this message
  --version Show version
  ```

+ ## Quality validation
+
+ Hyrum can validate the quality of generated message variations to ensure they
+ achieve Hyrum's Law goal: variations preserve the original message's meaning
+ while using different wording.
+
+ ### Basic validation
+
+ ```bash
+ hyrum --validate -s openai -m "Server error" -f ruby --show-scores
+ ```
+
+ This includes quality metrics as comments in the output:
+
+ ```ruby
+ # Quality Score: 82.5/100
+ # - Semantic similarity: 94.0% (variations preserve meaning)
+ # - Lexical diversity: 68.0% (variation in wording)
+
+ # frozen_string_literal: true
+ # ... rest of generated code
+ ```
+
+ ### Validation options
+
+ - `--validate` - Enable quality validation (default: off)
+ - `--min-quality SCORE` - Minimum acceptable quality score 0-100 (default: 70)
+ - `--strict` - Exit with error if quality check fails (default: warning only)
+ - `--show-scores` - Include quality metrics in output (default: false)
+
+ ### Strict mode for automated workflows
+
+ Use strict mode to enforce quality in automated workflows:
+
+ ```bash
+ hyrum --validate --strict --min-quality 75 -s openai -m "Error message" -f ruby
+ ```
+
+ This exits with a non-zero status code if quality is below 75.
+
+ ### How it works
+
+ The validator measures two key metrics:
+
+ 1. **Semantic Similarity** (≥85% required): Ensures each variation preserves the meaning of your original message
+ - Uses embedding models (OpenAI, Google, etc.) when available
+ - Falls back to word overlap heuristic if no embedding provider configured
+ 2. **Lexical Diversity** (30-70% required): Ensures variations use different words from each other
+
+ The overall quality score is a weighted combination of both metrics.
+
+ **Embedding Support**: Semantic similarity works best with embedding models.
+ Configure any embedding provider (OpenAI, Google, etc.) and Hyrum will use it
+ automatically. If no embedding provider is configured, validation still works
+ using a word overlap heuristic.
+
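As a rough illustration of the scoring described in that README section (not Hyrum's implementation), the fallback path can be approximated with a word overlap heuristic plus a weighted blend of the two metrics; the 70/30 weighting and every name below are assumptions for the sketch:

```ruby
# Jaccard-style word overlap between two strings, 0.0..1.0.
def word_overlap(a, b)
  wa = a.downcase.scan(/\w+/).uniq
  wb = b.downcase.scan(/\w+/).uniq
  return 0.0 if wa.empty? || wb.empty?
  (wa & wb).size.to_f / (wa | wb).size
end

def quality_score(original, variations)
  # Fallback semantic similarity: average overlap of each variation with the original.
  semantic = variations.sum { |v| word_overlap(original, v) } / variations.size
  # Lexical diversity: how different the variations are from each other, pairwise.
  pairs = variations.combination(2).to_a
  diversity = 1.0 - (pairs.sum { |a, b| word_overlap(a, b) } / pairs.size)
  (semantic * 0.7 + diversity * 0.3) * 100
end

puts quality_score(
  "Server error",
  ["The server hit an unexpected error",
   "Something went wrong on the server",
   "Internal failure, try again later"]
).round(1)
```

With an embedding provider configured, the overlap function would presumably be replaced by a similarity measure over embeddings, but the weighting idea stays the same.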
  ## Installation
  Install the gem and add to the application's Gemfile by executing:

@@ -86,31 +165,49 @@ if you want to see output quickly, you can use the `-s fake` option to use a fak
  service provider that will generate stock responses.

  ```bash
- hyrum -s fake -f ruby -m "anything here"
+ hyrum -s fake -f ruby -k "404" -n 5
  ```

  You don't even need to install the gem to use Hyrum, fake service provider or not.
  You can run the executable directly from a cloned repository.

  ```bash
- ./exec/hyrum -s fake -f ruby -m "anything here"
+ ./bin/hyrum -s fake -f ruby -m "anything here"
  ```

- ## Configruation
+ ## Configuration

  ### OpenAI (`--service openai`)
- Hyrum requires an OpenAI API token to access the language models. The API token should be
+ Hyrum requires an OpenAI API key to access the language models. The API key should be
  set as an environment variable as shown below.

  ```bash
- export OPENAI_ACCESS_TOKEN=your_open_ai_token
+ export OPENAI_API_KEY=your_openai_api_key
+ ```
+
+ If you specify the `openai` service but no model, Hyrum will use `gpt-4o-mini`.
+
+ ### Anthropic (`--service anthropic`)
+ To use Anthropic's Claude models, set your API key:
+
+ ```bash
+ export ANTHROPIC_API_KEY=your_anthropic_api_key
  ```

- If you specify the `openai` service but no model, Hyrum will use the `gpt-o4-mini`.
+ Default model: `claude-haiku-20250514`
+
+ ### Gemini (`--service gemini`)
+ To use Google's Gemini models, set your API key:
+
+ ```bash
+ export GEMINI_API_KEY=your_gemini_api_key
+ ```
+
+ Default model: `gemini-2.0-flash-exp`

  ### Ollama (`--service ollama`)
  If you specify the `ollama` service, Hyrum will attempt to use the Ollama API
- running at `http://localhost:11434`. You can set the `OLLAMA_URL` environment
+ running at `http://localhost:11434/v1`. You can set the `OLLAMA_API_BASE` environment
  variable to specify a different URL.

  Make sure your ollama server is running before using the `ollama` service.
@@ -122,9 +219,31 @@ ollama serve
  Use `ollama list` to see the available models. For more information on using
  ollama and downloading models, see the [ollama repository](http://ollama.com).

+ Default model: `llama3`
+
+ ### Other providers
+
+ Hyrum supports all providers available in [ruby_llm](https://github.com/crmne/ruby_llm),
+ including:
+ - **Mistral** (`--service mistral`) - Set `MISTRAL_API_KEY`
+ - **DeepSeek** (`--service deepseek`) - Set `DEEPSEEK_API_KEY`
+ - **Perplexity** (`--service perplexity`) - Set `PERPLEXITY_API_KEY`
+ - **OpenRouter** (`--service openrouter`) - Set `OPENROUTER_API_KEY`
+ - **Vertex AI** (`--service vertexai`) - Set `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION`
+ - **AWS Bedrock** (`--service bedrock`) - Uses AWS credential chain or set `AWS_ACCESS_KEY_ID`
+ - **GPUStack** (`--service gpustack`) - Set `GPUSTACK_API_BASE` and `GPUSTACK_API_KEY`
+
+ See the [ruby_llm configuration docs](https://ruby-llm.com/configuration) for detailed
+ setup instructions for each provider.
+
  ## Supported formats and AI services

- See [Usage](#usage) for a list of supported formats and AI services.
+ **Formats:** ruby, javascript, python, java, text, json
+
+ **AI Services:** openai, anthropic, gemini, ollama, mistral, deepseek, perplexity,
+ openrouter, vertexai, bedrock, gpustack, fake
+
+ See [Configuration](#configuration) for setup details for each service.

  ## Compatibility

data/bin/hyrum CHANGED
@@ -1,10 +1,9 @@
  #!/usr/bin/env ruby
  # frozen_string_literal: true

- $LOAD_PATH.unshift(File.dirname(__FILE__) + '/../lib')
+ $LOAD_PATH.unshift("#{File.dirname(__FILE__)}/../lib")

  require 'bundler/setup'
  require 'hyrum'

- # Hyrum::CLI.start(ARGV)
  Hyrum.run(ARGV)
@@ -0,0 +1,282 @@
+ {
+ "e400": [
+ "Bad Request: The server cannot process the request due to client error",
+ "Invalid syntax in the request parameters",
+ "The request could not be understood by the server",
+ "Missing required parameters in the request",
+ "Malformed request syntax"
+ ],
+ "e401": [
+ "Unauthorized: Authentication is required to access this resource",
+ "Invalid credentials provided",
+ "Access token has expired",
+ "Missing authentication token",
+ "You must log in to access this resource"
+ ],
+ "e402": [
+ "Payment Required: The requested resource requires payment",
+ "Subscription has expired",
+ "Please upgrade your account to access this feature",
+ "Payment verification failed",
+ "Resource access requires premium subscription"
+ ],
+ "e403": [
+ "Forbidden: You don't have permission to access this resource",
+ "Access denied due to insufficient privileges",
+ "Your IP address has been blocked",
+ "Account suspended",
+ "Resource access restricted to authorized users only"
+ ],
+ "e404": [
+ "Not Found: The requested resource could not be located",
+ "The page you're looking for doesn't exist",
+ "Resource has been moved or deleted",
+ "Invalid URL or endpoint",
+ "The requested item is no longer available"
+ ],
+ "e405": [
+ "Method Not Allowed: The requested HTTP method is not supported",
+ "This endpoint doesn't support the specified HTTP method",
+ "Invalid HTTP method for this resource",
+ "Please check the API documentation for supported methods",
+ "HTTP method not supported for this endpoint"
+ ],
+ "e406": [
+ "Not Acceptable: Cannot generate response matching Accept headers",
+ "Requested format is not available",
+ "Content type negotiation failed",
+ "Server cannot generate acceptable response",
+ "Unsupported media type requested"
+ ],
+ "e407": [
+ "Proxy Authentication Required: Authentication with proxy server needed",
+ "Please authenticate with the proxy first",
+ "Proxy credentials required",
+ "Missing proxy authentication",
+ "Cannot proceed without proxy authentication"
+ ],
+ "e408": [
+ "Request Timeout: The server timed out waiting for the request",
+ "Client took too long to send the complete request",
+ "Connection timed out while waiting for data",
+ "Request processing exceeded time limit",
+ "Please try submitting your request again"
+ ],
+ "e409": [
+ "Conflict: Request conflicts with current state of the server",
+ "Resource version conflict detected",
+ "Concurrent modification error",
+ "Data conflict with existing resource",
+ "Cannot process due to resource state conflict"
+ ],
+ "e410": [
+ "Gone: The requested resource is no longer available",
+ "Resource has been permanently removed",
+ "This version of the API has been deprecated",
+ "Content has been permanently deleted",
+ "Resource not available at this location"
+ ],
+ "e411": [
+ "Length Required: Content-Length header is required",
+ "Missing Content-Length header in request",
+ "Request must include content length",
+ "Cannot process request without content length",
+ "Please specify the content length"
+ ],
+ "e412": [
+ "Precondition Failed: Request preconditions failed",
+ "Resource state doesn't match expectations",
+ "Conditional request failed",
+ "Required conditions not met",
+ "Cannot proceed due to failed preconditions"
+ ],
+ "e413": [
+ "Payload Too Large: Request entity exceeds limits",
+ "File size is too large",
+ "Request data exceeds maximum allowed size",
+ "Please reduce the size of your request",
+ "Upload size limit exceeded"
+ ],
+ "e414": [
+ "URI Too Long: The requested URL exceeds server limits",
+ "URL length exceeds maximum allowed characters",
+ "Request URL is too long to process",
+ "Please shorten the URL and try again",
+ "URI length exceeds server configuration"
+ ],
+ "e415": [
+ "Unsupported Media Type: Request format not supported",
+ "Invalid content type in request",
+ "Server doesn't support this media format",
+ "Please check supported content types",
+ "Media type not accepted by the server"
+ ],
+ "e416": [
+ "Range Not Satisfiable: Cannot fulfill requested range",
+ "Requested range not available",
+ "Invalid range specified in request",
+ "Range header value not satisfiable",
+ "Cannot serve the requested content range"
+ ],
+ "e417": [
+ "Expectation Failed: Server cannot meet Expect header requirements",
+ "Expected condition could not be fulfilled",
+ "Server cannot satisfy expectations",
+ "Expect header requirements not met",
+ "Request expectations cannot be met"
+ ],
+ "e418": [
+ "I'm a teapot: Server refuses to brew coffee with a teapot",
+ "This server is a teapot, not a coffee maker",
+ "Coffee brewing request denied by teapot",
+ "Hyper Text Coffee Pot Control Protocol error",
+ "Cannot brew coffee using teapot protocol"
+ ],
+ "e421": [
+ "Misdirected Request: Server is not able to produce a response",
+ "Request was directed to wrong server",
+ "Cannot handle misdirected request",
+ "Invalid server configuration for request",
+ "Request cannot be processed by this server"
+ ],
+ "e422": [
+ "Unprocessable Entity: Request semantically incorrect",
+ "Validation failed for the request",
+ "Cannot process invalid request data",
+ "Semantic errors in request content",
+ "Request contains invalid parameters"
+ ],
+ "e423": [
+ "Locked: Resource is locked",
+ "Resource access blocked by lock",
+ "Cannot modify locked resource",
+ "Resource temporarily unavailable due to lock",
+ "Please wait for resource lock to clear"
+ ],
+ "e424": [
+ "Failed Dependency: Request failed due to previous request failure",
+ "Dependent request failed",
+ "Cannot proceed due to failed dependency",
+ "Previous request failure prevents completion",
+ "Dependency chain broken"
+ ],
+ "e425": [
+ "Too Early: Server unwilling to risk processing early request",
+ "Request arrived too early to process",
+ "Cannot handle premature request",
+ "Please retry request later",
+ "Early request rejected"
+ ],
+ "e426": [
+ "Upgrade Required: Client must upgrade protocol",
+ "Please upgrade to continue",
+ "Protocol upgrade necessary",
+ "Current protocol version not supported",
+ "Connection upgrade required"
+ ],
+ "e428": [
+ "Precondition Required: Request must be conditional",
+ "Missing required precondition",
+ "Please include precondition headers",
+ "Cannot process request without preconditions",
+ "Conditional request required"
+ ],
+ "e429": [
+ "Too Many Requests: Rate limit exceeded",
+ "Please slow down your requests",
+ "API rate limit reached",
+ "Too many requests in time window",
+ "Request quota exceeded"
+ ],
+ "e431": [
+ "Request Header Fields Too Large: Header fields too large",
+ "Headers exceed size limits",
+ "Request contains oversized headers",
+ "Please reduce header size",
+ "Header fields exceed server limits"
+ ],
+ "e451": [
+ "Unavailable For Legal Reasons: Content legally restricted",
+ "Access denied due to legal restrictions",
+ "Content blocked for legal reasons",
+ "Resource legally unavailable",
+ "Cannot serve content due to legal constraints"
+ ],
+ "e500": [
+ "Internal Server Error: Something went wrong on our end",
+ "Unexpected server error occurred",
+ "Server encountered an error processing request",
+ "Internal error in server configuration",
+ "System error, please try again later"
+ ],
+ "e501": [
+ "Not Implemented: Functionality not supported",
+ "Request method not supported by server",
+ "Feature not available on this server",
+ "Requested functionality not implemented",
+ "Server does not support this operation"
+ ],
+ "e502": [
+ "Bad Gateway: Invalid response from upstream server",
+ "Gateway received invalid response",
+ "Error communicating with upstream server",
+ "Invalid response from backend service",
+ "Gateway communication error"
+ ],
+ "e503": [
+ "Service Unavailable: Server temporarily unavailable",
+ "System under maintenance",
+ "Server is overloaded",
+ "Service temporarily offline",
+ "Please try again later"
+ ],
+ "e504": [
+ "Gateway Timeout: Upstream server timed out",
+ "Gateway connection timed out",
+ "Backend server not responding",
+ "Request timed out at gateway",
+ "Gateway failed to get timely response"
+ ],
+ "e505": [
+ "HTTP Version Not Supported: HTTP version not supported",
+ "Unsupported HTTP protocol version",
+ "Server doesn't support HTTP version",
+ "Please use a supported HTTP version",
+ "HTTP protocol version not compatible"
+ ],
+ "e506": [
+ "Variant Also Negotiates: Server configuration error",
+ "Circular reference in content negotiation",
+ "Internal configuration conflict",
+ "Content negotiation error",
+ "Server misconfiguration detected"
+ ],
+ "e507": [
+ "Insufficient Storage: Server out of storage",
+ "Not enough storage space available",
+ "Storage quota exceeded",
+ "Server storage capacity reached",
+ "Cannot process due to storage limits"
+ ],
+ "e508": [
+ "Loop Detected: Infinite loop detected in request",
+ "Request processing loop detected",
+ "Circular dependency found",
+ "Cannot complete due to infinite loop",
+ "Processing halted due to loop"
+ ],
+ "e510": [
+ "Not Extended: Further extensions needed",
+ "Required extension not supported",
+ "Cannot fulfill without extension",
+ "Missing required protocol extension",
+ "Extension support required"
+ ],
+ "e511": [
+ "Network Authentication Required: Network access requires authentication",
+ "Please authenticate with network",
+ "Network login required",
+ "Cannot access network without authentication",
+ "Network credentials required"
+ ]
+ }
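The changelog's note that FakeGenerator shrank to 36 lines is consistent with it simply loading this JSON file and returning the entries for a key. A hypothetical sketch of that shape (class name, file path, and key handling are all assumptions, not the gem's code):

```ruby
require 'json'

# Loads canned message variations from a JSON fixture like the one above.
class FakeMessages
  def initialize(path)
    @data = JSON.parse(File.read(path))
  end

  # Returns the variations for a key such as "e418", limited to `number` entries.
  def messages(key, number: 5)
    entries = @data.fetch(key) { raise KeyError, "No fake messages for #{key}" }
    { key => entries.first(number) }
  end
end

# Example (path is illustrative):
# generator = FakeMessages.new('fake_messages.json')
# puts generator.messages('e418', number: 3)
```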
@@ -1,5 +1,7 @@
  # frozen_string_literal: true

+ require 'erb'
+
  module Hyrum
  module Formats
  FORMATS = %i[ruby javascript python java text json].freeze
@@ -11,10 +13,14 @@ module Hyrum
  @options = options
  end

- def format(messages)
+ def format(messages, validation_result = nil)
  template_file = File.join(__dir__, 'templates', "#{options[:format]}.erb")
  template = ERB.new(File.read(template_file), trim_mode: '-')
- template.result_with_hash(messages: messages)
+ template.result_with_hash(
+ messages: messages,
+ validation_result: validation_result,
+ show_scores: options[:show_scores]
+ )
  end
  end
  end
@@ -1,3 +1,12 @@
+ <% if validation_result && show_scores -%>
+ // Quality Score: <%= validation_result.score %>/100
+ // - Semantic similarity: <%= validation_result.semantic_similarity %>% (variations preserve meaning)
+ // - Lexical diversity: <%= validation_result.lexical_diversity %>% (variation in wording)
+ <% validation_result.warnings.each do |warning| -%>
+ // Warning: <%= warning %>
+ <% end -%>
+ //
+ <% end -%>
  import java.util.HashMap;
  import java.util.List;
  import java.util.Random;
@@ -1,3 +1,12 @@
+ <% if validation_result && show_scores -%>
+ // Quality Score: <%= validation_result.score %>/100
+ // - Semantic similarity: <%= validation_result.semantic_similarity %>% (variations preserve meaning)
+ // - Lexical diversity: <%= validation_result.lexical_diversity %>% (variation in wording)
+ <% validation_result.warnings.each do |warning| -%>
+ // Warning: <%= warning %>
+ <% end -%>
+ //
+ <% end -%>
  const Messages = {
  MESSAGES: {
  <% messages.each do |key, values| -%>
@@ -1 +1,14 @@
+ <% if validation_result && show_scores -%>
+ <%= JSON.pretty_generate({
+ quality: {
+ score: validation_result.score,
+ semantic_similarity: validation_result.semantic_similarity,
+ lexical_diversity: validation_result.lexical_diversity,
+ passed: validation_result.passed?,
+ warnings: validation_result.warnings
+ },
+ messages: messages
+ }) %>
+ <% else -%>
  <%= JSON.pretty_generate(messages) %>
+ <% end -%>
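The formatter and template changes above all lean on ERB's `result_with_hash` together with `trim_mode: '-'`. A self-contained sketch of that pattern (sample template and data only, not Hyrum's actual templates): `result_with_hash` exposes each hash key as a local variable inside the template, and `trim_mode: '-'` lets `-%>` suppress the newline after control-flow lines so conditional comment blocks don't leave blank lines behind.

```ruby
require 'erb'

# Minimal ERB template mirroring the conditional quality-score header seen above.
template = ERB.new(<<~TEMPLATE, trim_mode: '-')
  <% if validation_result && show_scores -%>
  # Quality Score: <%= validation_result.score %>/100
  <% end -%>
  <% messages.each do |key, values| -%>
  <%= key %>: <%= values.first %>
  <% end -%>
TEMPLATE

quality = Struct.new(:score).new(82.5) # stand-in for a validation result object

puts template.result_with_hash(
  messages: { e418: ["I'm a teapot: Server refuses to brew coffee with a teapot"] },
  validation_result: quality,
  show_scores: true
)
```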