aia 0.9.14 → 0.9.16

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 91bd0c834c03dafa5c71d9d5e608db37ecd398c897e806361069aabee36d8abc
-  data.tar.gz: d275ec9393e55f3b39978fffd97a9eecf36021786ef579cdae0746488b0fe401
+  metadata.gz: d93978bc1dc5486b24cb460ecdd13bbcefa83c3ce7a6cb3b7b7454c21b66e979
+  data.tar.gz: 169d574acd4e9b4127ba8f47526e4767bc86cd5e7c3c144f8f79de2f2dfefa25
 SHA512:
-  metadata.gz: b2f00745bd78ffba8a434497946aa4a2116b7d0cfe805f8a5b132869af70e6698b490f8361a7589da875dc8d64a9869cd5c22b588347cabeefa10b0f8585e196
-  data.tar.gz: e4334b5fb5d25851aa3a1c29af391f82b6fc9ecbf1f6bc52edf999475e29b62ecf40395c246200f5ff525c1e0d8d3dd51330a4738d4b2b20f08594c9d0969431
+  metadata.gz: 82b26b797a06a1af89e96ced21b8a182a9f9e6fd31e1cb20c8ad5e070b045f31430f89f8c3fc82d79d5714363d3289e80cdbf716bc6a9f86d4b6f59cd9a6a96e
+  data.tar.gz: 6c2f8953be44fc7d98fe2bef0680831f09947ee75c9d60b251ecdb243057da83a836de904a4ed45e4776280cad4d048a9485a249c692213aec9496fe05eb8cf7
data/.version CHANGED
@@ -1 +1 @@
-0.9.14
+0.9.16
data/CHANGELOG.md CHANGED
@@ -1,7 +1,27 @@
 # Changelog
 ## [Unreleased]
 
+### [0.9.16] 2025-09-26
+
+#### New Features
+- **NEW FEATURE**: Added support for Ollama AI provider
+- **NEW FEATURE**: Added support for Osaurus AI provider
+- **NEW FEATURE**: Added support for LM Studio AI provider
+
+#### Improvements
+- **ENHANCEMENT**: Expanded AI provider ecosystem with three new local/self-hosted model options
+- **ENHANCEMENT**: Improved flexibility for users preferring local LLM deployments
+
 ## Released
+### [0.9.15] 2025-09-21
+
+#### New Features
+- **NEW FEATURE**: Added `//paste` directive to insert clipboard contents into prompts
+- **NEW FEATURE**: Added `//clipboard` alias for the paste directive
+
+#### Technical Changes
+- Enhanced DirectiveProcessor with clipboard integration using the clipboard gem
+- Added comprehensive test coverage for paste directive functionality
 
 ### [0.9.14] 2025-09-19
 
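Taken together, the two releases add one prompt directive (`//paste`, with the `//clipboard` alias) and three local providers. The providers are selected by a model-name prefix that the adapter strips before dispatch (`ollama/`, `osaurus/`, `lms/`, per the lib changes later in this diff). A minimal sketch of the prefix convention; the model ids after the slash are placeholders, not names shipped with the gem:

```ruby
# Prefix picks the provider; the remainder passes through as the model id.
%w[ollama/llama3 osaurus/my-model lms/my-model].each do |name|
  provider_prefix, model_id = name.split('/', 2)
  puts format('%-8s -> %s', provider_prefix, model_id)
end
```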
data/README.md CHANGED
@@ -338,6 +338,7 @@ Directives are special commands in prompt files that begin with `//` and provide
 | `//checkpoint` | Create a named checkpoint of current context | `//checkpoint save_point` |
 | `//restore` | Restore context to a previous checkpoint | `//restore save_point` |
 | `//include` | Insert file contents | `//include path/to/file.txt` |
+| `//paste` | Insert clipboard contents | `//paste` |
 | `//shell` | Execute shell commands | `//shell ls -la` |
 | `//robot` | Show the pet robot ASCII art w/versions | `//robot` |
 | `//ruby` | Execute Ruby code | `//ruby puts "Hello World"` |
@@ -372,6 +373,9 @@ Your prompt content here...
 # Include file contents
 //include ~/project/README.md
 
+# Paste clipboard contents
+//paste
+
 # Execute shell commands
 //shell git log --oneline -10
 
@@ -160,6 +160,30 @@ if File.exist?(history_file)
 Now analyze <%= task %> using all available context layers.
 ```
 
+### Clipboard Integration
+Quick data insertion from clipboard:
+
+```markdown
+# Code Review with Clipboard Content
+
+## Code to Review
+//paste
+
+## Review Guidelines
+- Check for best practices
+- Identify security vulnerabilities
+- Suggest performance improvements
+- Validate error handling
+
+Please provide detailed feedback on the code above.
+```
+
+This is particularly useful for:
+- Quick code reviews when you've copied code from an IDE
+- Analyzing error messages or logs copied from terminals
+- Including data from spreadsheets or other applications
+- Rapid prototyping with copied examples
+
 ### Context Filtering and Summarization
 Manage large contexts intelligently:
 
@@ -237,6 +261,11 @@ Generate an academic document with:
 <% end %>
 <% end %>
 
+## Clipboard Content (if applicable)
+<% if include_clipboard %>
+//paste
+<% end %>
+
 Target audience: <%= audience || "general professional" %>
 Document length: <%= length || "2000-3000 words" %>
 ```
@@ -111,6 +111,24 @@ Include content from files or websites.
 
 **Aliases**: `//import`
 
+### `//paste`
+Insert content from the system clipboard.
+
+**Syntax**: `//paste`
+
+**Examples**:
+```markdown
+//paste
+```
+
+**Features**:
+- Inserts the current clipboard contents directly into the prompt
+- Useful for quickly including copied text, code, or data
+- Works across different platforms (macOS, Linux, Windows)
+- Handles multi-line clipboard content
+
+**Aliases**: `//clipboard`
+
 ### `//webpage`
 Include content from web pages (requires PUREMD_API_KEY).
 
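Under the hood the directive is a thin wrapper over the clipboard gem (see the `require 'clipboard'` and `Clipboard.paste` changes below). A minimal, self-contained sketch of the two gem calls involved; the copied string is only an illustration:

```ruby
require 'clipboard'

Clipboard.copy("puts 'copied from an IDE'")  # seed the system clipboard
puts Clipboard.paste                         # => puts 'copied from an IDE'
```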
@@ -2,6 +2,7 @@
 
 require 'faraday'
 require 'active_support/all'
+require 'clipboard'
 
 module AIA
   module Directives
@@ -60,11 +61,21 @@ module AIA
       @included_files = files
     end
 
+    def self.paste(args = [], context_manager = nil)
+      begin
+        content = Clipboard.paste
+        content.to_s
+      rescue => e
+        "Error: Unable to paste from clipboard - #{e.message}"
+      end
+    end
+
     # Set up aliases - these work on the module's singleton class
     class << self
       alias_method :website, :webpage
-      alias_method :web, :webpage
+      alias_method :web, :webpage
       alias_method :import, :include
+      alias_method :clipboard, :paste
     end
   end
 end
@@ -42,6 +42,8 @@ module AIA
 
     # --- Custom OpenAI Endpoint ---
     # Use this for Azure OpenAI, proxies, or self-hosted models via OpenAI-compatible APIs.
+    # For osaurus: Use model name prefix "osaurus/" and set OSAURUS_API_BASE env var
+    # For LM Studio: Use model name prefix "lms/" and set LMS_API_BASE env var
     config.openai_api_base = ENV.fetch('OPENAI_API_BASE', nil) # e.g., "https://your-azure.openai.azure.com"
 
     # --- Default Models ---
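Both variables are ordinary environment overrides; when unset, the adapter falls back to the local defaults shown in the next hunk. A sketch of setting them explicitly (shown in Ruby to match the codebase; a shell `export` works the same way):

```ruby
# Defaults mirror the ENV.fetch fallbacks in the hunk below.
ENV['OSAURUS_API_BASE'] ||= 'http://localhost:11434/v1'
ENV['LMS_API_BASE']     ||= 'http://localhost:1234/v1'
```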
@@ -83,7 +85,30 @@
 
     @models.each do |model_name|
       begin
-        chat = RubyLLM.chat(model: model_name)
+        # Check if this is a local provider model and handle it specially
+        if model_name.start_with?('ollama/')
+          # For Ollama models, extract the actual model name and use assume_model_exists
+          actual_model = model_name.sub('ollama/', '')
+          chat = RubyLLM.chat(model: actual_model, provider: 'ollama', assume_model_exists: true)
+        elsif model_name.start_with?('osaurus/')
+          # For Osaurus models (OpenAI-compatible), create a custom context with the right API base
+          actual_model = model_name.sub('osaurus/', '')
+          custom_config = RubyLLM.config.dup
+          custom_config.openai_api_base = ENV.fetch('OSAURUS_API_BASE', 'http://localhost:11434/v1')
+          custom_config.openai_api_key = 'dummy' # Local servers don't need a real API key
+          context = RubyLLM::Context.new(custom_config)
+          chat = context.chat(model: actual_model, provider: 'openai', assume_model_exists: true)
+        elsif model_name.start_with?('lms/')
+          # For LM Studio models (OpenAI-compatible), create a custom context with the right API base
+          actual_model = model_name.sub('lms/', '')
+          custom_config = RubyLLM.config.dup
+          custom_config.openai_api_base = ENV.fetch('LMS_API_BASE', 'http://localhost:1234/v1')
+          custom_config.openai_api_key = 'dummy' # Local servers don't need a real API key
+          context = RubyLLM::Context.new(custom_config)
+          chat = context.chat(model: actual_model, provider: 'openai', assume_model_exists: true)
+        else
+          chat = RubyLLM.chat(model: model_name)
+        end
         valid_chats[model_name] = chat
       rescue StandardError => e
         failed_models << "#{model_name}: #{e.message}"
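The same custom-context pattern works standalone. A minimal sketch of talking to a local OpenAI-compatible server through RubyLLM, using only calls that appear in this diff (`RubyLLM.config.dup`, `RubyLLM::Context.new`, `context.chat`, `chat.ask(...).content`); the model name is a placeholder:

```ruby
require 'ruby_llm'

custom_config = RubyLLM.config.dup
custom_config.openai_api_base = 'http://localhost:1234/v1' # LM Studio default above
custom_config.openai_api_key  = 'dummy'                    # local servers ignore the key

context = RubyLLM::Context.new(custom_config)
chat = context.chat(model: 'my-local-model', provider: 'openai', assume_model_exists: true)
puts chat.ask('Say hello').content
```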
@@ -263,7 +288,7 @@
 
   def format_multi_model_results(results)
     use_consensus = should_use_consensus_mode?
-
+
     if use_consensus
       # Generate consensus response using primary model
       generate_consensus_response(results)
@@ -288,7 +313,7 @@
     begin
       # Have the primary model generate the consensus
       consensus_result = primary_chat.ask(consensus_prompt).content
-
+
       # Format the consensus response
       "from: #{primary_model} (consensus)\n#{consensus_result}"
     rescue StandardError => e
@@ -329,7 +354,7 @@
   def format_individual_responses(results)
     # For metrics support, return a special structure if all results have token info
     has_metrics = results.values.all? { |r| r.respond_to?(:input_tokens) && r.respond_to?(:output_tokens) }
-
+
     if has_metrics && AIA.config.show_metrics
       # Return structured data that preserves metrics for multi-model
       format_multi_model_with_metrics(results)
@@ -350,17 +375,17 @@
       output.join("\n")
     end
   end
-
+
   def format_multi_model_with_metrics(results)
     # Create a composite response that includes all model responses and metrics
     formatted_content = []
     metrics_data = []
-
+
     results.each do |model_name, result|
       formatted_content << "from: #{model_name}"
       formatted_content << result.content
       formatted_content << ""
-
+
       # Collect metrics for each model
       metrics_data << {
         model_id: model_name,
@@ -368,20 +393,20 @@
         output_tokens: result.output_tokens
       }
     end
-
+
     # Return a special MultiModelResponse that ChatProcessorService can handle
     MultiModelResponse.new(formatted_content.join("\n"), metrics_data)
   end
-
+
   # Helper class to carry multi-model response with metrics
   class MultiModelResponse
     attr_reader :content, :metrics_list
-
+
     def initialize(content, metrics_list)
       @content = content
       @metrics_list = metrics_list
     end
-
+
     def multi_model?
       true
     end
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: aia
 version: !ruby/object:Gem::Version
-  version: 0.9.14
+  version: 0.9.16
 platform: ruby
 authors:
 - Dewayne VanHoozer
@@ -51,6 +51,20 @@ dependencies:
     - - ">="
       - !ruby/object:Gem::Version
         version: '0'
+- !ruby/object:Gem::Dependency
+  name: clipboard
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
 - !ruby/object:Gem::Dependency
   name: faraday
   requirement: !ruby/object:Gem::Requirement
@@ -433,7 +447,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
   - !ruby/object:Gem::Version
     version: '0'
 requirements: []
-rubygems_version: 3.6.9
+rubygems_version: 3.7.2
 specification_version: 4
 summary: Multi-model AI CLI with dynamic prompts, consensus responses, shell & Ruby
   integration, and seamless chat workflows.