aia 0.5.12 → 0.5.14

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 8fa0d67e36209d8ac1840c94820ce58e0b6c08a6b01e47e579dfe90f04b94420
- data.tar.gz: 382d34ab554077b0e81d3a724362b41463e5ccc88f888dbba00b1ad65ff0c528
+ metadata.gz: 9a4bc8019b9ad41da7d884ad4d86b3489afd2487b8ce7be78dd10ca8e963f454
+ data.tar.gz: d374c91f50ea29bb1dd0d9fab5c924ab7741676c45cf75de6999f3f965c1d8a1
  SHA512:
- metadata.gz: 109f27eb85889450bd19bb78a123d02ba789bd99708cf7be20528a8da8c090a7ef68655e60f7fda31a0da12a506a38f3fcdf8c9d2b4a2ea9096006ab81c5d494
- data.tar.gz: 4b46125f0937d74ea4fe446c343b474f7c3b7d0443def5fea9782a5eea9a7c7bf4997e866871723dc2e4fe2d4fd9db07fe7384f9a95655891efbbfac5711a40f
+ metadata.gz: 4a2a194377c2b871a13e951a423bd1cede4b06be631ba0a3986cd8b3bddb80bb3cdd0c831207df08da7cabbdcb697d290bef6c083c54054d550c942909dbdd16
+ data.tar.gz: 79921b2f9899a2a86a86a18ae4cf2d9fe988a94e2bb70ae4dcd1998c3f721661b9a4587f4585219666b2d942d6ef500015fe9f4cfd60cae3982cb143b62bc366
data/.semver CHANGED
@@ -1,6 +1,6 @@
  ---
  :major: 0
  :minor: 5
- :patch: 12
+ :patch: 14
  :special: ''
  :metadata: ''
data/CHANGELOG.md CHANGED
@@ -1,5 +1,13 @@
  ## [Unreleased]
 
+ ## [0.5.14] 2024-03-09
+ - Directly access OpenAI to do text to speech when using the `--speak` option
+ - Added --voice to specify which voice to use
+ - Added --speech_model to specify which TTS model to use
+
+ ## [0.5.13] 2024-03-03
+ - Added CLI-utility `llm` as a backend processor
+
  ## [0.5.12] 2024-02-24
  - Happy Birthday Ruby!
  - Added --next CLI option
data/README.md CHANGED
@@ -6,16 +6,15 @@ It leverages the `prompt_manager` gem to manage prompts for the `mods` and `sgpt
 
  **Most Recent Change**: Refer to the [Changelog](CHANGELOG.md)
 
- > v0.5.12
- > - Supports Prompt Sequencing
- > - Added --next option
- > - Added --pipeline option
+ > v0.5.14
+ > - Directly access OpenAI to do text to speech when using the `--speak` option
+ > - Added --voice to specify which voice to use
+ > - Added --speech_model to specify which TTS model to use
  >
- > v0.5.11
- > - Allow directives to prepend content into the prompt text
- > - Added //include path_to_file
- > - Added //shell shell_command
- > - Added //ruby ruby code
+ > v0.5.13
+ > - Added an initial integration for CLI-tool `llm` as a backend processor.
+ >   Its primary feature is its **ability to use local LLMs and APIs to keep all processing within your local workstation.**
+ >
 
  <!-- Tocer[start]: Auto-generated, don't remove. -->
 
@@ -48,6 +47,10 @@ It leverages the `prompt_manager` gem to manage prompts for the `mods` and `sgpt
  - [The --role Option](#the---role-option)
  - [Other Ways to Insert Roles into Prompts](#other-ways-to-insert-roles-into-prompts)
  - [External CLI Tools Used](#external-cli-tools-used)
+ - [Optional External CLI-tools](#optional-external-cli-tools)
+   - [Backend Processor `llm`](#backend-processor-llm)
+   - [Backend Processor `sgpt`](#backend-processor-sgpt)
+   - [Occasionally Useful Tool `plz`](#occasionally-useful-tool-plz)
  - [Shell Completion](#shell-completion)
  - [My Most Powerful Prompt](#my-most-powerful-prompt)
  - [My Configuration](#my-configuration)
@@ -103,9 +106,12 @@ The `aia` configuration defaults can be over-ridden by system environment variab
  | log_file | ~/.prompts/_prompts.log | AIA_LOG_FILE |
  | markdown | true | AIA_MARKDOWN |
  | model | gpt-4-1106-preview | AIA_MODEL |
- | out_file | STDOUT | AIA_OUT_FILE |
+ | out_file | STDOUT | AIA_OUT_FILE |
  | prompts_dir | ~/.prompts | AIA_PROMPTS_DIR |
- | VERBOSE | FALSE | AIA_VERBOSE |
+ | speech_model | tts-1 | AIA_SPEECH_MODEL |
+ | verbose | FALSE | AIA_VERBOSE |
+ | voice | alloy | AIA_VOICE |
+
 
 
  See the `@options` hash in the `cli.rb` file for a complete list. There are some config items that do not necessarily make sense for use as an envar over-ride. For example, if you set `export AIA_DUMP_FILE=config.yaml` then `aia` would dump the current configuration to config.yaml and exit every time it is run until you finally `unset AIA_DUMP_FILE`.
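
The over-ride rule described above (a built-in default, over-ridden by an `AIA_*` envar, over-ridden in turn by a command-line value) can be sketched in a few lines of Ruby. The `resolve` helper below is hypothetical, for illustration only; it is not the gem's actual implementation.

```ruby
# Minimal sketch of the configuration over-ride hierarchy:
# default value < AIA_* environment variable < command-line option.
# Hypothetical helper -- not code from the aia gem itself.
DEFAULTS = {
  model:        "gpt-4-1106-preview",
  speech_model: "tts-1",
  voice:        "alloy",
  verbose:      "FALSE"
}

def resolve(key, cli_value: nil, env: ENV)
  envar = "AIA_#{key.to_s.upcase}"        # e.g. :voice -> "AIA_VOICE"
  cli_value || env.fetch(envar, DEFAULTS[key])
end
```

For example, `resolve(:voice)` returns `"alloy"` unless `AIA_VOICE` is set, and a command-line value wins over both.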
@@ -431,7 +437,28 @@ system environment variable 'EDITOR' like this:
 
  export EDITOR="subl -w"
 
+ ### Optional External CLI-tools
+
+ #### Backend Processor `llm`
+
+ ```
+ llm Access large language models from the command-line
+ |   brew install llm
+ |__ https://llm.datasette.io/
+ ```
+
+ As of `aia v0.5.13` the `llm` backend processor is available in a limited integration. It is a very powerful Python-based implementation that has its own prompt templating system. The reason it is included within the `aia` environment is its ability to make use of local LLM models.
+
+
+ #### Backend Processor `sgpt`
+
+ `shell-gpt`, aka `sgpt`, is also a Python implementation of a CLI-tool that processes prompts through OpenAI. It has fewer features than both `mods` and `llm` and is less flexible.
+
+ #### Occasionally Useful Tool `plz`
+
+ `plz-cli`, aka `plz`, is not integrated with `aia`; however, it gets an honorable mention for its ability to accept a prompt that is tailored to doing something on the command line. Its response is a CLI command (sometimes a piped sequence) that accomplishes the task set forth in the prompt. It returns the commands to be executed against the data files you specified, along with a query asking whether to execute the command.
 
+ - brew install plz-cli
 
  ## Shell Completion
 
data/lib/aia/cli.rb CHANGED
@@ -155,6 +155,8 @@ class AIA::Cli
  extra: [''], #
  #
  model: ["gpt-4-1106-preview", "--llm --model"],
+ speech_model: ["tts-1", "--sm --speech_model"],
+ voice: ["alloy", "--voice"],
  #
  dump_file: [nil, "--dump"],
  completion: [nil, "--completion"],
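
The two new entries follow the gem's defaults-table convention: each key maps to `[default_value, "space-separated flags"]`. A minimal sketch of how such a table can drive flag parsing is shown below; the `parse` helper is hypothetical and not the gem's actual parser.

```ruby
# Sketch: resolve option values from a defaults table shaped like the
# @options hash above -- each entry is [default, "flag aliases"].
# Hypothetical helper -- not the aia gem's actual CLI parser.
OPTIONS = {
  speech_model: ["tts-1", "--sm --speech_model"],
  voice:        ["alloy", "--voice"]
}

def parse(argv, options = OPTIONS)
  config = options.transform_values(&:first)   # start from the defaults
  options.each do |key, (_default, flags)|
    flags.split.each do |flag|
      i = argv.index(flag)
      config[key] = argv[i + 1] if i && argv[i + 1]  # take the flag's argument
    end
  end
  config
end
```

With this shape, `parse(["--voice", "nova"])` yields `voice: "nova"` while `speech_model` keeps its default.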
data/lib/aia/main.rb CHANGED
@@ -53,17 +53,6 @@ class AIA::Main
  end
 
 
- def speak(what)
-   return false unless AIA.config.speak?
-   # TODO: Consider putting this into a thread
-   #       so that it can speak at the same time
-   #       the output is going to the screen
-   # MacOS uses the say command
-   system "say #{Shellwords.escape(what)}"
-   true
- end
-
-
  # Function to setup the Reline history with a maximum depth
  def setup_reline_history(max_history_size=5)
    Reline::HISTORY.clear
@@ -123,7 +112,7 @@ class AIA::Main
 
  if AIA.config.chat?
    setup_reline_history
-   speak result
+   AIA.speak result
    lets_chat
  end
 
@@ -227,14 +216,14 @@ class AIA::Main
  result = get_and_display_result(the_prompt_text)
 
  log_the_follow_up(the_prompt_text, result)
- speak result
+ AIA.speak result
  end
  else
  the_prompt_text = insert_terse_phrase(the_prompt_text)
  result = get_and_display_result(the_prompt_text)
 
  log_the_follow_up(the_prompt_text, result)
- speak result
+ AIA.speak result
  end
 
  the_prompt_text = ask_question_with_reline("\nFollow Up: ")
data/lib/aia/tools/llm.rb ADDED
@@ -0,0 +1,77 @@
+ # lib/aia/tools/llm.rb
+
+ require_relative 'backend_common'
+
+ class AIA::Llm < AIA::Tools
+   include AIA::BackendCommon
+
+   meta(
+     name: 'llm',
+     role: :backend,
+     desc: "llm on the command line using local and remote models",
+     url: "https://llm.datasette.io/",
+     install: "brew install llm",
+   )
+
+
+   DEFAULT_PARAMETERS = [
+     # "--verbose", # enable verbose logging (if applicable)
+     # Add default parameters here
+   ].join(' ').freeze
+
+   DIRECTIVES = %w[
+     api_key
+     frequency_penalty
+     max_tokens
+     model
+     presence_penalty
+     stop_sequence
+     temperature
+     top_p
+   ]
+ end
+
+ __END__
+
+ #########################################################
+
+ llm, version 0.13.1
+
+ Usage: llm [OPTIONS] COMMAND [ARGS]...
+
+   Access large language models from the command-line
+
+   Documentation: https://llm.datasette.io/
+
+   To get started, obtain an OpenAI key and set it like this:
+
+       $ llm keys set openai
+       Enter key: ...
+
+   Then execute a prompt like this:
+
+       llm 'Five outrageous names for a pet pelican'
+
+ Options:
+   --version  Show the version and exit.
+   --help     Show this message and exit.
+
+ Commands:
+   prompt*       Execute a prompt
+   aliases       Manage model aliases
+   chat          Hold an ongoing chat with a model.
+   collections   View and manage collections of embeddings
+   embed         Embed text and store or return the result
+   embed-models  Manage available embedding models
+   embed-multi   Store embeddings for multiple strings at once
+   install       Install packages from PyPI into the same environment as LLM
+   keys          Manage stored API keys for different models
+   logs          Tools for exploring logged prompts and responses
+   models        Manage available models
+   openai        Commands for working directly with the OpenAI API
+   plugins       List installed plugins
+   similar       Return top N similar IDs from a collection
+   templates     Manage stored prompt templates
+   uninstall     Uninstall Python packages from the LLM environment
@@ -8,7 +8,7 @@ class AIA::Mods < AIA::Tools
  meta(
    name: 'mods',
    role: :backend,
-   desc: 'AI on the command-line',
+   desc: 'GPT on the command line. Built for pipelines.',
    url: 'https://github.com/charmbracelet/mods',
    install: 'brew install mods',
  )
@@ -16,25 +16,35 @@ class AIA::Mods < AIA::Tools
 
  DEFAULT_PARAMETERS = [
    # "--no-cache", # do not save prompt and response
-   "--no-limit" # no limit on input context
+   "--no-limit", # no limit on input context
+   "--quiet",    # Quiet mode (hide the spinner while loading and stderr messages for success).
  ].join(' ').freeze
 
 
  DIRECTIVES = %w[
-   api
-   fanciness
-   http-proxy
+   api
+   ask-model
+   continue
+   continue-last
+   fanciness
+   format-as
+   http-proxy
    max-retries
+   max-retries
+   max-tokens
    max-tokens
    model
    no-cache
    no-limit
-   quiet
-   raw
+   prompt
+   prompt-args
+   quiet
+   raw
    status-text
-   temp
-   title
-   topp
+   temp
+   title
+   topp
+   word-wrap
  ]
  end
 
@@ -43,6 +53,8 @@ __END__
 
  ##########################################################
 
+ mods version 1.2.1 (Homebrew)
+
  GPT on the command line. Built for pipelines.
 
  Usage:
@@ -50,9 +62,11 @@ Usage:
 
  Options:
    -m, --model Default model (gpt-3.5-turbo, gpt-4, ggml-gpt4all-j...).
+   -M, --ask-model Ask which model to use with an interactive prompt.
    -a, --api OpenAI compatible REST API (openai, localai).
    -x, --http-proxy HTTP proxy to use for API requests.
    -f, --format Ask for the response to be formatted as markdown unless otherwise set.
+   --format-as
    -r, --raw Render output as raw text when connected to a TTY.
    -P, --prompt Include the prompt from the arguments and stdin, truncate stdin to specified number of lines.
    -p, --prompt-args Include the prompt from the arguments in the response.
@@ -61,14 +75,16 @@ Options:
    -l, --list Lists saved conversations.
    -t, --title Saves the current conversation with the given title.
    -d, --delete Deletes a saved conversation with the given title or ID.
+   --delete-older-than Deletes all saved conversations older than the specified duration. Valid units are: ns, us, µs, μs, ms, s, m, h, d, w, mo, and y.
    -s, --show Show a saved conversation with the given title or ID.
-   -S, --show-last Show a the last saved conversation.
+   -S, --show-last Show the last saved conversation.
    -q, --quiet Quiet mode (hide the spinner while loading and stderr messages for success).
    -h, --help Show help and exit.
    -v, --version Show version and exit.
    --max-retries Maximum number of times to retry API calls.
    --no-limit Turn off the client-side limit on the size of the input into the model.
    --max-tokens Maximum number of tokens in response.
+   --word-wrap Wrap formatted output at specific width (default is 80)
    --temp Temperature (randomness) of results, from 0.0 to 2.0.
    --topp TopP, an alternative to temperature that narrows response, from 0.0 to 1.0.
    --fanciness Your desired level of fanciness.
@@ -81,3 +97,4 @@ Options:
  Example:
    # Editorialize your video files
    ls ~/vids | mods -f "summarize each of these titles, group them by decade" | glow
+
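
The `DIRECTIVES` list above enumerates the `mods` flags that a prompt may set through directives. A minimal sketch of turning directive key/value pairs into a `mods` command line, filtered by such an allow-list, looks like this; the `mods_args` helper is hypothetical, not the gem's actual code.

```ruby
require 'shellwords'

# Sketch: map directive key/value pairs onto backend CLI flags,
# passing through only keys on the allow-list (a subset of the
# DIRECTIVES above). Hypothetical helper -- not aia's real code.
DIRECTIVES = %w[model temp topp max-tokens no-limit quiet]

def mods_args(directives)
  directives.flat_map do |key, value|
    next [] unless DIRECTIVES.include?(key)   # drop unknown directives
    if value.to_s.empty?
      ["--#{key}"]                            # boolean flag
    else
      ["--#{key}", Shellwords.escape(value.to_s)]
    end
  end
end
```

So `{"model" => "gpt-4", "quiet" => ""}` becomes `--model gpt-4 --quiet`, while unrecognized keys are silently dropped.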
data/lib/aia.rb CHANGED
@@ -20,11 +20,12 @@ tramp_require('debug_me') {
  }
 
  require 'hashie'
+ require 'openai'
  require 'os'
  require 'pathname'
  require 'reline'
  require 'shellwords'
- require 'tempfile' # SMELL: is this still being used?
+ require 'tempfile'
 
  require 'tty-spinner'
 
@@ -45,8 +46,15 @@ require_relative "core_ext/string_wrap"
  module AIA
    class << self
      attr_accessor :config
+     attr_accessor :client
 
      def run(args=ARGV)
+       begin
+         @client = OpenAI::Client.new(access_token: ENV["OPENAI_API_KEY"])
+       rescue OpenAI::ConfigurationError
+         @client = nil
+       end
+
        args = args.split(' ') if args.is_a?(String)
 
        # TODO: Currently this is a one and done architecture.
@@ -59,8 +67,47 @@ module AIA
 
 
      def speak(what)
-       return unless AIA.config.speak?
-       system "say #{Shellwords.escape(what)}" if OS.osx?
+       return unless config.speak?
+
+       if OS.osx? && 'siri' == config.voice.downcase
+         system "say #{Shellwords.escape(what)}"
+       else
+         use_openai_tts(what)
+       end
+     end
+
+
+     def use_openai_tts(what)
+       if client.nil?
+         puts "\nWARNING: OpenAI's text to speech capability is not available at this time."
+         return
+       end
+
+       player = if OS.osx?
+                  'afplay'
+                elsif OS.linux?
+                  'mpg123'
+                elsif OS.windows?
+                  'cmdmp3'
+                else
+                  puts "\nWARNING: There is no MP3 player available"
+                  return
+                end
+
+       response = client.audio.speech(
+         parameters: {
+           model: config.speech_model,
+           input: what,
+           voice: config.voice
+         }
+       )
+
+       Tempfile.create(['speech', '.mp3']) do |f|
+         f.binmode
+         f.write(response)
+         f.close
+         `#{player} #{f.path}`
+       end
      end
    end
  end
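
The dispatch added above (the macOS `say` command when the voice is "siri", otherwise OpenAI TTS piped to a per-OS MP3 player) can be re-stated platform-independently. The helpers below are a simplified illustration only, using stdlib `RbConfig` instead of the `os` gem, and are not the gem's actual code.

```ruby
require 'rbconfig'

# Simplified restatement of the speak/use_openai_tts dispatch above.
# Uses RbConfig instead of the `os` gem; illustration only.
def host_os
  case RbConfig::CONFIG['host_os']
  when /darwin/      then :macos
  when /linux/       then :linux
  when /mswin|mingw/ then :windows
  else                    :unknown
  end
end

# Command-line MP3 player the synthesized speech is handed to.
def mp3_player(os = host_os)
  { macos: 'afplay', linux: 'mpg123', windows: 'cmdmp3' }[os]
end

# The "siri" voice on macOS short-circuits to the built-in `say`
# command; every other voice goes to OpenAI text-to-speech.
def tts_route(voice, os = host_os)
  os == :macos && voice.downcase == 'siri' ? :say : :openai
end
```

For example, `tts_route('alloy', :macos)` routes to OpenAI, while `tts_route('siri', :macos)` stays local with `say`.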
data/man/aia.1 CHANGED
@@ -1,6 +1,6 @@
  .\" Generated by kramdown-man 1.0.1
  .\" https://github.com/postmodern/kramdown-man#readme
- .TH aia 1 "v0.5.12" AIA "User Manuals"
+ .TH aia 1 "v0.5.14" AIA "User Manuals"
  .SH NAME
  .PP
  aia \- command\-line interface for an AI assistant
@@ -98,6 +98,12 @@ A role ID is the same as a prompt ID\. A \[lq]role\[rq] is a specialized prompt
  .TP
  \fB\-v\fR, \fB\-\-verbose\fR
  Be Verbose \- default is false
+ .TP
+ \fB\-\-voice\fR
+ The voice to use when the option \fB\-\-speak\fR is used\. If you are on a Mac, then setting voice to \[lq]siri\[rq] will use your Mac\[cq]s default siri voice and not access OpenAI \- default is \[lq]alloy\[rq] from OpenAI
+ .TP
+ \fB\-\-sm\fR, \fB\-\-speech\[ru]model\fR
+ The OpenAI speech model to use when converting text into speech \- default is \[lq]tts\-1\[rq]
  .SH CONFIGURATION HIERARCHY
  .PP
  System Environment Variables (envars) that are all uppercase and begin with \[lq]AIA\[ru]\[rq] can be used to over\-ride the default configuration settings\. For example setting \[lq]export AIA\[ru]PROMPTS\[ru]DIR\[eq]\[ti]\[sl]Documents\[sl]prompts\[rq] will over\-ride the default configuration; however, a config value provided by a command line options will over\-ride an envar setting\.
@@ -180,6 +186,11 @@ OpenAI Platform Documentation
  .UE
  and working with OpenAI models\.
  .IP \(bu 2
+ llm
+ .UR https:\[sl]\[sl]llm\.datasette\.io\[sl]
+ .UE
+ for more information on \fBllm\fR \- A CLI utility and Python library for interacting with Large Language Models, both via remote APIs and models that can be installed and run on your own machine\.
+ .IP \(bu 2
  mods
  .UR https:\[sl]\[sl]github\.com\[sl]charmbracelet\[sl]mods
  .UE
data/man/aia.1.md CHANGED
@@ -1,4 +1,4 @@
- # aia 1 "v0.5.12" AIA "User Manuals"
+ # aia 1 "v0.5.14" AIA "User Manuals"
 
  ## NAME
 
@@ -103,6 +103,11 @@ The aia command-line tool is an interface for interacting with an AI model backe
  `-v`, `--verbose`
  : Be Verbose - default is false
 
+ `--voice`
+ : The voice to use when the option `--speak` is used. If you are on a Mac, then setting voice to "siri" will use your Mac's default siri voice and not access OpenAI - default is "alloy" from OpenAI
+
+ `--sm`, `--speech_model`
+ : The OpenAI speech model to use when converting text into speech - default is "tts-1"
 
  ## CONFIGURATION HIERARCHY
 
@@ -176,6 +181,8 @@ if you want to specify them one at a time.
 
  - [OpenAI Platform Documentation](https://platform.openai.com/docs/overview) for more information on [obtaining access tokens](https://platform.openai.com/account/api-keys) and working with OpenAI models.
 
+ - [llm](https://llm.datasette.io/) for more information on `llm` - A CLI utility and Python library for interacting with Large Language Models, both via remote APIs and models that can be installed and run on your own machine.
+
  - [mods](https://github.com/charmbracelet/mods) for more information on `mods` - AI for the command line, built for pipelines. LLM based AI is really good at interpreting the output of commands and returning the results in CLI friendly text formats like Markdown. Mods is a simple tool that makes it super easy to use AI on the command line and in your pipelines. Mods works with [OpenAI](https://platform.openai.com/account/api-keys) and [LocalAI](https://github.com/go-skynet/LocalAI)
 
  - [sgpt](https://github.com/tbckr/sgpt) (aka shell-gpt) is a powerful command-line interface (CLI) tool designed for seamless interaction with OpenAI models directly from your terminal. Effortlessly run queries, generate shell commands or code, create images from text, and more, using simple commands. Streamline your workflow and enhance productivity with this powerful and user-friendly CLI tool.
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: aia
  version: !ruby/object:Gem::Version
-   version: 0.5.12
+   version: 0.5.14
  platform: ruby
  authors:
  - Dewayne VanHoozer
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2024-02-25 00:00:00.000000000 Z
+ date: 2024-03-09 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: hashie
@@ -66,6 +66,20 @@ dependencies:
    - - ">="
      - !ruby/object:Gem::Version
        version: '0'
+ - !ruby/object:Gem::Dependency
+   name: ruby-openai
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
  - !ruby/object:Gem::Dependency
    name: semver2
    requirement: !ruby/object:Gem::Requirement
@@ -220,18 +234,17 @@ dependencies:
  - - ">="
    - !ruby/object:Gem::Version
      version: '0'
- description: |
-   A command-line AI Assistante (aia) that provides pre-compositional
-   template prompt management to various backend gen-AI processes.
-   Complete shell integration allows a prompt to access system
-   environment variables and execut shell commands as part of the
-   prompt content. In addition full embedded Ruby support is provided
-   given even more dynamic prompt conditional content. It is a
-   generalized power house that rivals specialized gen-AI tools. aia
-   currently supports "mods" and "sgpt" CLI tools. aia uses "ripgrep"
-   and "fzf" CLI utilities to search for and select prompt files to
-   send to the backend gen-AI tool along with supported context
-   files.
+ description: A command-line AI Assistant (aia) that provides pre-compositional template
+   prompt management to various backend gen-AI processes such as llm, mods, and sgpt.
+   It supports processing of prompts both via remote API calls and by keeping everything
+   local through the use of locally managed models and the LocalAI API. Complete shell
+   integration allows a prompt to access system environment variables and execute shell
+   commands as part of the prompt content. In addition, full embedded Ruby support
+   is provided, giving even more dynamic prompt conditional content. It is a generalized
+   power house that rivals specialized gen-AI tools. aia currently supports "mods"
+   and "sgpt" CLI tools. aia uses "ripgrep" and "fzf" CLI utilities to search for
+   and select prompt files to send to the backend gen-AI tool along with supported
+   context files.
  email:
  - dvanhoozer@gmail.com
  executables:
@@ -268,6 +281,7 @@ files:
  - lib/aia/tools/editor.rb
  - lib/aia/tools/fzf.rb
  - lib/aia/tools/glow.rb
+ - lib/aia/tools/llm.rb
  - lib/aia/tools/mods.rb
  - lib/aia/tools/sgpt.rb
  - lib/aia/tools/subl.rb