aia 0.5.11 → 0.5.13

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
1
1
  ---
2
2
  SHA256:
3
- metadata.gz: 740cb6d71c83fae27f2918e76a10802d805e4172aef40154b8cea3ae6800ad65
4
- data.tar.gz: 0e9aac0e6a0917b6adb04550cac137be65d7e0fa3507af89058f0a1d96aa3b58
3
+ metadata.gz: 1487b5005351fcb62b10d7ccdfdfe0153a40b931b633562671cc28e9f554ad8f
4
+ data.tar.gz: 9d34fc975adb14a52d0cf62a541a7568af61b4da5cb963c326a83253613d6b30
5
5
  SHA512:
6
- metadata.gz: 24f4eb6ace28c79ccf7c5465e3027dd03e25457df5c522b828775c681e51cfeab207d1c6eb9f237b3b84fa768b20f2c49b27779a8987fb1c7160f68152fe5e8f
7
- data.tar.gz: 260685bca6ab7c0cc53b2fc566b997026d857d96c71b6889bcff3b85bd57bbfa526d4a8df6d0044b7c400a2b3be26669a81be77900669deffc4f318f1a7674c2
6
+ metadata.gz: '00994916d15ef59a91ee9ae8b73acee3e10a8cc7deb1c41cbfa5fff4997fd97622deb2d16fdbc555da17bddba48dc349db21115333aac9bdf8d9a3dfaaa220c4'
7
+ data.tar.gz: b40df4e189d676dbee86f6593fac9fb90153d392105c07af7a53a3a1be1e3961b9978e05e29738e94c5ee135f7f9af0abdb7de19475edd4e7f7615f42d701327
data/.semver CHANGED
@@ -1,6 +1,6 @@
1
1
  ---
2
2
  :major: 0
3
3
  :minor: 5
4
- :patch: 11
4
+ :patch: 13
5
5
  :special: ''
6
6
  :metadata: ''
data/CHANGELOG.md CHANGED
@@ -1,5 +1,13 @@
1
1
  ## [Unreleased]
2
2
 
3
+ ## [0.5.13] 2024-03-03
4
+ - Added CLI-utility `llm` as a backend processor
5
+
6
+ ## [0.5.12] 2024-02-24
7
+ - Happy Birthday Ruby!
8
+ - Added --next CLI option
9
+ - Added --pipeline CLI option
10
+
3
11
  ## [0.5.11] 2024-02-18
4
12
  - allow directives to return information that is inserted into the prompt text
5
13
  - added //shell command directive
data/README.md CHANGED
@@ -6,17 +6,14 @@ It leverages the `prompt_manager` gem to manage prompts for the `mods` and `sgpt
6
6
 
7
7
  **Most Recent Change**: Refer to the [Changelog](CHANGELOG.md)
8
8
 
9
- > v0.5.11
10
- > - Allow directives to prepend content into the prompt text
11
- > - Added //include path_to_file
12
- > - Added //shell shell_command
13
- > - Added //ruby ruby code
14
- >
15
- > v0.5.10
16
- > - Added --roles_dir
17
- > - Changed --prompts to --prompts_dir
18
- > - Fixed Issue 33
19
- >
9
+ > v0.5.13
10
+ > - Added an initial integration for CLI-tool `llm` as a backend processor
11
+ > Its primary feature is its **ability to use local LLMs and APIs to keep all processing within your local workstation.**
12
+ >
13
+ > v0.5.12
14
+ > - Supports Prompt Sequencing
15
+ > - Added --next option
16
+ > - Added --pipeline option
20
17
 
21
18
 
22
19
  <!-- Tocer[start]: Auto-generated, don't remove. -->
@@ -41,11 +38,19 @@ It leverages the `prompt_manager` gem to manage prompts for the `mods` and `sgpt
41
38
  - [//shell](#shell)
42
39
  - [Backend Directive Commands](#backend-directive-commands)
43
40
  - [Using Directives in Chat Sessions](#using-directives-in-chat-sessions)
41
+ - [Prompt Sequences](#prompt-sequences)
42
+ - [--next](#--next)
43
+ - [--pipeline](#--pipeline)
44
+ - [Best Practices ??](#best-practices-)
44
45
  - [All About ROLES](#all-about-roles)
45
46
  - [The --roles_dir (AIA_ROLES_DIR)](#the---roles_dir-aia_roles_dir)
46
47
  - [The --role Option](#the---role-option)
47
48
  - [Other Ways to Insert Roles into Prompts](#other-ways-to-insert-roles-into-prompts)
48
49
  - [External CLI Tools Used](#external-cli-tools-used)
50
+ - [Optional External CLI-tools](#optional-external-cli-tools)
51
+ - [Backend Processor `llm`](#backend-processor-llm)
52
+ - [Backend Processor `sgpt`](#backend-processor-sgpt)
53
+ - [Occasionally Useful Tool `plz`](#occasionally-useful-tool-plz)
49
54
  - [Shell Completion](#shell-completion)
50
55
  - [My Most Powerful Prompt](#my-most-powerful-prompt)
51
56
  - [My Configuration](#my-configuration)
@@ -298,6 +303,65 @@ Whe you are in a chat session, you may use a directive as a follow up prompt. F
298
303
  The directive is executed and a new follow up prompt can be entered with a more lengthy response generated from the backend.
299
304
 
300
305
 
306
+ ## Prompt Sequences
307
+
308
+ Why would you want to use a sequence of prompts in a batch situation? Maybe you have a complex prompt that exceeds your model's input token limit, so you need to break it up into multiple parts. Or suppose it is a simple prompt but the output token limit cuts the response short and you do not get the full answer you were looking for.
309
+
310
+ Sometimes it takes a series of prompts to get the kind of response that you want; the response from one prompt becomes the context for the next. This is easy to do within a `chat` session, where you manually enter and adjust your prompts until you get the response you are after.
311
+
312
+ If you need to do this on a regular basis or within a batch, you can use `aia` with the `--next` and `--pipeline` command line options.
313
+
314
+ These two options specify the sequence of prompt IDs to be processed. Both options can also be set within a prompt file using the `//config` directive. Like all embedded directives, they can take advantage of parameterization, shell integration, and Ruby. I'm starting to feel like Tim the Tool Man - more power!
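A small illustrative sketch of what such a prompt file might look like, assuming the `//config`, `//shell`, and `//ruby` directives described elsewhere in this README; the file name, commands, and prompt text are made up for illustration:

```shell
# Inspect a hypothetical prompt file that chains to the next prompt
# and mixes in dynamic content via embedded directives.
cat ~/.prompts/two.txt
# //config next three
# //config out_file two.md
# //shell git log --oneline -5
# //ruby puts "Generated on #{Time.now}"
# Review the git log shown above and summarize the recent changes.
```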
315
+
316
+ Consider the case in which you have four prompt IDs that need to be processed in sequence. The IDs and associated prompt file names are:
317
+
318
+ | Prompt ID | Prompt File |
319
+ | -------- | ----------- |
320
+ | one | one.txt |
321
+ | two | two.txt |
322
+ | three | three.txt |
323
+ | four | four.txt |
324
+
325
+
326
+ ### --next
327
+
328
+ ```shell
329
+ export AIA_OUT_FILE=temp.md
330
+ aia one --next two
331
+ aia three --next four temp.md
332
+ ```
333
+
334
+ or, within each of the prompt files, use the `//config` directive:
335
+
336
+ ```
337
+ one.txt contains //config next two
338
+ two.txt contains //config next three
339
+ three.txt contains //config next four
340
+ ```
341
+ BUT if you have more than two prompts in your sequence, consider using the `--pipeline` option instead.
342
+
343
+ ### --pipeline
344
+
345
+ `aia one --pipeline two,three,four`
346
+
347
+ or, inside the `one.txt` prompt file, use this directive:
348
+
349
+ `//config pipeline two,three,four`
350
+
351
+ ### Best Practices ??
352
+
353
+ Since the response of one prompt is fed into the next prompt in the sequence, instead of having all prompts write their responses to the same output file, use these directives inside the associated prompt files:
354
+
355
+
356
+ | Prompt File | Directive |
357
+ | --- | --- |
358
+ | one.txt | //config out_file one.md |
359
+ | two.txt | //config out_file two.md |
360
+ | three.txt | //config out_file three.md |
361
+ | four.txt | //config out_file four.md |
362
+
363
+ This way you can see the response that was generated for each prompt in the sequence.
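Putting the pieces together, here is a minimal end-to-end sketch of the four-prompt pipeline described above; the prompt text is a placeholder, not a real prompt file:

```shell
# one.txt kicks off the chain and names the rest of the pipeline;
# every prompt file in the sequence directs output to its own file.
cat ~/.prompts/one.txt
# //config out_file one.md
# //config pipeline two,three,four
# Describe the overall architecture of the attached project ...

aia one        # processes one, then two, three, and four in order
ls ./*.md      # one.md two.md three.md four.md - one response per prompt
```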
364
+
301
365
  ## All About ROLES
302
366
 
303
367
  ### The --roles_dir (AIA_ROLES_DIR)
@@ -370,7 +434,28 @@ system environment variable 'EDITOR' like this:
370
434
 
371
435
  export EDITOR="subl -w"
372
436
 
437
+ ### Optional External CLI-tools
438
+
439
+ #### Backend Processor `llm`
440
+
441
+ ```
442
+ llm Access large language models from the command-line
443
+ | brew install llm
444
+ |__ https://llm.datasette.io/
445
+ ```
446
+
447
+ As of `aia v0.5.13` the `llm` backend processor is available in a limited integration. It is a very powerful Python-based implementation that has its own prompt templating system. The reason it is included within the `aia` environment is its ability to make use of local LLM models.
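As a rough sketch of how the `llm` backend might be selected, assuming `aia` exposes its backend setting as a `--backend` switch and an `AIA_BACKEND` environment variable (these names are assumptions here; confirm them with `aia --help` on your install):

```shell
# Assumed option names - verify with `aia --help`.
export AIA_BACKEND=llm          # make llm the default backend for this shell
aia my_prompt

aia --backend llm my_prompt     # or choose it for a single invocation
```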
448
+
449
+
450
+ #### Backend Processor `sgpt`
451
+
452
+ `shell-gpt` aka `sgpt` is also a Python implementation of a CLI-tool that processes prompts through OpenAI. It has fewer features than both `mods` and `llm` and is less flexible.
453
+
454
+ #### Occasionally Useful Tool `plz`
455
+
456
+ `plz-cli` aka `plz` is not integrated with `aia`; however, it gets an honorable mention for its ability to accept a prompt tailored to doing something on the command line. Its response is a CLI command (sometimes a piped sequence) that accomplishes the task set forth in the prompt. It returns the commands to be executed against the data files you specified, along with a query asking whether to execute the command.
373
457
 
458
+ - brew install plz-cli
374
459
 
375
460
  ## Shell Completion
376
461
 
data/justfile CHANGED
@@ -157,6 +157,11 @@ view_man_page: create_man_page
157
157
  create_man_page:
158
158
  rake man
159
159
 
160
+
161
+ # Generate the Documentation
162
+ gen_doc: create_man_page update_toc_in_readmen
163
+
164
+
160
165
  ##########################################
161
166
 
162
167
  # Tag the current commit, push it, then bump the version
data/lib/aia/cli.rb CHANGED
@@ -27,9 +27,8 @@ class AIA::Cli
27
27
  load_config_file unless AIA.config.config_file.nil?
28
28
 
29
29
  convert_to_pathname_objects
30
-
30
+ error_on_invalid_option_combinations
31
31
  setup_prompt_manager
32
-
33
32
  execute_immediate_commands
34
33
  end
35
34
 
@@ -47,6 +46,29 @@ class AIA::Cli
47
46
  end
48
47
 
49
48
 
49
+ def error_on_invalid_option_combinations
50
+ # --chat is intended as an interactive exchange
51
+ if AIA.config.chat?
52
+ unless AIA.config.next.empty?
53
+ abort "ERROR: Cannot use --next with --chat"
54
+ end
55
+ unless STDOUT == AIA.config.out_file
56
+ abort "ERROR: Cannot use --out_file with --chat"
57
+ end
58
+ unless AIA.config.pipeline.empty?
59
+ abort "ERROR: Cannot use --pipeline with --chat"
60
+ end
61
+ end
62
+
63
+ # --next says which prompt to process next
64
+ # but --pipeline gives an entire sequence of prompts for processing
65
+ unless AIA.config.next.empty?
66
+ unless AIA.config.pipeline.empty?
67
+ abort "ERROR: Cannot use --pipeline with --next"
68
+ end
69
+ end
70
+ end
71
+
50
72
  def string_to_pathname(string)
51
73
  ['~/', '$HOME/'].each do |prefix|
52
74
  if string.start_with? prefix
@@ -151,6 +173,8 @@ class AIA::Cli
151
173
  verbose?: [false, "-v --verbose"],
152
174
  version?: [false, "--version"],
153
175
  #
176
+ next: ['', "-n --next"],
177
+ pipeline: [[], "--pipeline"],
154
178
  role: ['', "-r --role"],
155
179
  #
156
180
  config_file:[nil, "-c --config_file"],
@@ -263,8 +287,11 @@ class AIA::Cli
263
287
  else
264
288
  value = arguments[index + 1]
265
289
  if value.nil? || value.start_with?('-')
266
- STDERR.puts "ERROR: #{option_sym} requires a parameter value"
267
- exit(1)
290
+ abort "ERROR: #{option_sym} requires a parameter value"
291
+ elsif "--pipeline" == switch
292
+ prompt_sequence = value.split(',')
293
+ AIA.config[option_sym] = prompt_sequence
294
+ arguments.slice!(index,2)
268
295
  else
269
296
  AIA.config[option_sym] = value
270
297
  arguments.slice!(index,2)
@@ -60,8 +60,18 @@ class AIA::Directives
60
60
  value = parts.join
61
61
  if item.end_with?('?')
62
62
  AIA.config[item] = %w[1 y yea yes t true].include?(value.downcase)
63
+ elsif item.end_with?('_file')
64
+ if "STDOUT" == value.upcase
65
+ AIA.config[item] = STDOUT
66
+ elsif "STDERR" == value.upcase
67
+ AIA.config[item] = STDERR
68
+ else
69
+ AIA.config[item] = value.start_with?('/') ?
70
+ Pathname.new(value) :
71
+ Pathname.pwd + value
72
+ end
63
73
  else
64
- AIA.config[item] = "STDOUT" == value ? STDOUT : value
74
+ AIA.config[item] = value
65
75
  end
66
76
  end
67
77
 
data/lib/aia/main.rb CHANGED
@@ -43,11 +43,11 @@ class AIA::Main
43
43
 
44
44
  @logger.info(AIA.config) if AIA.config.debug? || AIA.config.verbose?
45
45
 
46
- @prompt = AIA::Prompt.new.prompt
47
-
48
46
 
49
47
  @directives_processor = AIA::Directives.new
50
48
 
49
+ @prompt = AIA::Prompt.new.prompt
50
+
51
51
  # TODO: still should verify that the tools are ion the $PATH
52
52
  # tools.class.verify_tools
53
53
  end
@@ -71,6 +71,8 @@ class AIA::Main
71
71
  end
72
72
 
73
73
 
74
+ # This will be recursive with the new options
75
+ # --next and --pipeline
74
76
  def call
75
77
  directive_output = @directives_processor.execute_my_directives
76
78
 
@@ -124,6 +126,17 @@ class AIA::Main
124
126
  speak result
125
127
  lets_chat
126
128
  end
129
+
130
+ return if AIA.config.next.empty? && AIA.config.pipeline.empty?
131
+
132
+ # Reset some config items to defaults
133
+ AIA.config.directives = []
134
+ AIA.config.next = AIA.config.pipeline.shift
135
+ AIA.config.arguments = [AIA.config.next, AIA.config.out_file.to_s]
136
+ AIA.config.next = ""
137
+
138
+ @prompt = AIA::Prompt.new.prompt
139
+ call # Recurse!
127
140
  end
128
141
 
129
142
 
@@ -0,0 +1,77 @@
1
+ # lib/aia/tools/llm.rb
2
+
3
+ require_relative 'backend_common'
4
+
5
+ class AIA::Llm < AIA::Tools
6
+ include AIA::BackendCommon
7
+
8
+ meta(
9
+ name: 'llm',
10
+ role: :backend,
11
+ desc: "llm on the command line using local and remote models",
12
+ url: "https://llm.datasette.io/",
13
+ install: "brew install llm",
14
+ )
15
+
16
+
17
+ DEFAULT_PARAMETERS = [
18
+ # "--verbose", # enable verbose logging (if applicable)
19
+ # Add default parameters here
20
+ ].join(' ').freeze
21
+
22
+ DIRECTIVES = %w[
23
+ api_key
24
+ frequency_penalty
25
+ max_tokens
26
+ model
27
+ presence_penalty
28
+ stop_sequence
29
+ temperature
30
+ top_p
31
+ ]
32
+ end
33
+
34
+ __END__
35
+
36
+ #########################################################
37
+
38
+ llm, version 0.13.1
39
+
40
+ Usage: llm [OPTIONS] COMMAND [ARGS]...
41
+
42
+ Access large language models from the command-line
43
+
44
+ Documentation: https://llm.datasette.io/
45
+
46
+ To get started, obtain an OpenAI key and set it like this:
47
+
48
+ $ llm keys set openai
49
+ Enter key: ...
50
+
51
+ Then execute a prompt like this:
52
+
53
+ llm 'Five outrageous names for a pet pelican'
54
+
55
+ Options:
56
+ --version Show the version and exit.
57
+ --help Show this message and exit.
58
+
59
+ Commands:
60
+ prompt* Execute a prompt
61
+ aliases Manage model aliases
62
+ chat Hold an ongoing chat with a model.
63
+ collections View and manage collections of embeddings
64
+ embed Embed text and store or return the result
65
+ embed-models Manage available embedding models
66
+ embed-multi Store embeddings for multiple strings at once
67
+ install Install packages from PyPI into the same environment as LLM
68
+ keys Manage stored API keys for different models
69
+ logs Tools for exploring logged prompts and responses
70
+ models Manage available models
71
+ openai Commands for working directly with the OpenAI API
72
+ plugins List installed plugins
73
+ similar Return top N similar IDs from a collection
74
+ templates Manage stored prompt templates
75
+ uninstall Uninstall Python packages from the LLM environment
76
+
77
+
@@ -8,7 +8,7 @@ class AIA::Mods < AIA::Tools
8
8
  meta(
9
9
  name: 'mods',
10
10
  role: :backend,
11
- desc: 'AI on the command-line',
11
+ desc: 'GPT on the command line. Built for pipelines.',
12
12
  url: 'https://github.com/charmbracelet/mods',
13
13
  install: 'brew install mods',
14
14
  )
@@ -16,25 +16,35 @@ class AIA::Mods < AIA::Tools
16
16
 
17
17
  DEFAULT_PARAMETERS = [
18
18
  # "--no-cache", # do not save prompt and response
19
- "--no-limit" # no limit on input context
19
+ "--no-limit", # no limit on input context
20
+ "--quiet", # Quiet mode (hide the spinner while loading and stderr messages for success).
20
21
  ].join(' ').freeze
21
22
 
22
23
 
23
24
  DIRECTIVES = %w[
24
- api
25
- fanciness
26
- http-proxy
25
+ api
26
+ ask-model
27
+ continue
28
+ continue-last
29
+ fanciness
30
+ format-as
31
+ http-proxy
27
32
  max-retries
33
+ max-retries
34
+ max-tokens
28
35
  max-tokens
29
36
  model
30
37
  no-cache
31
38
  no-limit
32
- quiet
33
- raw
39
+ prompt
40
+ prompt-args
41
+ quiet
42
+ raw
34
43
  status-text
35
- temp
36
- title
37
- topp
44
+ temp
45
+ title
46
+ topp
47
+ word-wrap
38
48
  ]
39
49
  end
40
50
 
@@ -43,6 +53,8 @@ __END__
43
53
 
44
54
  ##########################################################
45
55
 
56
+ mods version 1.2.1 (Homebrew)
57
+
46
58
  GPT on the command line. Built for pipelines.
47
59
 
48
60
  Usage:
@@ -50,9 +62,11 @@ Usage:
50
62
 
51
63
  Options:
52
64
  -m, --model Default model (gpt-3.5-turbo, gpt-4, ggml-gpt4all-j...).
65
+ -M, --ask-model Ask which model to use with an interactive prompt.
53
66
  -a, --api OpenAI compatible REST API (openai, localai).
54
67
  -x, --http-proxy HTTP proxy to use for API requests.
55
68
  -f, --format Ask for the response to be formatted as markdown unless otherwise set.
69
+ --format-as
56
70
  -r, --raw Render output as raw text when connected to a TTY.
57
71
  -P, --prompt Include the prompt from the arguments and stdin, truncate stdin to specified number of lines.
58
72
  -p, --prompt-args Include the prompt from the arguments in the response.
@@ -61,14 +75,16 @@ Options:
61
75
  -l, --list Lists saved conversations.
62
76
  -t, --title Saves the current conversation with the given title.
63
77
  -d, --delete Deletes a saved conversation with the given title or ID.
78
+ --delete-older-than Deletes all saved conversations older than the specified duration. Valid units are: ns, us, µs, μs, ms, s, m, h, d, w, mo, and y.
64
79
  -s, --show Show a saved conversation with the given title or ID.
65
- -S, --show-last Show a the last saved conversation.
80
+ -S, --show-last Show the last saved conversation.
66
81
  -q, --quiet Quiet mode (hide the spinner while loading and stderr messages for success).
67
82
  -h, --help Show help and exit.
68
83
  -v, --version Show version and exit.
69
84
  --max-retries Maximum number of times to retry API calls.
70
85
  --no-limit Turn off the client-side limit on the size of the input into the model.
71
86
  --max-tokens Maximum number of tokens in response.
87
+ --word-wrap Wrap formatted output at specific width (default is 80)
72
88
  --temp Temperature (randomness) of results, from 0.0 to 2.0.
73
89
  --topp TopP, an alternative to temperature that narrows response, from 0.0 to 1.0.
74
90
  --fanciness Your desired level of fanciness.
@@ -81,3 +97,4 @@ Options:
81
97
  Example:
82
98
  # Editorialize your video files
83
99
  ls ~/vids | mods -f "summarize each of these titles, group them by decade" | glow
100
+
data/main.just CHANGED
@@ -55,6 +55,11 @@ view_man_page: create_man_page
55
55
  create_man_page:
56
56
  rake man
57
57
 
58
+
59
+ # Generate the Documentation
60
+ gen_doc: create_man_page update_toc_in_readmen
61
+
62
+
58
63
  ##########################################
59
64
 
60
65
  # Tag the current commit, push it, then bump the version
data/man/aia.1 CHANGED
@@ -1,6 +1,6 @@
1
1
  .\" Generated by kramdown-man 1.0.1
2
2
  .\" https://github.com/postmodern/kramdown-man#readme
3
- .TH aia 1 "v0.5.11" AIA "User Manuals"
3
+ .TH aia 1 "v0.5.13" AIA "User Manuals"
4
4
  .SH NAME
5
5
  .PP
6
6
  aia \- command\-line interface for an AI assistant
@@ -78,9 +78,15 @@ Log FILEPATH \- default is \[Do]HOME\[sl]\.prompts\[sl]prompts\.log
78
78
  \fB\-m\fR, \fB\-\-\[lB]no\[rB]\-markdown\fR
79
79
  Format with Markdown \- default is true
80
80
  .TP
81
+ \fB\-n\fR, \fB\-\-next PROMPT\[ru]ID\fR
82
+ Specifies the next prompt ID to be processed, using the response from the previous prompt as context \- default is an empty string
83
+ .TP
81
84
  \fB\-o\fR, \fB\-\-\[lB]no\[rB]\-out\[ru]file\fR \fIPATH\[ru]TO\[ru]OUTPUT\[ru]FILE\fP
82
85
  Out FILENAME \- default is \.\[sl]temp\.md
83
86
  .TP
87
+ \fB\-\-pipeline PID1,PID2,PID3\fR
88
+ Specifies a pipeline of prompt IDs (PID) in which the response of the first prompt is fed into the second prompt as context, whose response is fed into the third as context, etc\. It is a comma separated list\. There is no artificial limit to the number of prompt IDs in the pipeline \- default is an empty list
89
+ .TP
84
90
  \fB\-p\fR, \fB\-\-prompts\[ru]dir\fR \fIPATH\[ru]TO\[ru]DIRECTORY\fP
85
91
  Directory containing the prompt files \- default is \[ti]\[sl]\.prompts
86
92
  .TP
@@ -141,6 +147,28 @@ Some directives are:
141
147
  .IP \(bu 2
142
148
  \[sl]\[sl]shell shell\[ru]command
143
149
  .RE
150
+ .SH Prompt Sequences
151
+ .PP
152
+ The \fB\-\-next\fR and \fB\-\-pipeline\fR command line options allow for the sequencing of prompts such that the first prompt\[cq]s response feeds into the second prompt\[cq]s context and so on\. Suppose you had a complex sequence of prompts with IDs one, two, three and four\. You would use the following \fBaia\fR command to process them in sequence:
153
+ .PP
154
+ \fBaia one \-\-pipeline two,three,four\fR
155
+ .PP
156
+ Notice that the value for the pipelined prompt IDs has no spaces\. This is so that the command line parser does not mistake one of the prompt IDs as a CLI option and issue an error\.
157
+ .SS Prompt Sequences Inside of a Prompt File
158
+ .PP
159
+ You can also use the \fBconfig\fR directive inside of a prompt file to specify a sequence\. Given the example above of 4 prompt IDs you could add this directive to the prompt file \fBone\.txt\fR
160
+ .PP
161
+ \fB\[sl]\[sl]config next two\fR
162
+ .PP
163
+ Then inside the prompt file \fBtwo\.txt\fR you could use this directive:
164
+ .PP
165
+ \fB\[sl]\[sl]config pipeline three,four\fR
166
+ .PP
167
+ or just
168
+ .PP
169
+ \fB\[sl]\[sl]config next three\fR
170
+ .PP
171
+ if you want to specify them one at a time\.
144
172
  .SH SEE ALSO
145
173
  .RS
146
174
  .IP \(bu 2
@@ -152,6 +180,11 @@ OpenAI Platform Documentation
152
180
  .UE
153
181
  and working with OpenAI models\.
154
182
  .IP \(bu 2
183
+ llm
184
+ .UR https:\[sl]\[sl]llm\.datasette\.io\[sl]
185
+ .UE
186
+ for more information on \fBllm\fR \- A CLI utility and Python library for interacting with Large Language Models, both via remote APIs and models that can be installed and run on your own machine\.
187
+ .IP \(bu 2
155
188
  mods
156
189
  .UR https:\[sl]\[sl]github\.com\[sl]charmbracelet\[sl]mods
157
190
  .UE
data/man/aia.1.md CHANGED
@@ -1,4 +1,4 @@
1
- # aia 1 "v0.5.11" AIA "User Manuals"
1
+ # aia 1 "v0.5.13" AIA "User Manuals"
2
2
 
3
3
  ## NAME
4
4
 
@@ -82,9 +82,15 @@ The aia command-line tool is an interface for interacting with an AI model backe
82
82
  `-m`, `--[no]-markdown`
83
83
  : Format with Markdown - default is true
84
84
 
85
+ `-n`, `--next PROMPT_ID`
86
+ : Specifies the next prompt ID to be processed, using the response from the previous prompt as context - default is an empty string
87
+
85
88
  `-o`, `--[no]-out_file` *PATH_TO_OUTPUT_FILE*
86
89
  : Out FILENAME - default is ./temp.md
87
90
 
91
+ `--pipeline PID1,PID2,PID3`
92
+ : Specifies a pipeline of prompt IDs (PID) in which the response of the first prompt is fed into the second prompt as context, whose response is fed into the third as context, etc. It is a comma separated list. There is no artificial limit to the number of prompt IDs in the pipeline - default is an empty list
93
+
88
94
  `-p`, `--prompts_dir` *PATH_TO_DIRECTORY*
89
95
  : Directory containing the prompt files - default is ~/.prompts
90
96
 
@@ -141,11 +147,37 @@ Some directives are:
141
147
  - //ruby ruby_code
142
148
  - //shell shell_command
143
149
 
150
+ ## Prompt Sequences
151
+
152
+ The `--next` and `--pipeline` command line options allow for the sequencing of prompts such that the first prompt's response feeds into the second prompt's context and so on. Suppose you had a complex sequence of prompts with IDs one, two, three and four. You would use the following `aia` command to process them in sequence:
153
+
154
+ `aia one --pipeline two,three,four`
155
+
156
+ Notice that the value for the pipelined prompt IDs has no spaces. This is so that the command line parser does not mistake one of the promp IDs as a CLI option and issue an error.
157
+
158
+ ### Prompt Sequences Inside of a Prompt File
159
+
160
+ You can also use the `config` directive inside of a prompt file to specify a sequence. Given the example above of 4 prompt IDs you could add this directive to the prompt file `one.txt`
161
+
162
+ `//config next two`
163
+
164
+ Then inside the prompt file `two.txt` you could use this directive:
165
+
166
+ `//config pipeline three,four`
167
+
168
+ or just
169
+
170
+ `//config next three`
171
+
172
+ if you want to specify them one at a time.
173
+
144
174
 
145
175
  ## SEE ALSO
146
176
 
147
177
  - [OpenAI Platform Documentation](https://platform.openai.com/docs/overview) for more information on [obtaining access tokens](https://platform.openai.com/account/api-keys) and working with OpenAI models.
148
178
 
179
+ - [llm](https://llm.datasette.io/) for more information on `llm` - A CLI utility and Python library for interacting with Large Language Models, both via remote APIs and models that can be installed and run on your own machine.
180
+
149
181
  - [mods](https://github.com/charmbracelet/mods) for more information on `mods` - AI for the command line, built for pipelines. LLM based AI is really good at interpreting the output of commands and returning the results in CLI friendly text formats like Markdown. Mods is a simple tool that makes it super easy to use AI on the command line and in your pipelines. Mods works with [OpenAI](https://platform.openai.com/account/api-keys) and [LocalAI](https://github.com/go-skynet/LocalAI)
150
182
 
151
183
  - [sgpt](https://github.com/tbckr/sgpt) (aka shell-gpt) is a powerful command-line interface (CLI) tool designed for seamless interaction with OpenAI models directly from your terminal. Effortlessly run queries, generate shell commands or code, create images from text, and more, using simple commands. Streamline your workflow and enhance productivity with this powerful and user-friendly CLI tool.
metadata CHANGED
@@ -1,14 +1,14 @@
1
1
  --- !ruby/object:Gem::Specification
2
2
  name: aia
3
3
  version: !ruby/object:Gem::Version
4
- version: 0.5.11
4
+ version: 0.5.13
5
5
  platform: ruby
6
6
  authors:
7
7
  - Dewayne VanHoozer
8
8
  autorequire:
9
9
  bindir: bin
10
10
  cert_chain: []
11
- date: 2024-02-19 00:00:00.000000000 Z
11
+ date: 2024-03-03 00:00:00.000000000 Z
12
12
  dependencies:
13
13
  - !ruby/object:Gem::Dependency
14
14
  name: hashie
@@ -220,18 +220,17 @@ dependencies:
220
220
  - - ">="
221
221
  - !ruby/object:Gem::Version
222
222
  version: '0'
223
- description: |
224
- A command-line AI Assistante (aia) that provides pre-compositional
225
- template prompt management to various backend gen-AI processes.
226
- Complete shell integration allows a prompt to access system
227
- environment variables and execut shell commands as part of the
228
- prompt content. In addition full embedded Ruby support is provided
229
- given even more dynamic prompt conditional content. It is a
230
- generalized power house that rivals specialized gen-AI tools. aia
231
- currently supports "mods" and "sgpt" CLI tools. aia uses "ripgrep"
232
- and "fzf" CLI utilities to search for and select prompt files to
233
- send to the backend gen-AI tool along with supported context
234
- files.
223
+ description: A command-line AI Assistant (aia) that provides pre-compositional template
224
+ prompt management to various backend gen-AI processes such as llm, mods and sgpt, which
225
+ support processing of prompts both via remote API calls and by keeping everything
226
+ local through the use of locally managed models and the LocalAI API. Complete shell
227
+ integration allows a prompt to access system environment variables and execute shell
228
+ commands as part of the prompt content. In addition, full embedded Ruby support
229
+ is provided, giving even more dynamic prompt conditional content. It is a generalized
230
+ power house that rivals specialized gen-AI tools. aia currently supports "mods"
231
+ and "sgpt" CLI tools. aia uses "ripgrep" and "fzf" CLI utilities to search for
232
+ and select prompt files to send to the backend gen-AI tool along with supported
233
+ context files.
235
234
  email:
236
235
  - dvanhoozer@gmail.com
237
236
  executables:
@@ -268,6 +267,7 @@ files:
268
267
  - lib/aia/tools/editor.rb
269
268
  - lib/aia/tools/fzf.rb
270
269
  - lib/aia/tools/glow.rb
270
+ - lib/aia/tools/llm.rb
271
271
  - lib/aia/tools/mods.rb
272
272
  - lib/aia/tools/sgpt.rb
273
273
  - lib/aia/tools/subl.rb