rubber-ducky 1.1.5.tar.gz → 1.2.0.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,72 @@
+ Metadata-Version: 2.4
+ Name: rubber-ducky
+ Version: 1.2.0
+ Summary: For developers who can never remember the right bash command
+ Requires-Python: >=3.10
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: colorama>=0.4.6
+ Requires-Dist: fastapi>=0.115.11
+ Requires-Dist: ollama>=0.6.0
+ Requires-Dist: openai>=1.60.2
+ Requires-Dist: prompt-toolkit>=3.0.48
+ Requires-Dist: rich>=13.9.4
+ Requires-Dist: termcolor>=2.5.0
+ Dynamic: license-file
+
+ # Rubber Ducky
+
+ Rubber Ducky is an inline terminal companion that turns natural language prompts into runnable shell commands. Paste multi-line context, get a suggested command, and run it without leaving your terminal.
+
+ ## Quick Start
+
+ | Action | Command |
+ | --- | --- |
+ | Install globally | `uv tool install rubber-ducky` |
+ | Run once | `uvx rubber-ducky -- --help` |
+ | Local install | `uv pip install rubber-ducky` |
+
+ Requirements:
+ - [Ollama](https://ollama.com) running locally
+ - Model available via Ollama (default: `qwen3-coder:480b-cloud`, install with `ollama pull qwen3-coder:480b-cloud`)
+
+ ## Usage
+
+ ```
+ ducky                  # interactive inline session
+ ducky --directory src  # preload code from a directory
+ ducky --model llama3   # use a different Ollama model
+ ```
+
+ Both `ducky` and `rubber-ducky` executables map to the same CLI, so `uvx rubber-ducky -- <args>` works as well.
+
+ ### Inline Session (default)
+
+ Launching `ducky` with no arguments opens the inline interface:
+ - **Enter** submits; **Ctrl+J** inserts a newline (helpful when crafting multi-line prompts).
+ - **Ctrl+R** re-runs the last suggested command.
+ - Prefix any line with **`!`** (e.g., `!ls -la`) to run a shell command immediately.
+ - Arrow keys browse prompt history, backed by `~/.ducky/prompt_history`.
+ - Every prompt, assistant response, and executed command is logged to `~/.ducky/conversation.log`.
+ - Press **Ctrl+D** on an empty line to exit.
+ - Non-interactive runs such as `cat prompt.txt | ducky` print one response (and suggested command) before exiting; if a TTY is available you'll be asked whether to run the suggested command immediately.
+ - If `prompt_toolkit` is unavailable in your environment, Rubber Ducky falls back to a basic input loop (no history or shortcuts); install `prompt-toolkit>=3.0.48` to unlock the richer UI.
+
+ `ducky --directory <path>` streams the contents of the provided directory to the assistant the next time you submit a prompt (the directory is read once at startup).
+
+ ## Development (uv)
+
+ ```
+ uv sync
+ uv run ducky --help
+ ```
+
+ `uv sync` creates a virtual environment and installs dependencies defined in `pyproject.toml` / `uv.lock`.
+
+ ## Telemetry & Storage
+
+ Rubber Ducky stores:
+ - `~/.ducky/prompt_history`: readline-compatible history file.
+ - `~/.ducky/conversation.log`: JSON lines with timestamps for prompts, assistant messages, and shell executions.
+
+ No other telemetry is collected; delete the directory if you want a fresh slate.
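The log format called out above is simple enough to post-process. A minimal sketch for inspecting it, assuming the field names (`role`, `content`, `command`, `timestamp`) used by the `ConversationLogger` that appears in `ducky/ducky.py` later in this diff:

```
import json
from pathlib import Path

# Print a compact view of ~/.ducky/conversation.log (JSON Lines).
# "shell" entries carry "command" instead of "content".
log_path = Path.home() / ".ducky" / "conversation.log"
for line in log_path.read_text(encoding="utf-8").splitlines():
    entry = json.loads(line)
    body = entry.get("content") or entry.get("command", "")
    print(f"{entry['timestamp']} [{entry['role']}] {body}")
```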
@@ -0,0 +1,56 @@
+ # Rubber Ducky
+
+ Rubber Ducky is an inline terminal companion that turns natural language prompts into runnable shell commands. Paste multi-line context, get a suggested command, and run it without leaving your terminal.
+
+ ## Quick Start
+
+ | Action | Command |
+ | --- | --- |
+ | Install globally | `uv tool install rubber-ducky` |
+ | Run once | `uvx rubber-ducky -- --help` |
+ | Local install | `uv pip install rubber-ducky` |
+
+ Requirements:
+ - [Ollama](https://ollama.com) running locally
+ - Model available via Ollama (default: `qwen3-coder:480b-cloud`, install with `ollama pull qwen3-coder:480b-cloud`)
+
+ ## Usage
+
+ ```
+ ducky                  # interactive inline session
+ ducky --directory src  # preload code from a directory
+ ducky --model llama3   # use a different Ollama model
+ ```
+
+ Both `ducky` and `rubber-ducky` executables map to the same CLI, so `uvx rubber-ducky -- <args>` works as well.
+
+ ### Inline Session (default)
+
+ Launching `ducky` with no arguments opens the inline interface:
+ - **Enter** submits; **Ctrl+J** inserts a newline (helpful when crafting multi-line prompts).
+ - **Ctrl+R** re-runs the last suggested command.
+ - Prefix any line with **`!`** (e.g., `!ls -la`) to run a shell command immediately.
+ - Arrow keys browse prompt history, backed by `~/.ducky/prompt_history`.
+ - Every prompt, assistant response, and executed command is logged to `~/.ducky/conversation.log`.
+ - Press **Ctrl+D** on an empty line to exit.
+ - Non-interactive runs such as `cat prompt.txt | ducky` print one response (and suggested command) before exiting; if a TTY is available you'll be asked whether to run the suggested command immediately.
+ - If `prompt_toolkit` is unavailable in your environment, Rubber Ducky falls back to a basic input loop (no history or shortcuts); install `prompt-toolkit>=3.0.48` to unlock the richer UI.
+
+ `ducky --directory <path>` streams the contents of the provided directory to the assistant the next time you submit a prompt (the directory is read once at startup).
+
+ ## Development (uv)
+
+ ```
+ uv sync
+ uv run ducky --help
+ ```
+
+ `uv sync` creates a virtual environment and installs dependencies defined in `pyproject.toml` / `uv.lock`.
+
+ ## Telemetry & Storage
+
+ Rubber Ducky stores:
+ - `~/.ducky/prompt_history`: readline-compatible history file.
+ - `~/.ducky/conversation.log`: JSON lines with timestamps for prompts, assistant messages, and shell executions.
+
+ No other telemetry is collected; delete the directory if you want a fresh slate.
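The piped, non-interactive mode described above is easy to exercise from a script. A hedged smoke test, assuming `ducky` is on `PATH` and an Ollama server is running locally:

```
import subprocess

# Pipe a prompt into ducky on stdin, mirroring `cat prompt.txt | ducky`.
# Because stdout is captured here (not a TTY), ducky prints the response
# and suggested command, then exits without offering to run anything.
result = subprocess.run(
    ["ducky"],
    input="find the five largest files under the current directory\n",
    capture_output=True,
    text=True,
)
print(result.stdout)
```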
@@ -0,0 +1 @@
+ from .ducky import ducky
@@ -0,0 +1,472 @@
+ from __future__ import annotations
+
+ import argparse
+ import asyncio
+ import json
+ import sys
+ from dataclasses import dataclass
+ from datetime import datetime
+ from pathlib import Path
+ from textwrap import dedent
+ from typing import Any, Dict, List
+
+ from ollama import AsyncClient
+ from contextlib import nullcontext
+
+ try: # prompt_toolkit is optional at runtime
+     from prompt_toolkit import PromptSession
+     from prompt_toolkit.history import FileHistory
+     from prompt_toolkit.key_binding import KeyBindings
+     from prompt_toolkit.patch_stdout import patch_stdout
+ except ImportError: # pragma: no cover - fallback mode
+     PromptSession = None # type: ignore[assignment]
+     FileHistory = None # type: ignore[assignment]
+     KeyBindings = None # type: ignore[assignment]
+
+     def patch_stdout() -> nullcontext:
+         return nullcontext()
+ from rich.console import Console
+
+
+ @dataclass
+ class AssistantResult:
+     content: str
+     command: str | None
+     thinking: str | None = None
+
+
+ @dataclass
+ class ShellResult:
+     command: str
+     stdout: str
+     stderr: str
+     returncode: int
+
+
+ HISTORY_DIR = Path.home() / ".ducky"
+ PROMPT_HISTORY_FILE = HISTORY_DIR / "prompt_history"
+ CONVERSATION_LOG_FILE = HISTORY_DIR / "conversation.log"
+ console = Console()
+
+
+ def ensure_history_dir() -> Path:
+     HISTORY_DIR.mkdir(parents=True, exist_ok=True)
+     return HISTORY_DIR
+
+
+ class ConversationLogger:
+     def __init__(self, log_path: Path) -> None:
+         self.log_path = log_path
+
+     def log_user(self, content: str) -> None:
+         if content.strip():
+             self._append({"role": "user", "content": content})
+
+     def log_assistant(self, content: str, command: str | None) -> None:
+         entry: Dict[str, Any] = {"role": "assistant", "content": content}
+         if command:
+             entry["suggested_command"] = command
+         self._append(entry)
+
+     def log_shell(self, result: ShellResult) -> None:
+         self._append(
+             {
+                 "role": "shell",
+                 "command": result.command,
+                 "stdout": result.stdout,
+                 "stderr": result.stderr,
+                 "returncode": result.returncode,
+             }
+         )
+
+     def _append(self, entry: Dict[str, Any]) -> None:
+         entry["timestamp"] = datetime.utcnow().isoformat()
+         with self.log_path.open("a", encoding="utf-8") as handle:
+             handle.write(json.dumps(entry, ensure_ascii=False))
+             handle.write("\n")
+
+
+ def print_shell_result(result: ShellResult) -> None:
+     printed = False
+     if result.stdout.strip():
+         console.print(result.stdout.rstrip(), highlight=False)
+         printed = True
+     if result.stderr.strip():
+         if printed:
+             console.print()
+         console.print("[stderr]", style="bold red")
+         console.print(result.stderr.rstrip(), style="red", highlight=False)
+         printed = True
+     if result.returncode != 0 or not printed:
+         suffix = (
+             f"(exit status {result.returncode})"
+             if result.returncode != 0
+             else "(command produced no output)"
+         )
+         console.print(suffix, style="yellow")
+
+
+ async def run_shell_and_print(
+     assistant: RubberDuck,
+     command: str,
+     logger: ConversationLogger | None = None,
+ ) -> None:
+     if not command:
+         console.print("No command provided.", style="yellow")
+         return
+     console.print(f"$ {command}", style="bold magenta")
+     result = await assistant.run_shell_command(command)
+     print_shell_result(result)
+     if logger:
+         logger.log_shell(result)
+
+
+ class RubberDuck:
+     def __init__(
+         self, model: str, quick: bool = False, command_mode: bool = False
+     ) -> None:
+         self.system_prompt = dedent(
+             """
+             You are a pair programming tool called Ducky or RubberDucky to help
+             developers debug, think through design decisions, and write code.
+             Help the user reason about their approach and provide feedback on
+             the code. Think step by step and ask clarifying questions if
+             needed.
+
+             When the user provides git status output or similar multi-line terminal
+             output, provide a single comprehensive response that addresses all the
+             changes rather than responding to each line individually.
+             """
+         ).strip()
+         self.client = AsyncClient()
+         self.model = model
+         self.quick = quick
+         self.command_mode = command_mode
+         self.messages: List[Dict[str, str]] = [
+             {"role": "system", "content": self.system_prompt}
+         ]
+         self.last_thinking: str | None = None
+
+     async def send_prompt(
+         self, prompt: str | None = None, code: str | None = None
+     ) -> AssistantResult:
+         user_content = (prompt or "").strip()
+
+         if code:
+             user_content = f"{user_content}\n\n{code}" if user_content else code
+
+         if self.quick and user_content:
+             user_content += ". Return a command and be extremely concise"
+
+         if self.command_mode:
+             instruction = (
+                 "Return a single bash command that accomplishes the task. "
+                 "Do not include explanations or formatting other than the command itself."
+             )
+             user_content = (
+                 f"{user_content}\n\n{instruction}" if user_content else instruction
+             )
+
+         user_message: Dict[str, str] = {"role": "user", "content": user_content}
+         self.messages.append(user_message)
+
+         response = await self.client.chat(
+             model=self.model,
+             messages=self.messages,
+             stream=False,
+             think=True,
+         )
+
+         assistant_message: Any | None = response.message
+         if assistant_message is None:
+             raise RuntimeError("No response received from the model.")
+
+         content = getattr(assistant_message, "content", "") or ""
+         thinking = getattr(assistant_message, "thinking", None)
+
+         self.messages.append({"role": "assistant", "content": content})
+
+         if thinking:
+             self.last_thinking = thinking
+
+         command = self._extract_command(content) if self.command_mode else None
+
+         return AssistantResult(content=content, command=command, thinking=thinking)
+
+     async def run_shell_command(self, command: str) -> ShellResult:
+         process = await asyncio.create_subprocess_shell(
+             command,
+             stdout=asyncio.subprocess.PIPE,
+             stderr=asyncio.subprocess.PIPE,
+         )
+         stdout, stderr = await process.communicate()
+         return ShellResult(
+             command=command,
+             stdout=stdout.decode(errors="replace"),
+             stderr=stderr.decode(errors="replace"),
+             returncode=process.returncode or 0,
+         )
+
+     def _extract_command(self, content: str) -> str | None:
+         lines = content.strip().splitlines()
+         if not lines:
+             return None
+
+         command_lines: List[str] = []
+
+         in_block = False
+         for line in lines:
+             stripped = line.strip()
+             if stripped.startswith("```"):
+                 if in_block:
+                     break
+                 in_block = True
+                 continue
+             if in_block:
+                 if stripped:
+                     command_lines = [stripped]
+                     break
+                 continue
+             if stripped:
+                 command_lines = [stripped]
+                 break
+
+         if not command_lines:
+             return None
+
+         command = command_lines[0]
+         first_semicolon = command.find(";")
+         if first_semicolon != -1:
+             command = command[:first_semicolon].strip()
+
+         return command or None
+
+
+ class InlineInterface:
+     def __init__(
+         self,
+         assistant: RubberDuck,
+         logger: ConversationLogger | None = None,
+         code: str | None = None,
+     ) -> None:
+         ensure_history_dir()
+         self.assistant = assistant
+         self.logger = logger
+         self.last_command: str | None = None
+         self.code = code
+         self._code_sent = False
+         self.session: PromptSession | None = None
+
+         if (
+             PromptSession is not None
+             and FileHistory is not None
+             and KeyBindings is not None
+         ):
+             self.session = PromptSession(
+                 message=">> ",
+                 multiline=True,
+                 history=FileHistory(str(PROMPT_HISTORY_FILE)),
+                 key_bindings=self._create_key_bindings(),
+             )
+
+     def _create_key_bindings(self) -> KeyBindings | None:
+         if KeyBindings is None: # pragma: no cover - fallback mode
+             return None
+
+         kb = KeyBindings()
+
+         @kb.add("enter")
+         def _(event) -> None:
+             buffer = event.current_buffer
+             buffer.validate_and_handle()
+
+         @kb.add("c-j")
+         def _(event) -> None:
+             event.current_buffer.insert_text("\n")
+
+         @kb.add("c-r")
+         def _(event) -> None:
+             event.app.exit(result="__RUN_LAST__")
+
+         return kb
+
+     async def run(self) -> None:
+         if self.session is None:
+             console.print(
+                 "prompt_toolkit not installed. Falling back to basic input (no history/shortcuts).",
+                 style="yellow",
+             )
+             await self._run_basic_loop()
+             return
+
+         console.print(
+             "Enter submits • Ctrl+J inserts newline • Ctrl+R reruns last command • '!cmd' runs shell • Ctrl+D exits",
+             style="dim",
+         )
+         while True:
+             try:
+                 with patch_stdout():
+                     text = await self.session.prompt_async()
+             except EOFError:
+                 console.print()
+                 console.print("Exiting.", style="dim")
+                 return
+             except KeyboardInterrupt:
+                 console.print()
+                 console.print("Interrupted. Press Ctrl+D to exit.", style="yellow")
+                 continue
+
+             if text == "__RUN_LAST__":
+                 await self._run_last_command()
+                 continue
+
+             await self._process_text(text)
+
+     async def _run_last_command(self) -> None:
+         if not self.last_command:
+             console.print("No suggested command available yet.", style="yellow")
+             return
+         await run_shell_and_print(self.assistant, self.last_command, logger=self.logger)
+
+     async def _process_text(self, text: str) -> None:
+         stripped = text.strip()
+         if not stripped:
+             return
+
+         if stripped.lower() in {":run", "/run"}:
+             await self._run_last_command()
+             return
+
+         if stripped.startswith("!"):
+             await run_shell_and_print(
+                 self.assistant, stripped[1:].strip(), logger=self.logger
+             )
+             return
+
+         result = await run_single_prompt(
+             self.assistant,
+             stripped,
+             code=self.code if not self._code_sent else None,
+             logger=self.logger,
+         )
+         if self.code and not self._code_sent:
+             self._code_sent = True
+         self.last_command = result.command
+
+     async def _run_basic_loop(self) -> None: # pragma: no cover - fallback path
+         while True:
+             try:
+                 text = await asyncio.to_thread(input, ">> ")
+             except EOFError:
+                 console.print()
+                 console.print("Exiting.", style="dim")
+                 return
+             except KeyboardInterrupt:
+                 console.print()
+                 console.print("Interrupted. Press Ctrl+D to exit.", style="yellow")
+                 continue
+
+             await self._process_text(text)
+
+
+ def read_files_from_dir(directory: str) -> str:
+     import os
+
+     files = os.listdir(directory)
+     code = ""
+     for file in files:
+         full_path = f"{directory}/{file}"
+         if not os.path.isfile(full_path):
+             continue
+         with open(full_path, "r", encoding="utf-8", errors="ignore") as handle:
+             code += handle.read()
+     return code
+
+
+ async def run_single_prompt(
+     rubber_ducky: RubberDuck,
+     prompt: str,
+     code: str | None = None,
+     logger: ConversationLogger | None = None,
+ ) -> AssistantResult:
+     if logger:
+         logger.log_user(prompt)
+     result = await rubber_ducky.send_prompt(prompt=prompt, code=code)
+     content = result.content or "(No content returned.)"
+     console.print(content, style="green", highlight=False)
+     if logger:
+         logger.log_assistant(content, result.command)
+     if result.command:
+         console.print("\nSuggested command:", style="cyan", highlight=False)
+         console.print(result.command, style="bold cyan", highlight=False)
+     return result
+
+
+ def confirm(prompt: str, default: bool = False) -> bool:
+     suffix = " [Y/n]: " if default else " [y/N]: "
+     try:
+         choice = input(prompt + suffix)
+     except EOFError:
+         return default
+     choice = choice.strip().lower()
+     if not choice:
+         return default
+     return choice in {"y", "yes"}
+
+
+ async def interactive_session(
+     rubber_ducky: RubberDuck,
+     logger: ConversationLogger | None = None,
+     code: str | None = None,
+ ) -> None:
+     ui = InlineInterface(rubber_ducky, logger=logger, code=code)
+     await ui.run()
+
+
+ async def ducky() -> None:
+     parser = argparse.ArgumentParser()
+     parser.add_argument(
+         "--directory", "-d", help="The directory to be processed", default=None
+     )
+     parser.add_argument(
+         "--model", "-m", help="The model to be used", default="qwen3-coder:480b-cloud"
+     )
+     args, _ = parser.parse_known_args()
+
+     ensure_history_dir()
+     logger = ConversationLogger(CONVERSATION_LOG_FILE)
+     rubber_ducky = RubberDuck(model=args.model, quick=False, command_mode=True)
+
+     code = read_files_from_dir(args.directory) if args.directory else None
+
+     piped_prompt: str | None = None
+     if not sys.stdin.isatty():
+         piped_prompt = sys.stdin.read()
+         piped_prompt = piped_prompt.strip() or None
+
+     if piped_prompt is not None:
+         if piped_prompt:
+             result = await run_single_prompt(
+                 rubber_ducky, piped_prompt, code=code, logger=logger
+             )
+             if (
+                 result.command
+                 and sys.stdout.isatty()
+                 and confirm("Run suggested command?")
+             ):
+                 await run_shell_and_print(
+                     rubber_ducky, result.command, logger=logger
+                 )
+         else:
+             console.print("No input received from stdin.", style="yellow")
+         return
+
+     await interactive_session(rubber_ducky, logger=logger, code=code)
+
+
+ def main() -> None:
+     asyncio.run(ducky())
+
+
+ if __name__ == "__main__":
+     main()
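The `_extract_command` heuristic above takes the first non-empty line of the model's reply (the first line inside a fenced block when the reply opens with a fence) and truncates at the first semicolon. A minimal check of that behavior; constructing `RubberDuck` only builds an Ollama `AsyncClient`, so nothing is sent to a server, and the model name here is an arbitrary placeholder:

```
from ducky.ducky import RubberDuck

duck = RubberDuck(model="llama3", command_mode=True)

# Build a fenced reply without confusing this Markdown code fence.
fence = chr(96) * 3  # three backticks
reply = f"{fence}bash\ndu -sh * ; echo done\n{fence}"

# The first line inside the block wins; text after the semicolon is dropped.
assert duck._extract_command(reply) == "du -sh *"

# A bare reply: the first non-empty line is taken as-is.
assert duck._extract_command("ls -la\nsome explanation") == "ls -la"
```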
@@ -0,0 +1,28 @@
+ [project]
+ name = "rubber-ducky"
+ version = "1.2.0"
+ description = "For developers who can never remember the right bash command"
+ readme = "README.md"
+ requires-python = ">=3.10"
+ dependencies = [
+     "colorama>=0.4.6",
+     "fastapi>=0.115.11",
+     "ollama>=0.6.0",
+     "openai>=1.60.2",
+     "prompt-toolkit>=3.0.48",
+     "rich>=13.9.4",
+     "termcolor>=2.5.0",
+ ]
+
+ [project.scripts]
+ ducky = "ducky.ducky:main"
+ rubber-ducky = "ducky.ducky:main"
+
+ [build-system]
+ requires = ["setuptools>=68", "wheel"]
+ build-backend = "setuptools.build_meta"
+
+ [tool.uv]
+ dev-dependencies = [
+     "ruff>=0.9.4",
+ ]
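The `[project.scripts]` table above points both console scripts at the same function, so calling it directly is equivalent to running either executable. A sketch:

```
# Both `ducky` and `rubber-ducky` resolve to ducky.ducky:main,
# which simply runs asyncio.run(ducky()).
from ducky.ducky import main

if __name__ == "__main__":
    main()
```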
@@ -0,0 +1,72 @@
+ Metadata-Version: 2.4
+ Name: rubber-ducky
+ Version: 1.2.0
+ Summary: For developers who can never remember the right bash command
+ Requires-Python: >=3.10
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: colorama>=0.4.6
+ Requires-Dist: fastapi>=0.115.11
+ Requires-Dist: ollama>=0.6.0
+ Requires-Dist: openai>=1.60.2
+ Requires-Dist: prompt-toolkit>=3.0.48
+ Requires-Dist: rich>=13.9.4
+ Requires-Dist: termcolor>=2.5.0
+ Dynamic: license-file
+
+ # Rubber Ducky
+
+ Rubber Ducky is an inline terminal companion that turns natural language prompts into runnable shell commands. Paste multi-line context, get a suggested command, and run it without leaving your terminal.
+
+ ## Quick Start
+
+ | Action | Command |
+ | --- | --- |
+ | Install globally | `uv tool install rubber-ducky` |
+ | Run once | `uvx rubber-ducky -- --help` |
+ | Local install | `uv pip install rubber-ducky` |
+
+ Requirements:
+ - [Ollama](https://ollama.com) running locally
+ - Model available via Ollama (default: `qwen3-coder:480b-cloud`, install with `ollama pull qwen3-coder:480b-cloud`)
+
+ ## Usage
+
+ ```
+ ducky                  # interactive inline session
+ ducky --directory src  # preload code from a directory
+ ducky --model llama3   # use a different Ollama model
+ ```
+
+ Both `ducky` and `rubber-ducky` executables map to the same CLI, so `uvx rubber-ducky -- <args>` works as well.
+
+ ### Inline Session (default)
+
+ Launching `ducky` with no arguments opens the inline interface:
+ - **Enter** submits; **Ctrl+J** inserts a newline (helpful when crafting multi-line prompts).
+ - **Ctrl+R** re-runs the last suggested command.
+ - Prefix any line with **`!`** (e.g., `!ls -la`) to run a shell command immediately.
+ - Arrow keys browse prompt history, backed by `~/.ducky/prompt_history`.
+ - Every prompt, assistant response, and executed command is logged to `~/.ducky/conversation.log`.
+ - Press **Ctrl+D** on an empty line to exit.
+ - Non-interactive runs such as `cat prompt.txt | ducky` print one response (and suggested command) before exiting; if a TTY is available you'll be asked whether to run the suggested command immediately.
+ - If `prompt_toolkit` is unavailable in your environment, Rubber Ducky falls back to a basic input loop (no history or shortcuts); install `prompt-toolkit>=3.0.48` to unlock the richer UI.
+
+ `ducky --directory <path>` streams the contents of the provided directory to the assistant the next time you submit a prompt (the directory is read once at startup).
+
+ ## Development (uv)
+
+ ```
+ uv sync
+ uv run ducky --help
+ ```
+
+ `uv sync` creates a virtual environment and installs dependencies defined in `pyproject.toml` / `uv.lock`.
+
+ ## Telemetry & Storage
+
+ Rubber Ducky stores:
+ - `~/.ducky/prompt_history`: readline-compatible history file.
+ - `~/.ducky/conversation.log`: JSON lines with timestamps for prompts, assistant messages, and shell executions.
+
+ No other telemetry is collected; delete the directory if you want a fresh slate.
@@ -1,6 +1,6 @@
  LICENSE
  README.md
- setup.py
+ pyproject.toml
  ducky/__init__.py
  ducky/ducky.py
  rubber_ducky.egg-info/PKG-INFO
@@ -1,2 +1,3 @@
  [console_scripts]
  ducky = ducky.ducky:main
+ rubber-ducky = ducky.ducky:main
@@ -0,0 +1,7 @@
+ colorama>=0.4.6
+ fastapi>=0.115.11
+ ollama>=0.6.0
+ openai>=1.60.2
+ prompt-toolkit>=3.0.48
+ rich>=13.9.4
+ termcolor>=2.5.0
@@ -1,71 +0,0 @@
- Metadata-Version: 2.2
- Name: rubber-ducky
- Version: 1.1.5
- Summary: AI Companion for Pair Programming
- Home-page: https://github.com/ParthSareen/ducky
- Author: Parth Sareen
- Author-email: psareen@uwaterloo.ca
- License: MIT
- Description-Content-Type: text/markdown
- License-File: LICENSE
- Requires-Dist: ollama
- Dynamic: author
- Dynamic: author-email
- Dynamic: description
- Dynamic: description-content-type
- Dynamic: home-page
- Dynamic: license
- Dynamic: requires-dist
- Dynamic: summary
-
- # rubber ducky
- <p align="center">
-     <img src="ducky_img.webp" alt="Ducky Image" width="200" height="200">
- </p>
-
- ## tl;dr
- - `pip install rubber-ducky`
- - Install ollama
- - `ollama pull codellama` (first time and then you can just have application in background)
- - There are probably other dependencies which I forgot to put in setup.py sorry in advance.
- - Run with `ducky <path>` or `ducky <question>`
-
- ## Dependencies
-
- You will need Ollama installed on your machine. The model I use for this project is `codellama`.
-
- For the first installation you can run `ollama pull codellama` and it should pull the necessary binaries for you.
-
- Ollama is also great because it'll spin up a server which can run in the background and can even do automatic model switching as long as you have it installed.
-
- ## Usage
-
- Install through [pypi](https://pypi.org/project/rubber-ducky/):
-
- `pip install rubber-ducky` .
-
- ### Simple run
- `ducky`
-
- or
-
- `ducky <question>`
-
- or
-
- `ducky -f <path>`
-
-
- ### All options
- `ducky --file <path> --prompt <prompt> --directory <directory> --chain --model <model>`
-
- Where:
- - `--prompt` or `-p`: Custom prompt to be used
- - `--file` or `-f`: The file to be processed
- - `--directory` or `-d`: The directory to be processed
- - `--chain` or `-c`: Chain the output of the previous command to the next command
- - `--model` or `-m`: The model to be used (default is "codellama")
-
-
- ## Example output
- ![Screenshot of ducky](image.png)
@@ -1,51 +0,0 @@
- # rubber ducky
- <p align="center">
-     <img src="ducky_img.webp" alt="Ducky Image" width="200" height="200">
- </p>
-
- ## tl;dr
- - `pip install rubber-ducky`
- - Install ollama
- - `ollama pull codellama` (first time and then you can just have application in background)
- - There are probably other dependencies which I forgot to put in setup.py sorry in advance.
- - Run with `ducky <path>` or `ducky <question>`
-
- ## Dependencies
-
- You will need Ollama installed on your machine. The model I use for this project is `codellama`.
-
- For the first installation you can run `ollama pull codellama` and it should pull the necessary binaries for you.
-
- Ollama is also great because it'll spin up a server which can run in the background and can even do automatic model switching as long as you have it installed.
-
- ## Usage
-
- Install through [pypi](https://pypi.org/project/rubber-ducky/):
-
- `pip install rubber-ducky` .
-
- ### Simple run
- `ducky`
-
- or
-
- `ducky <question>`
-
- or
-
- `ducky -f <path>`
-
-
- ### All options
- `ducky --file <path> --prompt <prompt> --directory <directory> --chain --model <model>`
-
- Where:
- - `--prompt` or `-p`: Custom prompt to be used
- - `--file` or `-f`: The file to be processed
- - `--directory` or `-d`: The directory to be processed
- - `--chain` or `-c`: Chain the output of the previous command to the next command
- - `--model` or `-m`: The model to be used (default is "codellama")
-
-
- ## Example output
- ![Screenshot of ducky](image.png)
@@ -1 +0,0 @@
- from .ducky import ducky
@@ -1,93 +0,0 @@
- import argparse
- import asyncio
- from ollama import AsyncClient
-
-
- class RubberDuck:
-     def __init__(self, model: str, quick: bool = False) -> None:
-         self.system_prompt = """You are a pair progamming tool called Ducky or RubberDucky to help developers debug, think through design, and write code.
-         Help the user think through their approach and provide feedback on the code. Think step by step and ask clarifying questions if needed.
-         If asked """
-         self.client = AsyncClient()
-         self.model = model
-         self.quick = quick
-
-     async def call_llm(self, prompt: str | None = None) -> None:
-         chain = False if prompt else True
-
-         if prompt is None:
-             prompt = input("\nEnter your prompt (or press Enter for default review): ")
-
-         if self.quick:
-             prompt += ". Return a command and be extremely concise"
-
-         responses = [self.system_prompt]
-         while True:
-             context_prompt = "\n".join(responses) + "\n" + prompt
-             stream = await self.client.generate(model=self.model, prompt=context_prompt, stream=True)
-             response_text = ""
-             async for chunk in stream:
-                 if 'response' in chunk:
-                     print(chunk['response'], end='', flush=True)
-                     response_text += chunk['response']
-             print()
-             responses.append(response_text)
-             if not chain:
-                 break
-             prompt = input("\n>> ")
-
-
- def read_files_from_dir(directory: str) -> str:
-     import os
-
-     files = os.listdir(directory)
-     code = ""
-     for file in files:
-         code += open(directory + "/" + file).read()
-     return code
-
-
- async def ducky() -> None:
-     parser = argparse.ArgumentParser()
-     parser.add_argument("question", nargs="*", help="Direct question to ask", default=None)
-     parser.add_argument("--prompt", "-p", help="Custom prompt to be used", default=None)
-     parser.add_argument("--file", "-f", help="The file to be processed", default=None)
-     parser.add_argument("--directory", "-d", help="The directory to be processed", default=None)
-     parser.add_argument("--quick", "-q", help="Quick mode", default=False)
-     parser.add_argument(
-         "--chain",
-         "-c",
-         help="Chain the output of the previous command to the next command",
-         action="store_true",
-         default=False,
-     )
-     parser.add_argument(
-         "--model", "-m", help="The model to be used", default="qwen2.5-coder"
-     )
-     args, _ = parser.parse_known_args()
-
-     rubber_ducky = RubberDuck(model=args.model, quick=args.quick)
-
-     # Handle direct question from CLI
-     if args.question:
-         question = " ".join(args.question)
-         await rubber_ducky.call_llm(prompt=question)
-         return
-
-     # Handle interactive mode (no file/directory specified)
-     if args.file is None and args.directory is None:
-         await rubber_ducky.call_llm(prompt=args.prompt)
-         return
-
-     # Get code from file or directory
-     code = (open(args.file).read() if args.file
-             else read_files_from_dir(args.directory))
-
-     await rubber_ducky.call_llm(code=code, prompt=args.prompt)
-
-
- def main():
-     asyncio.run(ducky())
-
- if __name__ == "__main__":
-     main()
@@ -1,71 +0,0 @@
- Metadata-Version: 2.2
- Name: rubber-ducky
- Version: 1.1.5
- Summary: AI Companion for Pair Programming
- Home-page: https://github.com/ParthSareen/ducky
- Author: Parth Sareen
- Author-email: psareen@uwaterloo.ca
- License: MIT
- Description-Content-Type: text/markdown
- License-File: LICENSE
- Requires-Dist: ollama
- Dynamic: author
- Dynamic: author-email
- Dynamic: description
- Dynamic: description-content-type
- Dynamic: home-page
- Dynamic: license
- Dynamic: requires-dist
- Dynamic: summary
-
- # rubber ducky
- <p align="center">
-     <img src="ducky_img.webp" alt="Ducky Image" width="200" height="200">
- </p>
-
- ## tl;dr
- - `pip install rubber-ducky`
- - Install ollama
- - `ollama pull codellama` (first time and then you can just have application in background)
- - There are probably other dependencies which I forgot to put in setup.py sorry in advance.
- - Run with `ducky <path>` or `ducky <question>`
-
- ## Dependencies
-
- You will need Ollama installed on your machine. The model I use for this project is `codellama`.
-
- For the first installation you can run `ollama pull codellama` and it should pull the necessary binaries for you.
-
- Ollama is also great because it'll spin up a server which can run in the background and can even do automatic model switching as long as you have it installed.
-
- ## Usage
-
- Install through [pypi](https://pypi.org/project/rubber-ducky/):
-
- `pip install rubber-ducky` .
-
- ### Simple run
- `ducky`
-
- or
-
- `ducky <question>`
-
- or
-
- `ducky -f <path>`
-
-
- ### All options
- `ducky --file <path> --prompt <prompt> --directory <directory> --chain --model <model>`
-
- Where:
- - `--prompt` or `-p`: Custom prompt to be used
- - `--file` or `-f`: The file to be processed
- - `--directory` or `-d`: The directory to be processed
- - `--chain` or `-c`: Chain the output of the previous command to the next command
- - `--model` or `-m`: The model to be used (default is "codellama")
-
-
- ## Example output
- ![Screenshot of ducky](image.png)
@@ -1 +0,0 @@
- ollama
@@ -1,25 +0,0 @@
- from setuptools import setup, find_packages
-
- with open('README.md', 'r', encoding='utf-8') as f:
-     long_description = f.read()
-
- setup(
-     name='rubber-ducky',
-     version='1.1.5',
-     description='AI Companion for Pair Programming',
-     long_description=long_description,
-     long_description_content_type='text/markdown',
-     url='https://github.com/ParthSareen/ducky',
-     author='Parth Sareen',
-     author_email='psareen@uwaterloo.ca',
-     license='MIT',
-     packages=find_packages(),
-     install_requires=[
-         'ollama',
-     ],
-     entry_points={
-         'console_scripts': [
-             'ducky=ducky.ducky:main',
-         ],
-     },
- )