rubber-ducky 1.5.0__py3-none-any.whl → 1.5.1__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,198 @@
1
+ Metadata-Version: 2.4
2
+ Name: rubber-ducky
3
+ Version: 1.5.1
4
+ Summary: Quick CLI do-it-all tool. Use natural language to spit out bash commands
5
+ Requires-Python: >=3.10
6
+ Description-Content-Type: text/markdown
7
+ License-File: LICENSE
8
+ Requires-Dist: colorama>=0.4.6
9
+ Requires-Dist: fastapi>=0.115.11
10
+ Requires-Dist: ollama>=0.6.0
11
+ Requires-Dist: openai>=1.60.2
12
+ Requires-Dist: prompt-toolkit>=3.0.48
13
+ Requires-Dist: rich>=13.9.4
14
+ Requires-Dist: termcolor>=2.5.0
15
+ Dynamic: license-file
16
+
17
+ # Rubber Ducky
18
+
19
+ Rubber Ducky is an inline terminal companion that turns natural language prompts into runnable shell commands. Paste multi-line context, get a suggested command, and run it without leaving your terminal.
20
+
21
+ ## Quick Start
22
+
23
+ | Action | Command |
24
+ | --- | --- |
25
+ | Install globally | `uv tool install rubber-ducky` |
26
+ | Run once | `uvx rubber-ducky -- --help` |
27
+ | Local install | `uv pip install rubber-ducky` |
28
+
29
+ Requirements:
30
+ - [Ollama](https://ollama.com) running locally, or access to Ollama cloud models
31
+ - Model available via Ollama (default: `glm-4.7:cloud`)
32
+
33
+ ## Usage
34
+
35
+ ```
36
+ ducky # interactive inline session
37
+ ducky --directory src # preload code from a directory
38
+ ducky --model qwen3 # use a different Ollama model
39
+ ducky --local # use local models with qwen3 default
40
+ ```
41
+
42
+ Both `ducky` and `rubber-ducky` executables map to the same CLI, so `uvx rubber-ducky -- <args>` works as well.
43
+
44
+ ### Inline Session (default)
45
+
46
+ Launching `ducky` with no arguments opens the inline interface:
47
+ - **Enter** submits; **Ctrl+J** inserts a newline (helpful when crafting multi-line prompts). Hitting **Enter on an empty prompt** reruns the latest suggested command; if none exists yet, it explains the most recent shell output.
48
+ - **Ctrl+R** re-runs the last suggested command.
49
+ - **Ctrl+S** copies the last suggested command to clipboard.
50
+ - Prefix any line with **`!`** (e.g., `!ls -la`) to run a shell command immediately.
51
+ - Arrow keys browse prompt history, backed by `~/.ducky/prompt_history`.
52
+ - Every prompt, assistant response, and executed command is logged to `~/.ducky/conversation.log`.
53
+ - Press **Ctrl+D** on an empty line to exit.
54
+ - Non-interactive runs such as `cat prompt.txt | ducky` print one response (and suggested command) before exiting; if a TTY is available you'll be asked whether to run the suggested command immediately.
55
+ - If `prompt_toolkit` is unavailable in your environment, Rubber Ducky falls back to a basic input loop (no history or shortcuts); install `prompt-toolkit>=3.0.48` to unlock the richer UI.
56
+
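A minimal sketch of the fallback described above, assuming a hypothetical `read_prompt` helper (the real input loop in `ducky/ducky.py` is more involved):

```python
# Sketch only: fall back to plain input() when prompt_toolkit is missing.
# HISTORY_FILE and read_prompt are illustrative names, not ducky's actual API.
from pathlib import Path

HISTORY_FILE = Path.home() / ".ducky" / "prompt_history"

try:
    from prompt_toolkit import PromptSession
    from prompt_toolkit.history import FileHistory

    _session = PromptSession(history=FileHistory(str(HISTORY_FILE)))

    def read_prompt() -> str:
        # Rich path: persistent history, key bindings, multi-line editing.
        return _session.prompt(">> ")
except ImportError:
    def read_prompt() -> str:
        # Basic path: no history file, no shortcuts.
        return input(">> ")
```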
57
+ `ducky --directory <path>` streams the contents of the provided directory to the assistant the next time you submit a prompt (the directory is read once at startup).
58
+
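The snippet below is a rough sketch of how such a preload could work; the file filters, size cap, and function name are assumptions rather than ducky's actual behaviour:

```python
# Sketch only: gather text from a directory so it can be prepended to the
# next prompt. Extensions and the size cap are illustrative assumptions.
from pathlib import Path

def load_directory_context(directory: str, max_chars: int = 200_000) -> str:
    chunks, total = [], 0
    for path in sorted(Path(directory).rglob("*")):
        if not path.is_file() or path.suffix not in {".py", ".sh", ".md", ".txt"}:
            continue
        text = path.read_text(errors="replace")
        total += len(text)
        if total > max_chars:
            break
        chunks.append(f"# File: {path}\n{text}")
    return "\n\n".join(chunks)
```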
59
+ ### Model Management
60
+
61
+ Rubber Ducky now supports easy switching between local and cloud models:
62
+ - **`/model`** - Interactive model selection between local and cloud models
63
+ - **`/local`** - List and select from local models (localhost:11434)
64
+ - **`/cloud`** - List and select from cloud models (ollama.com)
65
+ - Last used model is automatically saved and loaded on startup
66
+ - Type **`esc`** during model selection to cancel
67
+
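The exact format of `~/.ducky/config` is not documented here, so the JSON layout below is an assumption; this is only a sketch of how the last-used model could be persisted and restored:

```python
# Sketch only: remember the last selected model across sessions.
# Assumes ~/.ducky/config holds JSON; the real file may use another format.
import json
from pathlib import Path

CONFIG_PATH = Path.home() / ".ducky" / "config"

def load_config() -> dict:
    return json.loads(CONFIG_PATH.read_text()) if CONFIG_PATH.exists() else {}

def save_last_model(model: str) -> None:
    CONFIG_PATH.parent.mkdir(parents=True, exist_ok=True)
    config = load_config()
    config["last_model"] = model
    CONFIG_PATH.write_text(json.dumps(config, indent=2))

def last_model(default: str = "glm-4.7:cloud") -> str:
    return load_config().get("last_model", default)
```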
68
+ ### Additional Commands
69
+
70
+ - **`/help`** - Show all available commands and shortcuts
71
+ - **`/crumbs`** - List all saved crumb shortcuts
72
+ - **`/crumb <name>`** - Save the last AI-suggested command as a named crumb
73
+ - **`/crumb add <name> <command>`** - Manually add a crumb with a specific command
74
+ - **`/crumb del <name>`** - Delete a saved crumb
75
+ - **`<crumb-name>`** - Invoke a saved crumb (displays info and executes the command)
76
+ - **`/clear`** or **`/reset`** - Clear conversation history
77
+ - **`/run`** or **`:run`** - Re-run the last suggested command
78
+
79
+ ## Crumbs
80
+
81
+ Crumbs are saved command shortcuts that let you quickly reuse AI-generated bash commands without regenerating them each time. They are ideal for frequently used workflows and complex one-liners.
82
+
83
+ ### Saving Crumbs
84
+
85
+ When the AI suggests a command that you want to reuse:
86
+
87
+ 1. Get a command suggestion from ducky
88
+ 2. Save it immediately: `/crumb <name>`
89
+ 3. Example:
90
+ ```
91
+ >> How do I list all Ollama processes?
92
+ ...
93
+ Suggested command: ps aux | grep -i ollama | grep -v grep
94
+ >> /crumb ols
95
+ Saved crumb 'ols'!
96
+ Generating explanation...
97
+ Explanation added: Finds and lists all running Ollama processes.
98
+ ```
99
+
100
+ The crumb is saved with:
101
+ - The original command
102
+ - An AI-generated one-line explanation
103
+ - A timestamp
104
+
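As a rough illustration of that record shape, a hypothetical save helper might look like the sketch below (the real logic lives in `ducky/crumb.py` and may differ):

```python
# Sketch only: persist a crumb with the fields listed above.
# Helper name and serialization details are assumptions.
import json
from datetime import datetime, timezone
from pathlib import Path

CRUMBS_PATH = Path.home() / ".ducky" / "crumbs.json"

def save_crumb(name: str, prompt: str, response: str,
               command: str, explanation: str) -> None:
    crumbs = json.loads(CRUMBS_PATH.read_text()) if CRUMBS_PATH.exists() else {}
    crumbs[name] = {
        "prompt": prompt,
        "response": response,
        "command": command,
        "explanation": explanation,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    CRUMBS_PATH.parent.mkdir(parents=True, exist_ok=True)
    CRUMBS_PATH.write_text(json.dumps(crumbs, indent=2))
```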
105
+ ### Invoking Crumbs
106
+
107
+ Simply type the crumb name in the REPL or use it as a CLI argument:
108
+
109
+ **In REPL:**
110
+ ```
111
+ >> ols
112
+
113
+ Crumb: ols
114
+ Explanation: Finds and lists all running Ollama processes.
115
+ Command: ps aux | grep -i ollama | grep -v grep
116
+
117
+ $ ps aux | grep -i ollama | grep -v grep
118
+ user123 12345 0.3 1.2 456789 98765 ? Sl 10:00 0:05 ollama serve
119
+ ```
120
+
121
+ **From CLI:**
122
+ ```bash
123
+ ducky ols # Runs the saved crumb and displays output
124
+ ```
125
+
126
+ When you invoke a crumb:
127
+ 1. Displays the crumb name, explanation, and command
128
+ 2. Automatically executes the command
129
+ 3. Shows the output
130
+
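A minimal sketch of that display-then-execute flow (illustrative only; the shipped implementation also handles missing crumbs, logging, and errors):

```python
# Sketch only: look up a crumb, show its details, then run its command.
# shell=True is used because crumbs store full shell pipelines.
import json
import subprocess
from pathlib import Path

CRUMBS_PATH = Path.home() / ".ducky" / "crumbs.json"

def run_crumb(name: str) -> int:
    crumb = json.loads(CRUMBS_PATH.read_text())[name]
    print(f"Crumb: {name}")
    print(f"Explanation: {crumb['explanation']}")
    print(f"Command: {crumb['command']}\n")
    print(f"$ {crumb['command']}")
    return subprocess.run(crumb["command"], shell=True).returncode
```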
131
+ ### Managing Crumbs
132
+
133
+ **List all crumbs:**
134
+ ```
135
+ >> /crumbs
136
+ ```
137
+
138
+ Output:
139
+ ```
140
+ Saved Crumbs
141
+ =============
142
+ ols | Finds and lists all running Ollama processes. | ps aux | grep -i ollama | grep -v grep
143
+ test | Run tests and build project | pytest && python build.py
144
+ deploy | Deploy to production | docker push app:latest
145
+ ```
146
+
147
+ **Manually add a crumb:**
148
+ ```
149
+ >> /crumb add deploy-prod docker build -t app:latest && docker push app:latest
150
+ ```
151
+
152
+ **Delete a crumb:**
153
+ ```
154
+ >> /crumb del ols
155
+ Deleted crumb 'ols'.
156
+ ```
157
+
158
+ ### Storage
159
+
160
+ Crumbs are stored in `~/.ducky/crumbs.json` as JSON. Each crumb includes:
161
+ - `prompt`: Original user prompt
162
+ - `response`: AI's full response
163
+ - `command`: The suggested bash command
164
+ - `explanation`: AI-generated one-line summary
165
+ - `created_at`: ISO timestamp
166
+
167
+ **Example:**
168
+ ```json
169
+ {
170
+ "ols": {
171
+ "prompt": "How do I list all Ollama processes?",
172
+ "response": "To list all running Ollama processes...",
173
+ "command": "ps aux | grep -i ollama | grep -v grep",
174
+ "explanation": "Finds and lists all running Ollama processes.",
175
+ "created_at": "2024-01-05T10:30:00.000000+00:00"
176
+ }
177
+ }
178
+ ```
179
+
180
+ Delete `~/.ducky/crumbs.json` to clear all saved crumbs.
181
+
182
+ ## Development (uv)
183
+
184
+ ```
185
+ uv sync
186
+ uv run ducky --help
187
+ ```
188
+
189
+ `uv sync` creates a virtual environment and installs dependencies defined in `pyproject.toml` / `uv.lock`.
190
+
191
+ ## Telemetry & Storage
192
+
193
+ Rubber Ducky stores:
194
+ - `~/.ducky/prompt_history`: readline-compatible history file.
195
+ - `~/.ducky/conversation.log`: JSON lines with timestamps for prompts, assistant messages, and shell executions.
196
+ - `~/.ducky/config`: User preferences including last selected model.
197
+
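Since the log is plain JSON lines, it is easy to inspect with a few lines of Python; the field names below are assumptions about the record shape, not a documented schema:

```python
# Sketch only: skim the tail of the conversation log.
# "timestamp", "role", and "content" are assumed field names.
import json
from pathlib import Path

LOG_PATH = Path.home() / ".ducky" / "conversation.log"

def tail_log(limit: int = 20) -> None:
    for line in LOG_PATH.read_text().splitlines()[-limit:]:
        record = json.loads(line)
        print(record.get("timestamp", "?"),
              record.get("role", "?"),
              str(record.get("content", ""))[:80])
```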
198
+ No other telemetry is collected; delete the directory if you want a fresh slate.
@@ -0,0 +1,13 @@
1
+ ducky/__init__.py,sha256=2vLhJxOuJ3lnIeg5rmF6xUvybUT5Qhjej6AS0BeBASY,60
2
+ ducky/config.py,sha256=Lh7xTUYh4i8Gxgrl0oTYadZB_72Wy2BKIqLCcDQduOA,2116
3
+ ducky/crumb.py,sha256=7BlyjD81-cZptYxQM97y6gOGdVDBF2qzxW0xbPqbspE,2693
4
+ ducky/ducky.py,sha256=n0KOBHpbeBuk8g0BoF2mdJia9sdpY9FUyYjN5gBxaH8,38915
5
+ examples/POLLING_USER_GUIDE.md,sha256=rMEAczZhpgyJ9BgwHkN-SKwSdyas8nlw_CjpV7SFOLA,10685
6
+ examples/mock-logs/info.txt,sha256=apJqEO__UM1R2_2x9MlQOA7XmxvLvbhRvOy-FAwrINo,258
7
+ examples/mock-logs/mock-logs.sh,sha256=zM2JSaCR1eCQLlMvXDWjFnpxZTqrMpnFRa_SgNLPmBk,1132
8
+ rubber_ducky-1.5.1.dist-info/licenses/LICENSE,sha256=gQ1rCmw18NqTk5GxG96F6vgyN70e1c4kcKUtWDwdNaE,1069
9
+ rubber_ducky-1.5.1.dist-info/METADATA,sha256=Gi8l6zNW3_CERWQO-b_lz5H5pZqyGuwFDTrC8fKEQ7A,6733
10
+ rubber_ducky-1.5.1.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
11
+ rubber_ducky-1.5.1.dist-info/entry_points.txt,sha256=WPnVUUNvWdMDcBlCo8JCzkLghGllMX5QVZyQghyq85Q,75
12
+ rubber_ducky-1.5.1.dist-info/top_level.txt,sha256=hid_mDkugR6XIeravFKuzcRPpuN_ylN3ejC_06Fmnb4,15
13
+ rubber_ducky-1.5.1.dist-info/RECORD,,
@@ -1,3 +1,2 @@
1
- crumbs
2
1
  ducky
3
2
  examples
crumbs/disk-usage/disk-usage.sh DELETED
@@ -1,12 +0,0 @@
1
- #!/usr/bin/env bash
2
-
3
- # Show disk usage with highlights
4
-
5
- echo "=== Disk Usage Overview ==="
6
- df -h 2>/dev/null | grep -E "(/|Filesystem)"
7
-
8
- echo -e "\n=== Detailed Disk Usage ==="
9
- du -h -d 2 . 2>/dev/null | sort -hr | head -20
10
-
11
- echo -e "\n=== Largest Files in Current Directory ==="
12
- find . -type f -not -path '*/\.git/*' -not -path '*/node_modules/*' -not -path '*/venv/*' -not -path '*/\__pycache__/*' -exec du -h {} + 2>/dev/null | sort -rh | head -10
crumbs/disk-usage/info.txt DELETED
@@ -1,3 +0,0 @@
1
- name: disk-usage
2
- type: shell
3
- description: Show disk usage with highlights on full drives
crumbs/git-log/git-log.sh DELETED
@@ -1,24 +0,0 @@
1
- #!/usr/bin/env bash
2
-
3
- # Show recent commit history with details
4
-
5
- if ! git rev-parse --git-dir > /dev/null 2>&1; then
6
- echo "Not a git repository."
7
- exit 1
8
- fi
9
-
10
- # Default to showing last 10 commits
11
- COMMIT_COUNT=10
12
-
13
- if [ -n "$1" ] && [[ "$1" =~ ^[0-9]+$ ]]; then
14
- COMMIT_COUNT=$1
15
- fi
16
-
17
- echo "=== Recent ${COMMIT_COUNT} Commits ==="
18
- git log --oneline -$COMMIT_COUNT
19
-
20
- echo -e "\n=== Detailed View of Last ${COMMIT_COUNT} Commits ==="
21
- git log -${COMMIT_COUNT} --pretty=format:"%h - %an, %ar : %s" --stat
22
-
23
- echo -e "\n=== Author Statistics ==="
24
- git shortlog -sn --all -${COMMIT_COUNT} 2>/dev/null
crumbs/git-log/info.txt DELETED
@@ -1,3 +0,0 @@
1
- name: git-log
2
- type: shell
3
- description: Show recent commit history with detailed information
crumbs/git-status/git-status.sh DELETED
@@ -1,21 +0,0 @@
1
- #!/usr/bin/env bash
2
-
3
- # Show comprehensive git status and recent activity
4
-
5
- echo "=== Git Status ==="
6
- git status --short 2>/dev/null || echo "Not a git repository."
7
-
8
- echo -e "\n=== Current Branch ==="
9
- git branch --show-current 2>/dev/null || echo "Not a git repository."
10
-
11
- echo -e "\n=== Last 3 Commits ==="
12
- git log --oneline -3 2>/dev/null || echo "No commits found."
13
-
14
- echo -e "\n=== Staged Changes (if any) ==="
15
- git diff --cached --stat 2>/dev/null
16
-
17
- echo -e "\n=== Unstaged Changes (if any) ==="
18
- git diff --stat 2>/dev/null
19
-
20
- echo -e "\n=== Untracked Files (if any) ==="
21
- git ls-files --others --exclude-standard 2>/dev/null | head -20
crumbs/git-status/info.txt DELETED
@@ -1,3 +0,0 @@
1
- name: git-status
2
- type: shell
3
- description: Show current git branch, uncommitted changes, and provide suggestions for next steps
crumbs/process-list/info.txt DELETED
@@ -1,3 +0,0 @@
1
- name: process-list
2
- type: shell
3
- description: Show running processes with key information
crumbs/process-list/process-list.sh DELETED
@@ -1,20 +0,0 @@
1
- #!/usr/bin/env bash
2
-
3
- # Show running processes with useful information
4
-
5
- echo "=== Running Processes (Top 20 by CPU) ==="
6
- ps aux | sort -rk 3,3 | head -21 | awk '{printf "%-8s %-6s %-8s %s\n", $1, $2, $3, $11}' | column -t
7
-
8
- echo -e "\n=== Running Processes (Top 20 by Memory) ==="
9
- ps aux | sort -rk 4,4 | head -21 | awk '{printf "%-8s %-6s %-8s %s\n", $1, $2, $4, $11}' | column -t
10
-
11
- echo -e "\n=== Process Counts by User ==="
12
- ps aux | awk '{print $1}' | sort | uniq -c | sort -rn | head -10
13
-
14
- echo -e "\n=== Check for Specific Processes ==="
15
- for proc in "node" "python" "java" "docker" "npm" "uv"; do
16
- count=$(pgrep -c "$proc" 2>/dev/null || echo "0")
17
- if [ "$count" -gt 0 ]; then
18
- echo "$proc: $count process(es)"
19
- fi
20
- done | sort
crumbs/recent-files/info.txt DELETED
@@ -1,3 +0,0 @@
1
- name: recent-files
2
- type: shell
3
- description: Show recently modified files in the current directory
crumbs/recent-files/recent-files.sh DELETED
@@ -1,13 +0,0 @@
1
- #!/usr/bin/env bash
2
-
3
- # Show recently modified files in current directory
4
- # Takes optional argument for number of files to show (default 20)
5
-
6
- FILE_COUNT=20
7
-
8
- if [ -n "$1" ] && [[ "$1" =~ ^[0-9]+$ ]]; then
9
- FILE_COUNT=$1
10
- fi
11
-
12
- echo "=== ${FILE_COUNT} Most Recently Modified Files ==="
13
- find . -type f -not -path '*/\.*' -not -path '*/node_modules/*' -not -path '*/\.git/*' -not -path '*/venv/*' -not -path '*/\__pycache__/*' -printf '%T@ %p\n' 2>/dev/null | sort -rn | head -${FILE_COUNT} | awk '{print strftime("%Y-%m-%d %H:%M:%S", $1), $2}'
crumbs/system-health/info.txt DELETED
@@ -1,3 +0,0 @@
1
- name: system-health
2
- type: shell
3
- description: Show system health metrics including CPU, memory, and load
crumbs/system-health/system-health.sh DELETED
@@ -1,58 +0,0 @@
1
- #!/usr/bin/env bash
2
-
3
- # Show system health metrics
4
- # Works on macOS and Linux
5
-
6
- detect_os() {
7
- if [[ "$OSTYPE" == "darwin"* ]]; then
8
- echo "macos"
9
- elif [[ "$OSTYPE" == "linux-gnu"* ]]; then
10
- echo "linux"
11
- else
12
- echo "unknown"
13
- fi
14
- }
15
-
16
- OS=$(detect_os)
17
-
18
- echo "=== System Health ==="
19
- echo "Platform: $OS"
20
- echo "Uptime: $(uptime)" | awk '{print $3, $4}'
21
- echo "Users logged in: $(who | wc -l | tr -d ' ')"
22
-
23
- echo -e "\n=== CPU Usage ==="
24
-
25
- if [ "$OS" == "macos" ]; then
26
- echo "Load averages (1m, 5m, 15m): $(sysctl -n vm.loadavg)"
27
- top -l 1 | grep "CPU usage"
28
- elif [ "$OS" == "linux" ]; then
29
- echo "Load averages (1m, 5m, 15m): $(uptime | awk -F'load average:' '{print $2}')"
30
- top -bn1 | grep "Cpu(s)"
31
- fi
32
-
33
- echo -e "\n=== Memory Usage ==="
34
-
35
- if [ "$OS" == "macos" ]; then
36
- # macOS memory
37
- echo "Memory Stats:"
38
- vm_stat | perl -ne '/page size of (\d+)/ and $ps=$1; /Pages\s+([^:]+)[^\d]+(\d+)/ and printf("%-16s % 16.2f MB\n", "$1:", $2 * $ps / 1048576);'
39
- elif [ "$OS" == "linux" ]; then
40
- free -h
41
- fi
42
-
43
- echo -e "\n=== Disk Space ==="
44
- df -h | grep -vE '^Filesystem|tmpfs|cdrom|devtmpfs'
45
-
46
- echo -e "\n=== Network Connections ==="
47
-
48
- if [ "$OS" == "macos" ]; then
49
- netstat -an | grep ESTABLISHED | wc -l | xargs echo "Active network connections:"
50
- elif [ "$OS" == "linux" ]; then
51
- ss -tun | grep ESTAB | wc -l | xargs echo "Active network connections:"
52
- fi
53
-
54
- echo -e "\n=== Top 5 Processes by CPU ==="
55
- ps aux | sort -rk 3,3 | head -11 | tail -10 | awk '{printf "%-10s %6s %s\n", $1, $3, $11}'
56
-
57
- echo -e "\n=== Top 5 Processes by Memory ==="
58
- ps aux | sort -rk 4,4 | head -11 | tail -10 | awk '{printf "%-10s %6s %s\n", $1, $4, $11}'
rubber_ducky-1.5.0.dist-info/METADATA DELETED
@@ -1,210 +0,0 @@
1
- Metadata-Version: 2.4
2
- Name: rubber-ducky
3
- Version: 1.5.0
4
- Summary: Quick CLI do-it-all tool. Use natural language to spit out bash commands
5
- Requires-Python: >=3.10
6
- Description-Content-Type: text/markdown
7
- License-File: LICENSE
8
- Requires-Dist: colorama>=0.4.6
9
- Requires-Dist: fastapi>=0.115.11
10
- Requires-Dist: ollama>=0.6.0
11
- Requires-Dist: openai>=1.60.2
12
- Requires-Dist: prompt-toolkit>=3.0.48
13
- Requires-Dist: rich>=13.9.4
14
- Requires-Dist: termcolor>=2.5.0
15
- Dynamic: license-file
16
-
17
- # Rubber Ducky
18
-
19
- Rubber Ducky is an inline terminal companion that turns natural language prompts into runnable shell commands. Paste multi-line context, get a suggested command, and run it without leaving your terminal.
20
-
21
- ## Quick Start
22
-
23
- | Action | Command |
24
- | --- | --- |
25
- | Install globally | `uv tool install rubber-ducky` |
26
- | Run once | `uvx rubber-ducky -- --help` |
27
- | Local install | `uv pip install rubber-ducky` |
28
-
29
- Requirements:
30
- - [Ollama](https://ollama.com) running locally
31
- - Model available via Ollama (default: `qwen3-coder:480b-cloud`, install with `ollama pull qwen3-coder:480b-cloud`)
32
-
33
- ## Usage
34
-
35
- ```
36
- ducky # interactive inline session
37
- ducky --directory src # preload code from a directory
38
- ducky --model qwen3 # use a different Ollama model
39
- ducky --local # use local models with gemma2:9b default
40
- ducky --poll log-crumb # start polling mode for a crumb
41
- ```
42
-
43
- Both `ducky` and `rubber-ducky` executables map to the same CLI, so `uvx rubber-ducky -- <args>` works as well.
44
-
45
- ### Inline Session (default)
46
-
47
- Launching `ducky` with no arguments opens the inline interface:
48
- - **Enter** submits; **Ctrl+J** inserts a newline (helpful when crafting multi-line prompts). Hitting **Enter on an empty prompt** reruns the latest suggested command; if none exists yet, it explains the most recent shell output.
49
- - **Ctrl+R** re-runs the last suggested command.
50
- - **Ctrl+S** copies the last suggested command to clipboard.
51
- - Prefix any line with **`!`** (e.g., `!ls -la`) to run a shell command immediately.
52
- - Arrow keys browse prompt history, backed by `~/.ducky/prompt_history`.
53
- - Every prompt, assistant response, and executed command is logged to `~/.ducky/conversation.log`.
54
- - Press **Ctrl+D** on an empty line to exit.
55
- - Non-interactive runs such as `cat prompt.txt | ducky` print one response (and suggested command) before exiting; if a TTY is available you'll be asked whether to run the suggested command immediately.
56
- - If `prompt_toolkit` is unavailable in your environment, Rubber Ducky falls back to a basic input loop (no history or shortcuts); install `prompt-toolkit>=3.0.48` to unlock the richer UI.
57
-
58
- `ducky --directory <path>` streams the contents of the provided directory to the assistant the next time you submit a prompt (the directory is read once at startup).
59
-
60
- ### Model Management
61
-
62
- Rubber Ducky now supports easy switching between local and cloud models:
63
- - **`/model`** - Interactive model selection between local and cloud models
64
- - **`/local`** - List and select from local models (localhost:11434)
65
- - **`/cloud`** - List and select from cloud models (ollama.com)
66
- - Last used model is automatically saved and loaded on startup
67
- - Type **`esc`** during model selection to cancel
68
-
69
- ### Additional Commands
70
-
71
- - **`/help`** - Show all available commands and shortcuts
72
- - **`/crumbs`** - List all available crumbs (default and user-created)
73
- - **`/clear`** or **`/reset`** - Clear conversation history
74
- - **`/poll <crumb>`** - Start polling session for a crumb
75
- - **`/poll <crumb> -i <interval>`** - Start polling with custom interval
76
- - **`/poll <crumb> -p <prompt>`** - Start polling with custom prompt
77
- - **`/stop-poll`** - Stop current polling session
78
- - **`/run`** or **`:run`** - Re-run the last suggested command
79
-
80
- ## Crumbs
81
-
82
- Crumbs are simple scripts that can be executed within Rubber Ducky. They are stored in `~/.ducky/crumbs/` (for user crumbs) and shipped with the package (default crumbs).
83
-
84
- Rubber Ducky ships with the following default crumbs:
85
-
86
- | Crumb | Description |
87
- |-------|-------------|
88
- | `git-status` | Show current git status and provide suggestions |
89
- | `git-log` | Show recent commit history with detailed information |
90
- | `recent-files` | Show recently modified files in current directory |
91
- | `disk-usage` | Show disk usage with highlights |
92
- | `system-health` | Show CPU, memory, and system load metrics |
93
- | `process-list` | Show running processes with analysis |
94
-
95
- **Tip:** Run `/crumbs` in interactive mode to see all available crumbs with descriptions and polling status.
96
-
97
- To use a crumb, simply mention it in your prompt:
98
- ```
99
- Can you use the git-status crumb to see what needs to be committed?
100
- ```
101
-
102
- **Note:** User-defined crumbs (in `~/.ducky/crumbs/`) override default crumbs with the same name.
103
-
104
- ### Creating Crumbs
105
-
106
- To create a new crumb:
107
-
108
- 1. Create a new directory in `~/.ducky/crumbs/` with your crumb name
109
- 2. Add an `info.txt` file with metadata:
110
- ```
111
- name: your-crumb-name
112
- type: shell
113
- description: Brief description of what this crumb does
114
- ```
115
- 3. Add your executable script file (e.g., `your-crumb-name.sh`)
116
- 4. Create a symbolic link in `~/.local/bin` to make it available as a command:
117
- ```bash
118
- ln -s ~/.ducky/crumbs/your-crumb-name/your-crumb-name.sh ~/.local/bin/your-crumb-name
119
- ```
120
-
121
- ### Polling Mode
122
-
123
- Crumbs can be configured for background polling, where the crumb script runs at intervals and the AI analyzes the output.
124
-
125
- **Enabling Polling in a Crumb:**
126
-
127
- Add polling configuration to your crumb's `info.txt`:
128
- ```
129
- name: log-crumb
130
- type: shell
131
- description: Fetch and analyze server logs
132
- poll: true
133
- poll_type: interval # "interval" (run repeatedly) or "continuous" (run once, tail output)
134
- poll_interval: 5 # seconds between polls
135
- poll_prompt: Analyze these logs for errors, warnings, or anomalies. Be concise.
136
- ```
137
-
138
- **Polling via CLI:**
139
-
140
- ```bash
141
- # Start polling with crumb's default configuration
142
- ducky --poll log-crumb
143
-
144
- # Override interval
145
- ducky --poll log-crumb --interval 10
146
-
147
- # Override prompt
148
- ducky --poll log-crumb --prompt "Extract only error messages"
149
- ```
150
-
151
- **Polling via Interactive Mode:**
152
-
153
- ```bash
154
- ducky
155
- >> /poll log-crumb # Use crumb defaults
156
- >> /poll log-crumb -i 10 # Override interval
157
- >> /poll log-crumb -p "Summarize" # Override prompt
158
- >> /stop-poll # Stop polling
159
- ```
160
-
161
- **Example Crumb with Polling:**
162
-
163
- Directory: `~/.ducky/crumbs/server-logs/`
164
-
165
- ```
166
- info.txt:
167
- name: server-logs
168
- type: shell
169
- description: Fetch and analyze server logs
170
- poll: true
171
- poll_type: interval
172
- poll_interval: 5
173
- poll_prompt: Analyze these logs for errors, warnings, or anomalies. Be concise.
174
-
175
- server-logs.sh:
176
- #!/bin/bash
177
- curl -s http://localhost:8080/logs | tail -50
178
- ```
179
-
180
- **Polling Types:**
181
-
182
- - **interval**: Run the crumb script at regular intervals (default)
183
- - **continuous**: Run the crumb once in the background and stream its output, analyzing periodically
184
-
185
- **Stopping Polling:**
186
-
187
- Press `Ctrl+C` at any time to stop polling. In interactive mode, you can also use `/stop-poll`.
188
-
189
- ## Documentation
190
-
191
- - **Polling Feature Guide**: See [examples/POLLING_USER_GUIDE.md](examples/POLLING_USER_GUIDE.md) for detailed instructions on creating and using polling crumbs
192
- - **Mock Log Crumb**: See [examples/mock-logs/](examples/mock-logs/) for an example polling crumb
193
-
194
- ## Development (uv)
195
-
196
- ```
197
- uv sync
198
- uv run ducky --help
199
- ```
200
-
201
- `uv sync` creates a virtual environment and installs dependencies defined in `pyproject.toml` / `uv.lock`.
202
-
203
- ## Telemetry & Storage
204
-
205
- Rubber Ducky stores:
206
- - `~/.ducky/prompt_history`: readline-compatible history file.
207
- - `~/.ducky/conversation.log`: JSON lines with timestamps for prompts, assistant messages, and shell executions.
208
- - `~/.ducky/config`: User preferences including last selected model.
209
-
210
- No other telemetry is collected; delete the directory if you want a fresh slate.
rubber_ducky-1.5.0.dist-info/RECORD DELETED
@@ -1,24 +0,0 @@
1
- crumbs/disk-usage/disk-usage.sh,sha256=paiyWTmvzJD2A7wHDU2aIHJVnNqNmBfNV33Os0q7UnQ,451
2
- crumbs/disk-usage/info.txt,sha256=nKESO2G0biA0fV7peHPk7XaxcJ-wz_RJOkSUXI0VTKA,89
3
- crumbs/git-log/git-log.sh,sha256=L41W6s-hfJ7FsKgyaS39CRI6tRkgUg-0tHb3Z1FyHqY,595
4
- crumbs/git-log/info.txt,sha256=G0SDE5nv9mLU5zRpiYLdiR81ox6XRVJhNgnpoFArqeI,92
5
- crumbs/git-status/git-status.sh,sha256=hXdZxFA8hiJoZ4P8Zv1KPJjwIEpPJG7xibjL2ggbp7o,633
6
- crumbs/git-status/info.txt,sha256=MgzzfF23muScvS_JihXlpny7unyOCG7hZSAVZf-hQ7c,127
7
- crumbs/process-list/info.txt,sha256=5KGBQ3zn7hlL3Pv7miAoa3UBLQ-vKD-aLew6h0xeBcQ,88
8
- crumbs/process-list/process-list.sh,sha256=cwqMqu-xCbCn_DZNCaboZrDwg8A_UOF-bU9D8wxz5X8,744
9
- crumbs/recent-files/info.txt,sha256=7y9SS_lvYgagaKkVaJp-rF9TEgnUWCe6nj-34FcCvb8,98
10
- crumbs/recent-files/recent-files.sh,sha256=xpLUYYbouYQzGQZTTVrcybpRYPuoaLdyHny2HoHJZFw,540
11
- crumbs/system-health/info.txt,sha256=jym4tR5Xy7hEdowlvpufQ4_mA4H88JfQYPGXDZBI9UU,104
12
- crumbs/system-health/system-health.sh,sha256=NlLK3Th-sJ13Op44fjjEgi7oPscDQUCDBbR1Gd8ghlE,1670
13
- ducky/__init__.py,sha256=2vLhJxOuJ3lnIeg5rmF6xUvybUT5Qhjej6AS0BeBASY,60
14
- ducky/config.py,sha256=Lh7xTUYh4i8Gxgrl0oTYadZB_72Wy2BKIqLCcDQduOA,2116
15
- ducky/ducky.py,sha256=ZCoPH9c0SXRVS0HzdKavlokGWhlWeoPKG-ezi5B_isA,49466
16
- examples/POLLING_USER_GUIDE.md,sha256=rMEAczZhpgyJ9BgwHkN-SKwSdyas8nlw_CjpV7SFOLA,10685
17
- examples/mock-logs/info.txt,sha256=apJqEO__UM1R2_2x9MlQOA7XmxvLvbhRvOy-FAwrINo,258
18
- examples/mock-logs/mock-logs.sh,sha256=zM2JSaCR1eCQLlMvXDWjFnpxZTqrMpnFRa_SgNLPmBk,1132
19
- rubber_ducky-1.5.0.dist-info/licenses/LICENSE,sha256=gQ1rCmw18NqTk5GxG96F6vgyN70e1c4kcKUtWDwdNaE,1069
20
- rubber_ducky-1.5.0.dist-info/METADATA,sha256=Omw7U_3Q4dhj6Tzk0vTp3ZBDIcLQTnwTH5_X7p_fYk4,7859
21
- rubber_ducky-1.5.0.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
22
- rubber_ducky-1.5.0.dist-info/entry_points.txt,sha256=WPnVUUNvWdMDcBlCo8JCzkLghGllMX5QVZyQghyq85Q,75
23
- rubber_ducky-1.5.0.dist-info/top_level.txt,sha256=cFot69fWrmToFkRuRXwS7_RmtIc9Gjp3RAgrmKkGZoY,22
24
- rubber_ducky-1.5.0.dist-info/RECORD,,