patchpal 0.1.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,731 @@
1
+ Metadata-Version: 2.4
2
+ Name: patchpal
3
+ Version: 0.1.0
4
+ Summary: A lean Claude Code clone in pure Python
5
+ Author: PatchPal Contributors
6
+ License-Expression: Apache-2.0
7
+ Project-URL: Homepage, https://github.com/amaiya/patchpal
8
+ Project-URL: Repository, https://github.com/amaiya/patchpal
9
+ Project-URL: Issues, https://github.com/amaiya/patchpal/issues
10
+ Classifier: Development Status :: 3 - Alpha
11
+ Classifier: Intended Audience :: Developers
12
+ Classifier: Programming Language :: Python :: 3
13
+ Classifier: Programming Language :: Python :: 3.10
14
+ Classifier: Programming Language :: Python :: 3.11
15
+ Classifier: Programming Language :: Python :: 3.12
16
+ Classifier: Programming Language :: Python :: 3.13
17
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
18
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
19
+ Requires-Python: >=3.10
20
+ Description-Content-Type: text/markdown
21
+ License-File: LICENSE
22
+ Requires-Dist: litellm>=1.0.0
23
+ Requires-Dist: requests>=2.31.0
24
+ Requires-Dist: beautifulsoup4>=4.12.0
25
+ Requires-Dist: ddgs>=1.0.0
26
+ Requires-Dist: rich>=13.0.0
27
+ Requires-Dist: pyyaml>=6.0.0
28
+ Requires-Dist: prompt_toolkit>=3.0.0
29
+ Requires-Dist: tiktoken>=0.5.0
30
+ Provides-Extra: dev
31
+ Requires-Dist: pytest>=7.0.0; extra == "dev"
32
+ Requires-Dist: pytest-cov>=4.0.0; extra == "dev"
33
+ Requires-Dist: ruff==0.14.13; extra == "dev"
34
+ Requires-Dist: pre-commit>=3.0.0; extra == "dev"
35
+ Dynamic: license-file
36
+
37
+ # PatchPal — A Claude Code–Style Agent in Python
38
+
39
+ <!--![PatchPal Screenshot](patchpal_screenshot.png)-->
40
+ <img src="patchpal_screenshot.png" alt="PatchPal Screenshot" width="650"/>
41
+
42
+ > A lightweight Claude Code–inspired coding and automation assistant that supports both local and cloud LLMs.
43
+
44
+ **PatchPal** is an AI coding agent that helps you build software, debug issues, and automate tasks. Like Claude Code, it supports agent skills, tool use, and executable Python generation, enabling interactive workflows for tasks such as data analysis, visualization, web scraping, API interactions, and research with synthesized findings.
45
+
46
+ A key goal of this project is to approximate Claude Code's core functionality while remaining lean, accessible, and configurable, enabling learning, experimentation, and broad applicability across use cases.
47
+
48
+
49
+ ```bash
50
+ $ ls ./patchpal
51
+ __init__.py agent.py cli.py context.py permissions.py skills.py system_prompt.md tools.py
52
+ ```
53
+
54
+ ## Installation
55
+
56
+ Install PatchPal from PyPI:
57
+
58
+ ```bash
59
+ pip install patchpal
60
+ ```
61
+
62
+ **Supported Operating Systems:** Linux, macOS, and Windows.
63
+
64
+
65
+ ## Setup
66
+
67
+
68
+ 1. **Get an API key or set up a local LLM engine**:
69
+ - **[Cloud]** For Anthropic models (default): Sign up at https://console.anthropic.com/
70
+ - **[Cloud]** For OpenAI models: Get a key from https://platform.openai.com/
71
+ - **[Local]** For vLLM: Install from https://docs.vllm.ai/ (free - no API charges) **Recommended for Local Use**
72
+ - **[Local]** For Ollama: Install from https://ollama.com/ (⚠️ not well-suited for agents - use vLLM)
73
+ - For other providers: Check the [LiteLLM documentation](https://docs.litellm.ai/docs/providers)
74
+
75
+ 2. **Set up your API key as an environment variable**:
76
+ ```bash
77
+
78
+ # For Anthropic (default)
79
+ export ANTHROPIC_API_KEY=your_api_key_here
80
+
81
+ # For OpenAI
82
+ export OPENAI_API_KEY=your_api_key_here
83
+
84
+ # For vLLM - API key required only if configured
85
+ export HOSTED_VLLM_API_BASE=http://localhost:8000 # depends on your vLLM setup
86
+ export HOSTED_VLLM_API_KEY=token-abc123 # optional depending on your vLLM setup
87
+
88
+ # For other providers, check LiteLLM docs
89
+ ```
90
+
91
+ 3. **Run PatchPal**:
92
+ ```bash
93
+ # Use default model (anthropic/claude-sonnet-4-5)
94
+ patchpal
95
+
96
+ # Use a specific model via command-line argument
97
+ patchpal --model openai/gpt-4o # or openai/gpt-5, anthropic/claude-opus-4-5 etc.
98
+
99
+ # Use vLLM (local)
100
+ # Note: vLLM server must be started with --tool-call-parser and --enable-auto-tool-choice
101
+ # See "Using Local Models (vLLM & Ollama)" section below for details
102
+ export HOSTED_VLLM_API_BASE=http://localhost:8000
103
+ export HOSTED_VLLM_API_KEY=token-abc123
104
+ patchpal --model hosted_vllm/openai/gpt-oss-20b
105
+
106
+ # Use Ollama (local, ⚠️ not recommended - use vLLM)
107
+ patchpal --model ollama_chat/qwen3:32b # vLLM is better for agents
108
+
109
+ # Or set the model via environment variable
110
+ export PATCHPAL_MODEL=openai/gpt-5
111
+ patchpal
112
+ ```
113
+
114
+ ## Features
115
+
116
+ ### Tools
117
+
118
+ The agent has the following tools:
119
+
120
+ ### File Operations
121
+ - **read_file**: Read contents of files in the repository
122
+ - **list_files**: List all files in the repository
123
+ - **get_file_info**: Get detailed metadata for file(s) - size, modification time, type
124
+ - Supports single files: `get_file_info("file.txt")`
125
+ - Supports directories: `get_file_info("src/")`
126
+ - Supports glob patterns: `get_file_info("tests/*.py")`
127
+ - **find_files**: Find files by name pattern using glob-style wildcards
128
+ - Example: `find_files("*.py")` - all Python files
129
+ - Example: `find_files("test_*.py")` - all test files
130
+ - Example: `find_files("**/*.md")` - all markdown files recursively
131
+ - Supports case-insensitive matching
132
+ - **tree**: Show directory tree structure to understand folder organization
133
+ - Example: `tree(".")` - show tree from current directory
134
+ - Configurable max depth (default: 3, max: 10)
135
+ - Option to show/hide hidden files
136
+ - **grep_code**: Search for patterns in code files (regex support, file filtering)
137
+ - **edit_file**: Edit a file by replacing an exact string (efficient for small changes)
138
+ - Example: `edit_file("config.py", "port = 3000", "port = 8080")`
139
+ - More efficient than apply_patch for targeted changes
140
+ - Old string must appear exactly once in the file (see the sketch after this list)
141
+ - **apply_patch**: Modify files by providing complete new content
142
+ - **run_shell**: Execute shell commands (requires user permission; privilege escalation blocked)
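+
+ The exact-match behavior of `edit_file` can be sketched in a few lines of Python (an illustration of the rule, not the actual tool implementation):
+
+ ```python
+ from pathlib import Path
+
+ def edit_file(path: str, old: str, new: str) -> str:
+     """Replace an exact string in a file; `old` must appear exactly once."""
+     text = Path(path).read_text()
+     count = text.count(old)
+     if count == 0:
+         return f"Error: string not found in {path}"
+     if count > 1:
+         return f"Error: string appears {count} times in {path}; make it unique"
+     Path(path).write_text(text.replace(old, new, 1))
+     return f"Edited {path}"
+
+ # edit_file("config.py", "port = 3000", "port = 8080")
+ ```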
143
+
144
+ ### Git Operations (No Permission Required)
145
+ - **git_status**: Show modified, staged, and untracked files
146
+ - **git_diff**: Show changes in working directory or staged area
147
+ - Optional parameters: `path` (specific file), `staged` (show staged changes)
148
+ - **git_log**: Show commit history
149
+ - Optional parameters: `max_count` (number of commits, max 50), `path` (specific file history)
150
+
151
+ ### Web Capabilities
152
+ - **web_search**: Search the web using DuckDuckGo (no API key required!)
153
+ - Look up error messages and solutions
154
+ - Find current documentation and best practices
155
+ - Research library versions and compatibility
156
+ - **web_fetch**: Fetch and read content from URLs
157
+ - Read documentation pages
158
+ - Access API references
159
+ - Extract readable text from HTML pages
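+
+ Conceptually, `web_fetch` boils down to downloading a page and stripping it to readable text; a rough, standalone sketch using `requests` and BeautifulSoup (both declared dependencies) looks like this, though the real tool adds its own size limits and error handling:
+
+ ```python
+ import requests
+ from bs4 import BeautifulSoup
+
+ def fetch_readable_text(url: str, timeout: int = 30, max_chars: int = 500_000) -> str:
+     """Download a page and return its visible text, truncated to max_chars."""
+     response = requests.get(url, timeout=timeout)
+     response.raise_for_status()
+     soup = BeautifulSoup(response.text, "html.parser")
+     for tag in soup(["script", "style"]):  # drop non-readable content
+         tag.decompose()
+     return soup.get_text(separator="\n", strip=True)[:max_chars]
+
+ print(fetch_readable_text("https://docs.python.org/3/")[:500])
+ ```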
160
+
161
+ ### Skills System
162
+
163
+ Skills are reusable workflows and custom commands that can be invoked by name or discovered automatically by the agent.
164
+
165
+ **Creating Your Own Skills:**
166
+
167
+ 1. **Choose a location:**
168
+ - Personal skills (all projects): `~/.patchpal/skills/<skill-name>/SKILL.md`
169
+ - Project-specific skills: `<repo>/.patchpal/skills/<skill-name>/SKILL.md`
170
+
171
+ 2. **Create the skill file:**
172
+ ```bash
173
+ # Create a personal skill
174
+ mkdir -p ~/.patchpal/skills/my-skill
175
+ cat > ~/.patchpal/skills/my-skill/SKILL.md <<'EOF'
176
+ ---
177
+ name: my-skill
178
+ description: Brief description of what this skill does
179
+ ---
180
+ # Instructions
181
+ Your detailed instructions here...
182
+ EOF
183
+ ```
184
+
185
+ 3. **Skill File Format:**
186
+ ```markdown
187
+ ---
188
+ name: skill-name
189
+ description: One-line description
190
+ ---
191
+ # Detailed Instructions
192
+ - Step 1: Do this
193
+ - Step 2: Do that
194
+ - Use specific PatchPal tools like git_status, read_file, etc.
195
+ ```
196
+
197
+ **Example Skills:**
198
+
199
+ The PatchPal repository includes [example skills](https://github.com/amaiya/patchpal/tree/main/examples) you can use as templates:
200
+ - **commit**: Best practices for creating git commits
201
+ - **review**: Comprehensive code review checklist
202
+ - **add-tests**: Add comprehensive pytest tests (includes code block templates)
203
+ - **slack-gif-creator**: Create animated GIFs for Slack (from [Anthropic's official skills repo](https://github.com/anthropics/skills), demonstrates Claude Code compatibility)
204
+ - **skill-creator**: Guide for creating effective skills with bundled scripts and references (from [Anthropic's official skills repo](https://github.com/anthropics/skills/tree/main/skills/skill-creator), demonstrates full bundled resources support)
205
+
206
+ **After `pip install patchpal`, get examples:**
207
+
208
+ ```bash
209
+ # Quick way: Download examples directly from GitHub
210
+ curl -L https://github.com/amaiya/patchpal/archive/main.tar.gz | tar xz --strip=1 patchpal-main/examples
211
+
212
+ # Or clone the repository
213
+ git clone https://github.com/amaiya/patchpal.git
214
+ cd patchpal
215
+
216
+ # Copy examples to your personal skills directory
217
+ cp -r examples/skills/commit ~/.patchpal/skills/
218
+ cp -r examples/skills/review ~/.patchpal/skills/
219
+ cp -r examples/skills/add-tests ~/.patchpal/skills/
220
+ ```
221
+
222
+ **View examples online:**
223
+ Browse the [examples/skills/](https://github.com/amaiya/patchpal/tree/main/examples/skills) directory on GitHub to see the skill format and create your own.
224
+
225
+ You can also try out the example skills at [anthropics/skills](https://github.com/anthropics/skills).
226
+
227
+
228
+ **Using Skills:**
229
+
230
+ There are two ways to invoke skills:
231
+
232
+ 1. **Direct invocation** - Type `/skillname` at the prompt:
233
+ ```bash
234
+ $ patchpal
235
+ You: /commit Fix authentication bug
236
+ ```
237
+
238
+ 2. **Natural language** - Just ask, and the agent discovers the right skill:
239
+ ```bash
240
+ You: Help me commit these changes following best practices
241
+ # Agent automatically discovers and uses the commit skill
242
+ ```
243
+
244
+ **Finding Available Skills:**
245
+
246
+ Ask the agent to list them:
247
+ ```bash
248
+ You: list skills
249
+ ```
250
+
251
+ **Skill Priority:**
252
+
253
+ Project skills (`.patchpal/skills/`) override personal skills (`~/.patchpal/skills/`) with the same name.
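+
+ A minimal sketch of how that lookup order could be implemented (illustrative only; the actual loader in `skills.py` may differ):
+
+ ```python
+ from pathlib import Path
+
+ import yaml
+
+ def find_skill(name: str, repo_root: str = ".") -> Path | None:
+     """Prefer a project skill over a personal skill with the same name."""
+     candidates = [
+         Path(repo_root) / ".patchpal" / "skills" / name / "SKILL.md",  # project
+         Path.home() / ".patchpal" / "skills" / name / "SKILL.md",      # personal
+     ]
+     return next((c for c in candidates if c.is_file()), None)
+
+ def skill_description(skill_md: Path) -> str:
+     """Read the one-line description from the YAML frontmatter."""
+     text = skill_md.read_text()
+     if text.startswith("---"):
+         return yaml.safe_load(text.split("---", 2)[1]).get("description", "")
+     return ""
+ ```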
254
+
255
+ ## Model Configuration
256
+
257
+ PatchPal supports any LiteLLM-compatible model. You can configure the model in three ways (in order of priority):
258
+
259
+ ### 1. Command-line Argument
260
+ ```bash
261
+ patchpal --model openai/gpt-5
262
+ patchpal --model anthropic/claude-sonnet-4-5
263
+ patchpal --model hosted_vllm/openai/gpt-oss-20b # local model - no API charges
264
+ ```
265
+
266
+ ### 2. Environment Variable
267
+ ```bash
268
+ export PATCHPAL_MODEL=openai/gpt-5
269
+ patchpal
270
+ ```
271
+
272
+ ### 3. Default Model
273
+ If no model is specified, PatchPal uses `anthropic/claude-sonnet-4-5` (Claude Sonnet 4.5).
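+
+ Put together, model selection follows a simple precedence, sketched below (illustrative; the real CLI code may differ):
+
+ ```python
+ import os
+
+ DEFAULT_MODEL = "anthropic/claude-sonnet-4-5"
+
+ def resolve_model(cli_model: str | None = None) -> str:
+     """CLI argument first, then PATCHPAL_MODEL, then the built-in default."""
+     return cli_model or os.environ.get("PATCHPAL_MODEL") or DEFAULT_MODEL
+
+ # resolve_model("openai/gpt-5")  -> "openai/gpt-5"  (CLI argument wins)
+ # resolve_model()                -> value of PATCHPAL_MODEL, if set
+ # resolve_model()                -> "anthropic/claude-sonnet-4-5" otherwise
+ ```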
274
+
275
+ ### Supported Models
276
+
277
+ PatchPal works with any model supported by LiteLLM, including:
278
+
279
+ - **Anthropic** (Recommended): `anthropic/claude-sonnet-4-5`, `anthropic/claude-opus-4-5`, `anthropic/claude-3-7-sonnet-latest`
280
+ - **OpenAI**: `openai/gpt-5`, `openai/gpt-4o`
281
+ - **AWS Bedrock**: `bedrock/anthropic.claude-sonnet-4-5-v1:0`
282
+ - **vLLM (Local)** (Recommended for local): See vLLM section below for setup
283
+ - **Ollama (Local)**: See Ollama section below for setup
284
+ - **Google**: `gemini/gemini-pro`, `vertex_ai/gemini-pro`
285
+ - **Others**: Cohere, Azure OpenAI, and many more
286
+
287
+
288
+ See the [LiteLLM providers documentation](https://docs.litellm.ai/docs/providers) for the complete list.
289
+
290
+ <!--### Using AWS Bedrock (Including GovCloud and VPC Endpoints)-->
291
+
292
+ <!--PatchPal supports AWS Bedrock with custom regions and VPC endpoints for secure enterprise deployments.-->
293
+
294
+ <!--**Basic AWS Bedrock Setup:**-->
295
+ <!--```bash-->
296
+ <!--# Set AWS credentials-->
297
+ <!--export AWS_ACCESS_KEY_ID=your_access_key-->
298
+ <!--export AWS_SECRET_ACCESS_KEY=your_secret_key-->
299
+
300
+ <!--# Use Bedrock model-->
301
+ <!--patchpal --model bedrock/anthropic.claude-sonnet-4-5-20250929-v1:0-->
302
+ <!--```-->
303
+
304
+ <!--**AWS GovCloud or VPC Endpoint Setup:**-->
305
+ <!--```bash-->
306
+ <!--# Set AWS credentials-->
307
+ <!--export AWS_ACCESS_KEY_ID=your_access_key-->
308
+ <!--export AWS_SECRET_ACCESS_KEY=your_secret_key-->
309
+
310
+ <!--# Set custom region (e.g., GovCloud)-->
311
+ <!--export AWS_BEDROCK_REGION=us-gov-east-1-->
312
+
313
+ <!--# Set VPC endpoint URL (optional, for VPC endpoints)-->
314
+ <!--export AWS_BEDROCK_ENDPOINT=https://vpce-xxxxx.bedrock-runtime.us-gov-east-1.vpce.amazonaws.com-->
315
+
316
+ <!--# Use Bedrock with full ARN (bedrock/ prefix is optional - auto-detected)-->
317
+ <!--patchpal --model "arn:aws-us-gov:bedrock:us-gov-east-1:012345678901:inference-profile/us-gov.anthropic.claude-sonnet-4-5-20250929-v1:0"-->
318
+ <!--```-->
319
+
320
+ <!--**Environment Variables for Bedrock:**-->
321
+ <!--- `AWS_ACCESS_KEY_ID`: AWS access key ID (required)-->
322
+ <!--- `AWS_SECRET_ACCESS_KEY`: AWS secret access key (required)-->
323
+ <!--- `AWS_BEDROCK_REGION`: Custom AWS region (e.g., `us-gov-east-1` for GovCloud)-->
324
+ <!--- `AWS_BEDROCK_ENDPOINT`: Custom endpoint URL for VPC endpoints or GovCloud-->
325
+
326
+ ### Using Local Models (vLLM & Ollama)
327
+
328
+ Run models locally on your machine without needing API keys or internet access.
329
+
330
+ **⚠️ IMPORTANT: For local models, we recommend vLLM.**
331
+
332
+ vLLM provides:
333
+ - ✅ Robust multi-turn tool calling
334
+ - ✅ 3-10x faster inference than Ollama
335
+ - ✅ Production-ready reliability
336
+
337
+ #### vLLM (Recommended for Local Models)
338
+
339
+ vLLM is significantly faster than Ollama due to optimized inference with continuous batching and PagedAttention.
340
+
341
+ **Important:** vLLM >= 0.10.2 is required for proper tool calling support.
342
+
343
+ **Using Local vLLM Server:**
344
+
345
+ ```bash
346
+ # 1. Install vLLM (>= 0.10.2)
347
+ pip install vllm
348
+
349
+ # 2. Start vLLM server with tool calling enabled
350
+ vllm serve openai/gpt-oss-20b \
351
+ --dtype auto \
352
+ --api-key token-abc123 \
353
+ --tool-call-parser openai \
354
+ --enable-auto-tool-choice
355
+
356
+ # 3. Use with PatchPal (in another terminal)
357
+ export HOSTED_VLLM_API_BASE=http://localhost:8000
358
+ export HOSTED_VLLM_API_KEY=token-abc123
359
+ patchpal --model hosted_vllm/openai/gpt-oss-20b
360
+ ```
361
+
362
+ **Using Remote/Hosted vLLM Server:**
363
+
364
+ ```bash
365
+ # For remote vLLM servers (e.g., hosted by your organization)
366
+ export HOSTED_VLLM_API_BASE=https://your-vllm-server.com
367
+ export HOSTED_VLLM_API_KEY=your_api_key_here
368
+ patchpal --model hosted_vllm/openai/gpt-oss-20b
369
+ ```
370
+
371
+ **Environment Variables:**
372
+ - Use `HOSTED_VLLM_API_BASE` and `HOSTED_VLLM_API_KEY`
373
+
374
+ **Using YAML Configuration (Alternative):**
375
+
376
+ Create a `config.yaml`:
377
+ ```yaml
378
+ host: "0.0.0.0"
379
+ port: 8000
380
+ api-key: "token-abc123"
381
+ tool-call-parser: "openai" # Use appropriate parser for your model
382
+ enable-auto-tool-choice: true
383
+ dtype: "auto"
384
+ ```
385
+
386
+ Then start vLLM:
387
+ ```bash
388
+ vllm serve openai/gpt-oss-20b --config config.yaml
389
+
390
+ # Use with PatchPal
391
+ export HOSTED_VLLM_API_BASE=http://localhost:8000
392
+ export HOSTED_VLLM_API_KEY=token-abc123
393
+ patchpal --model hosted_vllm/openai/gpt-oss-20b
394
+ ```
395
+
396
+ **Recommended models for vLLM:**
397
+ - `openai/gpt-oss-20b` - OpenAI's open-source model (use parser: `openai`)
398
+
399
+ **Tool Call Parser Reference:**
400
+ Different models require different parsers. Common parsers include: `qwen3_xml`, `openai`, `deepseek_v3`, `llama3_json`, `mistral`, `hermes`, `pythonic`, `xlam`. See [vLLM Tool Calling docs](https://docs.vllm.ai/en/latest/features/tool_calling/) for the complete list.
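+
+ PatchPal talks to models through LiteLLM, so a standalone, tool-enabled request against the vLLM server configured above would look roughly like this (the tool schema below is hand-written for illustration and is not PatchPal's internal definition):
+
+ ```python
+ import os
+
+ from litellm import completion
+
+ os.environ.setdefault("HOSTED_VLLM_API_BASE", "http://localhost:8000")
+ os.environ.setdefault("HOSTED_VLLM_API_KEY", "token-abc123")
+
+ tools = [{
+     "type": "function",
+     "function": {
+         "name": "read_file",
+         "description": "Read a file from the repository",
+         "parameters": {
+             "type": "object",
+             "properties": {"path": {"type": "string"}},
+             "required": ["path"],
+         },
+     },
+ }]
+
+ response = completion(
+     model="hosted_vllm/openai/gpt-oss-20b",
+     messages=[{"role": "user", "content": "Show me pyproject.toml"}],
+     tools=tools,
+ )
+ print(response.choices[0].message.tool_calls)
+ ```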
401
+
402
+ #### Ollama
403
+
404
+ We find that Ollama models do not work well in agentic settings. For instance, while [gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) works well in vLLM, the [Ollama version](https://ollama.com/library/gpt-oss) of the same model performs poorly. vLLM is recommended for local deployments.
405
+
406
+ **Examples:**
407
+
408
+ ```bash
409
+ patchpal --model ollama_chat/qwen3:32b # local model: performs poorly
410
+ patchpal --model ollama_chat/gpt-oss:20b # local model: performs poorly
411
+ patchpal --model hosted_vllm/openai/gpt-oss-20b # local model: performs well
412
+ ```
413
+
414
+ ### Air-Gapped and Offline Environments
415
+
416
+ For environments without internet access (air-gapped, offline, or restricted networks), you can disable web search and fetch tools:
417
+
418
+ ```bash
419
+ # Disable web tools for air-gapped environment
420
+ export PATCHPAL_ENABLE_WEB=false
421
+ patchpal
422
+
423
+ # Or combine with local vLLM for complete offline operation (recommended)
424
+ export PATCHPAL_ENABLE_WEB=false
425
+ export HOSTED_VLLM_API_BASE=http://localhost:8000
426
+ export HOSTED_VLLM_API_KEY=token-abc123
427
+ patchpal --model hosted_vllm/openai/gpt-oss-20b
428
+ ```
429
+
430
+ When web tools are disabled:
431
+ - `web_search` and `web_fetch` are removed from available tools
432
+ - With a local model, the agent won't attempt any network requests
433
+ - Perfect for secure, isolated, or offline development environments
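+
+ Conceptually, this amounts to gating tool registration on the environment flag; a tiny sketch of the idea (not the actual code):
+
+ ```python
+ import os
+
+ def web_enabled() -> bool:
+     return os.environ.get("PATCHPAL_ENABLE_WEB", "true").lower() != "false"
+
+ tool_names = ["read_file", "list_files", "grep_code", "run_shell"]
+ if web_enabled():
+     tool_names += ["web_search", "web_fetch"]
+ ```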
434
+
435
+ ### Viewing Help
436
+ ```bash
437
+ patchpal --help
438
+ ```
439
+
440
+ ## Usage
441
+
442
+ Simply run the `patchpal` command and type your requests interactively:
443
+
444
+ ```bash
445
+ $ patchpal
446
+ ================================================================================
447
+ PatchPal - Claude Code Clone
448
+ ================================================================================
449
+
450
+ Using model: anthropic/claude-sonnet-4-5
451
+
452
+ Type 'exit' to quit.
453
+ Use '/status' to check context window usage, '/compact' to manually compact.
454
+ Use 'list skills' or /skillname to invoke skills.
455
+ Press Ctrl-C during agent execution to interrupt the agent.
456
+
457
+ You: Add type hints and basic logging to my_module.py
458
+ ```
459
+
460
+ The agent will process your request and show you the results. You can continue with follow-up tasks or type `exit` to quit.
461
+
462
+ **Interactive Features:**
463
+ - **Path Autocompletion**: Press `Tab` while typing file paths to see suggestions (e.g., `./src/mo` + Tab → `./src/models.py`)
464
+ - **Skill Autocompletion**: Type `/` followed by Tab to see available skills (e.g., `/comm` + Tab → `/commit`)
465
+ - **Command History**: Use ↑ (up arrow) and ↓ (down arrow) to navigate through previous commands within the current session
466
+ - **Interrupt Agent**: Press `Ctrl-C` during agent execution to stop the current task without exiting PatchPal
467
+ - **Exit**: Type `exit`, `quit`, or press `Ctrl-C` at the prompt to exit PatchPal
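+
+ PatchPal declares `prompt_toolkit` as a dependency; a standalone sketch of similar path and skill-name completion (illustrative, not PatchPal's actual CLI code) might look like this:
+
+ ```python
+ from prompt_toolkit import PromptSession
+ from prompt_toolkit.completion import PathCompleter, WordCompleter, merge_completers
+
+ # Tab-complete both file paths and /skill names at the same prompt
+ completer = merge_completers([
+     PathCompleter(expanduser=True),
+     WordCompleter(["/commit", "/review", "/add-tests"]),
+ ])
+ session = PromptSession(completer=completer)
+
+ while True:
+     text = session.prompt("You: ")
+     if text.strip() in {"exit", "quit"}:
+         break
+     print(f"(send to the agent) {text}")
+ ```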
468
+
469
+ ## Example Tasks
470
+
471
+ ```
472
+ Resolve this error message: "UnicodeDecodeError: 'charmap' codec can't decode"
473
+
474
+ Build a streamlit app to <whatever you want>
475
+
476
+ Create a bar chart for top 5 downloaded Python packages as of yesterday
477
+
478
+ Find and implement best practices for async/await in Python
479
+
480
+ Add GitHub CI/CD for this project
481
+
482
+ Add type hints and basic logging to mymodule.py
483
+
484
+ Create unit tests for the utils module
485
+
486
+ Refactor the authentication code for better security
487
+
488
+ Add error handling to all API calls
489
+
490
+ Look up the latest FastAPI documentation and add dependency injection
491
+ ```
492
+
493
+ ## Safety
494
+
495
+ The agent operates with a security model inspired by Claude Code:
496
+
497
+ - **Permission system**: User approval required for all shell commands and file modifications (can be customized)
498
+ - **Write boundary enforcement**: Write operations restricted to repository (matches Claude Code)
499
+ - Read operations allowed anywhere (system files, libraries, debugging, automation)
500
+ - Write operations outside repository require explicit permission
501
+ - **Privilege escalation blocking**: Platform-aware blocking of privilege escalation commands
502
+ - Unix/Linux/macOS: `sudo`, `su`
503
+ - Windows: `runas`, `psexec`
504
+ - **Dangerous pattern detection**: Blocks patterns like `> /dev/`, `rm -rf /`, `| dd`, `--force`
505
+ - **Timeout protection**: Shell commands time out after 30 seconds
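+
+ A simplified sketch of these checks (the real rules are more extensive and platform-aware):
+
+ ```python
+ import platform
+ import re
+
+ BLOCKED_PATTERNS = [r">\s*/dev/", r"rm\s+-rf\s+/", r"\|\s*dd\b", r"--force\b"]
+ PRIVILEGE_COMMANDS = {"runas", "psexec"} if platform.system() == "Windows" else {"sudo", "su"}
+
+ def refuse_reason(command: str) -> str | None:
+     """Return a reason to refuse the shell command, or None if it may proceed."""
+     first_word = command.strip().split()[0] if command.strip() else ""
+     if first_word in PRIVILEGE_COMMANDS:
+         return f"privilege escalation via '{first_word}' is blocked"
+     for pattern in BLOCKED_PATTERNS:
+         if re.search(pattern, command):
+             return f"dangerous pattern matching '{pattern}' detected"
+     return None
+
+ print(refuse_reason("sudo rm -rf /"))  # privilege escalation via 'sudo' is blocked
+ print(refuse_reason("pytest -q"))      # None
+ ```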
506
+
507
+ ### Security Guardrails ✅ FULLY ENABLED
508
+
509
+ PatchPal includes comprehensive security protections enabled by default:
510
+
511
+ **Critical Security:**
512
+ - **Permission prompts**: Agent asks for permission before executing commands or modifying files (like Claude Code)
513
+ - **Sensitive file protection**: Blocks access to `.env`, credentials, API keys
514
+ - **File size limits**: Prevents OOM with configurable size limits (10MB default)
515
+ - **Binary file detection**: Blocks reading non-text files
516
+ - **Critical file warnings**: Warns when modifying infrastructure files (package.json, Dockerfile, etc.)
517
+ - **Read-only mode**: Optional mode that prevents all modifications
518
+ - **Command timeout**: 30-second timeout on shell commands
519
+ - **Pattern-based blocking**: Blocks dangerous command patterns (`> /dev/`, `--force`, etc.)
520
+ - **Write boundary protection**: Requires permission for write operations
521
+
522
+ **Operational Safety:**
523
+ - **Operation audit logging**: All file operations and commands logged to `~/.patchpal/<repo-name>/audit.log` (enabled by default)
524
+ - Includes user prompts to show what triggered each operation
525
+ - Rotates at 10 MB with 3 backups (40 MB total max)
526
+ - **Command history**: User commands saved to `~/.patchpal/<repo-name>/history.txt` (last 1000 commands)
527
+ - Clean, user-friendly format for reviewing past interactions
528
+ - **Automatic backups**: Optional auto-backup of files to `~/.patchpal/<repo-name>/backups/` before modification
529
+ - **Resource limits**: Configurable operation counter prevents infinite loops (1000 operations default)
530
+ - **Git state awareness**: Warns when modifying files with uncommitted changes
531
+
532
+ **Configuration via environment variables:**
533
+ ```bash
534
+ # Critical Security Controls
535
+ export PATCHPAL_REQUIRE_PERMISSION=true # Prompt for permission before executing commands/modifying files (default: true)
536
+ # Set to false to disable prompts (not recommended for production use)
537
+ export PATCHPAL_MAX_FILE_SIZE=5242880 # Maximum file size in bytes for read/write operations (default: 10485760 = 10MB)
538
+ export PATCHPAL_READ_ONLY=true # Prevent all file modifications, analysis-only mode (default: false)
539
+ # Useful for: code review, exploration, security audits, CI/CD analysis, or trying PatchPal risk-free
540
+ export PATCHPAL_ALLOW_SENSITIVE=true # Allow access to .env, credentials, API keys (default: false - blocked for safety)
541
+ # Only enable when working with test/dummy credentials or intentionally managing config files
542
+
543
+ # Operational Safety Controls
544
+ export PATCHPAL_AUDIT_LOG=false # Log all operations to ~/.patchpal/<repo-name>/audit.log (default: true)
545
+ export PATCHPAL_ENABLE_BACKUPS=true # Auto-backup files to ~/.patchpal/<repo-name>/backups/ before modification (default: false)
546
+ export PATCHPAL_MAX_OPERATIONS=5000 # Maximum operations per session to prevent infinite loops (default: 1000)
547
+ export PATCHPAL_MAX_ITERATIONS=150 # Maximum agent iterations per task (default: 100)
548
+ # Increase for very complex multi-file tasks, decrease for testing
549
+
550
+ # Customization
551
+ export PATCHPAL_SYSTEM_PROMPT=~/.patchpal/my_prompt.md # Use custom system prompt file (default: built-in prompt)
552
+ # The file can use template variables like {current_date}, {platform_info}, etc.
553
+ # Useful for: custom agent behavior, team standards, domain-specific instructions
554
+
555
+ # Web Tool Controls
556
+ export PATCHPAL_ENABLE_WEB=false # Enable/disable web search and fetch tools (default: true)
557
+ # Set to false for air-gapped or offline environments
558
+ export PATCHPAL_WEB_TIMEOUT=60 # Timeout for web requests in seconds (default: 30)
559
+ export PATCHPAL_MAX_WEB_SIZE=10485760 # Maximum web content size in bytes (default: 5242880 = 5MB)
560
+ export PATCHPAL_MAX_WEB_CHARS=500000 # Maximum characters from web content to prevent context overflow (default: 500000 ≈ 125k tokens)
561
+
562
+ # Shell Command Controls
563
+ export PATCHPAL_SHELL_TIMEOUT=60 # Timeout for shell commands in seconds (default: 30)
564
+ ```
565
+
566
+ **Permission System:**
567
+
568
+ When the agent wants to execute a command or modify a file, you'll see a prompt like:
569
+
570
+ ```
571
+ ================================================================================
572
+ Run Shell
573
+ --------------------------------------------------------------------------------
574
+ pytest tests/test_cli.py -v
575
+ --------------------------------------------------------------------------------
576
+
577
+ Do you want to proceed?
578
+ 1. Yes
579
+ 2. Yes, and don't ask again this session for 'pytest'
580
+ 3. No, and tell me what to do differently
581
+
582
+ Choice [1-3]:
583
+ ```
584
+
585
+ - Option 1: Allow this one operation
586
+ - Option 2: Allow for the rest of this session (like Claude Code - resets when you restart PatchPal)
587
+ - Option 3: Cancel the operation
588
+
589
+ **Advanced:** You can manually edit `~/.patchpal/<repo-name>/permissions.json` to grant persistent permissions across sessions.
590
+
591
+ **Example permissions.json:**
592
+
593
+ ```json
594
+ {
595
+ "run_shell": ["pytest", "npm", "git"],
596
+ "apply_patch": true,
597
+ "edit_file": ["config.py", "settings.json"]
598
+ }
599
+ ```
600
+
601
+ Format:
602
+ - `"tool_name": true` - Grant all operations for this tool (no more prompts)
603
+ - `"tool_name": ["pattern1", "pattern2"]` - Grant only specific patterns (e.g., specific commands or file names)
604
+
605
+ <!--**Test coverage:** 131 tests including 38 dedicated security tests and 11 skills tests-->
606
+
607
+
608
+ ## Context Management
609
+
610
+ PatchPal automatically manages the context window to prevent "input too long" errors during long coding sessions.
611
+
612
+ **Features:**
613
+ - **Automatic token tracking**: Monitors context usage in real-time
614
+ - **Smart pruning**: Removes old tool outputs (keeps last 40k tokens) before resorting to full compaction
615
+ - **Auto-compaction**: Summarizes conversation history when approaching 85% capacity
616
+ - **Manual control**: Check status with `/status`, disable with environment variable
617
+
618
+ **Commands:**
619
+ ```bash
620
+ # Check context window usage
621
+ You: /status
622
+
623
+ # Output shows:
624
+ # - Messages in history
625
+ # - Token usage breakdown
626
+ # - Visual progress bar
627
+ # - Auto-compaction status
628
+
629
+ # Manually trigger compaction
630
+ You: /compact
631
+
632
+ # Useful when:
633
+ # - You want to free up context space before a large operation
634
+ # - Testing compaction behavior
635
+ # - Context is getting full but hasn't auto-compacted yet
636
+ # Note: Requires at least 5 messages; most effective when context >50% full
637
+ ```
638
+
639
+ **Configuration:**
640
+ ```bash
641
+ # Disable auto-compaction (not recommended for long sessions)
642
+ export PATCHPAL_DISABLE_AUTOCOMPACT=true
643
+
644
+ # Adjust compaction threshold (default: 0.85 = 85%)
645
+ export PATCHPAL_COMPACT_THRESHOLD=0.90
646
+
647
+ # Adjust pruning thresholds
648
+ export PATCHPAL_PRUNE_PROTECT=40000 # Keep last 40k tokens (default)
649
+ export PATCHPAL_PRUNE_MINIMUM=20000 # Min tokens to prune (default)
650
+
651
+ # Override context limit for testing (useful for testing compaction with small values)
652
+ export PATCHPAL_CONTEXT_LIMIT=10000 # Force 10k token limit instead of model default
653
+ ```
654
+
655
+ **Testing Context Management:**
656
+
657
+ You can test the context management system with small values to trigger compaction quickly:
658
+
659
+ ```bash
660
+ # Set up small context window for testing
661
+ export PATCHPAL_CONTEXT_LIMIT=10000 # Force 10k token limit (instead of 200k for Claude)
662
+ export PATCHPAL_COMPACT_THRESHOLD=0.75 # Trigger at 75% (instead of 85%)
663
+ # Note: System prompt + output reserve = ~6.4k tokens baseline
664
+ # So 75% of 10k = 7.5k, leaving ~1k for conversation
665
+ export PATCHPAL_PRUNE_PROTECT=500 # Keep only last 500 tokens of tool outputs
666
+ export PATCHPAL_PRUNE_MINIMUM=100 # Prune if we can save 100+ tokens
667
+
668
+ # Start PatchPal and watch it compact quickly
669
+ patchpal
670
+
671
+ # Generate context with tool calls (tool outputs consume tokens)
672
+ You: list all python files
673
+ You: read patchpal/agent.py
674
+ You: read patchpal/tools.py
675
+
676
+ # Check status - should show compaction happening
677
+ You: /status
678
+
679
+ # Continue - should see pruning messages
680
+ You: search for "context" in all files
681
+ # You should see:
682
+ # ⚠️ Context window at 85% capacity. Compacting...
683
+ # Pruned old tool outputs (saved ~400 tokens)
684
+ # ✓ Compaction complete. Saved 850 tokens (85% → 68%)
685
+ ```
686
+
687
+ **How It Works:**
688
+
689
+ 1. **Phase 1 - Pruning**: When context fills up, old tool outputs are pruned first
690
+ - Keeps last 40k tokens of tool outputs protected (only tool outputs, not conversation)
691
+ - Only prunes if it saves >20k tokens
692
+ - Pruning is transparent and fast
693
+ - Requires at least 5 messages in history
694
+
695
+ 2. **Phase 2 - Compaction**: If pruning isn't enough, full compaction occurs
696
+ - Requires at least 5 messages to be effective
697
+ - LLM summarizes the entire conversation
698
+ - Summary replaces old messages, keeping last 2 complete conversation turns
699
+ - Work continues seamlessly from the summary
700
+ - Preserves complete tool call/result pairs (important for Bedrock compatibility)
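+
+ In outline, phase 1 can be pictured as a walk backwards over the history that protects recent tool output and stubs out the rest; a simplified sketch (assuming tiktoken-based counting, which is a declared dependency, but not the actual implementation):
+
+ ```python
+ import tiktoken
+
+ ENCODING = tiktoken.get_encoding("cl100k_base")
+
+ def tokens(text: str) -> int:
+     return len(ENCODING.encode(text))
+
+ def prune_tool_outputs(messages: list[dict], protect: int = 40_000) -> list[dict]:
+     """Keep roughly the newest `protect` tokens of tool output; stub out older ones."""
+     kept, result = 0, []
+     for msg in reversed(messages):
+         if msg.get("role") == "tool":
+             kept += tokens(msg.get("content") or "")
+             if kept > protect:
+                 msg = {**msg, "content": "[old tool output pruned]"}
+         result.append(msg)
+     return list(reversed(result))
+
+ # Phase 2 (not sketched): if pruning is not enough, the LLM writes a summary that
+ # replaces everything except the last two complete conversation turns.
+ ```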
701
+
702
+ **Example:**
703
+ ```
704
+ Context Window Status
705
+ ======================================================================
706
+ Model: anthropic/claude-sonnet-4-5
707
+ Messages in history: 47
708
+ System prompt: 15,234 tokens
709
+ Conversation: 142,567 tokens
710
+ Output reserve: 4,096 tokens
711
+ Total: 161,897 / 200,000 tokens
712
+ Usage: 80%
713
+ [████████████████████████████████████████░░░░░░░░░]
714
+
715
+ Auto-compaction: Enabled (triggers at 85%)
716
+ ======================================================================
717
+ ```
718
+
719
+ The system ensures you can work for extended periods without hitting context limits.
720
+
721
+ ## Troubleshooting
722
+
723
+ **Error: "maximum iterations reached"**
724
+ - The default number of iterations is 100.
725
+ - You can increase it by setting the environment variable, e.g., `export PATCHPAL_MAX_ITERATIONS=150`
726
+
727
+ **Error: "Context Window Error - Input is too long"**
728
+ - PatchPal includes automatic context management (compaction) to prevent this error.
729
+ - Use `/status` to check your context window usage.
730
+ - If auto-compaction is disabled, re-enable it: `unset PATCHPAL_DISABLE_AUTOCOMPACT`
731
+ - Context is automatically managed at 85% capacity through pruning and compaction.