patchllm 0.2.1.tar.gz → 0.2.2.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: patchllm
- Version: 0.2.1
+ Version: 0.2.2
  Summary: Lightweight tool to manage contexts and update code with LLMs
  Author: nassimberrada
  License: MIT License
@@ -49,21 +49,21 @@ Dynamic: license-file
  PatchLLM is a command-line tool that lets you flexibly build LLM context from your codebase using glob patterns, URLs, and keyword searches. It then automatically applies file edits directly from the LLM's response.

  ## Usage
- PatchLLM is designed to be used directly from your terminal.
+ PatchLLM is designed to be used directly from your terminal. The core workflow is to define a **scope** of files, provide a **task**, and choose an **action** (like patching files directly).

- ### 1. Initialize a Configuration
- The easiest way to get started is to run the interactive initializer. This will create a `configs.py` file for you.
+ ### 1. Initialize a Scope
+ The easiest way to get started is to run the interactive initializer. This will create a `scopes.py` file for you, which holds your saved scopes.

  ```bash
  patchllm --init
  ```

- This will guide you through creating your first context configuration, including setting a base path and file patterns. You can add multiple configurations to this file.
+ This will guide you through creating your first scope, including setting a base path and file patterns. You can add multiple scopes to this file for different projects or tasks.

- A generated `configs.py` might look like this:
+ A generated `scopes.py` might look like this:
  ```python
- # configs.py
- configs = {
+ # scopes.py
+ scopes = {
      "default": {
          "path": ".",
          "include_patterns": ["**/*.py"],
@@ -78,45 +78,47 @@ configs = {
  ```

  ### 2. Run a Task
- Use the `patchllm` command with a configuration name and a task instruction.
+ Use the `patchllm` command with a scope, a task, and an action flag like `--patch` (`-p`).

  ```bash
- # Apply a change using the 'default' configuration
- patchllm --config default --task "Add type hints to the main function in main.py"
+ # Apply a change using the 'default' scope and the --patch action
+ patchllm -s default -t "Add type hints to the main function in main.py" -p
  ```

  The tool will then:
- 1. Build a context from the files and URLs matching your configuration.
+ 1. Build a context from the files and URLs matching your `default` scope.
  2. Send the context and your task to the configured LLM.
  3. Parse the response and automatically write the changes to the relevant files.

  ### All Commands & Options

- #### Configuration Management
- * `--init`: Create a new configuration interactively.
- * `--list-configs`: List all available configurations from your `configs.py`.
- * `--show-config <name>`: Display the settings for a specific configuration.
+ #### Core Patching Flow
+ * `-s, --scope <name>`: Name of the scope to use from your `scopes.py` file.
+ * `-t, --task "<instruction>"`: The task instruction for the LLM.
+ * `-p, --patch`: Query the LLM and directly apply the file updates from the response. **This is the main action flag.**

- #### Core Task Execution
- * `--config <name>`: The name of the configuration to use for building context.
- * `--task "<instruction>"`: The task instruction for the LLM.
- * `--model <model_name>`: Specify a different model (e.g., `claude-3-opus`). Defaults to `gemini/gemini-1.5-flash`.
+ #### Scope Management
+ * `-i, --init`: Create a new scope interactively.
+ * `-sl, --list-scopes`: List all available scopes from your `scopes.py` file.
+ * `-ss, --show-scope <name>`: Display the settings for a specific scope.

- #### Context Handling
- * `--context-out [filename]`: Save the generated context to a file (defaults to `context.md`) instead of sending it to the LLM.
- * `--context-in <filename>`: Use a previously saved context file directly, skipping context generation.
- * `--update False`: A flag to prevent sending the prompt to the LLM. Useful when you only want to generate and save the context with `--context-out`.
+ #### I/O & Context Management
+ * `-co, --context-out [filename]`: Export the generated context to a file (defaults to `context.md`) instead of running a task.
+ * `-ci, --context-in <filename>`: Use a previously saved context file as input for a task.
+ * `-tf, --to-file [filename]`: Send the LLM response to a file (defaults to `response.md`) instead of patching directly.
+ * `-tc, --to-clipboard`: Copy the LLM response to the clipboard.
+ * `-ff, --from-file <filename>`: Apply patches from a local file instead of an LLM response.
+ * `-fc, --from-clipboard`: Apply patches directly from your clipboard content.

- #### Alternative Inputs
- * `--from-file <filename>`: Apply file patches directly from a local file instead of from an LLM response.
- * `--from-clipboard`: Apply file patches directly from your clipboard content.
- * `--voice True`: Use voice recognition to provide the task instruction. Requires extra dependencies.
+ #### General Options
+ * `-m, --model <model_name>`: Specify a different model (e.g., `gpt-4o`). Defaults to `gemini/gemini-2.5-flash`.
+ * `-v, --voice True`: Enable voice recognition to provide the task instruction (defaults to `False`). Requires extra dependencies.

  ### Setup

  PatchLLM uses [LiteLLM](https://github.com/BerriAI/litellm) under the hood. Please refer to their documentation for setting up API keys (e.g., `OPENAI_API_KEY`, `GEMINI_API_KEY`) in a `.env` file and for a full list of available models.

  To use the voice feature (`--voice True`), you will need to install extra dependencies:
  ```bash
  pip install "speechrecognition>=3.10" "pyttsx3>=2.90"
  # Note: speechrecognition may require PyAudio, which might have system-level dependencies.
@@ -0,0 +1,90 @@
+ <p align="center">
+ <picture>
+ <source srcset="./assets/logo_dark.png" media="(prefers-color-scheme: dark)">
+ <source srcset="./assets/logo_light.png" media="(prefers-color-scheme: light)">
+ <img src="./assets/logo_light.png" alt="PatchLLM Logo" height="200">
+ </picture>
+ </p>
+
+ ## About
+ PatchLLM is a command-line tool that lets you flexibly build LLM context from your codebase using glob patterns, URLs, and keyword searches. It then automatically applies file edits directly from the LLM's response.
+
+ ## Usage
+ PatchLLM is designed to be used directly from your terminal. The core workflow is to define a **scope** of files, provide a **task**, and choose an **action** (like patching files directly).
+
+ ### 1. Initialize a Scope
+ The easiest way to get started is to run the interactive initializer. This will create a `scopes.py` file for you, which holds your saved scopes.
+
+ ```bash
+ patchllm --init
+ ```
+
+ This will guide you through creating your first scope, including setting a base path and file patterns. You can add multiple scopes to this file for different projects or tasks.
+
+ A generated `scopes.py` might look like this:
+ ```python
+ # scopes.py
+ scopes = {
+     "default": {
+         "path": ".",
+         "include_patterns": ["**/*.py"],
+         "exclude_patterns": ["**/tests/*", "venv/*"],
+         "urls": ["https://docs.python.org/3/library/argparse.html"]
+     },
+     "docs": {
+         "path": "./docs",
+         "include_patterns": ["**/*.md"],
+     }
+ }
+ ```
+
+ ### 2. Run a Task
+ Use the `patchllm` command with a scope, a task, and an action flag like `--patch` (`-p`).
+
+ ```bash
+ # Apply a change using the 'default' scope and the --patch action
+ patchllm -s default -t "Add type hints to the main function in main.py" -p
+ ```
+
+ The tool will then:
+ 1. Build a context from the files and URLs matching your `default` scope.
+ 2. Send the context and your task to the configured LLM.
+ 3. Parse the response and automatically write the changes to the relevant files.
+
+ ### All Commands & Options
+
+ #### Core Patching Flow
+ * `-s, --scope <name>`: Name of the scope to use from your `scopes.py` file.
+ * `-t, --task "<instruction>"`: The task instruction for the LLM.
+ * `-p, --patch`: Query the LLM and directly apply the file updates from the response. **This is the main action flag.**
+
+ #### Scope Management
+ * `-i, --init`: Create a new scope interactively.
+ * `-sl, --list-scopes`: List all available scopes from your `scopes.py` file.
+ * `-ss, --show-scope <name>`: Display the settings for a specific scope.
+
+ #### I/O & Context Management
+ * `-co, --context-out [filename]`: Export the generated context to a file (defaults to `context.md`) instead of running a task.
+ * `-ci, --context-in <filename>`: Use a previously saved context file as input for a task.
+ * `-tf, --to-file [filename]`: Send the LLM response to a file (defaults to `response.md`) instead of patching directly.
+ * `-tc, --to-clipboard`: Copy the LLM response to the clipboard.
+ * `-ff, --from-file <filename>`: Apply patches from a local file instead of an LLM response.
+ * `-fc, --from-clipboard`: Apply patches directly from your clipboard content.
+
+ #### General Options
+ * `-m, --model <model_name>`: Specify a different model (e.g., `gpt-4o`). Defaults to `gemini/gemini-2.5-flash`.
+ * `-v, --voice True`: Enable voice recognition to provide the task instruction (defaults to `False`).
+
+ ### Setup
+
+ PatchLLM uses [LiteLLM](https://github.com/BerriAI/litellm) under the hood. Please refer to their documentation for setting up API keys (e.g., `OPENAI_API_KEY`, `GEMINI_API_KEY`) in a `.env` file and for a full list of available models.
+
+ To use the voice feature (`--voice True`), you will need to install extra dependencies:
+ ```bash
+ pip install "speechrecognition>=3.10" "pyttsx3>=2.90"
+ # Note: speechrecognition may require PyAudio, which might have system-level dependencies.
+ ```
+
+ ## License
+
+ This project is licensed under the MIT License. See the `LICENSE` file for details.
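
Taken together, the I/O flags above support a reviewable, two-stage workflow. A hypothetical session (file names and the task are illustrative; it assumes a `scopes.py` with a `default` scope in the working directory):

```bash
# Build the context once and export it for inspection (no task required)
patchllm -s default -co context.md

# Reuse the saved context; route the LLM response to a file instead of patching
patchllm -ci context.md -t "Add docstrings to the context module" -tf response.md

# After reviewing response.md, apply the patches it contains
patchllm -ff response.md
```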
@@ -175,23 +175,23 @@ def fetch_and_process_urls(urls: list[str]) -> str:

  # --- Main Context Building Function ---

- def build_context(config: dict) -> dict | None:
+ def build_context(scope: dict) -> dict | None:
      """
-     Builds the context string from files specified in the config.
+     Builds the context string from files specified in the scope.

      Args:
-         config (dict): The configuration for file searching.
+         scope (dict): The scope for file searching.

      Returns:
          dict: A dictionary with the source tree and formatted context, or None.
      """
-     base_path = Path(config.get("path", ".")).resolve()
+     base_path = Path(scope.get("path", ".")).resolve()

-     include_patterns = config.get("include_patterns", [])
-     exclude_patterns = config.get("exclude_patterns", [])
-     exclude_extensions = config.get("exclude_extensions", DEFAULT_EXCLUDE_EXTENSIONS)
-     search_words = config.get("search_words", [])
-     urls = config.get("urls", [])
+     include_patterns = scope.get("include_patterns", [])
+     exclude_patterns = scope.get("exclude_patterns", [])
+     exclude_extensions = scope.get("exclude_extensions", DEFAULT_EXCLUDE_EXTENSIONS)
+     search_words = scope.get("search_words", [])
+     urls = scope.get("urls", [])

      # Step 1: Find files
      relevant_files = find_files(base_path, include_patterns, exclude_patterns)
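
Since the renamed `scope` dict mirrors an entry in `scopes.py`, `build_context` can also be exercised from Python directly. A minimal sketch, assuming the import path `patchllm.context` (the CLI imports it as `.context`) and using only the documented keys:

```python
from patchllm.context import build_context

# Shaped like one entry in scopes.py; every key has a default inside
# build_context, so only the fields you need are required.
scope = {
    "path": ".",
    "include_patterns": ["**/*.py"],
    "exclude_patterns": ["**/tests/*"],
}

result = build_context(scope)
if result:
    tree, context = result.values()  # the same unpacking the CLI uses
    print(tree)  # source tree of the files pulled into the context
```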
@@ -0,0 +1,326 @@
+ import textwrap
+ import argparse
+ import litellm
+ import pprint
+ import os
+ from dotenv import load_dotenv
+ from rich.console import Console
+ from rich.panel import Panel
+
+ from .context import build_context
+ from .parser import paste_response
+ from .utils import load_from_py_file
+
+ console = Console()
+
+ # --- Core Functions ---
+
+ def collect_context(scope_name, scopes):
+     """Builds the code context from a provided scope dictionary."""
+     console.print("\n--- Building Code Context... ---", style="bold")
+     if not scopes:
+         raise FileNotFoundError("Could not find a 'scopes.py' file.")
+     selected_scope = scopes.get(scope_name)
+     if selected_scope is None:
+         raise KeyError(f"Context scope '{scope_name}' not found in provided scopes file.")
+
+     context_object = build_context(selected_scope)
+     if context_object:
+         tree, context = context_object.values()
+         console.print("--- Context Building Finished. The following files were extracted ---", style="bold")
+         console.print(tree)
+         return context
+     else:
+         console.print("--- Context Building Failed (No files found) ---", style="yellow")
+         return None
+
+ def run_llm_query(task_instructions, model_name, history, context=None):
+     """
+     Assembles the final prompt, sends it to the LLM, and returns the response.
+     """
+     console.print("\n--- Sending Prompt to LLM... ---", style="bold")
+     final_prompt = task_instructions
+     if context:
+         final_prompt = f"{context}\n\n{task_instructions}"
+
+     history.append({"role": "user", "content": final_prompt})
+
+     try:
+         with console.status("[bold cyan]Waiting for LLM response...", spinner="dots"):
+             response = litellm.completion(model=model_name, messages=history)
+
+         assistant_response_content = response.choices[0].message.content
+         history.append({"role": "assistant", "content": assistant_response_content})
+
+         if not assistant_response_content or not assistant_response_content.strip():
+             console.print("⚠️ Response is empty. Nothing to process.", style="yellow")
+             return None
+
+         return assistant_response_content
+
+     except Exception as e:
+         history.pop()  # Keep history clean on error
+         raise RuntimeError(f"An error occurred while communicating with the LLM via litellm: {e}") from e
+
+ def write_to_file(file_path, content):
+     """Utility function to write content to a file."""
+     console.print(f"Writing to {file_path}..", style="cyan")
+     try:
+         with open(file_path, "w", encoding="utf-8") as file:
+             file.write(content)
+         console.print(f'✅ Content saved to {file_path}', style="green")
+     except Exception as e:
+         raise RuntimeError(f"Failed to write to file {file_path}: {e}") from e
+
+ def read_from_file(file_path):
+     """Utility function to read and return the content of a file."""
+     console.print(f"Importing from {file_path}..", style="cyan")
+     try:
+         with open(file_path, "r", encoding="utf-8") as file:
+             content = file.read()
+         console.print("✅ Finished reading file.", style="green")
+         return content
+     except Exception as e:
+         raise RuntimeError(f"Failed to read from file {file_path}: {e}") from e
+
+ def create_new_scope(scopes, scopes_file_str):
+     """Interactively creates a new scope and saves it to the specified scopes file."""
+     console.print(f"\n--- Creating a new scope in '{scopes_file_str}' ---", style="bold")
+
+     try:
+         name = console.input("[bold]Enter a name for the new scope: [/]").strip()
+         if not name:
+             console.print("❌ Scope name cannot be empty.", style="red")
+             return
+
+         if name in scopes:
+             overwrite = console.input(f"Scope '[bold]{name}[/]' already exists. Overwrite? (y/n): ").lower()
+             if overwrite not in ['y', 'yes']:
+                 console.print("Operation cancelled.", style="yellow")
+                 return
+
+         path = console.input("[bold]Enter the base path[/] (e.g., '.' for current directory): ").strip() or "."
+
+         console.print("\nEnter comma-separated glob patterns for files to include.")
+         include_raw = console.input('[cyan]> (e.g., "[bold]**/*.py, src/**/*.js[/]"): [/]').strip()
+         include_patterns = [p.strip() for p in include_raw.split(',') if p.strip()]
+
+         console.print("\nEnter comma-separated glob patterns for files to exclude (optional).")
+         exclude_raw = console.input('[cyan]> (e.g., "[bold]**/tests/*, venv/*[/]"): [/]').strip()
+         exclude_patterns = [p.strip() for p in exclude_raw.split(',') if p.strip()]
+
+         new_scope_data = {
+             "path": path,
+             "include_patterns": include_patterns,
+             "exclude_patterns": exclude_patterns
+         }
+
+         scopes[name] = new_scope_data
+
+         with open(scopes_file_str, "w", encoding="utf-8") as f:
+             f.write("# scopes.py\n")
+             f.write("scopes = ")
+             f.write(pprint.pformat(scopes, indent=4))
+             f.write("\n")
+
+         console.print(f"\n✅ Successfully created and saved scope '[bold]{name}[/]' in '[bold]{scopes_file_str}[/]'.", style="green")
+
+     except KeyboardInterrupt:
+         console.print("\n\n⚠️ Scope creation cancelled by user.", style="yellow")
+         return
+
+ def main():
+     """
+     Main entry point for the patchllm command-line tool.
+     """
+     load_dotenv()
+
+     scopes_file_path = os.getenv("PATCHLLM_SCOPES_FILE", "./scopes.py")
+
+     parser = argparse.ArgumentParser(
+         description="A CLI tool to apply code changes using an LLM.",
+         formatter_class=argparse.RawTextHelpFormatter
+     )
+
+     # --- Group: Core Patching Flow ---
+     patch_group = parser.add_argument_group('Core Patching Flow')
+     patch_group.add_argument("-s", "--scope", type=str, default=None, help="Name of the scope to use from the scopes file.")
+     patch_group.add_argument("-t", "--task", type=str, default=None, help="The task instructions to guide the assistant.")
+     patch_group.add_argument("-p", "--patch", action="store_true", help="Query the LLM and directly apply the file updates from the response. Requires --task.")
+
+     # --- Group: Scope Management ---
+     scope_group = parser.add_argument_group('Scope Management')
+     scope_group.add_argument("-i", "--init", action="store_true", help="Create a new scope interactively.")
+     scope_group.add_argument("-sl", "--list-scopes", action="store_true", help="List all available scopes from the scopes file and exit.")
+     scope_group.add_argument("-ss", "--show-scope", type=str, help="Display the settings for a specific scope and exit.")
+
+     # --- Group: I/O Utils ---
+     code_io = parser.add_argument_group('Code I/O')
+     code_io.add_argument("-co", "--context-out", nargs='?', const="context.md", default=None, help="Export the generated context to a file. Defaults to 'context.md'.")
+     code_io.add_argument("-ci", "--context-in", type=str, default=None, help="Import a previously saved context from a file.")
+     code_io.add_argument("-tf", "--to-file", nargs='?', const="response.md", default=None, help="Query the LLM and save the response to a file. Requires --task. Defaults to 'response.md'.")
+     code_io.add_argument("-tc", "--to-clipboard", action="store_true", help="Query the LLM and save the response to the clipboard. Requires --task.")
+     code_io.add_argument("-ff", "--from-file", type=str, default=None, help="Apply code updates directly from a file.")
+     code_io.add_argument("-fc", "--from-clipboard", action="store_true", help="Apply code updates directly from the clipboard.")
+
+     # --- Group: General Options ---
+     options_group = parser.add_argument_group('General Options')
+     options_group.add_argument("-m", "--model", type=str, default="gemini/gemini-2.5-flash", help="Model name to use (e.g., 'gpt-4o', 'claude-3-sonnet').")
+     options_group.add_argument("-v", "--voice", type=str, default="False", help="Enable voice interaction for providing task instructions. (True/False)")
+
+     args = parser.parse_args()
+
+     try:
+         scopes = load_from_py_file(scopes_file_path, "scopes")
+     except FileNotFoundError:
+         scopes = {}
+         if not any([args.init, args.list_scopes, args.show_scope]):
+             console.print(f"⚠️ Scope file '{scopes_file_path}' not found. You can create one with the --init flag.", style="yellow")
+
+     if args.list_scopes:
+         console.print(f"Available scopes in '[bold]{scopes_file_path}[/]':", style="bold")
+         if not scopes:
+             console.print(f" -> No scopes found or '{scopes_file_path}' is missing.")
+         else:
+             for scope_name in scopes:
+                 console.print(f" - {scope_name}")
+         return
+
+     if args.show_scope:
+         scope_name = args.show_scope
+         if not scopes:
+             console.print(f"⚠️ Scope file '{scopes_file_path}' not found or is empty.", style="yellow")
+             return
+
+         scope_data = scopes.get(scope_name)
+         if scope_data:
+             pretty_scope = pprint.pformat(scope_data, indent=2)
+             console.print(
+                 Panel(
+                     pretty_scope,
+                     title=f"[bold cyan]Scope: '{scope_name}'[/]",
+                     subtitle=f"[dim]from {scopes_file_path}[/dim]",
+                     border_style="blue"
+                 )
+             )
+         else:
+             console.print(f"❌ Scope '[bold]{scope_name}[/]' not found in '{scopes_file_path}'.", style="red")
+         return
+
+     if args.init:
+         create_new_scope(scopes, scopes_file_path)
+         return
+
+     if args.from_clipboard:
+         try:
+             import pyperclip
+             updates = pyperclip.paste()
+             if updates:
+                 console.print("--- Parsing updates from clipboard ---", style="bold")
+                 paste_response(updates)
+             else:
+                 console.print("⚠️ Clipboard is empty. Nothing to parse.", style="yellow")
+         except ImportError:
+             console.print("❌ The 'pyperclip' library is required for clipboard functionality.", style="red")
+             console.print("Please install it using: pip install pyperclip", style="cyan")
+         except Exception as e:
+             console.print(f"❌ An error occurred while reading from the clipboard: {e}", style="red")
+         return
+
+     if args.from_file:
+         updates = read_from_file(args.from_file)
+         paste_response(updates)
+         return
+
+     system_prompt = textwrap.dedent("""
+         You are an expert pair programmer. Your purpose is to help users by modifying files based on their instructions.
+         Follow these rules strictly:
+         Your output should be a single file including all the updated files. For each file-block:
+         1. Only include code for files that need to be updated / edited.
+         2. For updated files, do not exclude any code even if it is unchanged code; assume the file code will be copy-pasted full in the file.
+         3. Do not include verbose inline comments explaining what every small change does. Try to keep comments concise but informative, if any.
+         4. Only update the relevant parts of each file relative to the provided task; do not make irrelevant edits even if you notice areas of improvements elsewhere.
+         5. Do not use diffs.
+         6. Make sure each file-block is returned in the following exact format. No additional text, comments, or explanations should be outside these blocks.
+         Expected format for a modified or new file:
+         <file_path:/absolute/path/to/your/file.py>
+         ```python
+         # The full, complete content of /absolute/path/to/your/file.py goes here.
+         def example_function():
+             return "Hello, World!"
+         ```
+     """)
+     history = [{"role": "system", "content": system_prompt}]
+
+     context = None
+     if args.voice not in ["False", "false"]:
+         from .listener import listen, speak
+         speak("Say your task instruction.")
+         task = listen()
+         if not task:
+             speak("No instruction heard. Exiting.")
+             return
+         speak(f"You said: {task}. Should I proceed?")
+         confirm = listen()
+         if confirm and "yes" in confirm.lower():
+             if not args.scope:
+                 parser.error("A --scope name is required when using --voice.")
+             context = collect_context(args.scope, scopes)
+             llm_response = run_llm_query(task, args.model, history, context)
+             if llm_response:
+                 paste_response(llm_response)
+                 speak("Changes applied.")
+         else:
+             speak("Cancelled.")
+         return
+
+     # --- Main LLM Task Logic ---
+     if args.task:
+         action_flags = [args.patch, args.to_file is not None, args.to_clipboard]
+         if sum(action_flags) == 0:
+             parser.error("A task was provided, but no action was specified. Use --patch, --to-file, or --to-clipboard.")
+         if sum(action_flags) > 1:
+             parser.error("Please specify only one action: --patch, --to-file, or --to-clipboard.")
+
+         if args.context_in:
+             context = read_from_file(args.context_in)
+         else:
+             if not args.scope:
+                 parser.error("A --scope name is required to build context for a task.")
+             context = collect_context(args.scope, scopes)
+             if context and args.context_out:
+                 write_to_file(args.context_out, context)
+
+         if not context:
+             console.print("Proceeding with task but without any file context.", style="yellow")
+
+         llm_response = run_llm_query(args.task, args.model, history, context)
+
+         if llm_response:
+             if args.patch:
+                 console.print("\n--- Updating files ---", style="bold")
+                 paste_response(llm_response)
+                 console.print("--- File Update Process Finished ---", style="bold")
+
+             elif args.to_file is not None:
+                 write_to_file(args.to_file, llm_response)
+
+             elif args.to_clipboard:
+                 try:
+                     import pyperclip
+                     pyperclip.copy(llm_response)
+                     console.print("✅ Copied LLM response to clipboard.", style="green")
+                 except ImportError:
+                     console.print("❌ The 'pyperclip' library is required for clipboard functionality.", style="red")
+                     console.print("Please install it using: pip install pyperclip", style="cyan")
+                 except Exception as e:
+                     console.print(f"❌ An error occurred while copying to the clipboard: {e}", style="red")
+
+     elif args.scope and args.context_out:
+         context = collect_context(args.scope, scopes)
+         if context:
+             write_to_file(args.context_out, context)
+
+ if __name__ == "__main__":
+     main()
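
The `system_prompt` above pins down the file-block format that `paste_response` parses, which is presumably also what hand-written input passed via `--from-file` or `--from-clipboard` must follow, since both feed `paste_response` directly. An illustrative single-block response (the path and code are made up):

````
<file_path:/absolute/path/to/project/utils.py>
```python
# The full, updated content of the file goes here.
def greet(name: str) -> str:
    return f"Hello, {name}!"
```
````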
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"

  [project]
  name = "patchllm"
- version = "0.2.1"
+ version = "0.2.2"
  description = "Lightweight tool to manage contexts and update code with LLMs"
  readme = "README.md"
  requires-python = ">=3.8"
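
Beyond the packaging metadata, runtime configuration comes from the environment: the `main()` entry point shown earlier calls `load_dotenv()` and reads `PATCHLLM_SCOPES_FILE` (falling back to `./scopes.py`), and LiteLLM picks up provider keys the same way. An illustrative `.env` with placeholder values:

```bash
# .env — placeholder values, not real keys
GEMINI_API_KEY=your-gemini-key
OPENAI_API_KEY=your-openai-key
# Optional: point patchllm at a scopes file other than ./scopes.py
PATCHLLM_SCOPES_FILE=./my_scopes.py
```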
patchllm-0.2.1/README.md DELETED
@@ -1,88 +0,0 @@
- <p align="center">
- <picture>
- <source srcset="./assets/logo_dark.png" media="(prefers-color-scheme: dark)">
- <source srcset="./assets/logo_light.png" media="(prefers-color-scheme: light)">
- <img src="./assets/logo_light.png" alt="PatchLLM Logo" height="200">
- </picture>
- </p>
-
- ## About
- PatchLLM is a command-line tool that lets you flexibly build LLM context from your codebase using glob patterns, URLs, and keyword searches. It then automatically applies file edits directly from the LLM's response.
-
- ## Usage
- PatchLLM is designed to be used directly from your terminal.
-
- ### 1. Initialize a Configuration
- The easiest way to get started is to run the interactive initializer. This will create a `configs.py` file for you.
-
- ```bash
- patchllm --init
- ```
-
- This will guide you through creating your first context configuration, including setting a base path and file patterns. You can add multiple configurations to this file.
-
- A generated `configs.py` might look like this:
- ```python
- # configs.py
- configs = {
-     "default": {
-         "path": ".",
-         "include_patterns": ["**/*.py"],
-         "exclude_patterns": ["**/tests/*", "venv/*"],
-         "urls": ["https://docs.python.org/3/library/argparse.html"]
-     },
-     "docs": {
-         "path": "./docs",
-         "include_patterns": ["**/*.md"],
-     }
- }
- ```
-
- ### 2. Run a Task
- Use the `patchllm` command with a configuration name and a task instruction.
-
- ```bash
- # Apply a change using the 'default' configuration
- patchllm --config default --task "Add type hints to the main function in main.py"
- ```
-
- The tool will then:
- 1. Build a context from the files and URLs matching your configuration.
- 2. Send the context and your task to the configured LLM.
- 3. Parse the response and automatically write the changes to the relevant files.
-
- ### All Commands & Options
-
- #### Configuration Management
- * `--init`: Create a new configuration interactively.
- * `--list-configs`: List all available configurations from your `configs.py`.
- * `--show-config <name>`: Display the settings for a specific configuration.
-
- #### Core Task Execution
- * `--config <name>`: The name of the configuration to use for building context.
- * `--task "<instruction>"`: The task instruction for the LLM.
- * `--model <model_name>`: Specify a different model (e.g., `claude-3-opus`). Defaults to `gemini/gemini-1.5-flash`.
-
- #### Context Handling
- * `--context-out [filename]`: Save the generated context to a file (defaults to `context.md`) instead of sending it to the LLM.
- * `--context-in <filename>`: Use a previously saved context file directly, skipping context generation.
- * `--update False`: A flag to prevent sending the prompt to the LLM. Useful when you only want to generate and save the context with `--context-out`.
-
- #### Alternative Inputs
- * `--from-file <filename>`: Apply file patches directly from a local file instead of from an LLM response.
- * `--from-clipboard`: Apply file patches directly from your clipboard content.
- * `--voice True`: Use voice recognition to provide the task instruction. Requires extra dependencies.
-
- ### Setup
-
- PatchLLM uses [LiteLLM](https://github.com/BerriAI/litellm) under the hood. Please refer to their documentation for setting up API keys (e.g., `OPENAI_API_KEY`, `GEMINI_API_KEY`) in a `.env` file and for a full list of available models.
-
- To use the voice feature (`--voice True`), you will need to install extra dependencies:
- ```bash
- pip install "speechrecognition>=3.10" "pyttsx3>=2.90"
- # Note: speechrecognition may require PyAudio, which might have system-level dependencies.
- ```
-
- ## License
-
- This project is licensed under the MIT License. See the `LICENSE` file for details.
@@ -1,286 +0,0 @@
- import textwrap
- import argparse
- import litellm
- import pprint
- import os
- from dotenv import load_dotenv
- from rich.console import Console
- from rich.panel import Panel
-
- from .context import build_context
- from .parser import paste_response
- from .utils import load_from_py_file
-
- console = Console()
-
- # --- Core Functions ---
-
- def collect_context(config_name, configs):
-     """Builds the code context from a provided configuration dictionary."""
-     console.print("\n--- Building Code Context... ---", style="bold")
-     if not configs:
-         raise FileNotFoundError("Could not find a 'configs.py' file.")
-     selected_config = configs.get(config_name)
-     if selected_config is None:
-         raise KeyError(f"Context config '{config_name}' not found in provided configs file.")
-
-     context_object = build_context(selected_config)
-     if context_object:
-         tree, context = context_object.values()
-         console.print("--- Context Building Finished. The following files were extracted ---", style="bold")
-         console.print(tree)
-         return context
-     else:
-         console.print("--- Context Building Failed (No files found) ---", style="yellow")
-         return None
-
- def run_update(task_instructions, model_name, history, context=None):
-     """
-     Assembles the final prompt, sends it to the LLM, and applies file updates.
-     """
-     console.print("\n--- Sending Prompt to LLM... ---", style="bold")
-     final_prompt = task_instructions
-     if context:
-         final_prompt = f"{context}\n\n{task_instructions}"
-
-     history.append({"role": "user", "content": final_prompt})
-
-     try:
-         with console.status("[bold cyan]Waiting for LLM response...", spinner="dots"):
-             response = litellm.completion(model=model_name, messages=history)
-
-         assistant_response_content = response.choices[0].message.content
-         history.append({"role": "assistant", "content": assistant_response_content})
-
-         if not assistant_response_content or not assistant_response_content.strip():
-             console.print("⚠️ Response is empty. Nothing to paste.", style="yellow")
-             return
-
-         console.print("\n--- Updating files ---", style="bold")
-         paste_response(assistant_response_content)
-         console.print("--- File Update Process Finished ---", style="bold")
-
-     except Exception as e:
-         history.pop()  # Keep history clean on error
-         raise RuntimeError(f"An error occurred while communicating with the LLM via litellm: {e}") from e
-
- def write_context_to_file(file_path, context):
-     """Utility function to write the context to a file."""
-     console.print("Exporting context..", style="cyan")
-     with open(file_path, "w", encoding="utf-8") as file:
-         file.write(context)
-     console.print(f'✅ Context exported to {file_path.split("/")[-1]}', style="green")
-
- def read_from_file(file_path):
-     """Utility function to read and return the content of a file."""
-     console.print(f"Importing from {file_path}..", style="cyan")
-     try:
-         with open(file_path, "r", encoding="utf-8") as file:
-             content = file.read()
-         console.print("✅ Finished reading file.", style="green")
-         return content
-     except Exception as e:
-         raise RuntimeError(f"Failed to read from file {file_path}: {e}") from e
-
- def create_new_config(configs, configs_file_str):
-     """Interactively creates a new configuration and saves it to the specified configs file."""
-     console.print(f"\n--- Creating a new configuration in '{configs_file_str}' ---", style="bold")
-
-     try:
-         name = console.input("[bold]Enter a name for the new configuration: [/]").strip()
-         if not name:
-             console.print("❌ Configuration name cannot be empty.", style="red")
-             return
-
-         if name in configs:
-             overwrite = console.input(f"Configuration '[bold]{name}[/]' already exists. Overwrite? (y/n): ").lower()
-             if overwrite not in ['y', 'yes']:
-                 console.print("Operation cancelled.", style="yellow")
-                 return
-
-         path = console.input("[bold]Enter the base path[/] (e.g., '.' for current directory): ").strip() or "."
-
-         console.print("\nEnter comma-separated glob patterns for files to include.")
-         include_raw = console.input('[cyan]> (e.g., "[bold]**/*.py, src/**/*.js[/]"): [/]').strip()
-         include_patterns = [p.strip() for p in include_raw.split(',') if p.strip()]
-
-         console.print("\nEnter comma-separated glob patterns for files to exclude (optional).")
-         exclude_raw = console.input('[cyan]> (e.g., "[bold]**/tests/*, venv/*[/]"): [/]').strip()
-         exclude_patterns = [p.strip() for p in exclude_raw.split(',') if p.strip()]
-
-         console.print("\nEnter comma-separated URLs to include as context (optional).")
-         urls_raw = console.input('[cyan]> (e.g., "[bold]https://docs.example.com, ...[/]"): [/]').strip()
-         urls = [u.strip() for u in urls_raw.split(',') if u.strip()]
-
-         new_config_data = {
-             "path": path,
-             "include_patterns": include_patterns,
-             "exclude_patterns": exclude_patterns,
-             "urls": urls,
-         }
-
-         configs[name] = new_config_data
-
-         with open(configs_file_str, "w", encoding="utf-8") as f:
-             f.write("# configs.py\n")
-             f.write("configs = ")
-             f.write(pprint.pformat(configs, indent=4))
-             f.write("\n")
-
-         console.print(f"\n✅ Successfully created and saved configuration '[bold]{name}[/]' in '[bold]{configs_file_str}[/]'.", style="green")
-
-     except KeyboardInterrupt:
-         console.print("\n\n⚠️ Configuration creation cancelled by user.", style="yellow")
-         return
-
- def main():
-     """
-     Main entry point for the patchllm command-line tool.
-     """
-     load_dotenv()
-
-     configs_file_path = os.getenv("PATCHLLM_CONFIGS_FILE", "./configs.py")
-
-     parser = argparse.ArgumentParser(
-         description="A CLI tool to apply code changes using an LLM.",
-         formatter_class=argparse.RawTextHelpFormatter
-     )
-
-     parser.add_argument("-i", "--init", action="store_true", help="Create a new configuration interactively.")
-
-     parser.add_argument("-c", "--config", type=str, default=None, help="Name of the config key to use from the configs file.")
-     parser.add_argument("-t", "--task", type=str, default=None, help="The task instructions to guide the assistant.")
-
-     parser.add_argument("-co", "--context-out", nargs='?', const="context.md", default=None, help="Export the generated context to a file. Defaults to 'context.md'.")
-     parser.add_argument("-ci", "--context-in", type=str, default=None, help="Import a previously saved context from a file.")
-
-     parser.add_argument("-u", "--update", type=str, default="True", help="Control whether to send the context to the LLM for updates. (True/False)")
-     parser.add_argument("-ff", "--from-file", type=str, default=None, help="Apply updates directly from a file instead of the LLM.")
-     parser.add_argument("-fc", "--from-clipboard", action="store_true", help="Apply updates directly from the clipboard.")
-
-     parser.add_argument("--model", type=str, default="gemini/gemini-1.5-flash", help="Model name to use (e.g., 'gpt-4o', 'claude-3-sonnet').")
-     parser.add_argument("--voice", type=str, default="False", help="Enable voice interaction for providing task instructions. (True/False)")
-
-     parser.add_argument("--list-configs", action="store_true", help="List all available configurations from the configs file and exit.")
-     parser.add_argument("--show-config", type=str, help="Display the settings for a specific configuration and exit.")
-
-     args = parser.parse_args()
-
-     try:
-         configs = load_from_py_file(configs_file_path, "configs")
-     except FileNotFoundError:
-         configs = {}
-         if not any([args.init, args.list_configs, args.show_config]):
-             console.print(f"⚠️ Config file '{configs_file_path}' not found. You can create one with the --init flag.", style="yellow")
-
-     if args.list_configs:
-         console.print(f"Available configurations in '[bold]{configs_file_path}[/]':", style="bold")
-         if not configs:
-             console.print(f" -> No configurations found or '{configs_file_path}' is missing.")
-         else:
-             for config_name in configs:
-                 console.print(f" - {config_name}")
-         return
-
-     if args.show_config:
-         config_name = args.show_config
-         if not configs:
-             console.print(f"⚠️ Config file '{configs_file_path}' not found or is empty.", style="yellow")
-             return
-
-         config_data = configs.get(config_name)
-         if config_data:
-             pretty_config = pprint.pformat(config_data, indent=2)
-             console.print(
-                 Panel(
-                     pretty_config,
-                     title=f"[bold cyan]Configuration: '{config_name}'[/]",
-                     subtitle=f"[dim]from {configs_file_path}[/dim]",
-                     border_style="blue"
-                 )
-             )
-         else:
-             console.print(f"❌ Configuration '[bold]{config_name}[/]' not found in '{configs_file_path}'.", style="red")
-         return
-
-     if args.init:
-         create_new_config(configs, configs_file_path)
-         return
-
-     if args.from_clipboard:
-         try:
-             import pyperclip
-             updates = pyperclip.paste()
-             if updates:
-                 console.print("--- Parsing updates from clipboard ---", style="bold")
-                 paste_response(updates)
-             else:
-                 console.print("⚠️ Clipboard is empty. Nothing to parse.", style="yellow")
-         except ImportError:
-             console.print("❌ The 'pyperclip' library is required for clipboard functionality.", style="red")
-             console.print("Please install it using: pip install pyperclip", style="cyan")
-         except Exception as e:
-             console.print(f"❌ An error occurred while reading from the clipboard: {e}", style="red")
-         return
-
-     if args.from_file:
-         updates = read_from_file(args.from_file)
-         paste_response(updates)
-         return
-
-     system_prompt = textwrap.dedent("""
-         You are an expert pair programmer. Your purpose is to help users by modifying files based on their instructions.
-         Follow these rules strictly:
-         Your output should be a single file including all the updated files. For each file-block:
-         1. Only include code for files that need to be updated / edited.
-         2. For updated files, do not exclude any code even if it is unchanged code; assume the file code will be copy-pasted full in the file.
-         3. Do not include verbose inline comments explaining what every small change does. Try to keep comments concise but informative, if any.
-         4. Only update the relevant parts of each file relative to the provided task; do not make irrelevant edits even if you notice areas of improvements elsewhere.
-         5. Do not use diffs.
-         6. Make sure each file-block is returned in the following exact format. No additional text, comments, or explanations should be outside these blocks.
-         Expected format for a modified or new file:
-         <file_path:/absolute/path/to/your/file.py>
-         ```python
-         # The full, complete content of /absolute/path/to/your/file.py goes here.
-         def example_function():
-             return "Hello, World!"
-         ```
-     """)
-     history = [{"role": "system", "content": system_prompt}]
-
-     context = None
-     if args.voice not in ["False", "false"]:
-         from .listener import listen, speak
-         speak("Say your task instruction.")
-         task = listen()
-         if not task:
-             speak("No instruction heard. Exiting.")
-             return
-         speak(f"You said: {task}. Should I proceed?")
-         confirm = listen()
-         if confirm and "yes" in confirm.lower():
-             context = collect_context(args.config, configs)
-             run_update(task, args.model, history, context)
-             speak("Changes applied.")
-         else:
-             speak("Cancelled.")
-         return
-
-     if args.context_in:
-         context = read_from_file(args.context_in)
-     else:
-         if not args.config:
-             parser.error("A --config name is required unless using other flags like --context-in or other utility flags.")
-         context = collect_context(args.config, configs)
-         if context and args.context_out:
-             write_context_to_file(args.context_out, context)
-
-     if args.update not in ["False", "false"]:
-         if not args.task:
-             parser.error("The --task argument is required to generate updates.")
-         if context:
-             run_update(args.task, args.model, history, context)
-
- if __name__ == "__main__":
-     main()