agent-cli 0.68.4__py3-none-any.whl → 0.69.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (46)
  1. agent_cli/_requirements/audio.txt +10 -2
  2. agent_cli/_requirements/faster-whisper.txt +1 -0
  3. agent_cli/_requirements/kokoro.txt +1 -0
  4. agent_cli/_requirements/llm.txt +8 -1
  5. agent_cli/_requirements/memory.txt +1 -0
  6. agent_cli/_requirements/mlx-whisper.txt +2 -1
  7. agent_cli/_requirements/piper.txt +1 -0
  8. agent_cli/_requirements/rag.txt +1 -0
  9. agent_cli/_requirements/server.txt +1 -0
  10. agent_cli/_requirements/speed.txt +10 -2
  11. agent_cli/_requirements/vad.txt +10 -2
  12. agent_cli/agents/autocorrect.py +2 -1
  13. agent_cli/agents/memory/__init__.py +1 -0
  14. agent_cli/agents/memory/proxy.py +5 -12
  15. agent_cli/agents/rag_proxy.py +2 -9
  16. agent_cli/agents/transcribe.py +4 -1
  17. agent_cli/cli.py +18 -2
  18. agent_cli/config_cmd.py +1 -0
  19. agent_cli/core/chroma.py +4 -4
  20. agent_cli/core/deps.py +1 -1
  21. agent_cli/core/openai_proxy.py +9 -4
  22. agent_cli/core/process.py +2 -2
  23. agent_cli/core/reranker.py +5 -4
  24. agent_cli/core/utils.py +3 -1
  25. agent_cli/core/vad.py +2 -1
  26. agent_cli/core/watch.py +8 -6
  27. agent_cli/dev/cli.py +6 -5
  28. agent_cli/dev/coding_agents/base.py +1 -2
  29. agent_cli/dev/skill/SKILL.md +9 -4
  30. agent_cli/dev/skill/examples.md +65 -4
  31. agent_cli/install/extras.py +9 -7
  32. agent_cli/memory/_files.py +4 -1
  33. agent_cli/memory/_indexer.py +3 -2
  34. agent_cli/memory/_ingest.py +6 -5
  35. agent_cli/memory/_streaming.py +2 -2
  36. agent_cli/rag/_indexer.py +3 -2
  37. agent_cli/rag/api.py +1 -0
  38. agent_cli/server/cli.py +3 -5
  39. agent_cli/server/common.py +3 -4
  40. agent_cli/server/whisper/backends/faster_whisper.py +30 -23
  41. agent_cli/services/llm.py +2 -1
  42. {agent_cli-0.68.4.dist-info → agent_cli-0.69.0.dist-info}/METADATA +11 -6
  43. {agent_cli-0.68.4.dist-info → agent_cli-0.69.0.dist-info}/RECORD +46 -46
  44. {agent_cli-0.68.4.dist-info → agent_cli-0.69.0.dist-info}/WHEEL +0 -0
  45. {agent_cli-0.68.4.dist-info → agent_cli-0.69.0.dist-info}/entry_points.txt +0 -0
  46. {agent_cli-0.68.4.dist-info → agent_cli-0.69.0.dist-info}/licenses/LICENSE +0 -0
agent_cli/dev/skill/SKILL.md CHANGED
@@ -34,14 +34,19 @@ Do NOT spawn when:
 
 ## Core command
 
-For short prompts:
+For new features (starts from origin/main):
 ```bash
-agent-cli dev new <branch-name> --agent --prompt "Fix the login bug"
+agent-cli dev new <branch-name> --agent --prompt "Implement the new feature..."
+```
+
+For work on current branch (review, test, fix) - use `--from HEAD`:
+```bash
+agent-cli dev new <branch-name> --from HEAD --agent --prompt "Review/test/fix..."
 ```
 
 For longer prompts (recommended for multi-line or complex instructions):
 ```bash
-agent-cli dev new <branch-name> --agent --prompt-file path/to/prompt.md
+agent-cli dev new <branch-name> --from HEAD --agent --prompt-file path/to/prompt.md
 ```
 
 This creates:
@@ -129,7 +134,7 @@ Each agent works independently in its own branch. Results can be reviewed and me
 | `--agent` / `-a` | Start AI coding agent after creation |
 | `--prompt` / `-p` | Initial prompt for the agent (short prompts only) |
 | `--prompt-file` / `-P` | Read prompt from file (recommended for longer prompts) |
-| `--from` / `-f` | Base branch (default: origin/main) |
+| `--from` / `-f` | Base ref (default: origin/main). **Use `--from HEAD` when reviewing/testing current branch!** |
 | `--with-agent` | Specific agent: claude, aider, codex, gemini |
 | `--agent-args` | Extra arguments for the agent |
 
agent_cli/dev/skill/examples.md CHANGED
@@ -20,7 +20,68 @@ Each prompt for a spawned agent should follow this structure:
 5. **Focused scope** - Keep solutions minimal, implement only what's requested
 6. **Structured report** - Write conclusions to `.claude/REPORT.md`
 
-## Scenario 1: Multi-feature implementation
+## Scenario 1: Code review of current branch
+
+**User request**: "Review the code on this branch" or "Spawn an agent to review my changes"
+
+**CRITICAL**: Use `--from HEAD` (or the branch name) so the review agent has access to the changes!
+
+```bash
+# Review the current branch - MUST use --from HEAD
+agent-cli dev new review-changes --from HEAD --agent --prompt "Review the code changes on this branch.
+
+<workflow>
+- Run git diff origin/main...HEAD to identify all changes
+- Read changed files in parallel to understand context
+- Check CLAUDE.md for project-specific guidelines
+- Test changes with real services if applicable
+</workflow>
+
+<code_exploration>
+- Use git diff origin/main...HEAD to see the full diff
+- Read each changed file completely before judging
+- Look at surrounding code to understand patterns
+- Check existing tests to understand expected behavior
+</code_exploration>
+
+<context>
+Code review catches issues before merge. Focus on real problems - not style nitpicks. Apply these criteria:
+- Code cleanliness: Is the implementation clean and well-structured?
+- DRY principle: Does it avoid duplication?
+- Code reuse: Are there parts that should be reused from other places?
+- Organization: Is everything in the right place?
+- Consistency: Is it in the same style as other parts of the codebase?
+- Simplicity: Is it over-engineered? Remember KISS and YAGNI. No dead code paths, no defensive programming.
+- No pointless wrappers: Functions that just call another function should be inlined.
+- User experience: Does it provide a good user experience?
+- Tests: Are tests meaningful or just trivial coverage?
+- Live tests: Test changes with real services if applicable.
+- Rules: Does the code follow CLAUDE.md guidelines?
+</context>
+
+<scope>
+Review only - identify issues but do not fix them. Write findings to report.
+</scope>
+
+<report>
+Write your review to .claude/REPORT.md:
+
+## Summary
+[Overall assessment of the changes]
+
+## Issues Found
+| Severity | File:Line | Issue | Suggestion |
+|----------|-----------|-------|------------|
+| Critical/High/Medium/Low | path:123 | description | fix |
+
+## Positive Observations
+[What's well done]
+</report>"
+```
+
+**Common mistake**: Forgetting `--from HEAD` means the agent starts from `origin/main` and won't see any of the branch changes!
+
+## Scenario 2: Multi-feature implementation
 
 **User request**: "Implement user auth, payment processing, and email notifications"
 
@@ -169,7 +230,7 @@ After verifying tests pass, write to .claude/REPORT.md with summary, files chang
 </report>"
 ```
 
-## Scenario 2: Test-driven development
+## Scenario 3: Test-driven development
 
 **User request**: "Add a caching layer with comprehensive tests"
 
@@ -289,7 +350,7 @@ After ALL tests pass, write to .claude/REPORT.md:
 </report>"
 ```
 
-## Scenario 3: Large refactoring by module
+## Scenario 4: Large refactoring by module
 
 **User request**: "Refactor the API to use consistent error handling"
 
@@ -357,7 +418,7 @@ After tests pass and linting is clean, write to .claude/REPORT.md:
 </report>"
 ```
 
-## Scenario 4: Documentation and implementation in parallel
+## Scenario 5: Documentation and implementation in parallel
 
 **User request**: "Add a plugin system with documentation"
 
agent_cli/install/extras.py CHANGED
@@ -60,13 +60,15 @@ def _get_current_uv_tool_extras() -> list[str]:
     return []
 
 
-def _install_via_uv_tool(extras: list[str]) -> bool:
+def _install_via_uv_tool(extras: list[str], *, quiet: bool = False) -> bool:
     """Reinstall agent-cli via uv tool with the specified extras."""
     current_version = get_version("agent-cli").split("+")[0]  # Strip local version
     extras_str = ",".join(extras)
    package_spec = f"agent-cli[{extras_str}]=={current_version}"
    python_version = f"{sys.version_info.major}.{sys.version_info.minor}"
    cmd = ["uv", "tool", "install", package_spec, "--force", "--python", python_version]
+    if quiet:
+        cmd.append("-q")
     console.print(f"Running: [cyan]{' '.join(cmd)}[/]")
     result = subprocess.run(cmd, check=False)
     return result.returncode == 0
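The hunk above threads a new `quiet` flag into the `uv tool install` argv and, in the next hunk, merges already-installed extras with the newly requested ones. A standalone sketch of that command construction (the helper names here are illustrative, not the package's API):

```python
def build_uv_install_cmd(
    extras: list[str],
    current_version: str,
    python_version: str,
    *,
    quiet: bool = False,
) -> list[str]:
    """Build the `uv tool install` argv, mirroring _install_via_uv_tool."""
    package_spec = f"agent-cli[{','.join(extras)}]=={current_version}"
    cmd = ["uv", "tool", "install", package_spec, "--force", "--python", python_version]
    if quiet:
        cmd.append("-q")  # pass uv's quiet flag through
    return cmd


def merge_extras(current: list[str], requested: list[str]) -> list[str]:
    """Sorted, de-duplicated union, as in _install_extras_impl."""
    return sorted(set(current) | set(requested))


cmd = build_uv_install_cmd(merge_extras(["rag"], ["vad", "rag"]), "0.69.0", "3.12", quiet=True)
# cmd == ["uv", "tool", "install", "agent-cli[rag,vad]==0.69.0", "--force", "--python", "3.12", "-q"]
```

Reinstalling with the union (rather than only the new extras) matters because `uv tool install --force` replaces the whole tool environment.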
@@ -93,7 +95,7 @@ def _install_extras_impl(extras: list[str], *, quiet: bool = False) -> bool:
     if _is_uv_tool_install():
         current_extras = _get_current_uv_tool_extras()
         new_extras = sorted(set(current_extras) | set(extras))
-        return _install_via_uv_tool(new_extras)
+        return _install_via_uv_tool(new_extras, quiet=quiet)
 
     cmd = _install_cmd()
     for extra in extras:
@@ -120,7 +122,7 @@ def install_extras_programmatic(extras: list[str], *, quiet: bool = False) -> bo
     return bool(valid) and _install_extras_impl(valid, quiet=quiet)
 
 
-@app.command("install-extras", rich_help_panel="Installation")
+@app.command("install-extras", rich_help_panel="Installation", no_args_is_help=True)
 def install_extras(
     extras: Annotated[list[str] | None, typer.Argument(help="Extras to install")] = None,
     list_extras: Annotated[
@@ -135,10 +137,10 @@ def install_extras(
     """Install optional extras (rag, memory, vad, etc.) with pinned versions.
 
     Examples:
-        agent-cli install-extras rag  # Install RAG dependencies
-        agent-cli install-extras memory vad  # Install multiple extras
-        agent-cli install-extras --list  # Show available extras
-        agent-cli install-extras --all  # Install all extras
+    - `agent-cli install-extras rag`  # Install RAG dependencies
+    - `agent-cli install-extras memory vad`  # Install multiple extras
+    - `agent-cli install-extras --list`  # Show available extras
+    - `agent-cli install-extras --all`  # Install all extras
 
     """
     available = _available_extras()
agent_cli/memory/_files.py CHANGED
@@ -9,7 +9,6 @@ from pathlib import Path
 from typing import TYPE_CHECKING
 from uuid import uuid4
 
-import yaml
 from pydantic import ValidationError
 
 from agent_cli.core.utils import atomic_write_text
@@ -218,6 +217,8 @@ def load_snapshot(snapshot_path: Path) -> dict[str, MemoryFileRecord]:
 
 def _render_front_matter(doc_id: str, metadata: MemoryMetadata) -> str:
     """Return YAML front matter string."""
+    import yaml  # noqa: PLC0415
+
     meta_dict = metadata.model_dump(exclude_none=True)
     meta_dict = {"id": doc_id, **meta_dict}
     yaml_block = yaml.safe_dump(meta_dict, sort_keys=False)
@@ -233,6 +234,8 @@ def _split_front_matter(text: str) -> tuple[dict | None, str]:
         return None, text
     yaml_part = text[3:end]
     try:
+        import yaml  # noqa: PLC0415
+
         meta = yaml.safe_load(yaml_part) or {}
     except Exception:
         return None, text
agent_cli/memory/_indexer.py CHANGED
@@ -6,8 +6,6 @@ import logging
 from dataclasses import dataclass, field
 from typing import TYPE_CHECKING
 
-from watchfiles import Change
-
 from agent_cli.core.watch import watch_directory
 from agent_cli.memory._files import (
     _DELETED_DIRNAME,
@@ -24,6 +22,7 @@ if TYPE_CHECKING:
     from pathlib import Path
 
     from chromadb import Collection
+    from watchfiles import Change
 
 LOGGER = logging.getLogger(__name__)
 
@@ -108,6 +107,8 @@ async def watch_memory_store(collection: Collection, root: Path, *, index: Memor
 
 
 def _handle_change(change: Change, path: Path, collection: Collection, index: MemoryIndex) -> None:
+    from watchfiles import Change  # noqa: PLC0415
+
     if path.suffix == ".tmp":
         return
 
agent_cli/memory/_ingest.py CHANGED
@@ -9,8 +9,6 @@ from time import perf_counter
 from typing import TYPE_CHECKING
 from uuid import uuid4
 
-import httpx
-
 from agent_cli.memory._git import commit_changes
 from agent_cli.memory._persistence import delete_memory_files, persist_entries, persist_summary
 from agent_cli.memory._prompt import (
@@ -58,6 +56,7 @@ async def extract_salient_facts(
     if not user_message and not assistant_message:
         return []
 
+    import httpx  # noqa: PLC0415
     from pydantic_ai import Agent  # noqa: PLC0415
     from pydantic_ai.exceptions import AgentRunError, UnexpectedModelBehavior  # noqa: PLC0415
     from pydantic_ai.models.openai import OpenAIChatModel  # noqa: PLC0415
@@ -174,16 +173,18 @@ async def reconcile_facts(
             if f.strip()
         ]
         return entries, [], {}
-    id_map: dict[int, str] = {idx: mem.id for idx, mem in enumerate(existing)}
-    existing_json = [{"id": idx, "text": mem.content} for idx, mem in enumerate(existing)]
-    existing_ids = set(id_map.keys())
 
+    import httpx  # noqa: PLC0415
     from pydantic_ai import Agent, ModelRetry, PromptedOutput  # noqa: PLC0415
     from pydantic_ai.exceptions import AgentRunError, UnexpectedModelBehavior  # noqa: PLC0415
     from pydantic_ai.models.openai import OpenAIChatModel  # noqa: PLC0415
     from pydantic_ai.providers.openai import OpenAIProvider  # noqa: PLC0415
     from pydantic_ai.settings import ModelSettings  # noqa: PLC0415
 
+    id_map: dict[int, str] = {idx: mem.id for idx, mem in enumerate(existing)}
+    existing_json = [{"id": idx, "text": mem.content} for idx, mem in enumerate(existing)]
+    existing_ids = set(id_map.keys())
+
     provider = OpenAIProvider(api_key=api_key or "dummy", base_url=openai_base_url)
     model_cfg = OpenAIChatModel(
         model_name=model,
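The relocated `id_map` block gives the model small integer handles for existing memories instead of long opaque IDs; the map translates the model's answers back afterwards. A dependency-free sketch of that indexing step (the `Memory` record here is a hypothetical stand-in for the real memory type):

```python
from dataclasses import dataclass


@dataclass
class Memory:  # hypothetical stand-in for the real memory record
    id: str
    content: str


def index_existing(existing: list[Memory]) -> tuple[dict[int, str], list[dict], set[int]]:
    """Map stable memory IDs to small integer indices for the LLM prompt."""
    id_map = {idx: mem.id for idx, mem in enumerate(existing)}
    existing_json = [{"id": idx, "text": mem.content} for idx, mem in enumerate(existing)]
    return id_map, existing_json, set(id_map)


id_map, existing_json, ids = index_existing(
    [Memory("uuid-a", "likes tea"), Memory("uuid-b", "lives in NL")],
)
# id_map == {0: "uuid-a", 1: "uuid-b"}
```

Short numeric handles keep the prompt compact and make the model's references easy to validate (any returned id outside `existing_ids` is a retry signal).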
agent_cli/memory/_streaming.py CHANGED
@@ -4,8 +4,6 @@ from __future__ import annotations
 
 from typing import TYPE_CHECKING, Any
 
-import httpx
-
 from agent_cli.core.sse import extract_content_from_chunk, parse_chunk
 
 if TYPE_CHECKING:
@@ -20,6 +18,8 @@ async def stream_chat_sse(
     request_timeout: float = 120.0,
 ) -> AsyncGenerator[str, None]:
     """Stream Server-Sent Events from an OpenAI-compatible chat completion endpoint."""
+    import httpx  # noqa: PLC0415
+
     url = f"{openai_base_url.rstrip('/')}/chat/completions"
     async with (
         httpx.AsyncClient(timeout=request_timeout) as client,
agent_cli/rag/_indexer.py CHANGED
@@ -5,8 +5,6 @@ from __future__ import annotations
 import logging
 from typing import TYPE_CHECKING
 
-from watchfiles import Change
-
 from agent_cli.core.watch import watch_directory
 from agent_cli.rag._indexing import index_file, remove_file
 from agent_cli.rag._utils import should_ignore_path
@@ -15,6 +13,7 @@ if TYPE_CHECKING:
     from pathlib import Path
 
     from chromadb import Collection
+    from watchfiles import Change
 
 LOGGER = logging.getLogger(__name__)
 
@@ -50,6 +49,8 @@ def _handle_change(
     file_hashes: dict[str, str],
     file_mtimes: dict[str, float],
 ) -> None:
+    from watchfiles import Change  # noqa: PLC0415
+
     try:
         if change == Change.deleted:
             LOGGER.info("[deleted] Removing from index: %s", file_path.name)
agent_cli/rag/api.py CHANGED
@@ -24,6 +24,7 @@ from agent_cli.rag.models import ChatRequest  # noqa: TC001
 if TYPE_CHECKING:
     from pathlib import Path
 
+
 LOGGER = logging.getLogger(__name__)
 
 
agent_cli/server/cli.py CHANGED
@@ -9,15 +9,13 @@ from pathlib import Path  # noqa: TC003 - Typer needs this at runtime
 from typing import Annotated
 
 import typer
-from rich.console import Console
 
 from agent_cli.cli import app as main_app
 from agent_cli.core.deps import requires_extras
 from agent_cli.core.process import set_process_title
+from agent_cli.core.utils import console, err_console
 from agent_cli.server.common import setup_rich_logging
 
-console = Console()
-err_console = Console(stderr=True)
 logger = logging.getLogger(__name__)
 
 # Check for optional dependencies at call time (not module load time)
@@ -295,7 +293,7 @@ def whisper_cmd(  # noqa: PLR0912, PLR0915
 
     """
     # Setup Rich logging for consistent output
-    setup_rich_logging(log_level, console=console)
+    setup_rich_logging(log_level)
 
     valid_backends = ("auto", "faster-whisper", "mlx")
     if backend not in valid_backends:
@@ -614,7 +612,7 @@ def tts_cmd(  # noqa: PLR0915
 
     """
     # Setup Rich logging for consistent output
-    setup_rich_logging(log_level, console=console)
+    setup_rich_logging(log_level)
 
     valid_backends = ("auto", "piper", "kokoro")
     if backend not in valid_backends:
agent_cli/server/common.py CHANGED
@@ -9,10 +9,10 @@ import logging
 from contextlib import asynccontextmanager
 from typing import TYPE_CHECKING, Any, Protocol
 
-from rich.console import Console
 from rich.logging import RichHandler
 
 from agent_cli import constants
+from agent_cli.core.utils import console
 
 if TYPE_CHECKING:
     import wave
@@ -128,7 +128,7 @@ def configure_app(app: FastAPI) -> None:
         return await log_requests_middleware(request, call_next)
 
 
-def setup_rich_logging(log_level: str = "info", *, console: Console | None = None) -> None:
+def setup_rich_logging(log_level: str = "info") -> None:
     """Configure logging to use Rich for consistent, pretty output.
 
     This configures:
@@ -141,11 +141,10 @@ def setup_rich_logging(log_level: str = "info", *, console: Console | None = Non
 
     """
     level = getattr(logging, log_level.upper(), logging.INFO)
-    rich_console = console or Console()
 
     # Create Rich handler with clean format
     handler = RichHandler(
-        console=rich_console,
+        console=console,
         show_time=True,
         show_level=True,
         show_path=False,  # Don't show file:line - too verbose
agent_cli/server/whisper/backends/faster_whisper.py CHANGED
@@ -6,6 +6,7 @@ import asyncio
 import logging
 import tempfile
 from concurrent.futures import ProcessPoolExecutor
+from dataclasses import dataclass
 from multiprocessing import get_context
 from pathlib import Path
 from typing import Any, Literal
@@ -19,6 +20,24 @@ from agent_cli.server.whisper.backends.base import (
 logger = logging.getLogger(__name__)
 
 
+# --- Subprocess state (only used within subprocess worker) ---
+# This state persists across function calls within the subprocess because:
+# 1. Model loading is expensive and must be reused across transcription calls
+# 2. CTranslate2 models cannot be pickled/passed through IPC queues
+# 3. The subprocess is long-lived (ProcessPoolExecutor reuses workers)
+
+
+@dataclass
+class _SubprocessState:
+    """Container for subprocess-local state. Not shared with main process."""
+
+    model: Any = None
+    device: str | None = None
+
+
+_state = _SubprocessState()
+
+
 # --- Subprocess worker functions (run in isolated process) ---
 
 
@@ -40,28 +59,22 @@ def _load_model_in_subprocess(
         cpu_threads=cpu_threads,
         download_root=download_root,
     )
-    return str(model.model.device)
+
+    # Store in subprocess state for reuse across transcription calls
+    _state.model = model
+    _state.device = str(model.model.device)
+
+    return _state.device
 
 
 def _transcribe_in_subprocess(
-    model_name: str,
-    device: str,
-    compute_type: str,
-    cpu_threads: int,
-    download_root: str | None,
     audio_bytes: bytes,
     kwargs: dict[str, Any],
 ) -> dict[str, Any]:
-    """Run transcription in subprocess. Model is loaded fresh each call."""
-    from faster_whisper import WhisperModel  # noqa: PLC0415
-
-    model = WhisperModel(
-        model_name,
-        device=device,
-        compute_type=compute_type,
-        cpu_threads=cpu_threads,
-        download_root=download_root,
-    )
+    """Run transcription in subprocess. Reuses model from _state."""
+    if _state.model is None:
+        msg = "Model not loaded in subprocess. Call _load_model_in_subprocess first."
+        raise RuntimeError(msg)
 
     # Write audio to temp file - faster-whisper needs a file path
     with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as tmp:
@@ -69,7 +82,7 @@ def _transcribe_in_subprocess(
         tmp_path = tmp.name
 
     try:
-        segments, info = model.transcribe(tmp_path, **kwargs)
+        segments, info = _state.model.transcribe(tmp_path, **kwargs)
         segment_list = list(segments)  # Consume lazy generator
     finally:
         Path(tmp_path).unlink(missing_ok=True)
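faster-whisper wants a file path and returns segments as a lazy generator, hence the write-to-temp-file, `list(segments)`, `finally`-unlink shape in the hunk above. The same shape with a stand-in transcriber (the `fake_transcribe` generator is hypothetical, standing in for `model.transcribe`):

```python
import tempfile
from pathlib import Path


def fake_transcribe(path: str):
    """Stand-in for model.transcribe: yields segments lazily from a file path."""
    yield from Path(path).read_bytes().split(b" ")


def transcribe_bytes(audio_bytes: bytes) -> list[bytes]:
    # Write audio to a temp file - the backend needs a real path, not bytes
    with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as tmp:
        tmp.write(audio_bytes)
        tmp_path = tmp.name
    try:
        segments = fake_transcribe(tmp_path)
        # Consume the lazy generator while the file still exists
        return list(segments)
    finally:
        Path(tmp_path).unlink(missing_ok=True)


print(transcribe_bytes(b"a b c"))  # [b'a', b'b', b'c']
```

Consuming the generator inside the `try` matters: if the segments were returned lazily, the `finally` would delete the file before the caller ever read it.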
@@ -195,16 +208,10 @@ class FasterWhisperBackend:
             "word_timestamps": word_timestamps,
         }
 
-        download_root = str(self._config.cache_dir) if self._config.cache_dir else None
         loop = asyncio.get_running_loop()
         result = await loop.run_in_executor(
             self._executor,
             _transcribe_in_subprocess,
-            self._config.model_name,
-            self._config.device,
-            self._config.compute_type,
-            self._config.cpu_threads,
-            download_root,
             audio,
             kwargs,
         )
agent_cli/services/llm.py CHANGED
@@ -6,7 +6,6 @@ import sys
 import time
 from typing import TYPE_CHECKING
 
-import pyperclip
 from rich.live import Live
 
 from agent_cli.core.utils import console, live_timer, print_error_message, print_output_panel
@@ -156,6 +155,8 @@ async def get_llm_response(
     result_text = result.output
 
     if clipboard:
+        import pyperclip  # noqa: PLC0415
+
         pyperclip.copy(result_text)
         logger.info("Copied result to clipboard.")
 
{agent_cli-0.68.4.dist-info → agent_cli-0.69.0.dist-info}/METADATA CHANGED
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: agent-cli
-Version: 0.68.4
+Version: 0.69.0
 Summary: A suite of AI-powered command-line tools for text correction, audio transcription, and voice assistance.
 Project-URL: Homepage, https://github.com/basnijholt/agent-cli
 Author-email: Bas Nijholt <bas@nijho.lt>
@@ -13,6 +13,7 @@ Requires-Dist: pydantic
 Requires-Dist: pyperclip
 Requires-Dist: rich
 Requires-Dist: setproctitle
+Requires-Dist: typer
 Requires-Dist: typer-slim[standard]
 Provides-Extra: audio
 Requires-Dist: numpy; extra == 'audio'
@@ -20,9 +21,10 @@ Requires-Dist: sounddevice>=0.4.6; extra == 'audio'
 Requires-Dist: wyoming>=1.5.2; extra == 'audio'
 Provides-Extra: dev
 Requires-Dist: markdown-code-runner>=2.7.0; extra == 'dev'
+Requires-Dist: markdown-gfm-admonition; extra == 'dev'
 Requires-Dist: notebook; extra == 'dev'
+Requires-Dist: pre-commit-uv>=4.1.4; extra == 'dev'
 Requires-Dist: pre-commit>=3.0.0; extra == 'dev'
-Requires-Dist: pydantic-ai-slim[openai]; extra == 'dev'
 Requires-Dist: pylint>=3.0.0; extra == 'dev'
 Requires-Dist: pytest-asyncio>=0.20.0; extra == 'dev'
 Requires-Dist: pytest-cov>=4.0.0; extra == 'dev'
@@ -31,6 +33,7 @@ Requires-Dist: pytest-timeout; extra == 'dev'
 Requires-Dist: pytest>=7.0.0; extra == 'dev'
 Requires-Dist: ruff; extra == 'dev'
 Requires-Dist: versioningit; extra == 'dev'
+Requires-Dist: zensical; extra == 'dev'
 Provides-Extra: faster-whisper
 Requires-Dist: fastapi[standard]; extra == 'faster-whisper'
 Requires-Dist: faster-whisper>=1.0.0; extra == 'faster-whisper'
@@ -70,7 +73,6 @@ Requires-Dist: fastapi[standard]; extra == 'server'
 Provides-Extra: speed
 Requires-Dist: audiostretchy>=1.3.0; extra == 'speed'
 Provides-Extra: test
-Requires-Dist: pydantic-ai-slim[openai]; extra == 'test'
 Requires-Dist: pytest-asyncio>=0.20.0; extra == 'test'
 Requires-Dist: pytest-cov>=4.0.0; extra == 'test'
 Requires-Dist: pytest-mock; extra == 'test'
@@ -496,9 +498,12 @@ agent-cli install-extras rag memory vad
 
 Install optional extras (rag, memory, vad, etc.) with pinned versions.
 
-Examples: agent-cli install-extras rag # Install RAG dependencies agent-cli
-install-extras memory vad # Install multiple extras agent-cli install-extras --list
-# Show available extras agent-cli install-extras --all # Install all extras
+Examples:
+
+ • agent-cli install-extras rag         # Install RAG dependencies
+ • agent-cli install-extras memory vad  # Install multiple extras
+ • agent-cli install-extras --list      # Show available extras
+ • agent-cli install-extras --all       # Install all extras
 
 ╭─ Arguments ────────────────────────────────────────────────────────────────────────────╮
 │ extras   [EXTRAS]...   Extras to install                                               │