testfix-0.1.0.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
testfix-0.1.0/PKG-INFO ADDED
@@ -0,0 +1,214 @@
+ Metadata-Version: 2.4
+ Name: testfix
+ Version: 0.1.0
+ Summary: AI-powered CLI to automatically fix failing tests — non-interactive, pipeable, CI-ready
+ Author: faw21
+ License: BSL-1.1
+ Keywords: ai,automation,cli,developer-tools,pytest,tdd,testing
+ Classifier: Development Status :: 3 - Alpha
+ Classifier: Environment :: Console
+ Classifier: Intended Audience :: Developers
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Topic :: Software Development :: Testing
+ Classifier: Topic :: Utilities
+ Requires-Python: >=3.9
+ Requires-Dist: anthropic>=0.25
+ Requires-Dist: click>=8.0
+ Requires-Dist: openai>=1.0
+ Requires-Dist: python-dotenv>=1.0
+ Requires-Dist: rich>=13.0
+ Provides-Extra: dev
+ Requires-Dist: anthropic>=0.25; extra == 'dev'
+ Requires-Dist: openai>=1.0; extra == 'dev'
+ Requires-Dist: pytest-cov; extra == 'dev'
+ Requires-Dist: pytest-mock; extra == 'dev'
+ Requires-Dist: pytest>=8.0; extra == 'dev'
+ Description-Content-Type: text/markdown
+
+ # testfix
+
+ **AI-powered failing test auto-fixer.** Run your tests, let the AI fix the failures, repeat until green — all from one command.
+
+ ```bash
+ pip install testfix
+ testfix pytest   # run, fix, retry up to 5× until tests pass
+ ```
+
+ > Supports pytest · jest · vitest · go test · cargo test
+
+ ---
+
+ ## Why testfix?
+
+ You've been there: you write a feature, run tests, and see 5 failures. Fixing each one manually means reading tracebacks, understanding what broke, writing a fix, re-running — and repeating. testfix automates that loop.
+
+ Unlike GitHub Copilot (needs IDE) or aider (interactive session), testfix is a **pure CLI tool**: non-interactive, pipeable, pre-push hook-ready.
+
+ ---
+
+ ## Installation
+
+ ```bash
+ pip install testfix
+ ```
+
+ Requires Python 3.9+. No configuration needed — just install and run.
+
+ ### API Keys
+
+ Set one of these (or use Ollama for free local inference):
+
+ ```bash
+ export ANTHROPIC_API_KEY=sk-ant-...   # Claude (default)
+ export OPENAI_API_KEY=sk-...          # OpenAI
+ # or use --provider ollama (no key needed, runs locally)
+ ```
+
+ ---
+
+ ## Quick Start
+
+ ```bash
+ # Run pytest and fix failures (up to 5 tries)
+ testfix pytest
+
+ # Fix a specific test file
+ testfix pytest tests/test_auth.py
+
+ # Run once: test → fix → test again
+ testfix --once pytest
+
+ # Preview fixes without applying them
+ testfix --dry-run pytest
+
+ # Use OpenAI instead of Claude
+ testfix --provider openai pytest
+
+ # Use local Ollama (free, no API key)
+ testfix --provider ollama pytest
+
+ # Use Jest
+ testfix npm test
+
+ # Use Go test
+ testfix go test ./...
+
+ # Focus on a specific source file
+ testfix --file src/auth.py pytest
+ ```
+
+ ---
+
+ ## How it works
+
+ ```
+ testfix pytest
+
+ ├─ Attempt 1/5
+ │  ├─ Run: pytest
+ │  ├─ 3 failing tests found
+ │  ├─ 🤖 Asking Claude to fix 3 failure(s)…
+ │  ├─ Generated 1 file fix(es)
+ │  │  └─ src/auth.py (diff shown)
+ │  └─ ✔ Applied fix (backup: .testfix.bak)
+
+ ├─ Attempt 2/5
+ │  ├─ Run: pytest
+ │  └─ ✅ All tests pass!
+
+ └─ Exit 0
+ ```
+
+ 1. **Run** your tests with the command you provide
+ 2. **Parse** failures: extracts test name, location, error, traceback
+ 3. **Collect** relevant source files (from tracebacks + heuristics)
+ 4. **Ask** the AI to fix the source code — never modifies test files
+ 5. **Apply** the fix (with `.testfix.bak` backup)
+ 6. **Repeat** until tests pass or `--max-tries` is reached
+
+ ---
+
+ ## Options
+
+ ```
+ testfix [OPTIONS] TEST_COMMAND...
+
+ Options:
+   --max-tries N      Max fix-and-retry iterations (default: 5)
+   --once             Run once: test → fix → test again (--max-tries 2)
+   --dry-run          Show diffs but don't apply fixes
+   --provider NAME    LLM provider: claude, openai, ollama (default: claude)
+   --model NAME       Model override (e.g. claude-sonnet-4-5, gpt-4o)
+   --file PATH        Focus fixes on this source file
+   -v, --verbose      Show full test runner stderr
+   --version          Show version and exit
+ ```
+
+ ---
+
+ ## Pre-push hook
+
+ Add to `.git/hooks/pre-push`:
+
+ ```bash
+ #!/bin/bash
+ testfix --once --provider ollama pytest
+ ```
+
+ This runs tests, lets AI fix any failures, and blocks the push if tests are still failing.
+
+ ---
+
+ ## CI / GitHub Actions
+
+ ```yaml
+ - name: Run and auto-fix tests
+   run: testfix pytest
+   env:
+     ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
+ ```
+
+ Or use `--dry-run` in CI to just report what would be fixed without changing files:
+
+ ```yaml
+ - name: Check if AI can fix test failures
+   run: testfix --dry-run pytest || echo "Tests failing (see diff above)"
+ ```
+
+ ---
+
+ ## Supported test frameworks
+
+ | Framework | Command example |
+ |-----------|----------------|
+ | **pytest** | `testfix pytest` |
+ | **jest** | `testfix npx jest` |
+ | **vitest** | `testfix npx vitest` |
+ | **go test** | `testfix go test ./...` |
+ | **cargo test** | `testfix cargo test` |
+ | **rspec** | `testfix bundle exec rspec` |
+ | **any** | `testfix <your test command>` |
+
+ ---
+
+ ## Ecosystem
+
+ testfix is part of a suite of AI-powered developer CLI tools:
+
+ | Tool | Purpose |
+ |------|---------|
+ | **[critiq](https://github.com/faw21/critiq)** | AI code reviewer — catch issues before you push |
+ | **[testfix](https://github.com/faw21/testfix)** | AI test fixer — automatically fix failing tests |
+ | **[difftests](https://github.com/faw21/difftests)** | AI test generator — write tests for your diffs |
+ | **[gpr](https://github.com/faw21/gpr)** | AI PR description + commit message generator |
+ | **[gitbrief](https://github.com/faw21/gitbrief)** | Pack your codebase into LLM context |
+ | **[standup-ai](https://github.com/faw21/standup-ai)** | AI daily standup generator |
+ | **[changelog-ai](https://github.com/faw21/changelog-ai)** | AI CHANGELOG generator |
+ | **[prcat](https://github.com/faw21/prcat)** | AI PR reviewer for incoming PRs |
+ | **[chronicle](https://github.com/faw21/chronicle)** | Turn git history into stories |
+
+ ---
+
+ ## License
+
+ Business Source License 1.1 — free for non-commercial use. See [LICENSE](LICENSE).
testfix-0.1.0/README.md ADDED
@@ -0,0 +1,187 @@
+ # testfix
+
+ **AI-powered failing test auto-fixer.** Run your tests, let the AI fix the failures, repeat until green — all from one command.
+
+ ```bash
+ pip install testfix
+ testfix pytest   # run, fix, retry up to 5× until tests pass
+ ```
+
+ > Supports pytest · jest · vitest · go test · cargo test
+
+ ---
+
+ ## Why testfix?
+
+ You've been there: you write a feature, run tests, and see 5 failures. Fixing each one manually means reading tracebacks, understanding what broke, writing a fix, re-running — and repeating. testfix automates that loop.
+
+ Unlike GitHub Copilot (needs IDE) or aider (interactive session), testfix is a **pure CLI tool**: non-interactive, pipeable, pre-push hook-ready.
+
+ ---
+
+ ## Installation
+
+ ```bash
+ pip install testfix
+ ```
+
+ Requires Python 3.9+. No configuration needed — just install and run.
+
+ ### API Keys
+
+ Set one of these (or use Ollama for free local inference):
+
+ ```bash
+ export ANTHROPIC_API_KEY=sk-ant-...   # Claude (default)
+ export OPENAI_API_KEY=sk-...          # OpenAI
+ # or use --provider ollama (no key needed, runs locally)
+ ```
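Besides shell exports, the CLI also reads a `.env` file from the working directory (it calls python-dotenv's `load_dotenv(override=True)` on startup, per `cli.py` below). A minimal sketch of what that loading amounts to (a hand-rolled stand-in for `load_dotenv`, not testfix's actual code):

```python
# Illustration only: roughly what python-dotenv's load_dotenv() does --
# copy KEY=VALUE lines from a .env file into os.environ.
import os

def load_env_file(path: str = ".env", override: bool = True) -> None:
    try:
        lines = open(path, encoding="utf-8").read().splitlines()
    except FileNotFoundError:
        return  # no .env file is fine; fall back to the shell environment
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        key, value = key.strip(), value.strip()
        if override or key not in os.environ:
            os.environ[key] = value
```

With `ANTHROPIC_API_KEY=sk-ant-...` in a `.env` file, the key reaches the provider client without touching your shell profile.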
+
+ ---
+
+ ## Quick Start
+
+ ```bash
+ # Run pytest and fix failures (up to 5 tries)
+ testfix pytest
+
+ # Fix a specific test file
+ testfix pytest tests/test_auth.py
+
+ # Run once: test → fix → test again
+ testfix --once pytest
+
+ # Preview fixes without applying them
+ testfix --dry-run pytest
+
+ # Use OpenAI instead of Claude
+ testfix --provider openai pytest
+
+ # Use local Ollama (free, no API key)
+ testfix --provider ollama pytest
+
+ # Use Jest
+ testfix npm test
+
+ # Use Go test
+ testfix go test ./...
+
+ # Focus on a specific source file
+ testfix --file src/auth.py pytest
+ ```
+
+ ---
+
+ ## How it works
+
+ ```
+ testfix pytest
+
+ ├─ Attempt 1/5
+ │  ├─ Run: pytest
+ │  ├─ 3 failing tests found
+ │  ├─ 🤖 Asking Claude to fix 3 failure(s)…
+ │  ├─ Generated 1 file fix(es)
+ │  │  └─ src/auth.py (diff shown)
+ │  └─ ✔ Applied fix (backup: .testfix.bak)
+
+ ├─ Attempt 2/5
+ │  ├─ Run: pytest
+ │  └─ ✅ All tests pass!
+
+ └─ Exit 0
+ ```
+
+ 1. **Run** your tests with the command you provide
+ 2. **Parse** failures: extracts test name, location, error, traceback
+ 3. **Collect** relevant source files (from tracebacks + heuristics)
+ 4. **Ask** the AI to fix the source code — never modifies test files
+ 5. **Apply** the fix (with `.testfix.bak` backup)
+ 6. **Repeat** until tests pass or `--max-tries` is reached
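The six steps above boil down to one loop. The following is a simplified stand-in (plain `subprocess` plus a pluggable `fixer` callback), not testfix's real implementation, which adds failure parsing, diff previews, and `.testfix.bak` backups:

```python
# Simplified sketch of the run -> fix -> retry loop (not the real CLI).
import subprocess

def fix_loop(command, max_tries=5, fixer=None):
    """Run `command`; while it fails, hand the output to `fixer` and retry."""
    for attempt in range(1, max_tries + 1):
        proc = subprocess.run(command, capture_output=True, text=True)
        if proc.returncode == 0:
            return 0  # green: all tests pass
        if attempt == max_tries or fixer is None:
            return 1  # out of tries (or nothing to fix with)
        fixer(proc.stdout + proc.stderr)  # e.g. ask an LLM for a patch
    return 1
```

The exit codes mirror the CLI's contract: `0` on green, `1` when still failing after the retry budget.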
+
+ ---
+
+ ## Options
+
+ ```
+ testfix [OPTIONS] TEST_COMMAND...
+
+ Options:
+   --max-tries N      Max fix-and-retry iterations (default: 5)
+   --once             Run once: test → fix → test again (--max-tries 2)
+   --dry-run          Show diffs but don't apply fixes
+   --provider NAME    LLM provider: claude, openai, ollama (default: claude)
+   --model NAME       Model override (e.g. claude-sonnet-4-5, gpt-4o)
+   --file PATH        Focus fixes on this source file
+   -v, --verbose      Show full test runner stderr
+   --version          Show version and exit
+ ```
+
+ ---
+
+ ## Pre-push hook
+
+ Add to `.git/hooks/pre-push`:
+
+ ```bash
+ #!/bin/bash
+ testfix --once --provider ollama pytest
+ ```
+
+ This runs tests, lets AI fix any failures, and blocks the push if tests are still failing.
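A one-time install of that hook might look like this (paths assume a standard, non-bare repository; `mkdir -p` is a no-op when the directory already exists):

```shell
# Write the pre-push hook and make it executable.
mkdir -p .git/hooks
cat > .git/hooks/pre-push <<'EOF'
#!/bin/bash
testfix --once --provider ollama pytest
EOF
chmod +x .git/hooks/pre-push
```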
+
+ ---
+
+ ## CI / GitHub Actions
+
+ ```yaml
+ - name: Run and auto-fix tests
+   run: testfix pytest
+   env:
+     ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
+ ```
+
+ Or use `--dry-run` in CI to just report what would be fixed without changing files:
+
+ ```yaml
+ - name: Check if AI can fix test failures
+   run: testfix --dry-run pytest || echo "Tests failing (see diff above)"
+ ```
+
+ ---
+
+ ## Supported test frameworks
+
+ | Framework | Command example |
+ |-----------|----------------|
+ | **pytest** | `testfix pytest` |
+ | **jest** | `testfix npx jest` |
+ | **vitest** | `testfix npx vitest` |
+ | **go test** | `testfix go test ./...` |
+ | **cargo test** | `testfix cargo test` |
+ | **rspec** | `testfix bundle exec rspec` |
+ | **any** | `testfix <your test command>` |
+
+ ---
+
+ ## Ecosystem
+
+ testfix is part of a suite of AI-powered developer CLI tools:
+
+ | Tool | Purpose |
+ |------|---------|
+ | **[critiq](https://github.com/faw21/critiq)** | AI code reviewer — catch issues before you push |
+ | **[testfix](https://github.com/faw21/testfix)** | AI test fixer — automatically fix failing tests |
+ | **[difftests](https://github.com/faw21/difftests)** | AI test generator — write tests for your diffs |
+ | **[gpr](https://github.com/faw21/gpr)** | AI PR description + commit message generator |
+ | **[gitbrief](https://github.com/faw21/gitbrief)** | Pack your codebase into LLM context |
+ | **[standup-ai](https://github.com/faw21/standup-ai)** | AI daily standup generator |
+ | **[changelog-ai](https://github.com/faw21/changelog-ai)** | AI CHANGELOG generator |
+ | **[prcat](https://github.com/faw21/prcat)** | AI PR reviewer for incoming PRs |
+ | **[chronicle](https://github.com/faw21/chronicle)** | Turn git history into stories |
+
+ ---
+
+ ## License
+
+ Business Source License 1.1 — free for non-commercial use. See [LICENSE](LICENSE).
testfix-0.1.0/pyproject.toml ADDED
@@ -0,0 +1,50 @@
+ [build-system]
+ requires = ["hatchling"]
+ build-backend = "hatchling.build"
+
+ [project]
+ name = "testfix"
+ version = "0.1.0"
+ description = "AI-powered CLI to automatically fix failing tests — non-interactive, pipeable, CI-ready"
+ readme = "README.md"
+ requires-python = ">=3.9"
+ license = { text = "BSL-1.1" }
+ authors = [{ name = "faw21" }]
+ keywords = ["ai", "testing", "cli", "developer-tools", "automation", "pytest", "tdd"]
+ classifiers = [
+     "Development Status :: 3 - Alpha",
+     "Intended Audience :: Developers",
+     "Environment :: Console",
+     "Programming Language :: Python :: 3",
+     "Topic :: Software Development :: Testing",
+     "Topic :: Utilities",
+ ]
+ dependencies = [
+     "click>=8.0",
+     "anthropic>=0.25",
+     "openai>=1.0",
+     "rich>=13.0",
+     "python-dotenv>=1.0",
+ ]
+
+ [project.optional-dependencies]
+ dev = [
+     "pytest>=8.0",
+     "pytest-cov",
+     "pytest-mock",
+     "anthropic>=0.25",
+     "openai>=1.0",
+ ]
+
+ [project.scripts]
+ testfix = "testfix.cli:main"
+
+ [tool.hatch.build.targets.wheel]
+ packages = ["src/testfix"]
+
+ [tool.pytest.ini_options]
+ testpaths = ["tests"]
+ addopts = "--cov=testfix --cov-report=term-missing --cov-fail-under=80"
+
+ [tool.coverage.run]
+ source = ["src/testfix"]
testfix-0.1.0/src/testfix/__init__.py ADDED
@@ -0,0 +1,3 @@
+ """testfix — AI-powered failing test auto-fixer."""
+
+ __version__ = "0.1.0"
testfix-0.1.0/src/testfix/cli.py ADDED
@@ -0,0 +1,234 @@
+ """testfix CLI — run tests, fix failures, repeat."""
+
+ from __future__ import annotations
+
+ import sys
+ from pathlib import Path
+ from typing import Optional
+
+ import click
+ from rich.console import Console
+ from rich.panel import Panel
+ from rich.syntax import Syntax
+ from rich.text import Text
+
+ from . import __version__
+ from .fixer import apply_patches, generate_fixes
+ from .runner import RunResult, run_tests
+
+ console = Console()
+
+
+ # ── Helpers ───────────────────────────────────────────────────────────────────
+
+
+ def _print_run_summary(result: RunResult, attempt: int) -> None:
+     """Print a summary of the test run."""
+     if result.all_passed:
+         console.print(f"[bold green]✅ All tests pass![/] (attempt {attempt})")
+     else:
+         icon = "🔴"
+         console.print(
+             f"{icon} [bold red]{result.failed} failing[/], "
+             f"[green]{result.passed} passing[/] "
+             f"(attempt {attempt})"
+         )
+         for failure in result.failures[:5]:
+             loc = f" [dim]{failure.file_path}:{failure.line_number}[/]" if failure.file_path else ""
+             console.print(f" └─ {failure.test_name}{loc}")
+         if len(result.failures) > 5:
+             console.print(f" └─ [dim]… and {len(result.failures) - 5} more[/]")
+
+
+ def _print_diff(diff_lines: list[str], file_path: str) -> None:
+     """Print a colorised unified diff."""
+     diff_text = "".join(diff_lines)
+     if diff_text:
+         syntax = Syntax(diff_text, "diff", theme="monokai", line_numbers=False)
+         console.print(Panel(syntax, title=f"[bold]{file_path}[/]", border_style="dim"))
+
+
+ def _do_fix_cycle(
+     command: list[str],
+     *,
+     cwd: str,
+     max_tries: int,
+     provider: str,
+     model: Optional[str],
+     dry_run: bool,
+     focus_file: Optional[str],
+     verbose: bool,
+ ) -> int:
+     """
+     Core retry loop.
+
+     Returns:
+         0 — all tests pass
+         1 — still failing after max_tries
+         2 — error (command not found etc.)
+     """
+     attempt = 0
+
+     while attempt < max_tries:
+         attempt += 1
+         console.rule(f"[dim]Attempt {attempt}/{max_tries}[/]")
+
+         # Run tests
+         result = run_tests(command, cwd=cwd)
+
+         if verbose and result.stderr:
+             console.print(f"[dim]{result.stderr[:500]}[/]")
+
+         _print_run_summary(result, attempt)
+
+         if result.exit_code == 127:
+             console.print(f"[bold red]Error:[/] {result.stderr}")
+             return 2
+
+         if result.all_passed:
+             return 0
+
+         if attempt >= max_tries:
+             console.print(
+                 f"\n[yellow]⚠️ Still failing after {max_tries} attempt(s).[/] "
+                 "Try increasing [bold]--max-tries[/] or switch to a stronger model."
+             )
+             return 1
+
+         # Generate fixes
+         console.print(f"\n[bold cyan]🤖 Asking {provider} to fix {result.failed} failure(s)…[/]")
+
+         try:
+             fix_result = generate_fixes(
+                 result,
+                 cwd=cwd,
+                 provider=provider,
+                 model=model,
+                 focus_file=focus_file,
+             )
+         except Exception as exc:
+             console.print(f"[bold red]LLM error:[/] {exc}")
+             return 2
+
+         if not fix_result.patches:
+             console.print("[yellow]⚠️ AI could not suggest fixes.[/] Check the failures manually.")
+             return 1
+
+         console.print(f"[green]Generated {fix_result.files_changed} file fix(es)[/]")
+
+         for patch in fix_result.patches:
+             _print_diff(patch.diff_lines, patch.file_path)
+
+         if dry_run:
+             console.print("\n[bold yellow]--dry-run:[/] Not applying fixes. Exiting.")
+             return 1
+
+         # Apply
+         applied = apply_patches(fix_result.patches, cwd=cwd)
+         for f in applied:
+             console.print(f" [green]✔[/] Applied fix to [bold]{Path(f).name}[/] (backup: .testfix.bak)")
+
+     # Should not reach here
+     return 1
+
+
+ # ── CLI ───────────────────────────────────────────────────────────────────────
+
+
+ @click.command(
+     name="testfix",
+     context_settings={"ignore_unknown_options": True},
+ )
+ @click.argument("test_command", nargs=-1, required=True, type=click.UNPROCESSED)
+ @click.option(
+     "--max-tries",
+     default=5,
+     show_default=True,
+     help="Maximum fix-and-retry iterations.",
+ )
+ @click.option(
+     "--once",
+     is_flag=True,
+     help="Run tests once, fix once, run again — equivalent to --max-tries 2.",
+ )
+ @click.option(
+     "--dry-run",
+     is_flag=True,
+     help="Show proposed fixes as diffs but do not apply them.",
+ )
+ @click.option(
+     "--provider",
+     default="claude",
+     type=click.Choice(["claude", "openai", "ollama"], case_sensitive=False),
+     show_default=True,
+     help="LLM provider.",
+ )
+ @click.option(
+     "--model",
+     default=None,
+     help="Model name override (e.g. claude-sonnet-4-5, gpt-4o, qwen2.5:7b).",
+ )
+ @click.option(
+     "--file",
+     "focus_file",
+     default=None,
+     help="Focus fixes on this source file (relative to cwd).",
+ )
+ @click.option(
+     "--verbose",
+     "-v",
+     is_flag=True,
+     help="Show full stderr output from test runner.",
+ )
+ @click.version_option(__version__, "--version")
+ def main(
+     test_command: tuple,
+     max_tries: int,
+     once: bool,
+     dry_run: bool,
+     provider: str,
+     model: Optional[str],
+     focus_file: Optional[str],
+     verbose: bool,
+ ) -> None:
+     """
+     Run tests. If they fail, ask AI to fix them. Repeat until they pass.
+
+     \b
+     Examples:
+         testfix pytest
+         testfix pytest tests/test_auth.py
+         testfix --max-tries 3 pytest
+         testfix --dry-run pytest
+         testfix --provider ollama pytest
+         testfix --once npm test
+     """
+     from dotenv import load_dotenv
+     load_dotenv(override=True)
+
+     cwd = str(Path.cwd())
+     effective_max = 2 if once else max_tries
+     command = list(test_command)
+
+     console.print(
+         Panel(
+             f"[bold]testfix[/] v{__version__} "
+             f"[dim]command:[/] {' '.join(command)} "
+             f"[dim]provider:[/] {provider} "
+             f"[dim]max-tries:[/] {effective_max}",
+             border_style="blue",
+         )
+     )
+
+     exit_code = _do_fix_cycle(
+         command,
+         cwd=cwd,
+         max_tries=effective_max,
+         provider=provider,
+         model=model,
+         dry_run=dry_run,
+         focus_file=focus_file,
+         verbose=verbose,
+     )
+
+     sys.exit(exit_code)