@in-the-loop-labs/pair-review 1.3.3 → 1.4.1

This diff shows the changes between package versions as published to a supported public registry, and is provided for informational purposes only.
package/README.md CHANGED
@@ -12,7 +12,7 @@
 
 ## What is pair-review?
 
-pair-review is a local web application that transforms how you review code, especially code generated by AI coding agents like Claude Code, Cursor, and others. It helps you provide structured feedback and ensures quality through human oversight.
+pair-review is a local web application for keeping humans in the loop with AI coding agents. Calling it an AI code review tool would be accurate but incomplete — it supports multiple workflows beyond automated review, from reviewing agent-generated code before committing, to judging AI suggestions instead of reading every line, to using AI to guide your attention during a thorough review. You pick what fits your situation.
 
 ### Two Core Value Propositions
 
@@ -33,9 +33,58 @@ pair-review is a local web application that transforms how you review code, espe
 - **Local-First**: All data and processing happens on your machine - no cloud dependencies
 - **GitHub-Familiar UI**: Interface feels instantly familiar to GitHub users
 - **Human-in-the-Loop**: AI suggests, you decide
-- **Multiple AI Providers**: Support for Claude, Gemini, Codex, Copilot, OpenCode, and Cursor. Use your existing subscription!
+- **Multiple AI Providers**: Support for Claude, Gemini, Codex, Copilot, OpenCode, Cursor, and Pi. Use your existing subscription!
 - **Progressive**: Start simple with manual review, add AI analysis when you need it
 
+## Workflows
+
+There are no hard boundaries between these — mix and match as needed.
+
+### 1. Local Review: Human Reviews Agent-Generated Code
+
+**When to use:** You're working with a coding agent and want to review its changes before committing.
+
+This is the core feedback loop workflow. When an agent generates code, open `pair-review` to review the uncommitted changes. With the GitHub-like UI, you can add comments at specific file and line locations, then copy that formatted feedback and paste it back into whatever coding agent you're using (or use MCP/skills to read comments directly into Claude Code).
+
+Compared to giving feedback in chat, this feels like moving from a machete to a scalpel. Instead of trying to capture everything in one message, you can leave targeted comments at dozens of specific locations — and the agent addresses each one with surgical precision.
+
+**How it works:**
+1. Run `pair-review --local` to open the diff UI
+2. Review changes in a familiar GitHub-like interface
+3. Add comments with specific file and line locations
+4. Copy formatted feedback and paste into your coding agent
+5. Iterate until you're satisfied
+
+**Tips:**
+- Stage previous changes in git, then only review new modifications in the next round
+- Local mode only shows unstaged changes and untracked files (opinionated by design)
+### 2. Meta-Review: Judging AI Suggestions
+
+**When to use:** You're not going to read every line of code. Let AI be your reader.
+
+Instead of reviewing thousands of lines of code, you review a dozen AI suggestions. The AI reads the code; you review its recommendations. Each suggestion comes with enough context to evaluate it — even when you're not deeply familiar with the language or codebase.
+
+Adopt suggestions you agree with, dismiss the rest, then feed adopted suggestions back to your coding agent. This is "supervised collaboration" — you stay in the loop without getting in the weeds.
+
+**How it works:**
+1. Open `pair-review --local` or `pair-review <PR-URL>` and click **Run Analysis**
+2. AI performs three levels of review in parallel (see [Three-Level AI Analysis](#three-level-ai-analysis) below)
+3. Results are deduplicated and combined by an orchestration step
+4. Adopt suggestions you agree with, dismiss the rest
+5. Feed adopted suggestions back to your coding agent
+
+### 3. AI-Guided Review: When You're Accountable
+
+**When to use:** You're reviewing code where someone is relying on your judgment. You're still reading the code — AI helps guide your attention and articulate feedback.
+
+You're responsible for the review, but `pair-review` helps you be more thorough. Kick off the AI analysis and either wait for it to finish or start reading while it runs in the background. The AI suggestions guide you to areas worth attention and help you write clearer explanations. You can also do your own review first, then check whether the AI found the same things — a useful sanity check in both directions.
+
+**How it works:**
+1. Run AI analysis on the PR (in background or wait for results)
+2. Read through the code with AI suggestions visible
+3. Adopt suggestions you agree with, dismiss the rest, add your own comments
+4. Submit as a rich, detailed review to GitHub
+
 ## Quick Start
 
 ### Installation
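The staging tip in the Local Review workflow above can be demonstrated in a throwaway repo. This is a sketch: only `pair-review --local` itself comes from the README; the rest is standard git, and the file name is a placeholder.

```shell
# Stage the round you already reviewed, so the next round shows only new changes.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
echo "round one" > feature.js
git add -A                 # stage round one after reviewing it
echo "round two" >> feature.js
git status --short         # feature.js now shows a staged add plus an unstaged edit
# pair-review --local would now present just the round-two (unstaged) changes
```

Because local mode only shows unstaged changes and untracked files, each `git add` effectively closes out a review round.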
@@ -165,6 +214,7 @@ pair-review supports several environment variables for customizing behavior:
 | `PAIR_REVIEW_COPILOT_CMD` | Custom command to invoke Copilot CLI | `copilot` |
 | `PAIR_REVIEW_OPENCODE_CMD` | Custom command to invoke OpenCode CLI | `opencode` |
 | `PAIR_REVIEW_CURSOR_AGENT_CMD` | Custom command to invoke Cursor Agent CLI | `agent` |
+| `PAIR_REVIEW_PI_CMD` | Custom command to invoke Pi CLI | `pi` |
 | `PAIR_REVIEW_MODEL` | Override the AI model to use (same as `--model` flag) | Provider default |
 
 **Note:** `GITHUB_TOKEN` is the standard environment variable used by many GitHub tools (gh CLI, GitHub Actions, etc.). When set, it takes precedence over the `github_token` field in the config file.
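A hedged sketch of how these variables might be combined in a shell session. The variable names come from the table above; all values here are placeholders, not verified defaults or real credentials.

```shell
# Placeholder values illustrating the env vars from the table above.
export PAIR_REVIEW_PI_CMD="pi"               # custom command for the Pi CLI
export PAIR_REVIEW_MODEL="some-model-id"     # same effect as the --model flag
export GITHUB_TOKEN="placeholder-token"      # takes precedence over github_token in the config file
# pair-review <PR-URL>                       # would pick all three up at startup
```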
@@ -218,17 +268,18 @@ pair-review integrates with AI providers via their CLI tools:
 - **GitHub Copilot**: Uses Copilot CLI
 - **OpenCode**: Uses OpenCode CLI (requires model configuration)
 - **Cursor**: Uses Cursor Agent CLI (streaming output with sandbox mode)
+- **Pi**: Uses Pi coding agent CLI (requires model configuration)
 
 You can select your preferred provider and model in the repository settings UI.
 
 #### Built-in vs. Configurable Providers
 
-Most providers (Claude, Gemini, Codex, Copilot) come with built-in model definitions. **OpenCode is different** - it has no built-in models and requires you to configure which models to use.
+Most providers (Claude, Gemini, Codex, Copilot) come with built-in model definitions. **OpenCode and Pi are different** - they have no built-in models and require you to configure which models to use.
 
 #### Configuring Custom Models
 
 You can override provider settings and define custom models in your config file. This is useful for:
-- Adding models to OpenCode (required)
+- Adding models to OpenCode or Pi (required for these providers)
 - Overriding default commands or arguments
 - Setting provider-specific environment variables
 
@@ -330,6 +381,15 @@ pair-review's AI analysis system examines your code changes at increasing levels
 
 This progressive approach keeps analysis focused while catching issues at every scope.
 
+### Customization
+
+Tailor AI analysis to your team's standards and your current needs:
+
+- **Repo-level instructions**: Always included when generating suggestions for a specific repo. Point to codebase best practices docs, highlight common review mistakes, or include other helpful resources. Reviews will actively cite this guidance when relevant.
+- **Review-level instructions**: Customize individual reviews on the fly. Request deeper analysis with detailed code suggestions, ask for a "blockers only" final review, or adjust the focus for a particular set of changes.
+
+There's a compounding benefit: if you run `pair-review` with the same coding agent you use for development — one already configured with your rules and instructions — it will actively search for violations and enforce them. The review reflects your standards, not generic best practices.
+
 ### Review Feedback Export
 
 The killer feature for AI coding agent workflows:
@@ -552,40 +612,6 @@ npm run dev
 - **AI Integration**: CLI-based adapter pattern for multiple providers
 - **Git**: Uses git worktrees for clean PR checkout
 
-## Use Cases
-
-### 1. Reviewing AI-Generated Code
-
-You're working with a coding agent on a new feature:
-
-1. The coding agent generates code changes
-2. Run `pair-review --local` to review
-3. Add comments on issues or improvements needed
-4. Copy the markdown feedback
-5. Paste back to the agent: "Address my review feedback: [paste]"
-6. The coding agent iterates based on your comments
-7. Repeat until satisfied
-
-### 2. Team Pull Request Review
-
-Standard GitHub PR review workflow:
-
-1. Run `pair-review <PR-URL>`
-2. Optional: Trigger AI analysis for initial insights
-3. Review code, adopt useful AI suggestions
-4. Add your own expert comments
-5. Submit review to GitHub with approval status
-
-### 3. Self-Review Before Committing
-
-Before creating a PR:
-
-1. Make local changes
-2. Run `pair-review --local`
-3. Review your own changes (with or without AI)
-4. Catch issues early
-5. Commit with confidence
-
 ## FAQ
 
 **Q: Does my code get sent to the cloud?**
@@ -609,6 +635,9 @@ A: Try refreshing your browser. Many transient issues resolve with a simple page
 **Q: How do I use OpenCode as my AI provider?**
 A: OpenCode has no built-in models, so you must configure them in your `~/.pair-review/config.json`. Add a `providers.opencode.models` array with at least one model definition. See the [AI Provider Configuration](#ai-provider-configuration) section for a complete example.
 
+**Q: How do I use Pi as my AI provider?**
+A: Like OpenCode, Pi has no built-in models. Configure them in your `~/.pair-review/config.json` by adding a `providers.pi.models` array with at least one model definition. Pi supports many providers (Google, Anthropic, OpenAI, etc.) via its `--provider` and `--model` flags. See the [AI Provider Configuration](#ai-provider-configuration) section and `config.example.json` for examples.
+
 ## Contributing
 
 Contributions welcome! Please:
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@in-the-loop-labs/pair-review",
-  "version": "1.3.3",
+  "version": "1.4.1",
   "description": "Your AI-powered code review partner - Close the feedback loop with AI coding agents",
   "main": "src/server.js",
   "bin": {
@@ -1,6 +1,6 @@
 {
   "name": "pair-review",
-  "version": "1.3.3",
+  "version": "1.4.1",
   "description": "pair-review app integration — Open PRs and local changes in the pair-review web UI, run server-side AI analysis, and address review feedback. Requires the pair-review MCP server.",
   "author": {
     "name": "in-the-loop-labs",
@@ -1,6 +1,6 @@
 {
   "name": "code-critic",
-  "version": "1.3.3",
+  "version": "1.4.1",
   "description": "AI-powered code review analysis — Run three-level AI analysis and implement-review-fix loops directly in your coding agent. Works standalone, no server required.",
   "author": {
     "name": "in-the-loop-labs",