mcpbr-cli 0.3.28 → 0.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +136 -48
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -1,11 +1,11 @@
  # mcpbr
 
  ```bash
- # Install via pip
- pip install mcpbr && mcpbr init && mcpbr run -c mcpbr.yaml -n 1 -v
+ # One-liner install (installs + runs quick test)
+ curl -sSL https://raw.githubusercontent.com/greynewell/mcpbr/main/install.sh | bash
 
- # Or via npm
- npm install -g mcpbr-cli && mcpbr init && mcpbr run -c mcpbr.yaml -n 1 -v
+ # Or install and run manually
+ pip install mcpbr && mcpbr run -n 1
 
  ```
 
  Benchmark your MCP server against real GitHub issues. One command, hard numbers.
@@ -60,11 +60,15 @@ mcpbr supports multiple software engineering benchmarks through a flexible abstr
  ### SWE-bench (Default)
  Real GitHub issues requiring bug fixes and patches. The agent generates unified diffs evaluated by running pytest test suites.
 
- - **Dataset**: [SWE-bench/SWE-bench_Lite](https://huggingface.co/datasets/SWE-bench/SWE-bench_Lite)
  - **Task**: Generate patches to fix bugs
  - **Evaluation**: Test suite pass/fail
  - **Pre-built images**: Available for most tasks
 
+ **Variants:**
+ - **swe-bench-verified** (default) - Manually validated test cases for higher quality evaluation ([SWE-bench/SWE-bench_Verified](https://huggingface.co/datasets/SWE-bench/SWE-bench_Verified))
+ - **swe-bench-lite** - 300 tasks, quick testing ([SWE-bench/SWE-bench_Lite](https://huggingface.co/datasets/SWE-bench/SWE-bench_Lite))
+ - **swe-bench-full** - 2,294 tasks, complete benchmark ([SWE-bench/SWE-bench](https://huggingface.co/datasets/SWE-bench/SWE-bench))
+
  ### CyberGym
  Security vulnerabilities requiring Proof-of-Concept (PoC) exploits. The agent generates exploits that trigger crashes in vulnerable code.
 
@@ -84,16 +88,16 @@ Large-scale MCP tool use evaluation across 45+ categories. Tests agent capabilit
  - **Learn more**: [MCPToolBench++ Paper](https://arxiv.org/pdf/2508.07575) | [GitHub](https://github.com/mcp-tool-bench/MCPToolBenchPP)
 
  ```bash
- # Run SWE-bench (default)
+ # Run SWE-bench Verified (default - manually validated tests)
  mcpbr run -c config.yaml
 
- # Run CyberGym at level 2
- mcpbr run -c config.yaml --benchmark cybergym --level 2
+ # Run SWE-bench Lite (300 tasks, quick testing)
+ mcpbr run -c config.yaml -b swe-bench-lite
 
- # Run MCPToolBench++
- mcpbr run -c config.yaml --benchmark mcptoolbench
+ # Run SWE-bench Full (2,294 tasks, complete benchmark)
+ mcpbr run -c config.yaml -b swe-bench-full
 
- # List available benchmarks
+ # List all available benchmarks
  mcpbr benchmarks
  ```
 
@@ -271,9 +275,13 @@ See the **[Examples README](examples/README.md)** for the complete guide.
  export ANTHROPIC_API_KEY="your-api-key"
  ```
 
- 2. **Generate a configuration file:**
+ 2. **Run mcpbr (config auto-created if missing):**
 
  ```bash
+ # Config is auto-created on first run
+ mcpbr run -n 1
+
+ # Or explicitly generate a config file first
  mcpbr init
  ```
 
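The "config auto-created if missing" behavior introduced in the hunk above can be sketched in shell. The filename `mcpbr.yaml` matches the default named in the options table later in this diff; the stub contents written below are purely hypothetical, not mcpbr's real defaults:

```shell
# Sketch of "auto-create the config on first run" (illustrative only;
# the stub contents below are hypothetical, not mcpbr's real defaults).
cfg="mcpbr.yaml"
if [ ! -f "$cfg" ]; then
  printf 'model: claude-sonnet\n' > "$cfg"   # hypothetical stub config
  echo "created $cfg"
else
  echo "using existing $cfg"
fi
```

On a second invocation the `else` branch fires, so an explicit `mcpbr init` is only needed when you want to inspect or edit the config before the first run.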
@@ -311,55 +319,135 @@ mcpbr run --config config.yaml
 
  [![Claude Code Ready](https://img.shields.io/badge/Claude_Code-Ready-5865F2?style=flat&logo=anthropic)](https://claude.ai/download)
 
- mcpbr includes a built-in Claude Code plugin that makes Claude an expert at running benchmarks correctly. When you clone this repository, Claude Code automatically detects the plugin and gains specialized knowledge about mcpbr.
+ mcpbr includes a built-in Claude Code plugin that makes Claude an expert at running benchmarks correctly. The plugin provides specialized skills and knowledge about mcpbr configuration, execution, and troubleshooting.
 
- ### What This Means for You
+ ### Installation Options
 
- When using Claude Code in this repository, you can simply say:
+ You have three ways to enable the mcpbr plugin in Claude Code:
 
- - "Run the SWE-bench Lite benchmark"
- - "Generate a config for my MCP server"
- - "Run a quick test with 1 task"
+ #### Option 1: Clone Repository (Automatic Detection)
 
- Claude will automatically:
- - Verify Docker is running before starting
- - Check for required API keys
- - Generate valid configurations with proper `{workdir}` placeholders
- - Use correct CLI flags and options
- - Provide helpful troubleshooting when issues occur
+ When you clone this repository, Claude Code automatically detects and loads the plugin:
 
- ### Available Skills
+ ```bash
+ git clone https://github.com/greynewell/mcpbr.git
+ cd mcpbr
 
- The plugin includes three specialized skills:
+ # Plugin is now active - try asking Claude:
+ # "Run the SWE-bench Lite eval with 5 tasks"
+ ```
 
- 1. **run-benchmark**: Expert at running evaluations with proper validation
- - Checks prerequisites (Docker, API keys, config files)
- - Constructs valid `mcpbr run` commands
- - Handles errors gracefully with actionable feedback
+ **Best for**: Contributors, developers testing changes, or users who want the latest unreleased features.
 
- 2. **generate-config**: Generates valid mcpbr configuration files
- - Ensures `{workdir}` placeholder is included
- - Validates MCP server commands
- - Provides benchmark-specific templates
+ #### Option 2: npm Global Install (Planned for v0.4.0)
 
- 3. **swe-bench-lite**: Quick-start command for SWE-bench Lite
- - Pre-configured for 5-task evaluation
- - Includes sensible defaults for output files
- - Perfect for testing and demonstrations
+ Install the plugin globally via npm for use across any project:
 
- ### Getting Started with Claude Code
+ ```bash
+ # Planned for v0.4.0 (not yet released)
+ npm install -g @mcpbr/claude-code-plugin
+ ```
 
- Just clone the repository and start asking Claude to run benchmarks:
+ > **Note**: The npm package is not yet published. This installation method will be available in a future release. Track progress in [issue #265](https://github.com/greynewell/mcpbr/issues/265).
 
- ```bash
- git clone https://github.com/greynewell/mcpbr.git
- cd mcpbr
+ **Best for**: Users who want plugin features available in any directory.
 
- # In Claude Code, simply say:
- # "Run the SWE-bench Lite eval with 5 tasks"
- ```
+ #### Option 3: Claude Code Plugin Manager (Planned for v0.4.0)
+
+ Install via Claude Code's built-in plugin manager:
+
+ 1. Open Claude Code settings
+ 2. Navigate to Plugins > Browse
+ 3. Search for "mcpbr"
+ 4. Click Install
+
+ > **Note**: Plugin manager installation is not yet available. This installation method will be available after plugin marketplace submission. Track progress in [issue #267](https://github.com/greynewell/mcpbr/issues/267).
+
+ **Best for**: Users who prefer a GUI and want automatic updates.
+
+ ### Installation Comparison
+
+ | Method | Availability | Auto-updates | Works Anywhere | Latest Features |
+ |--------|-------------|--------------|----------------|-----------------|
+ | Clone Repository | Available now | Manual (git pull) | No (repo only) | Yes (unreleased) |
+ | npm Global Install | Planned (not yet released) | Via npm | Yes | Yes (published) |
+ | Plugin Manager | Planned (not yet released) | Automatic | Yes | Yes (published) |
+
+ ### What You Get
+
+ The plugin includes three specialized skills that enhance Claude's ability to work with mcpbr:
+
+ #### 1. run-benchmark
+ Expert at running evaluations with proper validation and error handling.
+
+ **Capabilities**:
+ - Validates prerequisites (Docker running, API keys set, config files exist)
+ - Constructs correct `mcpbr run` commands with appropriate flags
+ - Handles errors gracefully with actionable troubleshooting steps
+ - Monitors progress and provides meaningful status updates
+
+ **Example interactions**:
+ - "Run the SWE-bench Lite benchmark with 10 tasks"
+ - "Evaluate my MCP server using CyberGym level 2"
+ - "Test my config with a single task"
+
+ #### 2. generate-config
+ Generates valid mcpbr configuration files with benchmark-specific templates.
+
+ **Capabilities**:
+ - Ensures required `{workdir}` placeholder is included in MCP server args
+ - Validates MCP server command syntax
+ - Provides templates for different benchmarks (SWE-bench, CyberGym, MCPToolBench++)
+ - Suggests appropriate timeouts and concurrency settings
+
+ **Example interactions**:
+ - "Generate a config for the filesystem MCP server"
+ - "Create a config for testing my custom MCP server"
+ - "Set up a CyberGym evaluation config"
+
+ #### 3. swe-bench-lite
+ Quick-start command for running SWE-bench Lite evaluations.
+
+ **Capabilities**:
+ - Pre-configured for 5-task evaluation (fast testing)
+ - Includes sensible defaults for output files and logging
+ - Perfect for demonstrations and initial testing
+ - Automatically sets up verbose output for debugging
+
+ **Example interactions**:
+ - "Run a quick SWE-bench Lite test"
+ - "Show me how mcpbr works"
+ - "Test the filesystem server"
+
+ ### Benefits
+
+ When using Claude Code with the mcpbr plugin active, Claude will automatically:
+
+ - Verify Docker is running before starting evaluations
+ - Check for required API keys (`ANTHROPIC_API_KEY`)
+ - Generate valid configurations with proper `{workdir}` placeholders
+ - Use correct CLI flags and avoid deprecated options
+ - Provide contextual troubleshooting when issues occur
+ - Follow mcpbr best practices for optimal results
+
+ ### Troubleshooting
+
+ **Plugin not detected in cloned repository**:
+ - Ensure you're in the repository root directory
+ - Verify the `claude-code.json` file exists in the repo
+ - Try restarting Claude Code
+
+ **Skills not appearing**:
+ - Check Claude Code version (requires v2.0+)
+ - Verify plugin is listed in Settings > Plugins
+ - Try running `/reload-plugins` in Claude Code
+
+ **Commands failing**:
+ - Ensure mcpbr is installed: `pip install mcpbr`
+ - Verify Docker is running: `docker info`
+ - Check API key is set: `echo $ANTHROPIC_API_KEY`
 
- The bundled plugin ensures Claude makes no silly mistakes and follows best practices automatically.
+ For more help, see the [troubleshooting guide](https://greynewell.github.io/mcpbr/troubleshooting/) or [open an issue](https://github.com/greynewell/mcpbr/issues).
 
  ## Configuration
 
@@ -509,7 +597,7 @@ Run SWE-bench evaluation with the configured MCP server.
 
  | Option | Short | Description |
  |--------|-------|-------------|
- | `--config PATH` | `-c` | Path to YAML configuration file (required) |
+ | `--config PATH` | `-c` | Path to YAML configuration file (default: `mcpbr.yaml`, auto-created if missing) |
  | `--model TEXT` | `-m` | Override model from config |
  | `--benchmark TEXT` | `-b` | Override benchmark from config (`swe-bench`, `cybergym`, or `mcptoolbench`) |
  | `--level INTEGER` | | Override CyberGym difficulty level (0-3) |
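The `{workdir}` placeholder that the README's config sections insist on can be illustrated with a small shell sketch. The substitution mechanism shown (plain bash parameter expansion) is an assumption for illustration only, not mcpbr's actual implementation, and `--root`/`--readonly` are hypothetical server arguments:

```shell
# Illustrative only: expand a {workdir} placeholder in MCP server args.
# --root and --readonly are hypothetical flags; mcpbr's real expansion
# logic may differ from this bash parameter-expansion sketch.
args="--root {workdir} --readonly"
workdir="/tmp/swe-task-1"
expanded="${args//\{workdir\}/$workdir}"
echo "$expanded"
```

Run under bash, this prints `--root /tmp/swe-task-1 --readonly`, which is why the plugin's `generate-config` skill rejects configs whose server args omit the placeholder.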
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "mcpbr-cli",
- "version": "0.3.28",
+ "version": "0.4.0",
  "description": "Model Context Protocol Benchmark Runner - CLI tool for evaluating MCP servers",
  "keywords": [
  "mcpbr",