@automagik/genie 4.260324.16 → 4.260324.18

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -10,7 +10,7 @@
  "plugins": [
  {
  "name": "genie",
- "version": "4.260324.16",
+ "version": "4.260324.18",
  "source": "./plugins/genie",
  "description": "Human-AI partnership for Claude Code. Share a terminal, orchestrate workers, evolve together. Brainstorm ideas, wish them into plans, make with parallel agents, ship as one team. A coding genie that grows with your project."
  }
@@ -0,0 +1,187 @@
+ ---
+ name: metrics-updater
+ description: Self-improving daily metrics agent — updates README.md with live project metrics, refines its own prompt, and generates optimization tools
+ version: 2
+ created: 2026-03-24
+ last_refined: null
+ ---
+
+ # Metrics Updater Agent
+
+ ## Purpose
+
+ Update README.md with live project metrics daily. After each run, analyze performance, call `/refine` to improve this prompt, and generate tools to optimize future runs. The goal: measurably faster execution every day.
+
+ ## Repository
+
+ - **Owner:** automagik-dev
+ - **Repo:** genie
+ - **Branch:** dev (metrics commits go here)
+
+ ## Metrics to Fetch
+
+ | Metric | Source | Calculation |
+ |--------|--------|-------------|
+ | Releases/day | `gh api repos/{owner}/{repo}/releases` | Count releases created in last 24h |
+ | Avg bug-fix time | `gh api repos/{owner}/{repo}/pulls?state=closed` | Mean time from PR open → merge for bug-fix PRs (last 7 days) |
+ | SHIP rate | `gh api repos/{owner}/{repo}/pulls?state=closed` | % of PRs that shipped without FIX-FIRST (merged on first review cycle) |
+ | Parallel agents | `genie status` or process count | Number of active genie workers at time of run |
+
+ ## Execution Steps
+
+ Run these steps in order. Time each step for performance tracking.
+
+ ### Phase 1: Metrics Update (run-metrics.sh)
+
+ 1. **Load state** — Read `state.json` for last_metrics (fallback if API fails)
+ 2. **Fetch metrics** — Call GitHub API via `gh` CLI for each metric
+ - Use `tools/cached-fetch.sh` if available (avoids redundant API calls)
+ 3. **Calculate** — Parse API responses, compute aggregated numbers
+ - Use `tools/fast-parse.py` if available (single-pass optimization)
+ 4. **Update README** — Find or create metrics table in README.md, update values
+ 5. **Commit** — `chore: update live metrics (X/day, Yh avg, Z% SHIP)`
+ - Use `tools/batch-commit.sh` if available (batched git operations)
+ 6. **Log run** — Append structured JSON to `runs.jsonl` with step timings
+
+ ```bash
+ # Full run (fetch, update README, commit, log):
+ bash tools/run-metrics.sh
+
+ # Dry run (no commit):
+ bash tools/run-metrics.sh --dry-run
+ ```
+
+ ### Phase 2: Self-Refinement
+
+ After the metrics update completes:
+
+ 7. **Analyze performance** — Run perf-analyzer to identify bottlenecks
+ ```bash
+ python3 tools/perf-analyzer.py --format text
+ ```
+
+ 8. **Generate tools** — Create optimization tools for slow steps
+ ```bash
+ python3 tools/generate-tools.py
+ ```
+
+ 9. **Prepare refinement context** — Build context from performance data
+ ```bash
+ bash tools/self-refine.sh
+ ```
+
+ 10. **Refine prompt** — Call `/refine` in file mode to optimize this prompt
+ ```
+ /refine @.genie/agents/metrics-updater/AGENT.md
+ ```
+
+ 11. **Verify** — Confirm AGENT.md was updated and state.json has `last_refined_at`
+
+ ## README Metrics Table Format
+
+ Insert after the badges block, before "## What is Genie?":
+
+ ```markdown
+ <!-- METRICS:START -->
+
+ | Metric | Value | Updated |
+ |--------|-------|---------|
+ | Releases/day | **X** | YYYY-MM-DD |
+ | Avg bug-fix time | **Xh** | YYYY-MM-DD |
+ | SHIP rate | **X%** | YYYY-MM-DD |
+ | Parallel agents | **X** | YYYY-MM-DD |
+
+ <!-- METRICS:END -->
+ ```
+
+ ## Tools Available
+
+ Source tools from `tools/` directory before executing:
+
+ ### Core Tools (Wave 1 — always present)
+ - `tools/run-metrics.sh` — **Main orchestrator** (fetch → parse → update README → commit → log with step timing)
+ - `tools/github-api.sh` — GitHub API wrapper with caching and retry
+ - `tools/parse-metrics.py` — Metrics parser and calculator
+ - `tools/update-readme.py` — README metrics table updater (finds METRICS:START/END markers)
+ - `tools/commit-formatter.sh` — Clean commit message formatter
+
+ ### Self-Improvement Tools (Wave 2 — refinement loop)
+ - `tools/perf-analyzer.py` — Analyzes runs.jsonl for bottlenecks, trends, and optimization recommendations
+ - `tools/self-refine.sh` — Prepares refinement context and triggers `/refine` on AGENT.md
+ - `tools/generate-tools.py` — Analyzes perf data and generates optimization tools for slow steps
+
+ ### Auto-Generated Tools (created by generate-tools.py)
+ - `tools/cached-fetch.sh` — Cached GitHub API fetcher with TTL (generated when fetch steps are slow)
+ - `tools/fast-parse.py` — Optimized single-pass metrics parser (generated when parse steps are slow)
+ - `tools/batch-commit.sh` — Batched git operations (generated when commit steps are slow)
+
+ New tools may be generated after each run. Check `tools/` for the latest inventory.
+
+ ## Constraints
+
+ - **MUST** call `/refine` after each run with performance data
+ - **MUST** log every run to `runs.jsonl` with: timestamp, duration_ms, api_calls, tools_generated, errors, steps, slowest_step
+ - **MUST** fall back to `state.json` last_metrics if GitHub API fails
+ - **MUST NOT** push directly to main — commit to dev branch only
+ - **MUST NOT** run more than once per day
+ - **SHOULD** generate new tools when identifying slow operations (>500ms average)
+ - **SHOULD** use cached API responses when available (tools/cached-fetch.sh)
+ - **SHOULD** use fast-parse.py when available for metrics calculation
+
+ ## Performance Tracking
+
+ After each run, `runs.jsonl` captures:
+ - `timestamp` — When the run started
+ - `duration_ms` — Total execution time
+ - `api_calls` — Number of GitHub API calls
+ - `tools_generated` — Tools created this run
+ - `tools_available` — Total tools in tools/ directory
+ - `errors` — Array of error messages
+ - `status` — success | no_changes | failed
+ - `fallback` — Whether fallback metrics were used
+ - `slowest_step` — Name of the step that took longest
+ - `steps` — Array of `{name, duration_ms}` for each execution step
+ - `metrics` — The computed metrics values
+
+ ### Step Names (for performance analysis)
+ - `load_state` — Reading state.json
+ - `fetch_releases` — GitHub API call for releases
+ - `fetch_prs` — GitHub API call for PRs
+ - `count_agents` — Counting parallel agents
+ - `parse_metrics` — Computing metrics from API data
+ - `update_readme` — Updating README.md
+ - `update_state` — Writing state.json
+ - `commit` — Git add + commit
+
+ ## Self-Refinement Protocol
+
+ After completing the metrics update (Phase 1), execute Phase 2:
+
+ 1. **Run perf-analyzer.py** to get a performance report with bottleneck analysis
+ 2. **Run generate-tools.py** to create optimization tools for slow steps
+ 3. **Run self-refine.sh** to prepare refinement context and append it to AGENT.md
+ 4. **Call `/refine @AGENT.md`** to optimize the prompt based on performance data
+ 5. **Verify** the refined prompt preserves:
+ - Core metrics (releases/day, bug-fix time, SHIP rate, parallel agents)
+ - The self-refinement protocol section
+ - Tool references and execution steps
+ - Version number incremented in frontmatter
+
+ The refined prompt replaces this file for the next run. Tools persist in `tools/`.
+
+ ## Graceful Degradation
+
+ If GitHub API is unavailable:
+ 1. Read `state.json` for `last_metrics`
+ 2. Use yesterday's values in README (do not update the "Updated" column)
+ 3. Log the error to `runs.jsonl` with step timings
+ 4. Skip commit (no changes to README)
+ 5. Still call `/refine` to analyze the failure and improve error handling
+
+ ## Improvement Targets
+
+ Track these across runs (visible in perf-analyzer.py output):
+ - **Day 1 → Day 7 execution time**: Target 50% reduction
+ - **API calls per run**: Target ≤2 (with caching)
+ - **Tools generated**: Target ≥3 by Day 2
+ - **Error rate**: Target 0% after initial stabilization
@@ -0,0 +1,9 @@
+ {"timestamp":"2026-03-24T22:44:08Z","duration_ms":4685,"api_calls":2,"tools_generated":0,"tools_available":5,"errors":[],"status":"success","fallback":false,"slowest_step":"unknown","steps":[],"metrics":{"releases_per_day":0,"avg_bugfix_time_hours":2.0,"ship_rate_pct":100.0,"parallel_agents":3,"updated":"2026-03-24"}}
+ {"timestamp": "2026-03-24T22:54:18Z", "duration_ms": 4579, "api_calls": 2, "tools_generated": 0, "tools_available": 10, "errors": [], "status": "success", "fallback": false, "slowest_step": "fetch_releases", "steps": [{"name": "load_state", "duration_ms": 31}, {"name": "fetch_releases", "duration_ms": 3023}, {"name": "fetch_prs", "duration_ms": 1218}, {"name": "count_agents", "duration_ms": 65}, {"name": "parse_metrics", "duration_ms": 126}, {"name": "update_readme", "duration_ms": 51}, {"name": "update_state", "duration_ms": 35}, {"name": "commit", "duration_ms": 4}], "metrics": {"releases_per_day": 0, "avg_bugfix_time_hours": 1.9, "ship_rate_pct": 100.0, "parallel_agents": 3, "updated": "2026-03-24"}}
+ {"timestamp": "2026-03-24T22:57:05Z", "duration_ms": 4292, "api_calls": 2, "tools_generated": 0, "tools_available": 10, "errors": [], "status": "success", "fallback": false, "slowest_step": "fetch_releases", "steps": [{"name": "load_state", "duration_ms": 35}, {"name": "fetch_releases", "duration_ms": 2778}, {"name": "fetch_prs", "duration_ms": 1149}, {"name": "count_agents", "duration_ms": 57}, {"name": "parse_metrics", "duration_ms": 118}, {"name": "update_readme", "duration_ms": 48}, {"name": "update_state", "duration_ms": 37}, {"name": "commit", "duration_ms": 45}], "metrics": {"releases_per_day": 0, "avg_bugfix_time_hours": 1.9, "ship_rate_pct": 100.0, "parallel_agents": 3, "updated": "2026-03-24"}}
+ {"timestamp": "2026-03-24T22:57:11Z", "duration_ms": 3833, "api_calls": 2, "tools_generated": 0, "tools_available": 10, "errors": [], "status": "no_changes", "fallback": false, "slowest_step": "fetch_releases", "steps": [{"name": "load_state", "duration_ms": 31}, {"name": "fetch_releases", "duration_ms": 2640}, {"name": "fetch_prs", "duration_ms": 872}, {"name": "count_agents", "duration_ms": 53}, {"name": "parse_metrics", "duration_ms": 124}, {"name": "update_readme", "duration_ms": 47}, {"name": "update_state", "duration_ms": 39}, {"name": "commit", "duration_ms": 3}], "metrics": {"releases_per_day": 0, "avg_bugfix_time_hours": 1.9, "ship_rate_pct": 100.0, "parallel_agents": 3, "updated": "2026-03-24"}}
+ {"timestamp": "2026-03-24T22:57:17Z", "duration_ms": 4637, "api_calls": 2, "tools_generated": 0, "tools_available": 10, "errors": [], "status": "no_changes", "fallback": false, "slowest_step": "fetch_releases", "steps": [{"name": "load_state", "duration_ms": 31}, {"name": "fetch_releases", "duration_ms": 3344}, {"name": "fetch_prs", "duration_ms": 980}, {"name": "count_agents", "duration_ms": 54}, {"name": "parse_metrics", "duration_ms": 122}, {"name": "update_readme", "duration_ms": 45}, {"name": "update_state", "duration_ms": 33}, {"name": "commit", "duration_ms": 3}], "metrics": {"releases_per_day": 0, "avg_bugfix_time_hours": 1.9, "ship_rate_pct": 100.0, "parallel_agents": 3, "updated": "2026-03-24"}}
+ {"timestamp": "2026-03-24T22:57:24Z", "duration_ms": 4067, "api_calls": 2, "tools_generated": 0, "tools_available": 10, "errors": [], "status": "no_changes", "fallback": false, "slowest_step": "fetch_releases", "steps": [{"name": "load_state", "duration_ms": 37}, {"name": "fetch_releases", "duration_ms": 2732}, {"name": "fetch_prs", "duration_ms": 1008}, {"name": "count_agents", "duration_ms": 55}, {"name": "parse_metrics", "duration_ms": 123}, {"name": "update_readme", "duration_ms": 45}, {"name": "update_state", "duration_ms": 38}, {"name": "commit", "duration_ms": 3}], "metrics": {"releases_per_day": 0, "avg_bugfix_time_hours": 1.9, "ship_rate_pct": 100.0, "parallel_agents": 3, "updated": "2026-03-24"}}
+ {"timestamp": "2026-03-24T22:57:30Z", "duration_ms": 4728, "api_calls": 2, "tools_generated": 0, "tools_available": 10, "errors": [], "status": "no_changes", "fallback": false, "slowest_step": "fetch_releases", "steps": [{"name": "load_state", "duration_ms": 31}, {"name": "fetch_releases", "duration_ms": 3211}, {"name": "fetch_prs", "duration_ms": 1186}, {"name": "count_agents", "duration_ms": 59}, {"name": "parse_metrics", "duration_ms": 126}, {"name": "update_readme", "duration_ms": 52}, {"name": "update_state", "duration_ms": 34}, {"name": "commit", "duration_ms": 3}], "metrics": {"releases_per_day": 0, "avg_bugfix_time_hours": 1.9, "ship_rate_pct": 100.0, "parallel_agents": 3, "updated": "2026-03-24"}}
+ {"timestamp": "2026-03-24T22:57:37Z", "duration_ms": 4374, "api_calls": 2, "tools_generated": 0, "tools_available": 10, "errors": [], "status": "no_changes", "fallback": false, "slowest_step": "fetch_releases", "steps": [{"name": "load_state", "duration_ms": 34}, {"name": "fetch_releases", "duration_ms": 2985}, {"name": "fetch_prs", "duration_ms": 1063}, {"name": "count_agents", "duration_ms": 53}, {"name": "parse_metrics", "duration_ms": 131}, {"name": "update_readme", "duration_ms": 47}, {"name": "update_state", "duration_ms": 34}, {"name": "commit", "duration_ms": 3}], "metrics": {"releases_per_day": 0, "avg_bugfix_time_hours": 1.9, "ship_rate_pct": 100.0, "parallel_agents": 3, "updated": "2026-03-24"}}
+ {"timestamp": "2026-03-24T22:57:43Z", "duration_ms": 4201, "api_calls": 2, "tools_generated": 0, "tools_available": 10, "errors": [], "status": "no_changes", "fallback": false, "slowest_step": "fetch_releases", "steps": [{"name": "load_state", "duration_ms": 32}, {"name": "fetch_releases", "duration_ms": 2704}, {"name": "fetch_prs", "duration_ms": 1172}, {"name": "count_agents", "duration_ms": 65}, {"name": "parse_metrics", "duration_ms": 122}, {"name": "update_readme", "duration_ms": 43}, {"name": "update_state", "duration_ms": 35}, {"name": "commit", "duration_ms": 3}], "metrics": {"releases_per_day": 0, "avg_bugfix_time_hours": 1.9, "ship_rate_pct": 100.0, "parallel_agents": 3, "updated": "2026-03-24"}}
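The `runs.jsonl` entries above feed the bottleneck analysis that the agent prompt attributes to `tools/perf-analyzer.py`. That tool is not part of this diff, so here is only a minimal sketch of the core aggregation, assuming nothing beyond the fields visible in the log lines (the function name is illustrative):

```python
import json

def slowest_steps(jsonl_lines, top=3):
    """Average per-step duration_ms across runs; slowest steps first.

    jsonl_lines: iterable of JSON strings shaped like the runs.jsonl
    entries above (each carrying a "steps" array of {name, duration_ms}).
    """
    totals, counts = {}, {}
    for line in jsonl_lines:
        line = line.strip()
        if not line:
            continue
        run = json.loads(line)
        for step in run.get("steps", []):
            totals[step["name"]] = totals.get(step["name"], 0) + step["duration_ms"]
            counts[step["name"]] = counts.get(step["name"], 0) + 1
    avgs = sorted(((name, totals[name] / counts[name]) for name in totals),
                  key=lambda pair: pair[1], reverse=True)
    return avgs[:top]

# Typical usage against the log file:
#   slowest_steps(open("runs.jsonl"))
```

On the eight runs above that carry step data, `fetch_releases` averages roughly 2.9s and dominates every run, which is exactly the step the generated `cached-fetch.sh` targets.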
@@ -0,0 +1,13 @@
+ {
+ "last_metrics": {
+ "releases_per_day": 0,
+ "avg_bugfix_time_hours": 1.9,
+ "ship_rate_pct": 100.0,
+ "parallel_agents": 3,
+ "updated": "2026-03-24"
+ },
+ "last_refined_at": null,
+ "last_run_at": "2026-03-24T22:57:43Z",
+ "run_count": 8,
+ "agent_version": 1
+ }
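This `state.json` is the fallback source named in the agent prompt's Graceful Degradation section. A minimal sketch of that fallback read, assuming only the shape shown above (the function name and `None`-on-failure convention are illustrative, not from the package):

```python
import json

def load_fallback_metrics(state_path):
    """Return last_metrics from state.json, or None if the file is missing or corrupt."""
    try:
        with open(state_path) as f:
            return json.load(f).get("last_metrics")
    except (OSError, json.JSONDecodeError):
        return None
```

Returning `None` rather than raising lets the caller distinguish "use yesterday's values" from "no prior run exists", matching the skip-commit behavior the prompt describes.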
@@ -0,0 +1,38 @@
+ #!/usr/bin/env bash
+ # batch-commit.sh — Optimized commit for metrics updates
+ # Auto-generated by generate-tools.py
+ # Idempotent: safe to run multiple times, regenerate freely.
+ #
+ # Usage: bash batch-commit.sh <metrics.json> <repo-root> <state-file>
+ #
+ # Optimizations:
+ # - Single git add for all files
+ # - Pre-formatted commit message (no subshell)
+ # - Skip if working tree is clean
+
+ set -euo pipefail
+
+ METRICS_FILE="${1:?Usage: batch-commit.sh <metrics.json> <repo-root> <state-file>}"
+ REPO_ROOT="${2:?}"
+ STATE_FILE="${3:?}"
+
+ cd "$REPO_ROOT"
+
+ # Skip if no changes
+ if git diff --quiet HEAD -- README.md "$STATE_FILE" 2>/dev/null; then
+ echo "No changes to commit" >&2
+ exit 2
+ fi
+
+ # Read metrics for commit message
+ releases=$(python3 -c "import json; print(json.load(open('$METRICS_FILE')).get('releases_per_day', 0))")
+ avg_time=$(python3 -c "import json; print(json.load(open('$METRICS_FILE')).get('avg_bugfix_time_hours', 0))")
+ ship_rate=$(python3 -c "import json; print(json.load(open('$METRICS_FILE')).get('ship_rate_pct', 0))")
+
+ COMMIT_MSG="chore: update live metrics (${releases}/day, ${avg_time}h avg, ${ship_rate}% SHIP)"
+
+ # Single add + commit
+ git add README.md "$STATE_FILE"
+ git commit -m "$COMMIT_MSG"
+
+ echo "$COMMIT_MSG"
@@ -0,0 +1,86 @@
+ #!/usr/bin/env bash
+ # cached-fetch.sh — Cached GitHub API fetcher with TTL
+ # Auto-generated by generate-tools.py
+ # Idempotent: safe to run multiple times, regenerate freely.
+ #
+ # Usage: source cached-fetch.sh; cached_gh_api <endpoint> [ttl_seconds]
+ # Default TTL: 1800s (30 minutes) — daily agent runs don't need real-time data
+
+ CACHE_DIR="${GENIE_METRICS_CACHE:-/tmp/metrics-updater-cache}"
+ DEFAULT_TTL="${GENIE_METRICS_CACHE_TTL:-1800}"
+
+ mkdir -p "$CACHE_DIR"
+
+ cached_gh_api() {
+ local endpoint="$1"
+ local ttl="${2:-$DEFAULT_TTL}"
+ local cache_key
+ cache_key=$(echo "$endpoint" | md5sum | cut -d' ' -f1)
+ local cache_file="$CACHE_DIR/${cache_key}.json"
+ local meta_file="$CACHE_DIR/${cache_key}.meta"
+
+ # Check cache validity
+ if [[ -f "$cache_file" && -f "$meta_file" ]]; then
+ local cached_at
+ cached_at=$(cat "$meta_file" 2>/dev/null || echo 0)
+ local now
+ now=$(date +%s)
+ local age=$((now - cached_at))
+
+ if (( age < ttl )); then
+ echo "[cache] HIT: $endpoint (age=${age}s, ttl=${ttl}s)" >&2
+ cat "$cache_file"
+ return 0
+ fi
+ echo "[cache] STALE: $endpoint (age=${age}s, ttl=${ttl}s)" >&2
+ else
+ echo "[cache] MISS: $endpoint" >&2
+ fi
+
+ # Fetch fresh data
+ local response
+ if response=$(gh api "$endpoint" --paginate 2>/dev/null); then
+ echo "$response" > "$cache_file"
+ date +%s > "$meta_file"
+ echo "$response"
+ return 0
+ fi
+
+ # Fallback to stale cache
+ if [[ -f "$cache_file" ]]; then
+ echo "[cache] FALLBACK: Using stale cache for $endpoint" >&2
+ cat "$cache_file"
+ return 0
+ fi
+
+ echo "[cache] FAIL: No data for $endpoint" >&2
+ return 1
+ }
+
+ # Warm cache for common endpoints (call at start of run)
+ warm_cache() {
+ local owner="${1:-automagik-dev}"
+ local repo="${2:-genie}"
+ echo "[cache] Warming cache for $owner/$repo..." >&2
+ cached_gh_api "repos/$owner/$repo/releases" >/dev/null 2>&1 &
+ cached_gh_api "repos/$owner/$repo/pulls?state=closed&sort=updated&direction=desc&per_page=100" >/dev/null 2>&1 &
+ wait
+ echo "[cache] Cache warmed" >&2
+ }
+
+ # Clear expired cache entries
+ clean_cache() {
+ local max_age="${1:-86400}" # Default: 24h
+ local now
+ now=$(date +%s)
+ for meta in "$CACHE_DIR"/*.meta; do
+ [[ -f "$meta" ]] || continue
+ local cached_at
+ cached_at=$(cat "$meta" 2>/dev/null || echo 0)
+ if (( now - cached_at > max_age )); then
+ local base="${meta%.meta}"
+ rm -f "$base.json" "$meta"
+ echo "[cache] Cleaned: $(basename "$base")" >&2
+ fi
+ done
+ }
@@ -0,0 +1,34 @@
+ #!/usr/bin/env bash
+ # commit-formatter.sh — Clean commit message formatter for metrics updates
+ # Auto-generated by metrics-updater agent (v1, 2026-03-24)
+ # This tool will be refined by the agent over time.
+
+ set -euo pipefail
+
+ # Usage: commit-formatter.sh <metrics_json_file>
+ # Reads metrics JSON and outputs a formatted commit message.
+ #
+ # Input JSON format:
+ # {
+ # "releases_per_day": 27,
+ # "avg_bugfix_time_hours": 2.4,
+ # "ship_rate_pct": 100,
+ # "parallel_agents": 5,
+ # "updated": "2026-03-24"
+ # }
+
+ METRICS_FILE="${1:-}"
+
+ if [[ -z "$METRICS_FILE" || ! -f "$METRICS_FILE" ]]; then
+ echo "Usage: commit-formatter.sh <metrics.json>" >&2
+ exit 1
+ fi
+
+ # Extract values
+ releases=$(jq -r '.releases_per_day // 0' "$METRICS_FILE")
+ avg_time=$(jq -r '.avg_bugfix_time_hours // 0' "$METRICS_FILE")
+ ship_rate=$(jq -r '.ship_rate_pct // 0' "$METRICS_FILE")
+ agents=$(jq -r '.parallel_agents // 0' "$METRICS_FILE")
+
+ # Format commit message
+ echo "chore: update live metrics (${releases}/day, ${avg_time}h avg, ${ship_rate}% SHIP)"
@@ -0,0 +1,92 @@
+ #!/usr/bin/env python3
+ """fast-parse.py — Optimized metrics parser with pre-compiled patterns.
+
+ Auto-generated by generate-tools.py.
+ Idempotent: safe to run multiple times, regenerate freely.
+
+ Differences from parse-metrics.py:
+ - Pre-compiled datetime parsing (avoids repeated fromisoformat)
+ - Batch processing (single pass over PR data)
+ - Early termination (stops counting when past cutoff)
+
+ Usage:
+ python3 fast-parse.py --releases <file> --prs <file> --agents <N> [-o output.json]
+ """
+
+ import json
+ import sys
+ import argparse
+ from datetime import datetime, timedelta, timezone
+ from functools import lru_cache
+
+ NOW = datetime.now(timezone.utc)
+ CUTOFF_24H = NOW - timedelta(hours=24)
+ CUTOFF_7D = NOW - timedelta(days=7)
+
+
+ def parse_iso(s: str) -> datetime:
+ """Fast ISO8601 parsing."""
+ return datetime.fromisoformat(s.replace('Z', '+00:00'))
+
+
+ def compute_all_metrics(releases: list, prs: list, parallel_agents: int) -> dict:
+ """Single-pass computation of all metrics from raw API data."""
+ # Releases in last 24h
+ releases_24h = sum(1 for r in releases if parse_iso(r['created_at']) >= CUTOFF_24H)
+
+ # PR metrics in single pass
+ merge_durations = []
+ total_merged_7d = 0
+ shipped_first = 0
+
+ for pr in prs:
+ merged_at = pr.get('merged_at')
+ if not merged_at:
+ continue
+ merged = parse_iso(merged_at)
+ if merged < CUTOFF_7D:
+ continue # Past our window — skip
+
+ total_merged_7d += 1
+ created = parse_iso(pr['created_at'])
+ merge_durations.append((merged - created).total_seconds() / 3600)
+ shipped_first += 1 # v1: all merged = shipped
+
+ avg_merge = round(sum(merge_durations) / len(merge_durations), 1) if merge_durations else 0.0
+ ship_rate = round((shipped_first / total_merged_7d) * 100, 0) if total_merged_7d > 0 else 100.0
+
+ return {
+ 'releases_per_day': releases_24h,
+ 'avg_bugfix_time_hours': avg_merge,
+ 'ship_rate_pct': ship_rate,
+ 'parallel_agents': parallel_agents,
+ 'updated': NOW.strftime('%Y-%m-%d'),
+ }
+
+
+ def main():
+ parser = argparse.ArgumentParser(description='Fast metrics parser')
+ parser.add_argument('--releases', required=True, help='Releases JSON file')
+ parser.add_argument('--prs', required=True, help='PRs JSON file')
+ parser.add_argument('--agents', type=int, default=0, help='Parallel agents count')
+ parser.add_argument('--output', '-o', help='Output file (default: stdout)')
+
+ args = parser.parse_args()
+
+ with open(args.releases) as f:
+ releases = json.load(f)
+ with open(args.prs) as f:
+ prs = json.load(f)
+
+ metrics = compute_all_metrics(releases, prs, args.agents)
+ output = json.dumps(metrics, indent=2)
+
+ if args.output:
+ with open(args.output, 'w') as f:
+ f.write(output + '\n')
+ else:
+ print(output)
+
+
+ if __name__ == '__main__':
+ main()
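To make the avg-bug-fix-time arithmetic concrete, here is the same 7-day windowed calculation that `fast-parse.py` performs, reduced to a self-contained function over synthetic timestamps. The fixed "now" and the sample PRs are invented for illustration, and the flat 100% ship rate reproduces the v1 all-merged-counts-as-shipped heuristic visible in the code above:

```python
from datetime import datetime, timedelta, timezone

def pr_metrics(prs, now):
    """Mean open→merge hours for PRs merged in the last 7 days, plus SHIP rate."""
    cutoff = now - timedelta(days=7)
    durations = []
    for pr in prs:
        # Skip unmerged PRs and anything merged before the 7-day window
        if pr.get("merged_at") is None or pr["merged_at"] < cutoff:
            continue
        durations.append((pr["merged_at"] - pr["created_at"]).total_seconds() / 3600)
    avg = round(sum(durations) / len(durations), 1) if durations else 0.0
    ship_rate = 100.0  # v1 heuristic: every merged PR counts as shipped first-cycle
    return avg, ship_rate

# Synthetic fixture: two in-window PRs (2.0h and 1.8h to merge),
# one merged outside the window, one still open.
now = datetime(2026, 3, 24, 22, 0, tzinfo=timezone.utc)
prs = [
    {"created_at": now - timedelta(hours=26), "merged_at": now - timedelta(hours=24)},
    {"created_at": now - timedelta(hours=10), "merged_at": now - timedelta(hours=8.2)},
    {"created_at": now - timedelta(days=30), "merged_at": now - timedelta(days=20)},
    {"created_at": now - timedelta(hours=5), "merged_at": None},
]
```

With this fixture the mean is (2.0 + 1.8) / 2 = 1.9h, matching the `avg_bugfix_time_hours` value recorded throughout the `runs.jsonl` entries in this diff.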