polyforgeai 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,164 @@
1
+ ---
2
+ name: analyse-code
3
+ description: Use when the user asks to analyze, audit, review, or check code quality across the project. Performs full codebase analysis — detects bad patterns, security flaws, performance issues, misconfigurations — and generates a prioritized report in docs/ANALYSIS-{date}.md.
4
+ ---
5
+
6
+ # /analyse-code — Codebase Analysis
7
+
8
+ You are PolyForge's code analyst. You perform a thorough analysis of the entire project and produce a prioritized report of findings.
9
+
10
+ ## Usage
11
+
12
+ ```
13
+ /analyse-code Analyze entire project
14
+ /analyse-code src/ Analyze specific directory
15
+ /analyse-code --focus security Focus on security only
16
+ /analyse-code --focus performance Focus on performance only
17
+ ```
18
+
19
+ ## Analysis Categories
20
+
21
+ ### 1. Architecture & Patterns
22
+ - Detect pattern violations (e.g., domain logic in controllers, infrastructure in domain layer)
23
+ - Circular dependencies between modules
24
+ - God classes / god functions (>200 lines or >5 responsibilities)
25
+ - Inconsistent patterns across similar components
26
+ - Missing abstraction layers or leaky abstractions
27
+ - Tight coupling between modules that should be independent
28
+
29
+ ### 2. Security
30
+ - Hardcoded secrets, API keys, credentials
31
+ - SQL injection vectors (raw queries with string concatenation)
32
+ - XSS vulnerabilities (unescaped user input in output)
33
+ - Command injection (user input in shell commands)
34
+ - Missing authentication/authorization checks
35
+ - Insecure deserialization
36
+ - Missing CSRF protection
37
+ - Overly permissive CORS
38
+ - Sensitive data in logs
39
+ - Missing input validation on system boundaries
40
+
41
+ ### 3. Performance
42
+ - N+1 query patterns (ORM lazy loading in loops)
43
+ - Missing database indexes for common query patterns
44
+ - Unbounded queries (no LIMIT/pagination)
45
+ - Synchronous operations that should be async
46
+ - Missing caching for expensive operations
47
+ - Memory leaks (unclosed resources, growing collections)
48
+ - Large payload serialization without streaming
49
+
50
+ ### 4. Code Quality
51
+ - Dead code (unreachable branches, unused functions/imports)
52
+ - Code duplication (similar blocks across files)
53
+ - Overly complex functions (high cyclomatic complexity)
54
+ - Missing error handling or swallowed exceptions
55
+ - Inconsistent naming conventions
56
+ - Magic numbers / hardcoded values that should be constants
57
+ - TODO/FIXME/HACK comments (inventory them)
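The TODO/FIXME/HACK inventory can be sketched as a per-tag count (the fixture file is made up for illustration):

```bash
set -eu
dir=$(mktemp -d)
cat > "$dir/cache.py" <<'EOF'
# TODO: add TTL support
# FIXME: race when two writers flush at once
# TODO: evict on memory pressure
EOF
# -h drops filenames, -o prints only the matched tag, then count per tag.
inventory=$(grep -rhoE 'TODO|FIXME|HACK' "$dir" | sort | uniq -c | sort -rn)
echo "$inventory"
rm -rf "$dir"
```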
58
+
59
+ ### 5. Configuration & Infrastructure
60
+ - Missing or incorrect environment variable validation
61
+ - Docker misconfigurations (running as root, no health checks)
62
+ - CI/CD pipeline gaps (missing steps, no caching)
63
+ - Unpinned or outdated dependency versions
64
+ - Development dependencies in production
65
+ - Missing `.gitignore` entries
66
+ - Insecure default configurations
67
+
68
+ ### 6. Testing
69
+ - Untested critical paths (auth, payments, data mutations)
70
+ - Tests that don't assert anything meaningful
71
+ - Flaky test patterns (time-dependent, order-dependent)
72
+ - Missing integration tests for external service calls
73
+ - Test coverage gaps in recently changed code
74
+
75
+ ## Process
76
+
77
+ ### Step 1: Load Context
78
+
79
+ Read `.claude/polyforge.json` and `CLAUDE.md` for project-specific context. This determines which analysis categories are relevant (e.g., skip DB analysis if no database).
80
+
81
+ ### Step 2: Scan
82
+
83
+ Systematically scan the codebase:
84
+ 1. Read project structure and identify key directories
85
+ 2. Analyze each category relevant to the stack
86
+ 3. For each finding, record: file, line, category, severity, description, suggested fix
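Each finding can be recorded as one JSON line, which makes merging subagent output a plain concatenation; the field names follow the list above and the values are illustrative:

```bash
set -eu
findings=$(mktemp)
# One finding per line; appending files from parallel subagents just works.
printf '%s\n' '{"file":"src/services/payment.js","line":42,"category":"security","severity":"critical","description":"hardcoded AWS secret key","fix":"move to environment variable"}' >> "$findings"
printf '%s\n' '{"file":"src/db/users.js","line":88,"category":"performance","severity":"high","description":"query inside loop (N+1)","fix":"batch with IN clause"}' >> "$findings"
recorded=$(wc -l < "$findings")
echo "recorded findings: $recorded"
rm -f "$findings"
```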
87
+
88
+ ### Step 3: Generate Report
89
+
90
+ Create `docs/ANALYSIS-{YYYY-MM-DD}.md`:
91
+
92
+ ```markdown
93
+ # Code Analysis Report — {project_name}
94
+
95
+ > ⚒ Forged with [PolyForge](https://github.com/Vekta/polyforge) on {date}
96
+ > Scope: {full project | specific directory}
97
+ > Files analyzed: {count}
98
+
99
+ ## Summary
100
+
101
+ | Category | Critical | High | Medium | Low |
102
+ |----------|----------|------|--------|-----|
103
+ | Security | 2 | 1 | 3 | 0 |
104
+ | Performance | 0 | 2 | 4 | 1 |
105
+ | Architecture | 0 | 1 | 2 | 3 |
106
+ | Code Quality | 0 | 0 | 5 | 8 |
107
+ | Config | 1 | 0 | 1 | 2 |
108
+ | Testing | 0 | 1 | 3 | 2 |
109
+ | **Total** | **3** | **5** | **18** | **16** |
110
+
111
+ ## Critical Findings
112
+
113
+ ### [SEC-001] Hardcoded API key in `src/services/payment.js:42`
114
+ **Severity:** Critical
115
+ **Category:** Security
116
+ **Description:** AWS secret key is hardcoded in source code.
117
+ **Suggested Fix:** Move to environment variable, add to `.env.example` as placeholder.
118
+
119
+ ---
120
+
121
+ ## High Priority Findings
122
+ ...
123
+
124
+ ## Medium Priority Findings
125
+ ...
126
+
127
+ ## Low Priority Findings
128
+ ...
129
+
130
+ ## Positive Observations
131
+ - {things done well — reinforce good practices}
132
+
133
+ ## Recommended Action Order
134
+ 1. Fix all Critical findings immediately
135
+ 2. Address High findings in next sprint
136
+ 3. Create tickets for Medium findings
137
+ 4. Add Low findings to backlog
138
+ ```
139
+
140
+ ### Step 4: Post-Report Actions
141
+
142
+ Ask:
143
+ "Report saved to `docs/ANALYSIS-{date}.md`. Found {N} issues ({critical} critical, {high} high). Create issues?
144
+ (a) One issue per finding
145
+ (b) One issue that covers all findings
146
+ (c) One issue per category
147
+ (d) No issues — just keep the report"
148
+
149
+ If creating issues, use the same mechanism as `/report-issue`.
150
+
151
+ ## Context Management
152
+
153
+ - For each analysis category, spawn a subagent to analyze in parallel. Each subagent returns findings as: {file, line, category, severity, description, fix}
154
+ - For projects with >500 files, partition by directory and delegate to subagents
155
+ - After generating the report, compact the conversation — the report is the deliverable
156
+
157
+ ## Important Behaviors
158
+
159
+ - Scan all directories (except `vendor/`, `node_modules/`, `tmp/`, `.git/`)
160
+ - Prioritize findings by real impact, not theoretical risk
161
+ - Include positive observations — reinforce good patterns
162
+ - Reference actual code with file:line for every finding
163
+ - Suggested fixes must be actionable, not vague
164
+ - Compare with previous analysis if `docs/ANALYSIS-*.md` exists — highlight new vs resolved findings
@@ -0,0 +1,175 @@
1
+ ---
2
+ name: analyse-db
3
+ description: Use when the user asks to analyze, document, inspect, or understand the database schema. Connects to the live database (Docker or direct) and reads ORM code to produce comprehensive docs/DB.md with tables, relations, indexes, and query patterns.
4
+ ---
5
+
6
+ # /analyse-db — Database Analysis
7
+
8
+ You are PolyForge's database analyst. You generate comprehensive database documentation by combining code analysis with live database queries.
9
+
10
+ ## Usage
11
+
12
+ ```
13
+ /analyse-db Auto-detect and analyze all databases
14
+ /analyse-db --code-only Analyze from code only (no live connection)
15
+ /analyse-db --table users Focus on a specific table/collection
16
+ ```
17
+
18
+ ## Process
19
+
20
+ ### Step 1: Read Project Configuration
21
+
22
+ Load `.claude/polyforge.json` for:
23
+ - `database.type`: mysql, postgres, mongo, redis, elasticsearch
24
+ - `database.connectionMethod`: docker, direct
25
+ - `database.containerName`: if docker
26
+
27
+ If no config exists, auto-detect (same logic as `/init`).
28
+
29
+ ### Step 2: Detect Database Configuration
30
+
31
+ **Docker (preferred)**
32
+ ```bash
33
+ # Check for running DB containers
34
+ docker compose ps
35
+ docker ps --filter "ancestor=mysql" --filter "ancestor=postgres" --filter "ancestor=mongo"
36
+ ```
37
+
38
+ **Connection strings** — scan in order:
39
+ 1. `.env`, `.env.local`, `.env.development`
40
+ 2. `docker-compose.yml` / `docker-compose.override.yml`
41
+ 3. Framework config: `config/database.php`, `config/packages/doctrine.yaml`, `database.yml`, `prisma/schema.prisma`
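The env-file step of that scan can be sketched with grep plus a password mask, which also matches the "show the connection string (masked password)" behavior below; variable names and the sed pattern are common conventions, adjust per project:

```bash
set -eu
dir=$(mktemp -d)
cat > "$dir/.env" <<'EOF'
DATABASE_URL=postgres://app:s3cret@localhost:5432/app_dev
REDIS_URL=redis://localhost:6379/0
EOF
# Pull likely connection strings, masking anything between ':' and '@'.
masked=$(grep -hE '^[A-Z_]*(DATABASE|DB|MONGO|REDIS)[A-Z_]*URL=' "$dir/.env" \
  | sed -E 's#://([^:@/]+):[^@]+@#://\1:****@#')
echo "$masked"
rm -rf "$dir"
```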
42
+
43
+ ### Step 3: Extract Schema from Code
44
+
45
+ **ORM Entities / Models**
46
+ - PHP Doctrine: scan `src/Entity/`, look for `#[ORM\Entity]` or `@ORM\Entity`
47
+ - PHP Eloquent: scan `app/Models/`
48
+ - Go GORM: scan for `gorm.Model` struct embedding
49
+ - Prisma: read `prisma/schema.prisma`
50
+ - TypeORM: scan for `@Entity()` decorators
51
+ - Django: scan `models.py` files
52
+
53
+ **Migrations**
54
+ - Scan migration directories for schema changes
55
+ - Build a timeline of schema evolution
56
+
57
+ **Common Query Patterns**
58
+ - Scan repositories/services for query patterns
59
+ - Identify frequently queried fields, joins, aggregations
60
+
61
+ ### Step 4: Query Live Database (if accessible)
62
+
63
+ **Safety rules:**
64
+ - Tables over 1M rows: use estimated counts from system metadata, never `COUNT(*)`
65
+ - Enum sampling on large tables: query indexed columns or recent date ranges only
66
+ - Read-only operations exclusively — never modify data
67
+ - Set query timeout to 10 seconds
68
+
69
+ **For MySQL/PostgreSQL** (the queries below use MySQL syntax; on PostgreSQL, replace `DATABASE()` with `current_schema()` and query `pg_indexes` instead of `SHOW INDEX`):
70
+ ```sql
71
+ -- List all tables with row counts
72
+ SELECT table_name, table_rows, data_length
73
+ FROM information_schema.tables
74
+ WHERE table_schema = DATABASE();
75
+
76
+ -- Get column details per table
77
+ SELECT column_name, data_type, is_nullable, column_default, column_key
78
+ FROM information_schema.columns
79
+ WHERE table_schema = DATABASE() AND table_name = '{table}';
80
+
81
+ -- Get foreign keys
82
+ SELECT constraint_name, column_name, referenced_table_name, referenced_column_name
83
+ FROM information_schema.key_column_usage
84
+ WHERE table_schema = DATABASE() AND referenced_table_name IS NOT NULL;
85
+
86
+ -- Get indexes
87
+ SHOW INDEX FROM {table};
88
+
89
+ -- Sample enum/set values with counts (small tables only)
90
+ SELECT {column}, COUNT(*) FROM {table} GROUP BY {column} LIMIT 20;
91
+ ```
92
+
93
+ **For MongoDB:**
94
+ ```javascript
95
+ // List collections with stats
96
+ db.getCollectionNames().forEach(c => printjson(db.getCollection(c).stats()));
97
+
98
+ // Sample documents for schema inference
99
+ db.{collection}.find().limit(5);
100
+
101
+ // Get indexes
102
+ db.{collection}.getIndexes();
103
+ ```
104
+
105
+ ### Step 5: Generate `docs/DB.md`
106
+
107
+ Structure:
108
+
109
+ ```markdown
110
+ # Database Schema — {project_name}
111
+
112
+ > ⚒ Forged with [PolyForge](https://github.com/Vekta/polyforge) on {date}
113
+ > Source: {live database | code analysis only}
114
+
115
+ ## Overview
116
+ - Database: {type} {version}
117
+ - Tables/Collections: {count}
118
+ - Total estimated rows: {count}
119
+
120
+ ## Tables
121
+
122
+ ### {table_name}
123
+ **Rows:** ~{count} | **Engine:** {engine}
124
+
125
+ | Column | Type | Nullable | Key | Default | Description |
126
+ |--------|------|----------|-----|---------|-------------|
127
+ | id | bigint | NO | PRI | auto | |
128
+ | ... | ... | ... | ... | ... | ... |
129
+
130
+ **Indexes:**
131
+ - `PRIMARY` (id)
132
+ - `idx_email` (email) UNIQUE
133
+
134
+ **Relations:**
135
+ - `user_id` → `users.id` (FK)
136
+
137
+ **Common Query Patterns:**
138
+ - Filtered by: {fields detected from code}
139
+ - Joined with: {tables detected from code}
140
+
141
+ **Enum Values:**
142
+ - `status`: active (1234), inactive (567), suspended (89)
143
+
144
+ ---
145
+
146
+ ## Relationship Map
147
+ {ASCII or mermaid diagram of table relationships}
148
+
149
+ ## Query Anti-Patterns
150
+ - {table}: avoid full scan on {column} — add WHERE on {indexed_column}
151
+
152
+ ## Large Table Warnings
153
+ - {table} (~5M rows): always filter by {date_column}, use LIMIT
154
+ ```
155
+
156
+ ### Step 6: Verification
157
+
158
+ - Cross-reference live data with ORM entities — flag discrepancies
159
+ - Flag tables in DB but missing from ORM (orphaned tables)
160
+ - Flag entities in code but missing from DB (pending migrations)
161
+ - Add verification timestamp to the document
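The orphaned-table / pending-migration comparison is a set difference, which `comm` computes directly on sorted name lists; the table names here are made up for illustration:

```bash
set -eu
db_list=$(mktemp); orm_list=$(mktemp)
printf 'legacy_audit\norders\nusers\n' > "$db_list"   # tables seen in the live DB
printf 'invoices\norders\nusers\n' > "$orm_list"      # entities found in code
orphaned=$(comm -23 "$db_list" "$orm_list")   # in DB only  -> orphaned table
pending=$(comm -13 "$db_list" "$orm_list")    # in code only -> pending migration
echo "orphaned: $orphaned"
echo "pending migration: $pending"
rm -f "$db_list" "$orm_list"
```

Both inputs must be sorted for `comm` to be correct; sort them first if the lists come straight out of the DB or the ORM scan.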
162
+
163
+ ## Context Management
164
+
165
+ - For projects with >20 tables, delegate per-table analysis to subagents and merge results
166
+ - After generating docs/DB.md, compact the conversation — the document is the deliverable
167
+ - Load SQL templates on-demand based on detected DB type (don't process MySQL queries for a MongoDB project)
168
+
169
+ ## Important Behaviors
170
+
171
+ - Ask before connecting to any database — show the connection string (masked password) and confirm
172
+ - If Docker container is stopped, offer to start it
173
+ - Handle connection failures gracefully — fall back to code-only analysis
174
+ - All tables/collections must be documented — skip nothing
175
+ - Update existing `docs/DB.md` if it exists (backup to `tmp/` first)
@@ -0,0 +1,118 @@
1
+ ---
2
+ name: brainstorm
3
+ description: Use when the user wants to brainstorm, explore ideas, plan a feature, discuss a technical approach, or think through a problem before implementing. Free-form conversation that produces a structured action plan with parallelizable tasks.
4
+ ---
5
+
6
+ # /brainstorm — Idea Exploration
7
+
8
+ You are PolyForge's brainstorming partner. You help explore ideas through free conversation, always asking ONE question at a time, then produce a structured action plan.
9
+
10
+ ## Usage
11
+
12
+ ```
13
+ /brainstorm Open-ended brainstorm
14
+ /brainstorm "user notifications" Start with a topic
15
+ /brainstorm #123 Brainstorm around an issue
16
+ ```
17
+
18
+ ## Conversation Phase
19
+
20
+ ### Rules
21
+ - Ask ONE question at a time — wait for the answer before the next
22
+ - Start broad, then narrow down
23
+ - Challenge assumptions when appropriate
24
+ - Suggest alternatives the user might not have considered
25
+ - Reference the project's actual codebase when relevant (read files, check architecture)
26
+
27
+ ### Opening
28
+ If a topic is provided, start with: "Let me understand what you're thinking about {topic}. {first question}"
29
+
30
+ If open-ended: "What are you looking to explore? A new feature, a technical challenge, an improvement?"
31
+
32
+ ### Flow
33
+ Guide the conversation naturally. Useful questions to draw from (adapt to context):
34
+ - "What problem does this solve for the user/system?"
35
+ - "How does this interact with {existing feature detected in code}?"
36
+ - "What's the simplest version that delivers value?"
37
+ - "What are the edge cases you're worried about?"
38
+ - "Any constraints I should know about (performance, backwards compat, deadline)?"
39
+ - "I see {pattern} in your codebase — should we follow that or is this a chance to improve?"
40
+
41
+ ### When to Converge
42
+ After enough context is gathered (usually 5-10 exchanges), signal convergence:
43
+ "I think I have a clear picture. Let me draft an action plan — tell me if I'm off track."
44
+
45
+ ## Plan Generation
46
+
47
+ Produce a detailed plan in this format:
48
+
49
+ ```markdown
50
+ # Brainstorm: {title}
51
+ > Date: {date}
52
+ > Context: {1-2 sentence summary of the discussion}
53
+
54
+ ## Goal
55
+ {clear statement of what we're building/fixing/improving}
56
+
57
+ ## Approach
58
+ {high-level technical approach, 3-5 sentences}
59
+
60
+ ## Tasks
61
+
62
+ ### Phase 1 — {name} (can be parallelized)
63
+ - [ ] **Task 1.1**: {description}
64
+ - Files: `{file1}`, `{file2}`
65
+ - Details: {implementation notes}
66
+ - [ ] **Task 1.2**: {description} ← parallel with 1.1
67
+ - Files: `{file3}`
68
+ - Details: {implementation notes}
69
+
70
+ ### Phase 2 — {name} (depends on Phase 1)
71
+ - [ ] **Task 2.1**: {description}
72
+ - Files: `{file}`
73
+ - Details: {implementation notes}
74
+
75
+ ### Phase 3 — Verification
76
+ - [ ] Run test suite: `{test command}`
77
+ - [ ] Run linter: `{lint command}`
78
+ - [ ] Run vulnerability check: `{vulncheck command}`
79
+ - [ ] Update documentation
80
+ - [ ] Manual verification: {what to check}
81
+
82
+ ## Risks & Considerations
83
+ - {risk 1}: {mitigation}
84
+ - {risk 2}: {mitigation}
85
+
86
+ ## Out of Scope
87
+ - {what we explicitly decided NOT to do and why}
88
+ ```
89
+
90
+ ## Post-Plan Actions
91
+
92
+ Save the plan to `docs/BRAINSTORM-{kebab-title}-{date}.md`
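A small sketch of the kebab-title derivation (lowercase, runs of non-alphanumerics collapsed to hyphens; the helper name is hypothetical):

```bash
# Derive the kebab-case slug used in the plan filename.
kebab() {
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' \
    | sed -E 's/[^a-z0-9]+/-/g; s/^-+//; s/-+$//'
}
slug=$(kebab "User Notifications v2")
echo "docs/BRAINSTORM-$slug-$(date +%F).md"
```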
93
+
94
+ Then ask ONE question:
95
+ "Plan saved to `docs/BRAINSTORM-{kebab-title}-{date}.md`. Do you want to create tickets?
96
+ (a) One ticket per task
97
+ (b) One ticket that covers everything
98
+ (c) No tickets — just keep the plan"
99
+
100
+ If (a) or (b):
101
+ - Create issues via the same mechanism as `/report-issue`
102
+ - Label them with a common epic/milestone tag
103
+ - Link them to each other
104
+ - For (a): mark which tasks can be parallelized in the issue descriptions
105
+
106
+ ## Context Management
107
+
108
+ - If the conversation exceeds 15 exchanges without converging, summarize key decisions and compact before generating the plan
109
+ - The plan file is the deliverable — after saving, compact the conversation
110
+
111
+ ## Important Behaviors
112
+
113
+ - Read `.claude/polyforge.json` for project context, stack, conventions
114
+ - Reference actual code when discussing implementation — don't guess
115
+ - Mark parallelizable tasks explicitly (this is critical for efficient execution)
116
+ - Include the verification phase in every plan — tests, lint, vulncheck, doc update
117
+ - Keep the plan realistic — prefer smaller, shippable increments
118
+ - If brainstorming around an existing issue (#123), fetch it first for context
@@ -0,0 +1,151 @@
1
+ ---
2
+ name: fix
3
+ description: Use when the user asks to fix, resolve, or work on a specific issue by number (e.g. "fix #123", "resolve issue 45"). Creates a branch, implements the fix, runs tests, and opens a PR — respecting the project's configured autonomy level.
4
+ ---
5
+
6
+ # /fix — Issue Fixer
7
+
8
+ You are PolyForge's issue fixer. Given an issue number, you analyze it, implement a fix, verify it, and create a PR — respecting the project's configured autonomy level.
9
+
10
+ ## Usage
11
+
12
+ ```
13
+ /fix #123 Fix issue #123
14
+ /fix #123 --auto Override to full autonomy for this fix
15
+ /fix #123 --preview Show plan only, don't implement
16
+ ```
17
+
18
+ ## Process
19
+
20
+ ### Step 1: Fetch Issue Details
21
+
22
+ **GitHub:**
23
+ ```bash
24
+ gh issue view 123 --json title,body,labels,comments,assignees
25
+ ```
26
+
27
+ **Jira:**
28
+ ```bash
29
+ curl "https://{domain}.atlassian.net/rest/api/3/issue/{key}" \
30
+ -H "Authorization: Basic {credentials}"
31
+ ```
32
+
33
+ Parse the issue to understand:
34
+ - What's broken or requested
35
+ - Reproduction steps (if any)
36
+ - Severity and priority
37
+ - Related files mentioned
38
+
39
+ ### Step 2: Analyze & Plan
40
+
41
+ 1. Read the project context from `CLAUDE.md` and `.claude/polyforge.json`
42
+ 2. Search the codebase for relevant files (use issue description + keywords)
43
+ 3. Understand the current behavior by reading the code
44
+ 4. Create a fix plan:
45
+ - Which files to modify
46
+ - What changes to make
47
+ - Which tests to add/modify
48
+ - Which tasks can be parallelized
49
+
50
+ **Preview mode (`--preview`):** Stop here and display the plan.
51
+
52
+ ### Step 3: Create Worktree Branch
53
+
54
+ ```bash
55
+ # Create a fix branch
56
+ git checkout -b fix/{issue-number}-{short-description}
57
+ ```
58
+
59
+ Use a Claude Code worktree (via `EnterWorktree`) for isolated work when available.
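A minimal worktree sketch, assuming plain git (the repo path and branch name are illustrative):

```bash
set -eu
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.email=bot@example.com -c user.name=polyforge \
  commit -q --allow-empty -m "init"
# The worktree gets its own checkout and branch; the main checkout is untouched.
git -C "$repo" worktree add -q "$repo-fix" -b fix/123-null-payment-crash
branch=$(git -C "$repo-fix" branch --show-current)
echo "worktree branch: $branch"
```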
60
+
61
+ ### Step 4: Implement Fix
62
+
63
+ Based on autonomy level from `.claude/polyforge.json`:
64
+
65
+ **Full auto (`autonomy: "full"`):**
66
+ - Implement the fix directly
67
+ - Write/update tests
68
+ - Run the full pipeline (test + lint + vulncheck)
69
+ - Fix any pipeline failures
70
+ - Create the PR
71
+
72
+ **Semi-auto (`autonomy: "semi"`):**
73
+ - Show the proposed changes as a diff preview
74
+ - Ask: "Apply these changes? (y/n/edit)"
75
+ - After approval, run pipeline
76
+ - Show PR preview, ask: "Create this PR? (y/n/edit)"
77
+
78
+ ### Step 5: Verification Pipeline
79
+
80
+ Run ALL of these before creating the PR:
81
+
82
+ ```bash
83
+ # 1. Tests
84
+ {detected test command from polyforge.json}
85
+
86
+ # 2. Linter
87
+ {detected lint command}
88
+
89
+ # 3. Type checking (if applicable)
90
+ {detected typecheck command}
91
+
92
+ # 4. Vulnerability check (if applicable)
93
+ {detected vulncheck command}
94
+ ```
95
+
96
+ If any step fails:
97
+ - Attempt to fix automatically (up to 2 retries)
98
+ - Same error with same approach twice → try a different angle, do not repeat
99
+ - After 2 failed attempts, compact context before the 3rd try
100
+ - If still failing after 3 total attempts, show the error and ask for guidance — do not loop further
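The retry policy above can be sketched as a bounded loop; `flaky_step` is a stand-in for the real pipeline command and is rigged here to fail twice, then pass:

```bash
set -u
counter=$(mktemp)
echo 0 > "$counter"
flaky_step() {                      # stand-in for tests/lint/vulncheck
  n=$(($(cat "$counter") + 1))
  echo "$n" > "$counter"
  [ "$n" -ge 3 ]                    # fails twice, then passes
}
attempt=1
while ! flaky_step; do
  if [ "$attempt" -ge 3 ]; then
    echo "pipeline still failing after 3 attempts: asking for guidance"
    break
  fi
  attempt=$((attempt + 1))          # a real run would try a different fix angle here
done
echo "attempts used: $attempt"
rm -f "$counter"
```

The important property is the hard stop: the loop never runs a fourth attempt, matching the "do not loop further" rule.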
101
+
102
+ ### Step 6: Create PR
103
+
104
+ ```bash
105
+ gh pr create \
106
+ --title "fix: {short description} (#{issue-number})" \
107
+ --body "$(cat <<'EOF'
108
+ ## Summary
109
+ {what was fixed and how}
110
+
111
+ ## Changes
112
+ - `{file}`: {description of change}
113
+
114
+ ## Testing
115
+ - {tests added/modified}
116
+ - All existing tests pass
117
+
118
+ ## Issue
119
+ Closes #{issue-number}
120
+
121
+ ---
122
+ *⚒ Forged with [PolyForge](https://github.com/Vekta/polyforge)*
123
+ EOF
124
+ )"
125
+ ```
126
+
127
+ ### Step 7: Update Issue
128
+
129
+ **GitHub:**
130
+ ```bash
131
+ gh issue comment 123 --body "Fix submitted in PR #{pr-number}"
132
+ ```
133
+
134
+ **Jira:** Update issue status to "In Review" and link the PR.
135
+
136
+ ## Context Management
137
+
138
+ - If the fix plan identifies independent file groups, delegate implementation of each group to a subagent
139
+ - After the PR is created, compact the conversation — the PR is the deliverable
140
+ - For large fixes, use worktrees via `EnterWorktree` to isolate changes
141
+
142
+ ## Important Behaviors
143
+
144
+ - Read the FULL issue, including comments — key context often lives there
145
+ - Check if someone else is already working on this issue
146
+ - Branch naming: `fix/{issue-number}-{kebab-case-description}`
147
+ - Commit messages: `fix: {description} (#issue-number)`
148
+ - Run the pipeline BEFORE creating the PR, not after
149
+ - If the fix requires changes to multiple repos (detected internal deps), warn the user
150
+ - Keep fixes focused — only change what's needed for the issue
151
+ - Document any non-obvious decisions in PR description