@garethdaine/agentops 0.9.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude-plugin/plugin.json +10 -0
- package/LICENSE +21 -0
- package/README.md +410 -0
- package/agents/architecture-researcher.md +115 -0
- package/agents/code-critic.md +190 -0
- package/agents/delegation-router.md +40 -0
- package/agents/feature-researcher.md +117 -0
- package/agents/interrogator.md +11 -0
- package/agents/pitfalls-researcher.md +112 -0
- package/agents/plan-validator.md +173 -0
- package/agents/proposer.md +61 -0
- package/agents/security-reviewer.md +189 -0
- package/agents/skill-builder.md +43 -0
- package/agents/spec-compliance-reviewer.md +154 -0
- package/agents/stack-researcher.md +89 -0
- package/commands/build.md +766 -0
- package/commands/code-analysis.md +39 -0
- package/commands/code-field.md +22 -0
- package/commands/compliance-check.md +34 -0
- package/commands/configure.md +178 -0
- package/commands/cost-report.md +17 -0
- package/commands/enterprise/adr.md +78 -0
- package/commands/enterprise/brainstorm.md +461 -0
- package/commands/enterprise/design.md +203 -0
- package/commands/enterprise/dev-setup.md +136 -0
- package/commands/enterprise/docker-dev.md +229 -0
- package/commands/enterprise/e2e.md +233 -0
- package/commands/enterprise/feature.md +218 -0
- package/commands/enterprise/gap-analysis.md +204 -0
- package/commands/enterprise/handover.md +195 -0
- package/commands/enterprise/herd.md +152 -0
- package/commands/enterprise/knowledge.md +173 -0
- package/commands/enterprise/onboard.md +86 -0
- package/commands/enterprise/qa-check.md +80 -0
- package/commands/enterprise/reason.md +196 -0
- package/commands/enterprise/review.md +177 -0
- package/commands/enterprise/scaffold.md +153 -0
- package/commands/enterprise/status-report.md +101 -0
- package/commands/enterprise/tech-catalog.md +170 -0
- package/commands/enterprise/test-gen.md +138 -0
- package/commands/evolve.md +39 -0
- package/commands/flags.md +44 -0
- package/commands/interrogate.md +263 -0
- package/commands/lesson.md +15 -0
- package/commands/lessons.md +10 -0
- package/commands/plan.md +44 -0
- package/commands/prune.md +27 -0
- package/commands/star.md +17 -0
- package/commands/supply-chain-scan.md +44 -0
- package/commands/unicode-scan.md +63 -0
- package/commands/verify.md +41 -0
- package/commands/workflow.md +436 -0
- package/hooks/ai-guardrails.sh +114 -0
- package/hooks/audit-log.sh +26 -0
- package/hooks/auto-delegate.sh +45 -0
- package/hooks/auto-evolve.sh +22 -0
- package/hooks/auto-lesson.sh +26 -0
- package/hooks/auto-plan.sh +59 -0
- package/hooks/auto-test.sh +46 -0
- package/hooks/auto-verify.sh +30 -0
- package/hooks/budget-check.sh +24 -0
- package/hooks/code-field-preamble.sh +30 -0
- package/hooks/compliance-gate.sh +50 -0
- package/hooks/content-trust.sh +22 -0
- package/hooks/credential-redact.sh +23 -0
- package/hooks/delegation-trust.sh +15 -0
- package/hooks/detect-test-run.sh +19 -0
- package/hooks/enforcement-lib.sh +60 -0
- package/hooks/evolve-gate.sh +32 -0
- package/hooks/evolve-lib.sh +32 -0
- package/hooks/exfiltration-check.sh +67 -0
- package/hooks/failure-collector.sh +27 -0
- package/hooks/feature-flags.sh +67 -0
- package/hooks/file-provenance.sh +31 -0
- package/hooks/flag-utils.sh +36 -0
- package/hooks/hooks.json +145 -0
- package/hooks/injection-scan.sh +58 -0
- package/hooks/integrity-verify.sh +91 -0
- package/hooks/lessons-check.sh +17 -0
- package/hooks/lockfile-audit.sh +109 -0
- package/hooks/patterns-lib.sh +22 -0
- package/hooks/plan-gate.sh +18 -0
- package/hooks/redact-lib.sh +15 -0
- package/hooks/runtime-mode.sh +56 -0
- package/hooks/session-cleanup.sh +74 -0
- package/hooks/skill-validator.sh +28 -0
- package/hooks/standards-enforce.sh +106 -0
- package/hooks/star-gate.sh +93 -0
- package/hooks/star-preamble.sh +10 -0
- package/hooks/telemetry.sh +33 -0
- package/hooks/todo-prune.sh +84 -0
- package/hooks/unicode-firewall.sh +122 -0
- package/hooks/unicode-lib.sh +66 -0
- package/hooks/unicode-scan-session.sh +96 -0
- package/hooks/validate-command.sh +103 -0
- package/hooks/validate-env.sh +51 -0
- package/hooks/validate-path.sh +81 -0
- package/package.json +40 -0
- package/settings.json +6 -0
- package/templates/ai-config/tool-standards.md +56 -0
- package/templates/architecture/api-first.md +192 -0
- package/templates/architecture/auth-patterns.md +302 -0
- package/templates/architecture/caching-strategy.md +359 -0
- package/templates/architecture/database-patterns.md +347 -0
- package/templates/architecture/event-driven.md +252 -0
- package/templates/architecture/integration-patterns.md +185 -0
- package/templates/architecture/multi-tenancy.md +104 -0
- package/templates/architecture/service-boundaries.md +200 -0
- package/templates/build/brief-template.md +86 -0
- package/templates/build/summary-template.md +100 -0
- package/templates/build/task-plan-template.md +133 -0
- package/templates/communication/effort-estimate.md +54 -0
- package/templates/communication/incident-response.md +59 -0
- package/templates/communication/post-mortem.md +109 -0
- package/templates/communication/risk-register.md +43 -0
- package/templates/communication/sprint-demo-checklist.md +64 -0
- package/templates/communication/stakeholder-presentation-outline.md +84 -0
- package/templates/communication/technical-proposal.md +77 -0
- package/templates/delivery/deployment/deployment-checklist.md +49 -0
- package/templates/delivery/design/solution-design-checklist.md +37 -0
- package/templates/delivery/discovery/stakeholder-questions.md +33 -0
- package/templates/delivery/handover/knowledge-transfer-checklist.md +75 -0
- package/templates/delivery/handover/operational-runbook.md +117 -0
- package/templates/delivery/handover/support-escalation-matrix.md +56 -0
- package/templates/delivery/implementation/blocker-escalation-template.md +55 -0
- package/templates/delivery/implementation/sprint-planning-template.md +49 -0
- package/templates/delivery/implementation/task-decomposition-guide.md +59 -0
- package/templates/delivery/qa/test-plan-template.md +76 -0
- package/templates/delivery/qa/test-results-template.md +55 -0
- package/templates/delivery/qa/uat-signoff-template.md +44 -0
- package/templates/governance/codeowners.md +60 -0
- package/templates/integration/adapter-pattern.md +160 -0
- package/templates/scaffolds/env-validation.md +85 -0
- package/templates/scaffolds/error-handling.md +171 -0
- package/templates/scaffolds/graceful-shutdown.md +139 -0
- package/templates/scaffolds/health-check.md +109 -0
- package/templates/scaffolds/structured-logging.md +134 -0
- package/templates/standards/engineering-standards.md +413 -0
- package/templates/standards/standards-checklist.md +125 -0
- package/templates/tech-catalog.json +663 -0
- package/templates/utilities/project-detection.md +75 -0
- package/templates/utilities/requirements-collection.md +68 -0
- package/templates/utilities/template-rendering.md +81 -0
- package/templates/workflows/architecture-decision.md +90 -0
- package/templates/workflows/bug-investigation.md +83 -0
- package/templates/workflows/feature-implementation.md +80 -0
- package/templates/workflows/refactoring.md +83 -0
- package/templates/workflows/spike-exploration.md +82 -0
package/templates/utilities/project-detection.md
@@ -0,0 +1,75 @@
# Project Detection Utility

When this utility is invoked, analyse the current project to detect the technology stack in use. Follow these steps exactly:

## Detection Steps

1. **Package Manager & Runtime**
   - Check for `package.json` → Node.js project
   - Check for `requirements.txt` / `pyproject.toml` / `setup.py` → Python project
   - Check for `go.mod` → Go project
   - Check for `Cargo.toml` → Rust project
   - Check for `Gemfile` → Ruby project
   - Check for `pom.xml` / `build.gradle` → Java/Kotlin project
   - Check for `*.csproj` / `*.sln` → .NET project

2. **Frontend Framework** (if Node.js)
   - Read `package.json` dependencies for: next, remix, astro, nuxt, vite, create-react-app, angular, svelte, vue
   - Check for `next.config.*`, `remix.config.*`, `astro.config.*`, `vite.config.*`, `angular.json`

3. **Backend Framework** (if Node.js)
   - Read `package.json` for: express, fastify, hono, nestjs, koa, @hapi/hapi
   - Check for `nest-cli.json`

4. **Database & ORM**
   - Check for `prisma/schema.prisma` → Prisma
   - Check for `drizzle.config.*` → Drizzle
   - Read `package.json` for: typeorm, sequelize, knex, mongoose, pg, mysql2, better-sqlite3
   - Check for `docker-compose.yml` and look for database services (postgres, mysql, redis, mongo)

5. **Authentication**
   - Read `package.json` for: next-auth, passport, jsonwebtoken, jose, @auth/core, clerk, supabase
   - Check for auth-related directories: `src/auth/`, `lib/auth/`, `app/api/auth/`

6. **Cloud & Infrastructure**
   - Check for `serverless.yml` / `serverless.ts` → Serverless Framework
   - Check for `terraform/` or `*.tf` → Terraform
   - Check for `cdk.json` → AWS CDK
   - Check for `Dockerfile`, `docker-compose.yml`
   - Check for `.github/workflows/` → GitHub Actions CI/CD
   - Check for `vercel.json`, `netlify.toml`, `fly.toml`

7. **Testing**
   - Read `package.json` for: jest, vitest, mocha, cypress, playwright, @testing-library/*
   - Check for `jest.config.*`, `vitest.config.*`, `playwright.config.*`, `cypress.config.*`

8. **Code Quality**
   - Check for `.eslintrc*`, `eslint.config.*`, `.prettierrc*`, `biome.json`
   - Check for `tsconfig.json` → TypeScript
   - Check for `.editorconfig`

## Output Format

Present findings as a structured summary:

```
## Detected Stack

| Layer | Technology | Confidence |
|-------|-----------|------------|
| Runtime | Node.js 20 | High |
| Frontend | Next.js 14 (App Router) | High |
| Backend | Next.js API Routes | High |
| Database | PostgreSQL (via Prisma) | High |
| Auth | NextAuth.js v5 | Medium |
| Cloud | Vercel | High |
| CI/CD | GitHub Actions | High |
| Testing | Vitest + Playwright | High |
| Language | TypeScript (strict) | High |

### Key Observations
- [Notable patterns, conventions, or architecture decisions detected]
- [Any gaps or areas where the stack is unclear]
```

If no project is detected (empty directory), report: "No existing project detected. This is a greenfield project."
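The marker-file checks above amount to a lookup table from filenames to runtimes. A minimal illustrative sketch (not the plugin's actual implementation — the plugin performs this detection via prompt instructions, and the marker subset here is abbreviated):

```python
from pathlib import Path

# Marker files mapped to the runtime they indicate (illustrative subset
# of the detection table above).
RUNTIME_MARKERS = {
    "package.json": "Node.js",
    "pyproject.toml": "Python",
    "requirements.txt": "Python",
    "go.mod": "Go",
    "Cargo.toml": "Rust",
    "Gemfile": "Ruby",
    "pom.xml": "Java/Kotlin",
}


def detect_runtime(root: str) -> str:
    """Return the first runtime whose marker file exists under root,
    or the greenfield message when no marker is found."""
    base = Path(root)
    for marker, runtime in RUNTIME_MARKERS.items():
        if (base / marker).exists():
            return runtime
    return "No existing project detected. This is a greenfield project."
```

A real detector would continue down the step list (frameworks, ORM, auth, and so on) and attach a confidence level to each finding.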
package/templates/utilities/requirements-collection.md
@@ -0,0 +1,68 @@
# Requirements Collection Utility

When this utility is invoked, use structured prompting to gather requirements from the user. Follow the framework below to ensure complete, unambiguous requirements.

## Collection Framework

### Step 1: Context Setting
Before asking questions, summarise what you already know:
- What the user has stated so far
- What you've detected from the project (via project-detection utility)
- What assumptions you're making

### Step 2: Structured Questions
Present questions in grouped sections. Use numbered multi-choice where possible to reduce friction.

**Format for multi-choice questions:**
```
**[Category]: [Question]**
1. Option A — brief description
2. Option B — brief description
3. Option C — brief description
4. Other — specify

Your choice:
```

**Format for open-ended questions:**
```
**[Category]: [Question]**
Context: [Why this matters for the decision]
Default: [Suggested default if the user doesn't have a preference]
```

### Step 3: Decision Tree Logic
After each answer, determine if follow-up questions are needed:
- If user picks a frontend framework → ask about routing strategy, state management
- If user picks a database → ask about ORM preference, migration strategy
- If user picks auth → ask about provider, session strategy, role model
- If user picks cloud → ask about deployment model, scaling requirements

### Step 4: Confirmation
Before proceeding, present a complete summary of all collected requirements and ask for confirmation:

```
## Requirements Summary

| Decision | Choice | Notes |
|----------|--------|-------|
| Project type | Full-stack web app | — |
| Frontend | Next.js 14 | App Router |
| Backend | Next.js API Routes + tRPC | Type-safe API |
| Database | PostgreSQL | Managed (Supabase) |
| ORM | Prisma | With migrations |
| Auth | NextAuth.js v5 | Google + GitHub OAuth |
| Cloud | Vercel | Edge runtime where possible |
| Monorepo | No | Single package |

Does this look correct? (yes / make changes)
```

## Guidelines

- Never assume a technology choice — always ask
- Offer sensible defaults based on detected stack and common patterns
- Keep questions concise — respect the user's time
- Group related decisions together (don't ask about database migrations before confirming database choice)
- If the user says "you decide" or "whatever you recommend", state your recommendation with reasoning and ask for confirmation
- Maximum 3 rounds of questions before presenting the summary
package/templates/utilities/template-rendering.md
@@ -0,0 +1,81 @@
# Template Rendering Utility

When generating files from templates, follow these variable substitution and conditional rendering rules.

## Variable Substitution

Templates use `{{variable_name}}` syntax for placeholder values. When rendering a template:

1. Collect all required variables from the context (project detection, requirements collection, or user input)
2. Replace every `{{variable_name}}` with the actual value
3. Flag any unreplaced variables as errors — never leave `{{...}}` in generated output

### Standard Variables

| Variable | Source | Example |
|----------|--------|---------|
| `{{project_name}}` | User input or directory name | `acme-portal` |
| `{{project_description}}` | User input | `Customer portal for Acme Corp` |
| `{{framework}}` | Requirements collection | `nextjs` |
| `{{framework_version}}` | Requirements or latest stable | `14` |
| `{{database}}` | Requirements collection | `postgresql` |
| `{{orm}}` | Requirements collection | `prisma` |
| `{{auth_strategy}}` | Requirements collection | `nextauth` |
| `{{cloud_provider}}` | Requirements collection | `vercel` |
| `{{node_version}}` | Detected or default | `20` |
| `{{package_manager}}` | Detected or default | `pnpm` |
| `{{author_name}}` | Git config or user input | `Gareth Daine` |
| `{{year}}` | Current year | `2026` |
| `{{date}}` | Current date | `2026-03-17` |

## Conditional Sections

Templates use conditional blocks for stack-dependent content:

```
{{#if database}}
## Database Setup
... database-specific content ...
{{/if}}

{{#if auth_strategy}}
## Authentication
... auth-specific content ...
{{/if}}

{{#unless monorepo}}
## Single Package Setup
... single-package content ...
{{/unless}}

{{#eq framework "nextjs"}}
## Next.js Configuration
... Next.js-specific content ...
{{/eq}}
```

### Rendering Rules

1. `{{#if variable}}...{{/if}}` — Include block only if variable is truthy (non-empty, not "none", not "false")
2. `{{#unless variable}}...{{/unless}}` — Include block only if variable is falsy
3. `{{#eq variable "value"}}...{{/eq}}` — Include block only if variable equals the specified value
4. `{{#neq variable "value"}}...{{/neq}}` — Include block only if variable does not equal value

## Rendering Process

1. **Resolve variables** — Build a complete variable map from all available sources
2. **Process conditionals** — Evaluate all conditional blocks, removing blocks whose conditions are false
3. **Substitute variables** — Replace all `{{variable}}` placeholders with values
4. **Validate output** — Check for any remaining `{{...}}` markers; if found, report missing variables
5. **Format output** — Ensure proper indentation and whitespace in the final rendered content

## Usage in Commands

Enterprise commands reference this utility by including rendered templates in their output. The rendering happens inline — Claude reads the template, collects the variables, and produces the final output in a single pass. There is no separate rendering engine; Claude IS the rendering engine.

Example flow in a command:
1. Run project detection (templates/utilities/project-detection.md)
2. Run requirements collection (templates/utilities/requirements-collection.md)
3. Read the relevant template file
4. Apply variable substitution and conditional rendering per this spec
5. Write the rendered output to the target location
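Although Claude applies these rules inline rather than through a rendering engine, the spec is mechanical enough to sketch in code. The sketch below covers a subset of the rules (`#if`, `#unless`, `#eq`, plain substitution, and the leftover-marker check); nesting and `#neq` are omitted, and the function names are illustrative, not part of the package:

```python
import re


def render(template: str, variables: dict) -> str:
    """Render a template per the rules above: conditionals first,
    then substitution, then validation of leftover markers."""

    def truthy(name: str) -> bool:
        # Truthy means non-empty and not the literal "none" or "false".
        value = str(variables.get(name, "")).strip().lower()
        return value not in ("", "none", "false")

    # {{#if var}}...{{/if}} — keep the block only when var is truthy.
    template = re.sub(
        r"\{\{#if (\w+)\}\}(.*?)\{\{/if\}\}",
        lambda m: m.group(2) if truthy(m.group(1)) else "",
        template, flags=re.S)

    # {{#unless var}}...{{/unless}} — keep the block only when var is falsy.
    template = re.sub(
        r"\{\{#unless (\w+)\}\}(.*?)\{\{/unless\}\}",
        lambda m: "" if truthy(m.group(1)) else m.group(2),
        template, flags=re.S)

    # {{#eq var "value"}}...{{/eq}} — keep the block only on exact match.
    template = re.sub(
        r'\{\{#eq (\w+) "([^"]*)"\}\}(.*?)\{\{/eq\}\}',
        lambda m: m.group(3) if str(variables.get(m.group(1))) == m.group(2) else "",
        template, flags=re.S)

    # Substitute plain {{var}} placeholders; leave unknowns for validation.
    template = re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables[m.group(1)]) if m.group(1) in variables else m.group(0),
        template)

    # Validate: any surviving {{...}} marker means a missing variable.
    leftover = re.findall(r"\{\{\w+\}\}", template)
    if leftover:
        raise ValueError(f"Missing variables: {leftover}")
    return template
```

For example, `render("Hi {{project_name}}{{#if database}} uses {{database}}{{/if}}", {"project_name": "acme-portal", "database": "postgresql"})` yields `"Hi acme-portal uses postgresql"`, while a `database` of `"none"` drops the conditional block entirely.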
package/templates/workflows/architecture-decision.md
@@ -0,0 +1,90 @@
# Workflow Template: Architecture Decision

## Context Gathering

1. **Problem statement**
   - What architectural question needs answering?
   - What triggered this decision? (new feature, scaling issue, tech debt)
   - What is the current architecture in the affected area?

2. **Constraints**
   - Timeline constraints
   - Team skill constraints
   - Budget/infrastructure constraints
   - Compatibility requirements
   - Compliance/regulatory requirements

3. **Quality attributes**
   - Which matter most? (performance, scalability, maintainability, security, developer experience)
   - What are the measurable targets?

## Analysis Steps

### Step 1: Context Documentation
- Document the current state
- Identify the forces driving this decision
- List all constraints and requirements

### Step 2: Options Generation
Generate a minimum of 3 distinct options:
- **Option A:** The conservative approach (low risk, incremental)
- **Option B:** The balanced approach (moderate risk, good ROI)
- **Option C:** The ambitious approach (higher risk, higher reward)
- Additional options as relevant

### Step 3: Trade-off Analysis
For each option, evaluate against:
- Feasibility (1-5)
- Risk (1-5)
- Time to implement (1-5)
- Long-term quality (1-5)
- Team alignment (1-5)

### Step 4: Recommendation
- Select the recommended option with rationale
- Address each major trade-off
- Define implementation implications
- Identify risks and mitigations

## Output Format (ADR)

```
# ADR-[number]: [Title]

## Status
Proposed | Accepted | Deprecated | Superseded

## Context
[What is the issue motivating this decision?]

## Decision
[What is the change that we're proposing?]

## Consequences

### Positive
- [benefit]

### Negative
- [drawback]

### Neutral
- [observation]

## Alternatives Considered

### [Alternative 1]
- Rejected because: [reason]

### [Alternative 2]
- Rejected because: [reason]
```

## Quality Checks

- [ ] Problem statement is clear and specific
- [ ] At least 3 options were considered
- [ ] Trade-offs are honestly assessed (no straw-man alternatives)
- [ ] Recommendation has clear rationale
- [ ] Consequences (positive AND negative) are documented
- [ ] Implementation path is actionable
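The trade-off analysis step reduces to scoring each option on the five criteria and ranking the totals. A minimal sketch, assuming every criterion is scored 1-5 with 5 = best (so "risk" here means risk acceptability rather than raw risk — the template itself does not fix a direction):

```python
# The five criteria named in Step 3 above.
CRITERIA = ["feasibility", "risk", "time_to_implement",
            "long_term_quality", "team_alignment"]


def rank_options(scores: dict) -> list:
    """Rank options by total score, highest first.

    scores maps an option name to a dict of criterion -> rating (1-5).
    Raises if any option is missing a criterion, since an incomplete
    scorecard would bias the comparison.
    """
    for option, ratings in scores.items():
        missing = [c for c in CRITERIA if c not in ratings]
        if missing:
            raise ValueError(f"{option} is missing ratings for: {missing}")
    totals = {option: sum(ratings[c] for c in CRITERIA)
              for option, ratings in scores.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```

The ranking is an input to Step 4, not a substitute for it: a close second on points may still win once the rationale and mitigations are written out.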
package/templates/workflows/bug-investigation.md
@@ -0,0 +1,83 @@
# Workflow Template: Bug Investigation

## Context Gathering

1. **Symptom capture**
   - What is the observed behaviour?
   - What is the expected behaviour?
   - When did this start happening?
   - Is it reproducible? How often?
   - What environment (dev/staging/prod)?

2. **Reproduction steps**
   - Exact steps to reproduce
   - Required preconditions
   - Input data that triggers the bug
   - Error messages or logs

3. **Impact assessment**
   - How many users are affected?
   - Is there a workaround?
   - What is the business impact?
   - Priority: Critical / High / Medium / Low

## Analysis Steps

### Step 1: Reproduce
- Follow the reproduction steps exactly
- Capture logs, errors, and stack traces
- Identify the exact point of failure
- Note any environmental factors

### Step 2: Isolate
- Narrow down to the specific file/function
- Check recent changes (git log, git blame)
- Identify if this is a regression or a latent bug
- Determine the root cause (not just the symptom)

### Step 3: Root Cause Analysis
- What is the actual cause of the bug?
- Why wasn't this caught earlier?
- Are there similar patterns elsewhere that might have the same issue?
- Is this a code bug, a data bug, or a configuration bug?

### Step 4: Fix Proposal
- Describe the fix approach
- Assess risk of the fix (could it break other things?)
- Identify what tests need to be added
- Consider if a broader refactor is needed

### Step 5: Implementation
- Apply the fix
- Add regression test that proves the fix works
- Verify the fix doesn't break existing tests
- Check for similar issues in related code

## Output Format

```
## Bug Report

### Symptom
[What was observed]

### Root Cause
[What actually caused it]

### Fix Applied
[What was changed and why]

### Regression Test
[Test added to prevent recurrence]

### Related Risks
[Anything else that might be affected]
```

## Quality Checks

- [ ] Root cause identified (not just symptom patched)
- [ ] Regression test added
- [ ] Existing tests still pass
- [ ] Fix handles edge cases
- [ ] No performance regression introduced
package/templates/workflows/feature-implementation.md
@@ -0,0 +1,80 @@
# Workflow Template: Feature Implementation

## Context Gathering

Before starting implementation, collect and verify:

1. **Existing codebase state**
   - What framework/stack is in use? (run project detection)
   - What patterns does the existing code follow?
   - What testing framework is configured?
   - What are the naming conventions?

2. **Feature requirements**
   - What exactly should this feature do?
   - Who is the end user?
   - What are the acceptance criteria?
   - Are there performance requirements?
   - Are there security considerations?

3. **Integration context**
   - What existing code will this feature interact with?
   - Are there external APIs or services involved?
   - What database tables/models are affected?
   - Are there UI components to create or modify?

## Analysis Steps

### Step 1: Codebase Analysis
- Identify all files relevant to this feature
- Map the data flow for the feature
- Identify shared utilities and patterns to reuse
- Flag any technical debt that might affect implementation

### Step 2: Architecture Proposal
- Define the component boundaries
- Choose appropriate patterns (repository, service, controller, etc.)
- Plan the data model changes (if any)
- Define the API contract (if applicable)

### Step 3: Implementation Plan
- Break down into ordered tasks with dependencies
- Estimate complexity per task (S/M/L)
- Identify risk points (security, performance, integration)
- Define test strategy per component

### Step 4: Implementation
For each task in order:
1. Announce the task before starting
2. Write the implementation code
3. Write the corresponding tests
4. Verify the code compiles/passes linting
5. Confirm integration with existing code

### Step 5: Verification
- Run all tests (new and existing)
- Check for TypeScript errors
- Verify lint compliance
- Manual smoke test if applicable

## Output Format

### Implementation Summary
```
Feature: [name]
Tasks completed: [N/N]
Files created: [list]
Files modified: [list]
Tests added: [count]
Test status: All passing / [failures]
```

## Quality Checks

- [ ] All new functions have JSDoc/TSDoc comments
- [ ] Error cases are handled (not just happy path)
- [ ] Input validation at system boundaries
- [ ] No hardcoded values (use env/config)
- [ ] No console.log left in production code (use logger)
- [ ] Tests cover both success and failure paths
- [ ] Existing tests still pass
package/templates/workflows/refactoring.md
@@ -0,0 +1,83 @@
# Workflow Template: Safe Refactoring

## Context Gathering

1. **Current state**
   - What code needs refactoring?
   - What is wrong with the current implementation?
   - What patterns is it using vs what patterns should it use?

2. **Target state**
   - What should the code look like after refactoring?
   - What patterns should be applied?
   - What quality attributes should improve? (readability, performance, maintainability)

3. **Constraints**
   - What can't change? (public APIs, database schema, external contracts)
   - What is the risk tolerance?
   - Is there test coverage for the affected code?

## Analysis Steps

### Step 1: Assess Current State
- Read all code to be refactored
- Map dependencies (what depends on this code?)
- Check test coverage (are there tests for this?)
- Identify the public interface (what must not change?)

### Step 2: Define Target State
- Describe the desired end state
- Identify which patterns to apply
- Define what "done" looks like
- Estimate the size of change

### Step 3: Plan Incremental Steps
Break the refactor into small, safe increments. Each increment must:
- Be independently deployable
- Not break existing tests
- Have a clear rollback path

### Step 4: Execute Incrementally
For each step:
1. Make the change
2. Run tests — they must pass
3. Verify no regressions
4. Commit (if appropriate)

### Step 5: Verify
- All existing tests pass
- New tests added for new patterns
- Public API unchanged (or migration documented)
- Performance not degraded

## Output Format

```
## Refactoring Summary

### Before
[Description of the original state]

### After
[Description of the refactored state]

### Changes Made
| Step | Change | Tests |
|------|--------|-------|
| 1 | [change] | Passing |
| 2 | [change] | Passing |

### Quality Improvement
- Readability: [improved/unchanged]
- Maintainability: [improved/unchanged]
- Performance: [improved/unchanged]
- Test coverage: [from X% to Y%]
```

## Quality Checks

- [ ] All existing tests pass after each step
- [ ] No public API changes (or documented)
- [ ] Code is simpler/clearer than before
- [ ] No new dependencies introduced unnecessarily
- [ ] Performance is equal or better
package/templates/workflows/spike-exploration.md
@@ -0,0 +1,82 @@
# Workflow Template: Spike / Time-boxed Exploration

## Context Gathering

1. **Hypothesis**
   - What are we trying to learn or prove?
   - What question will this spike answer?
   - What does success look like?

2. **Time box**
   - Maximum time allocated for this spike
   - What happens if we don't find an answer in time?

3. **Scope boundaries**
   - What is in scope for exploration?
   - What is explicitly out of scope?
   - What deliverables are expected at the end?

## Analysis Steps

### Step 1: Define the Hypothesis
State clearly:
- "We believe [approach] will [outcome]"
- "We will know this is true when [measurable result]"
- "Time box: [duration]"

### Step 2: Plan the Exploration
- List specific things to try/investigate
- Order by likelihood of providing answers
- Identify dependencies or prerequisites
- Set checkpoint times

### Step 3: Execute
- Try each approach in order
- Document findings as you go (don't rely on memory)
- If an approach fails, note WHY and move on
- Stop when the time box expires, even if incomplete

### Step 4: Capture Findings
Document everything learned, including dead ends.

## Output Format

```
## Spike Report

### Hypothesis
[What we set out to learn]

### Time Box
[Duration] | Started: [time] | Ended: [time]

### Findings

#### Approach 1: [Name]
- **Result:** Success / Partial / Failed
- **Evidence:** [What we observed]
- **Notes:** [Key learnings]

#### Approach 2: [Name]
- **Result:** Success / Partial / Failed
- **Evidence:** [What we observed]
- **Notes:** [Key learnings]

### Recommendation
[Based on findings, what should we do next?]

### Open Questions
- [What we still don't know]

### Artefacts
- [Links to prototype code, diagrams, or docs produced]
```

## Quality Checks

- [ ] Hypothesis was clearly stated before starting
- [ ] Time box was respected
- [ ] All approaches tried were documented (including failures)
- [ ] Findings are specific and evidence-based
- [ ] Recommendation is actionable
- [ ] Open questions are captured for follow-up