@forwardimpact/schema 0.8.2 → 0.9.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,8 +1,9 @@
  # yaml-language-server: $schema=https://www.forwardimpact.team/schema/json/capability.schema.json
 
+ id: scale
  name: Scale
  emojiIcon: 📐
- ordinalRank: 4
+ ordinalRank: 7
  description: |
  Building systems that grow gracefully.
  Encompasses architecture, code quality, testing, performance,
@@ -44,6 +45,264 @@ managementResponsibilities:
  architecture governance, and represent technical priorities at executive
  level
  skills:
+ - id: architecture_design
+ name: Architecture & Design
+ human:
+ description:
+ Ability to design software systems that are scalable, maintainable, and
+ fit for purpose. In the AI era, this includes designing systems that
+ effectively leverage AI capabilities while maintaining human oversight.
+ levelDescriptions:
+ awareness:
+ You understand basic architectural concepts (separation of concerns,
+ modularity, coupling) and can read architecture diagrams. You follow
+ established patterns with guidance.
+ foundational:
+ You explain and apply common patterns (MVC, microservices,
+ event-driven) to familiar problems. You contribute to design
+ discussions and identify when existing patterns don't fit.
+ working:
+ You design components and services independently for moderate
+ complexity. You make appropriate trade-off decisions, document design
+ rationale, and consider AI integration points in your designs.
+ practitioner:
+ You design complex multi-component systems end-to-end, evaluate
+ architectural options for large initiatives across teams, guide
+ technical decisions for your area, and mentor engineers on
+ architecture. You balance elegance with delivery needs.
+ expert:
+ You define architecture standards and patterns across the business
+ unit. You innovate on approaches to large-scale challenges, shape
+ AI-integrated system design, and are recognized externally as an
+ architecture authority.
+ agent:
+ name: architecture-design
+ description: |
+ Guide for designing software systems and making architectural
+ decisions.
+ useWhen: |
+ Asked to design a system, evaluate architecture options, or make
+ structural decisions about code organization.
+ stages:
+ specify:
+ focus: |
+ Define system requirements and constraints before design.
+ Clarify functional and non-functional requirements.
+ readChecklist:
+ - Document functional requirements and use cases
+ - Identify non-functional requirements (scale, latency,
+ availability)
+ - Document system constraints and integration points
+ - Identify stakeholders and their concerns
+ - Mark ambiguities with [NEEDS CLARIFICATION]
+ confirmChecklist:
+ - Functional requirements are documented
+ - Non-functional requirements are specified
+ - Constraints are identified
+ - Stakeholder concerns are understood
+ plan:
+ focus: |
+ Understand requirements and identify key architectural decisions.
+ Document trade-offs and design rationale.
+ readChecklist:
+ - Clarify functional and non-functional requirements
+ - Identify constraints (existing systems, team skills, timeline)
+ - Document key decisions and trade-offs
+ - Design for anticipated change
+ confirmChecklist:
+ - Requirements are clearly understood
+ - Key decisions are documented with rationale
+ - Trade-offs are explicit
+ - Failure modes are considered
+ onboard:
+ focus: |
+ Set up the development environment for the planned
+ architecture. Install frameworks, configure project
+ structure, and verify tooling.
+ readChecklist:
+ - Install planned frameworks and dependencies
+ - Create project structure matching architecture design
+ - Configure Mermaid rendering for architecture docs
+ - Set up ADR directory for decision records
+ - Configure linter and formatter for the project
+ confirmChecklist:
+ - Project structure reflects architectural boundaries
+ - All planned frameworks installed and importable
+ - Documentation tooling renders diagrams correctly
+ - Linter and formatter configured and passing
+ - Build system compiles without errors
+ code:
+ focus: |
+ Implement architecture with clear boundaries and interfaces.
+ Ensure components can evolve independently.
+ readChecklist:
+ - Define clear interfaces between components
+ - Implement with appropriate patterns
+ - Document design decisions in code
+ - Test architectural boundaries
+ confirmChecklist:
+ - Dependencies are minimal and explicit
+ - Interfaces are well-defined
+ - Design patterns are documented
+ - Architecture tests pass
+ review:
+ focus: |
+ Validate architecture meets requirements and is maintainable.
+ Ensure scalability and security are addressed.
+ readChecklist:
+ - Verify architecture meets requirements
+ - Review for scalability concerns
+ - Check security implications
+ - Validate documentation completeness
+ confirmChecklist:
+ - Scalability requirements are addressed
+ - Security implications are reviewed
+ - Architecture is documented
+ - Design is maintainable
+ deploy:
+ focus: |
+ Deploy architecture and verify it performs as designed
+ in production environment.
+ readChecklist:
+ - Deploy system components to production
+ - Verify architectural boundaries work under load
+ - Monitor performance against requirements
+ - Document any operational learnings
+ confirmChecklist:
+ - System deployed successfully
+ - Performance meets requirements
+ - Monitoring confirms design assumptions
+ - Operational procedures are documented
+ toolReferences:
+ - name: Mermaid
+ url: https://mermaid.js.org/intro/
+ simpleIcon: mermaid
+ description: Text-based diagramming for architecture documentation
+ useWhen: Creating architecture diagrams in markdown or documentation
+ - name: ADR Tools
+ url: https://adr.github.io/
+ simpleIcon: task
+ description: Architecture Decision Records tooling
+ useWhen: Documenting architectural decisions and trade-offs
+ instructions: |
+ ## Step 1: Identify Key Decisions
+
+ Architecture is decisions that are hard to change later.
+ Before coding, document: data storage (SQL vs NoSQL),
+ service boundaries (monolith vs services), and communication
+ patterns (sync HTTP vs async events).
+
+ ## Step 2: Visualize with Mermaid Diagrams
+
+ Create architecture diagrams in markdown for version control.
+ Use C4 diagram types for different levels of detail (context,
+ container, component).
+
+ ## Step 3: Document Trade-offs with ADRs
+
+ Create an Architecture Decision Record for each significant
+ choice. Include status, context, decision, and consequences
+ (both positive and negative).
+
+ ## Step 4: Structure Code with Clear Boundaries
+
+ Organize code so each domain is independent. Use api, service,
+ repository, and types layers within each module. Cross-cutting
+ concerns go in a shared directory.
+
+ ## Step 5: Define Explicit Interfaces
+
+ Define contracts at domain boundaries. Hide implementations
+ behind interfaces so modules can evolve independently.
+ installScript: |
+ set -e
+ npm install -g @mermaid-js/mermaid-cli
+ npm install -g adr-log
+ command -v mmdc
+ implementationReference: |
+ ## Mermaid Architecture Diagram
+
+ ```mermaid
+ graph TB
+ subgraph "API Layer"
+ API[API Gateway]
+ end
+ subgraph "Domain Modules"
+ Users[Users Module]
+ Orders[Orders Module]
+ end
+ subgraph "Data Layer"
+ DB[(PostgreSQL)]
+ end
+ API --> Users
+ API --> Orders
+ Users --> DB
+ Orders --> DB
+ ```
+
+ ## ADR Template
+
+ ```markdown
+ # ADR 001: Start with Modular Monolith
+
+ ## Status
+ Accepted
+
+ ## Context
+ We need to deliver quickly but maintain future flexibility.
+
+ ## Decision
+ Start with a modular monolith with clear domain boundaries.
+
+ ## Consequences
+ - (+) Faster initial development
+ - (+) Easier refactoring while boundaries evolve
+ - (-) Must enforce boundaries through code review
+ ```
+
+ ## Module Structure
+
+ ```
+ src/
+ modules/
+ users/
+ api.ts
+ service.ts
+ repository.ts
+ types.ts
+ orders/
+ api.ts
+ service.ts
+ repository.ts
+ types.ts
+ shared/
+ ```
+
+ ## Interface at Domain Boundary
+
+ ```javascript
+ // Define contracts at domain boundaries
+ class OrderServiceImpl {
+ constructor(repo) { this.repo = repo; }
+ async create(order) { return this.repo.save(order); }
+ async get(id) { return this.repo.findById(id); }
+ }
+ ```
+
+ ## Verification
+
+ Your architecture is sound when:
+ - Each module can be understood without reading others
+ - Dependencies flow inward (api -> service -> repository)
+ - ADRs explain "why" for each major decision
+ - Diagrams render correctly in your documentation
+
+ ## When to Extract Services
+
+ Extract only when you have evidence:
+ - Independent scaling is required (measured, not guessed)
+ - Different deployment cadences are blocking teams
+ - Team boundaries naturally align with service boundaries
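The "Test architectural boundaries" item in the code stage above never gets a concrete example. A minimal sketch, assuming the api -> service -> repository layering from the module structure; `violatesBoundary` is a hypothetical helper that you would feed from your real import graph:

```javascript
// Minimal architecture test sketch, assuming the api -> service -> repository
// layering shown above. layerRank and violatesBoundary are hypothetical
// helpers; wire them to your actual imports (e.g. via a lint rule or a
// script that parses import statements).
const layerRank = { api: 0, service: 1, repository: 2, types: 3 };

// An import is fine when it points to the same or a deeper layer;
// anything pointing outward (toward api) is a violation.
function violatesBoundary(importerLayer, importedLayer) {
  return layerRank[importedLayer] < layerRank[importerLayer];
}

console.log(violatesBoundary('repository', 'api')); // true: outward import
console.log(violatesBoundary('api', 'repository')); // false: inward is allowed
```

A custom ESLint rule or a small CI script over parsed imports can apply the same check automatically.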
  - id: cloud_platforms
  name: Cloud Platforms
  human:
@@ -92,17 +351,19 @@ skills:
  - Cost constraints are defined
  - Performance requirements are clear
  plan:
- focus: Selecting and designing cloud architecture
+ focus: |
+ Select appropriate cloud services and design for availability,
+ security, and cost efficiency.
  readChecklist:
- - Evaluate service options for the use case
- - Plan multi-AZ deployment for availability
- - Design IAM roles with least privilege
- - Estimate costs and plan resource sizing
+ - Identify service requirements
+ - Select appropriate cloud services
+ - Plan for high availability
+ - Consider security and cost
  confirmChecklist:
  - Service selection matches requirements
- - Architecture designed for availability
- - Security approach documented
- - Cost estimate prepared
+ - Availability approach planned
+ - Security model defined
+ - Cost controls considered
  onboard:
  focus: |
  Set up cloud development environment. Configure CLI tools,
@@ -121,29 +382,34 @@ skills:
  - Environment variables configured securely
  - Infrastructure as code directory structure created
  code:
- focus: Implementing cloud infrastructure and deployments
+ focus: |
+ Implement cloud infrastructure with security best practices.
+ Use infrastructure as code for reproducibility.
  readChecklist:
- - Define infrastructure as code (Terraform/CloudFormation)
- - Configure security groups and network policies
- - Set up encryption at rest and in transit
- - Implement monitoring and alerting
+ - Define infrastructure as code
+ - Configure security groups and IAM
+ - Set up monitoring and alerting
+ - Implement deployment automation
  confirmChecklist:
- - Infrastructure defined as code
+ - Multi-AZ deployment for availability
  - Security groups properly configured
- - Encryption enabled for data
- - Monitoring and alerting in place
+ - IAM follows least privilege
+ - Data encrypted at rest and in transit
+ - Infrastructure defined as code
  review:
- focus: Validating cloud configuration and security
+ focus: |
+ Validate security, availability, and operational readiness.
+ Ensure cost controls are in place.
  readChecklist:
- - Verify IAM follows least privilege
- - Check multi-AZ deployment
- - Validate cost controls are in place
- - Review security configuration
+ - Verify security configuration
+ - Test availability and failover
+ - Review cost projections
+ - Validate monitoring coverage
  confirmChecklist:
- - IAM permissions are minimal
- - Multi-AZ deployment confirmed
+ - Security review completed
+ - Monitoring and alerting in place
  - Cost controls established
- - Security review complete
+ - Operational runbooks exist
  deploy:
  focus: |
  Deploy cloud infrastructure and verify production readiness.
@@ -158,21 +424,226 @@ skills:
  - Failover tested in production
  - Monitoring is operational
  - Cost tracking is active
+ toolReferences:
+ - name: Terraform
+ url: https://developer.hashicorp.com/terraform/docs
+ simpleIcon: terraform
+ description:
+ Infrastructure as code tool for provisioning cloud resources
+ useWhen: Defining and managing cloud infrastructure as code
+ - name: AWS Lambda
+ url: https://docs.aws.amazon.com/lambda/
+ simpleIcon: task
+ description: Serverless compute service for event-driven applications
+ useWhen: Building event-driven functions without managing servers
+ - name: Amazon EventBridge
+ url: https://docs.aws.amazon.com/eventbridge/
+ simpleIcon: task
+ description: Serverless event bus for application integration
+ useWhen: Building event-driven architectures or integrating AWS services
+ - name: Amazon DynamoDB
+ url: https://docs.aws.amazon.com/dynamodb/
+ simpleIcon: task
+ description:
+ Serverless NoSQL database with single-digit millisecond latency
+ useWhen:
+ Building serverless applications requiring fast key-value or document
+ storage
+ - name: AWS Step Functions
+ url: https://docs.aws.amazon.com/step-functions/
+ simpleIcon: task
+ description:
+ Serverless workflow orchestration for coordinating AWS services
+ useWhen:
+ Orchestrating multi-step serverless workflows with error handling,
+ retries, and state management
+ instructions: |
+ ## Step 1: Choose Your Architecture Pattern
+
+ Event-driven serverless works best when workload is
+ unpredictable or spiky, you want pay-per-use pricing,
+ and components can operate independently.
+
+ ## Step 2: Define Infrastructure with Terraform
+
+ Use Terraform to define cloud resources declaratively.
+ Define Lambda functions, DynamoDB tables, and IAM roles
+ in HCL. Apply with terraform init, plan, apply.
+
+ ## Step 3: Define the Event Flow
+
+ Wire EventBridge rules to trigger Lambda functions based
+ on event patterns. Define source, detail-type, and target.
+
+ ## Step 4: Implement Lambda Functions
+
+ Initialize SDK clients outside the handler for reuse across
+ invocations. Keep handlers focused on a single responsibility.
+
+ ## Step 5: Configure IAM (Least Privilege)
+
+ Grant only the specific actions needed (e.g., dynamodb:GetItem,
+ dynamodb:PutItem) on the specific resource ARN. Never use
+ wildcard permissions.
+
+ ## Step 6: Orchestrate with Step Functions
+
+ Use Step Functions to coordinate multi-step workflows. Define
+ state machines in Amazon States Language. Use Step Functions
+ for workflows that need error handling, retries, parallel
+ execution, or human approval gates.
+ installScript: |
+ set -e
+ brew tap hashicorp/tap
+ brew install hashicorp/tap/terraform
+ terraform --version
+ command -v aws
  implementationReference: |
- ## Service Categories
-
- | Category | Services | Use Case |
- |----------|----------|----------|
- | Compute | EC2, ECS, Lambda | VMs, Containers, Serverless |
- | Storage | S3, EBS, EFS | Objects, Blocks, Files |
- | Database | RDS, DynamoDB | SQL, NoSQL |
- | Messaging | SQS, SNS, Kinesis | Queues, Pub/Sub, Streaming |
-
- ## Cloud-Native Principles
- - Design for failure
- - Use managed services
- - Automate everything
- - Monitor and alert
+ ## Terraform Lambda + DynamoDB
+
+ ```hcl
+ terraform {
+ required_providers {
+ aws = { source = "hashicorp/aws", version = "~> 5.0" }
+ }
+ }
+
+ resource "aws_lambda_function" "process_order" {
+ filename = "lambda.zip"
+ function_name = "process-order"
+ role = aws_iam_role.lambda_role.arn
+ handler = "handler.process_order"
+ runtime = "python3.11"
+ environment {
+ variables = { TABLE_NAME = aws_dynamodb_table.orders.name }
+ }
+ }
+
+ resource "aws_dynamodb_table" "orders" {
+ name = "orders"
+ billing_mode = "PAY_PER_REQUEST"
+ hash_key = "pk"
+ range_key = "sk"
+ attribute {
+ name = "pk"
+ type = "S"
+ }
+ attribute {
+ name = "sk"
+ type = "S"
+ }
+ }
+ ```
+
+ ## EventBridge Rule
+
+ ```hcl
+ resource "aws_cloudwatch_event_rule" "order_created" {
+ name = "order-created"
+ event_pattern = jsonencode({
+ source = ["orders"]
+ detail-type = ["OrderCreated"]
+ })
+ }
+
+ resource "aws_cloudwatch_event_target" "process_order" {
+ rule = aws_cloudwatch_event_rule.order_created.name
+ target_id = "ProcessOrder"
+ arn = aws_lambda_function.process_order.arn
+ }
+ ```
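One piece the rule and target above do not cover: EventBridge also needs permission to invoke the function, or deliveries are rejected. A minimal sketch, reusing the resource names from the examples above:

```hcl
# Without this permission, EventBridge can match events but Lambda
# rejects the invocation. Resource names follow the examples above.
resource "aws_lambda_permission" "allow_eventbridge" {
  statement_id  = "AllowEventBridgeInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.process_order.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.order_created.arn
}
```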
+
+ ## Lambda Handler
+
+ ```python
+ import os, boto3
+
+ dynamodb = boto3.resource('dynamodb')
+ table = dynamodb.Table(os.environ['TABLE_NAME'])
+
+ def process_order(event, context):
+ order = event['detail']
+ table.put_item(Item={
+ 'pk': f"ORDER#{order['id']}",
+ 'sk': 'METADATA',
+ 'status': 'processing',
+ **order
+ })
+ return {'statusCode': 200}
+ ```
+
+ ## IAM Least Privilege
+
+ ```hcl
+ resource "aws_iam_role_policy" "lambda_dynamodb" {
+ name = "dynamodb-access"
+ role = aws_iam_role.lambda_role.id
+ policy = jsonencode({
+ Version = "2012-10-17"
+ Statement = [{
+ Effect = "Allow"
+ Action = ["dynamodb:GetItem", "dynamodb:PutItem"]
+ Resource = aws_dynamodb_table.orders.arn
+ }]
+ })
+ }
+ ```
+
+ ## Verification
+
+ ```bash
+ terraform apply
+ aws events put-events --entries '[{"Source":"orders","DetailType":"OrderCreated","Detail":"{\"id\":\"123\"}"}]'
+ ```
+
+ Check CloudWatch Logs for function execution.
+
+ ## Step Functions State Machine
+
+ ```hcl
+ resource "aws_sfn_state_machine" "order_workflow" {
+ name = "order-processing"
+ role_arn = aws_iam_role.sfn_role.arn
+
+ definition = jsonencode({
+ StartAt = "ValidateOrder"
+ States = {
+ ValidateOrder = {
+ Type = "Task"
+ Resource = aws_lambda_function.validate_order.arn
+ Next = "ProcessPayment"
+ Retry = [{
+ ErrorEquals = ["States.TaskFailed"]
+ IntervalSeconds = 2
+ MaxAttempts = 3
+ BackoffRate = 2.0
+ }]
+ Catch = [{
+ ErrorEquals = ["States.ALL"]
+ Next = "OrderFailed"
+ }]
+ }
+ ProcessPayment = {
+ Type = "Task"
+ Resource = aws_lambda_function.process_payment.arn
+ Next = "FulfillOrder"
+ }
+ FulfillOrder = {
+ Type = "Task"
+ Resource = aws_lambda_function.fulfill_order.arn
+ End = true
+ }
+ OrderFailed = {
+ Type = "Fail"
+ Cause = "Order processing failed"
+ }
+ }
+ })
+ }
+ ```
+
+ ## When to Use Step Functions
+
+ | Use Case | Approach |
+ |----------|----------|
+ | Simple event → action | EventBridge + Lambda |
+ | Multi-step with retries | Step Functions |
+ | Long-running workflows | Step Functions (Express or Standard) |
+ | Parallel fan-out | Step Functions Parallel state |
  - id: code_quality
  name: Code Quality & Review
  human:
@@ -205,10 +676,12 @@ skills:
  engineering.
  agent:
  name: code-quality-review
- description: Guide for writing quality code and conducting code reviews.
+ description: |
+ Guide for reviewing code quality, identifying issues, and suggesting
+ improvements.
  useWhen: |
- Reviewing code, checking for best practices, or verifying AI-generated
- code before committing.
+ Asked to review code, check for best practices, or conduct code
+ reviews.
  stages:
  specify:
  focus: |
@@ -226,17 +699,19 @@ skills:
  - Test requirements are specified
  - Review approach matches risk level
  plan:
- focus: Planning for quality before implementation
+ focus: |
+ Understand code review scope and establish review criteria.
+ Consider what quality standards apply.
  readChecklist:
- - Review project coding conventions
- - Plan testing strategy for the feature
- - Identify edge cases to handle
- - Consider error handling approach
+ - Identify code review scope
+ - Understand applicable standards
+ - Plan review approach
+ - Consider risk level
  confirmChecklist:
- - Coding conventions understood
- - Testing strategy defined
- - Edge cases identified
- - Error handling planned
+ - Review scope is clear
+ - Standards are understood
+ - Review approach is planned
+ - Risk level is assessed
  onboard:
  focus: |
  Set up code quality tooling. Configure linters, formatters,
@@ -256,29 +731,33 @@ skills:
  - Editor integration works (format-on-save, inline errors)
  - CI pipeline includes quality checks
  code:
- focus: Writing and testing quality code
+ focus: |
+ Write clean, maintainable, tested code. Follow project
+ conventions and ensure adequate coverage.
  readChecklist:
- - Follow project coding conventions
- - Write tests alongside implementation
- - Handle error conditions appropriately
- - Self-review before requesting review
+ - Write readable, well-structured code
+ - Add appropriate tests
+ - Follow project conventions
+ - Document non-obvious logic
  confirmChecklist:
- - Code follows project conventions
+ - Code compiles and passes all tests
  - Changes are covered by tests
- - Error handling is appropriate
- - Self-review completed
+ - Code follows project conventions
+ - No unnecessary complexity
  review:
- focus: Verifying code quality and correctness
+ focus: |
+ Verify correctness, maintainability, and adherence to
+ standards. Ensure no code is shipped that isn't understood.
  readChecklist:
  - Verify code does what it claims
- - Check test coverage is adequate
- - Evaluate maintainability
- - Ensure no code you don't understand
+ - Check test coverage
+ - Review for maintainability
+ - Confirm style compliance
  confirmChecklist:
- - Code compiles and passes all tests
  - No obvious security vulnerabilities
- - No unnecessary complexity
+ - Error handling is appropriate
  - Documentation updated if needed
+ - No code you don't fully understand
  deploy:
  focus: |
  Merge and deploy reviewed code. Verify quality checks pass
@@ -293,13 +772,157 @@ skills:
  - CI pipeline passes all checks
  - No regressions detected
  - Deployment verified
+ toolReferences:
+ - name: ESLint
+ url: https://eslint.org/docs/latest/
+ simpleIcon: eslint
+ description: Pluggable JavaScript/TypeScript linting utility
+ useWhen:
+ Enforcing code style and catching errors in JavaScript/TypeScript
+ projects
+ - name: Ruff
+ url: https://docs.astral.sh/ruff/
+ simpleIcon: ruff
+ description: Extremely fast Python linter and formatter
+ useWhen: Enforcing code style and catching errors in Python projects
+ - name: Prettier
+ url: https://prettier.io/docs/en/
+ simpleIcon: prettier
+ description: Opinionated code formatter
+ useWhen:
+ Enforcing consistent code formatting across JavaScript/TypeScript
+ codebases
+ - name: SonarQube
+ url: https://docs.sonarsource.com/sonarqube/latest/
+ simpleIcon: sonarqubeserver
+ description: Code quality and security analysis platform
+ useWhen:
+ Analyzing code quality metrics, identifying code smells, or tracking
+ quality gates
+ - name: Playwright
+ url: https://playwright.dev/docs/intro
+ simpleIcon: playwright
+ description: End-to-end testing framework for web applications
+ useWhen:
+ Writing and running end-to-end tests to verify full application
+ behavior across browsers
+ instructions: |
+ ## Step 1: Set Up Linting
+
+ Configure ESLint for JavaScript/TypeScript using flat config.
+ For Python, configure Ruff in pyproject.toml with appropriate
+ rule selections (E, F, I at minimum).
+
+ ## Step 2: Configure Formatting
+
+ Set up Prettier for JS/TS with consistent settings. Enable
+ format-on-save in your editor. Ruff handles Python formatting.
+
+ ## Step 3: Add Pre-commit Hooks
+
+ Install husky and lint-staged to run linting and formatting
+ automatically on staged files before each commit.
+
+ ## Step 4: Set Up Quality Gates (Optional)
+
+ Configure SonarQube to analyze code quality metrics. Set
+ quality gates for technical debt ratio, code smells, and
+ maintainability rating.
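The quality-gate step above needs a scanner configuration to run against a project; a minimal `sonar-project.properties` sketch, where the project key and server URL are placeholders for illustration:

```properties
# sonar-project.properties
# Placeholder values for illustration; use your real project key and server URL.
sonar.projectKey=my-service
sonar.sources=src
sonar.tests=tests
sonar.host.url=http://localhost:9000
```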
831
+
832
+ ## Step 5: Add End-to-End Tests
833
+
834
+ Use Playwright to write E2E tests that verify full application
835
+ behavior in a real browser. Test critical user flows end-to-end
836
+ to catch integration issues that unit tests miss.
837
+ installScript: |
838
+ set -e
839
+ npm install -D eslint prettier husky lint-staged
840
+ pip install ruff
841
+ npx playwright install --with-deps chromium
842
+ npx eslint --version
843
+ ruff --version
844
+ npx playwright --version
296
845
  implementationReference: |
297
- ## Review Checklist
846
+ ## ESLint Flat Config
847
+
848
+ ```javascript
849
+ // eslint.config.js
850
+ export default [
851
+ { rules: { "no-unused-vars": "error", "no-console": "warn" } }
852
+ ];
853
+ ```
854
+
855
+ ## Ruff Config
856
+
857
+ ```toml
858
+ # pyproject.toml
859
+ [tool.ruff]
860
+ line-length = 88
861
+ select = ["E", "F", "I"]
862
+ ```
863
+
864
+ ## Prettier Config
298
865
 
299
- 1. **Correctness**: Does it work as intended?
300
- 2. **Tests**: Is it properly tested?
301
- 3. **Maintainability**: Will it be easy to change?
302
- 4. **Style**: Does it follow conventions?
866
+ ```json
867
+ { "semi": true, "singleQuote": true, "trailingComma": "es5" }
868
+ ```
869
+
870
+ ## Pre-commit Hooks
871
+
872
+ ```bash
873
+ npx husky init
874
+ echo "npx lint-staged" > .husky/pre-commit
875
+ ```
876
+
877
+ ```json
878
+ "lint-staged": {
879
+ "*.{js,ts}": ["eslint --fix", "prettier --write"],
880
+ "*.py": ["ruff check --fix", "ruff format"]
881
+ }
882
+ ```
883
+
884
+ ## Code Review Checklist
885
+
886
+ 1. **Correctness** — Does it do what it claims?
887
+ 2. **Tests** — Are changes covered? Edge cases?
888
+ 3. **Readability** — Can you understand it in 6 months?
889
+ 4. **No surprises** — Side effects documented?
890
+
891
+ ## Playwright E2E Tests
892
+
893
+ ```javascript
894
+ // tests/app.spec.js
895
+ import { test, expect } from '@playwright/test';
896
+
897
+ test('homepage loads and displays content', async ({ page }) => {
898
+ await page.goto('/');
899
+ await expect(page.locator('h1')).toBeVisible();
900
+ });
901
+
902
+ test('user can submit form', async ({ page }) => {
903
+ await page.goto('/form');
904
+ await page.fill('[name="email"]', 'test@example.com');
905
+ await page.click('button[type="submit"]');
906
+ await expect(page.locator('.success')).toBeVisible();
907
+ });
908
+ ```
909
+
910
+ ```javascript
911
+ // playwright.config.js
912
+ import { defineConfig } from '@playwright/test';
913
+ export default defineConfig({
914
+ testDir: './tests',
915
+ use: { baseURL: 'http://localhost:3000' },
916
+ webServer: { command: 'npm run dev', port: 3000 },
917
+ });
918
+ ```
919
+
920
+ ## Verification
921
+
922
+ Run locally before commit:
923
+ ```bash
924
+ npx eslint . && npx prettier --check . && npm test
925
+ ```
303
926
  - id: data_modeling
304
927
  name: Data Modeling
305
928
  human:
@@ -353,17 +976,19 @@ skills:
353
976
  - Performance requirements are specified
354
977
  - Compliance needs are clear
355
978
  plan:
356
- focus: Understanding data requirements and designing schema
979
+ focus: |
980
+ Understand data requirements and select appropriate storage
981
+ technology. Plan schema with query patterns in mind.
357
982
  readChecklist:
358
- - Gather data requirements and access patterns
359
- - Choose appropriate storage technology
360
- - Design normalized schema
361
- - Plan indexing strategy
983
+ - Identify data requirements and access patterns
984
+ - Select appropriate storage technology
985
+ - Plan normalization approach
986
+ - Design indexing strategy
362
987
  confirmChecklist:
363
988
  - Requirements understood
364
- - Storage technology selected
365
- - Schema design documented
366
- - Index strategy planned
989
+ - Appropriate storage technology selected
990
+ - Schema design planned
991
+ - Query patterns identified
367
992
  onboard:
368
993
  focus: |
369
994
  Set up the database environment. Install ORM/query tools,
@@ -379,32 +1004,36 @@ skills:
  - Database running locally and accepting connections
  - ORM/client configured and connected
  - Migration tooling initialized and working
- - Test data can be seeded and queried
  - Database credentials stored securely in .env
+ - Schema management directory structure created
  code:
- focus: Implementing schema and migrations
+ focus: |
+ Implement schema with appropriate normalization and indexes.
+ Plan safe migrations for existing data.
  readChecklist:
- - Create database migrations
- - Implement schema changes
+ - Create schema with appropriate normalization
  - Add indexes for query patterns
- - Write efficient queries
+ - Implement safe migration plan
+ - Document data model
  confirmChecklist:
- - Schema implemented correctly
+ - Schema normalized appropriately
  - Indexes support query patterns
- - Migrations are reversible
- - Queries are optimized
+ - Migration plan is safe
+ - Backward compatibility maintained
  review:
- focus: Validating schema design and performance
+ focus: |
+ Validate schema meets requirements and migrations are safe.
+ Ensure data integrity is maintained.
  readChecklist:
- - Verify schema matches requirements
- - Check migration safety
- - Validate query performance
- - Review backward compatibility
+ - Test query performance
+ - Verify migration safety
+ - Check data integrity
+ - Review documentation
  confirmChecklist:
- - Schema meets requirements
- - Migrations tested on production-like data
- - Query performance acceptable
- - Backward compatibility maintained
+ - Query performance validated
+ - Migration tested on production-like data
+ - Data integrity verified
+ - Documentation complete
  deploy:
  focus: |
  Deploy schema changes to production safely.
@@ -419,12 +1048,624 @@ skills:
  - Data integrity verified
  - Performance meets requirements
  - Rollback procedure tested
+ toolReferences:
+ - name: PostgreSQL
+ url: https://www.postgresql.org/docs/
+ simpleIcon: postgresql
+ description: Advanced open source relational database
+ useWhen:
+ Building applications requiring ACID transactions and complex queries
+ - name: Prisma
+ url: https://www.prisma.io/docs/
+ simpleIcon: prisma
+ description:
+ Type-safe ORM for Node.js and TypeScript which works well with
+ Supabase
+ useWhen:
+ Building type-safe database access layers in Node.js applications
+ instructions: |
+ ## Step 1: Create a Prisma User in Supabase
+
+ In the Supabase SQL Editor, create a dedicated Prisma user
+ with appropriate privileges. Grant usage, create, and all
+ permissions on the public schema.
+
+ ## Step 2: Initialize Prisma Project
+
+ Install Prisma and TypeScript dependencies. Run prisma init
+ to scaffold the schema and .env file.
+
+ ## Step 3: Configure Supabase Connection
+
+ Get your Supavisor Session pooler string from Supabase
+ Dashboard. Use port 5432 for session mode (migrations).
+ For serverless, also configure transaction mode (port 6543).
+
+ ## Step 4: Define Your Schema
+
+ Define models with relationships, indexes, and constraints
+ in prisma/schema.prisma. Use uuid for IDs and add timestamps.
+
+ ## Step 5: Run Migrations
+
+ Run prisma migrate dev to create tables and generate the
+ client. Verify tables in Supabase Dashboard.
+
+ ## Step 6: Use the Prisma Client
+
+ Import PrismaClient and use type-safe queries. Always
+ disconnect in finally blocks and handle errors.
+ installScript: |
+ set -e
+ npm install prisma @prisma/client typescript ts-node --save-dev
+ npx prisma init
+ npx prisma --version
+ implementationReference: |
+ ## Supabase Prisma User Setup
+
+ ```sql
+ create user "prisma" with password 'your_secure_password' bypassrls createdb;
+ grant "prisma" to "postgres";
+ grant usage on schema public to prisma;
+ grant create on schema public to prisma;
+ grant all on all tables in schema public to prisma;
+ grant all on all routines in schema public to prisma;
+ grant all on all sequences in schema public to prisma;
+ ```
+
+ ## Connection Strings
+
+ ```env
+ # Session mode (migrations)
+ DATABASE_URL="postgres://prisma.[REF]:[PASS]@[REGION].pooler.supabase.com:5432/postgres"
+
+ # Transaction mode (serverless)
+ DATABASE_URL="postgres://prisma.[REF]:[PASS]@[REGION].pooler.supabase.com:6543/postgres?pgbouncer=true"
+ DIRECT_URL="postgres://prisma.[REF]:[PASS]@[REGION].pooler.supabase.com:5432/postgres"
+ ```
+
+ ## Schema Example
+
+ ```prisma
+ generator client {
+   provider = "prisma-client-js"
+ }
+
+ datasource db {
+   provider  = "postgresql"
+   url       = env("DATABASE_URL")
+   directUrl = env("DIRECT_URL")
+ }
+
+ model User {
+   id        String   @id @default(uuid())
+   email     String   @unique
+   name      String?
+   posts     Post[]
+   createdAt DateTime @default(now())
+ }
+
+ model Post {
+   id        String   @id @default(uuid())
+   title     String
+   content   String?
+   published Boolean  @default(false)
+   author    User?    @relation(fields: [authorId], references: [id])
+   authorId  String?
+   createdAt DateTime @default(now())
+ }
+ ```
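+
+ ## Index Example
+
+ Step 4 above calls for indexes alongside relationships and constraints;
+ they can be declared on the model with `@@index` (a sketch — the
+ author/recency query pattern is an assumption, other fields elided):
+
+ ```prisma
+ model Post {
+   id        String   @id @default(uuid())
+   authorId  String?
+   createdAt DateTime @default(now())
+
+   @@index([authorId])
+   @@index([createdAt])
+ }
+ ```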
+
+ ## Prisma Client Usage
+
+ ```javascript
+ import { PrismaClient } from '@prisma/client';
+ const prisma = new PrismaClient();
+
+ try {
+   // Create a user with a nested post in one type-safe call
+   const user = await prisma.user.create({
+     data: {
+       email: 'alice@example.com',
+       name: 'Alice',
+       posts: { create: { title: 'Hello World' } }
+     },
+     include: { posts: true }
+   });
+   console.log(user);
+ } catch (err) {
+   console.error('Query failed:', err);
+ } finally {
+   // Per Step 6: always disconnect in a finally block
+   await prisma.$disconnect();
+ }
+ ```
+
+ ## Verification
+
+ - Migrations apply without errors
+ - Tables visible in Supabase Dashboard
+ - Queries return expected data
+ - Connection pooling works in production
+ - id: devops
+ name: DevOps & CI/CD
+ human:
+ description:
+ Building and maintaining deployment pipelines, infrastructure, and
+ operational practices
+ levelDescriptions:
+ awareness:
+ You understand CI/CD concepts (build, test, deploy) and can trigger
+ and monitor pipelines others have built. You follow deployment
+ procedures.
+ foundational:
+ You configure basic CI/CD pipelines, understand containerization, and
+ can troubleshoot common build and deployment failures.
+ working:
+ You build complete CI/CD pipelines end-to-end, manage infrastructure
+ as code, implement monitoring, and design deployment strategies for
+ your services.
+ practitioner:
+ You design deployment strategies for complex multi-service systems
+ across teams, optimize pipeline performance and reliability, define
+ DevOps practices for your area, and mentor engineers on
+ infrastructure.
+ expert:
+ You shape DevOps culture and practices across the business unit. You
+ introduce innovative approaches to deployment and infrastructure,
+ solve large-scale DevOps challenges, and are recognized externally.
+ agent:
+ name: devops-cicd
+ description: |
+ Guide for building CI/CD pipelines, managing infrastructure as code,
+ and implementing deployment best practices.
+ useWhen: |
+ Setting up pipelines, containerizing applications, or configuring
+ infrastructure.
+ stages:
+ specify:
+ focus: |
+ Define CI/CD and infrastructure requirements.
+ Clarify deployment strategy and operational needs.
+ readChecklist:
+ - Document deployment frequency requirements
+ - Identify rollback and recovery requirements
+ - Specify monitoring and alerting needs
+ - Define security and compliance constraints
+ - Mark ambiguities with [NEEDS CLARIFICATION]
+ confirmChecklist:
+ - Deployment requirements are documented
+ - Recovery requirements are specified
+ - Monitoring needs are identified
+ - Compliance constraints are clear
+ plan:
+ focus: |
+ Plan CI/CD pipeline architecture and infrastructure requirements.
+ Consider deployment strategies and monitoring needs.
+ readChecklist:
+ - Define pipeline stages (build, test, deploy)
+ - Identify infrastructure requirements
+ - Plan deployment strategy (rolling, blue-green, canary)
+ - Consider monitoring and alerting needs
+ - Plan secret management approach
+ confirmChecklist:
+ - Pipeline architecture is documented
+ - Deployment strategy is chosen and justified
+ - Infrastructure requirements are identified
+ - Monitoring approach is defined
+ onboard:
+ focus: |
+ Set up CI/CD and infrastructure tooling. Install build
+ tools, configure container runtime, and verify pipeline
+ connectivity.
+ readChecklist:
+ - Install container runtime (Colima/Docker)
+ - Install build tool (Nixpacks) and verify it works
+ - Configure GitHub Actions workflow directory structure
+ - Set up repository secrets for CI/CD pipeline
+ - Configure container registry authentication
+ - Verify local container builds succeed
+ confirmChecklist:
+ - Container runtime running and responsive
+ - Build tool creates images from project source
+ - GitHub Actions workflow files are valid YAML
+ - Repository secrets configured for deployment
+ - Container registry push/pull works
+ - Local build-test-run cycle completes successfully
+ code:
+ focus: |
+ Implement CI/CD pipelines and infrastructure as code. Follow
+ best practices for containerization and deployment automation.
+ readChecklist:
+ - Configure CI/CD pipeline stages
+ - Implement infrastructure as code (Terraform)
+ - Create Dockerfiles with security best practices
+ - Set up monitoring and alerting
+ - Configure secret management
+ - Implement deployment automation
+ confirmChecklist:
+ - Pipeline runs on every commit
+ - Tests run before deployment
+ - Deployments are automated
+ - Infrastructure is version controlled
+ - Secrets are managed securely
+ - Monitoring is in place
+ review:
+ focus: |
+ Verify pipeline reliability, security, and operational readiness.
+ Ensure rollback procedures work and documentation is complete.
+ readChecklist:
+ - Verify pipeline runs successfully end-to-end
+ - Test rollback procedures
+ - Review security configurations
+ - Validate monitoring and alerts
+ - Check documentation completeness
+ confirmChecklist:
+ - Pipeline is tested and reliable
+ - Rollback procedure is documented and tested
+ - Alerts are configured and tested
+ - Runbooks exist for common issues
+ deploy:
+ focus: |
+ Deploy pipeline and infrastructure changes to production.
+ Verify operational readiness.
+ readChecklist:
+ - Deploy pipeline configuration to production
+ - Verify deployment workflows work correctly
+ - Confirm monitoring and alerting are operational
+ - Run deployment through the new pipeline
+ confirmChecklist:
+ - Pipeline deployed and operational
+ - Workflows tested in production
+ - Monitoring confirms healthy operation
+ - First deployment through pipeline succeeded
+ toolReferences:
+ - name: Nixpacks
+ url: https://nixpacks.com/docs
+ simpleIcon: nixos
+ description:
+ Auto-detecting build system that creates optimized container images
+ useWhen: Building container images without writing Dockerfiles
+ - name: GitHub Actions
+ url: https://docs.github.com/en/actions
+ simpleIcon: githubactions
+ description: CI/CD and automation platform for GitHub repositories
+ useWhen: Automating build, test, and deployment pipelines
+ - name: Colima
+ url: https://github.com/abiosoft/colima
+ simpleIcon: docker
+ description: Container runtime for macOS with Docker-compatible CLI
+ useWhen:
+ Running containers locally, building images, or containerizing
+ applications
+ instructions: |
+ ## Step 1: Build with Nixpacks
+
+ Nixpacks auto-detects your project type and creates optimized
+ container images without requiring a Dockerfile. For custom
+ builds, create a nixpacks.toml with build phases and start
+ command.
+
+ ## Step 2: Create GitHub Actions Workflow
+
+ Define a CI/CD workflow with test and build-and-push jobs.
+ Run tests on every push and PR. Build and push container
+ images to GitHub Container Registry on main merges only.
+ Tag images with commit SHA for traceability.
+
+ ## Step 3: Configure Deployment
+
+ Add a deploy job that triggers after successful image push.
+ Use rolling deployment with automatic rollback on failure.
+ Upgrade to blue-green only when you need instant rollback
+ for high-traffic services.
+ installScript: |
+ set -e
+ curl -sSL https://nixpacks.com/install.sh | bash
+ command -v nixpacks
+ command -v docker || command -v colima
  implementationReference: |
- ## Storage Selection Guide
-
- | Type | Use When | Examples |
- |------|----------|----------|
- | Relational | ACID needed, complex queries | PostgreSQL, MySQL |
- | Document | Flexible schema, hierarchical | MongoDB, Firestore |
- | Key-Value | Simple lookups, caching | Redis, DynamoDB |
- | Time Series | Temporal data, metrics | InfluxDB, TimescaleDB |
+ ## Nixpacks Build
+
+ ```bash
+ nixpacks build . --name app
+ docker run --rm app
+ ```
+
+ ## Nixpacks Config
+
+ ```toml
+ # nixpacks.toml
+ [phases.build]
+ cmds = ["npm run build"]
+
+ [start]
+ cmd = "node dist/index.js"
+ ```
+
+ ## GitHub Actions CI/CD
+
+ ```yaml
+ name: CI/CD
+ on:
+   push:
+     branches: [main]
+   pull_request:
+     branches: [main]
+
+ env:
+   REGISTRY: ghcr.io/${{ github.repository }}
+
+ jobs:
+   test:
+     runs-on: ubuntu-latest
+     steps:
+       - uses: actions/checkout@v4
+       - uses: actions/setup-node@v4
+         with:
+           node-version: 20
+           cache: npm
+       - run: npm ci
+       - run: npm test
+
+   build-and-push:
+     needs: test
+     runs-on: ubuntu-latest
+     permissions:
+       contents: read
+       packages: write
+     steps:
+       - uses: actions/checkout@v4
+       - uses: docker/login-action@v3
+         with:
+           registry: ghcr.io
+           username: ${{ github.actor }}
+           password: ${{ secrets.GITHUB_TOKEN }}
+       - name: Build and push
+         if: github.ref == 'refs/heads/main'
+         run: |
+           nixpacks build . --name app
+           docker tag app ${{ env.REGISTRY }}:${{ github.sha }}
+           docker push ${{ env.REGISTRY }}:${{ github.sha }}
+ ```
+
+ ## Rolling Deployment
+
+ ```yaml
+ deploy:
+   needs: build-and-push
+   if: github.ref == 'refs/heads/main'
+   runs-on: ubuntu-latest
+   environment: production
+   steps:
+     - name: Deploy
+       run: |
+         kubectl set image deployment/app app=${{ env.REGISTRY }}:${{ github.sha }}
+         kubectl rollout status deployment/app --timeout=5m || \
+         (kubectl rollout undo deployment/app && exit 1)
+ ```
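+
+ ## Blue-Green Switch
+
+ Step 3 above reserves blue-green for services needing instant rollback;
+ one way to sketch the cutover (assuming a Kubernetes Service named
+ `app` that selects pods by a `slot` label — both names are hypothetical):
+
+ ```yaml
+ # Hypothetical deploy step: point the Service at the freshly
+ # deployed "green" slot; rollback is re-pointing to "blue".
+ - name: Switch traffic to green
+   run: |
+     kubectl patch service app \
+       -p '{"spec":{"selector":{"app":"app","slot":"green"}}}'
+ ```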
+
+ ## Verification
+
+ ```bash
+ nixpacks build . --name app
+ docker run --rm app
+ git push origin main
+ # Check Actions tab for pipeline status
+ ```
+
+ Your pipeline is working when:
+ - Tests run on every PR
+ - Images are pushed only on main merges
+ - Deployments complete within 10 minutes
+ - Failed deployments automatically roll back
+ - id: technical_debt_management
+ name: Technical Debt Management
+ human:
+ description:
+ Identifying, prioritizing, and addressing technical debt strategically.
+ Accepts technical debt when it enables faster business value; explicitly
+ leaves generalization to platform teams when appropriate.
+ levelDescriptions:
+ awareness:
+ You recognize obvious technical debt (duplicated code, missing tests,
+ outdated dependencies) and flag issues to the team.
+ foundational:
+ You document technical debt with context and business impact,
+ contribute to prioritization discussions, and address debt in code you
+ touch.
+ working:
+ You prioritize debt systematically based on risk and impact, balance
+ debt reduction with feature work, and make pragmatic trade-offs. You
+ know when to take on debt intentionally.
+ practitioner:
+ You create debt reduction strategies across teams in your area,
+ influence roadmap decisions to include debt work, and teach engineers
+ when to accept debt for speed vs when to invest in quality.
+ expert:
+ You shape approaches to technical debt across the business unit. You
+ create frameworks others adopt, balance large-scale technical health
+ with delivery velocity, and are recognized for strategic debt
+ management.
+ agent:
+ name: technical-debt-management
+ description: |
+ Guide for identifying, prioritizing, and addressing technical debt.
+ useWhen: |
+ Assessing code quality issues, planning refactoring work, or making
+ build vs fix decisions.
+ stages:
+ specify:
+ focus: |
+ Define technical debt scope and acceptance criteria.
+ Clarify impact and urgency of debt items.
+ readChecklist:
+ - Document debt items and their business impact
+ - Define acceptance criteria for debt resolution
+ - Specify constraints (time, risk tolerance)
+ - Identify dependencies and affected systems
+ - Mark ambiguities with [NEEDS CLARIFICATION]
+ confirmChecklist:
+ - Debt items are documented
+ - Business impact is assessed
+ - Acceptance criteria are defined
+ - Constraints are clear
+ plan:
+ focus: |
+ Assess technical debt and prioritize based on impact and effort.
+ Decide whether to accept, defer, or address debt.
+ readChecklist:
+ - Identify and document technical debt
+ - Assess impact and effort for each item
+ - Prioritize using impact/effort matrix
+ - Decide accept, defer, or address
+ confirmChecklist:
+ - Debt is documented with context
+ - Impact and effort are assessed
+ - Prioritization criteria are clear
+ - Decision is documented
+ onboard:
+ focus: |
+ Set up code analysis and quality measurement tools.
+ Configure SonarQube, dependency scanning, and establish
+ baseline metrics.
+ readChecklist:
+ - Install code analysis tools (SonarQube scanner, Dependabot)
+ - Configure quality gates and analysis rules
+ - Run initial scan to establish baseline metrics
+ - Set up dependency update automation
+ - Configure IDE integration for quality feedback
+ confirmChecklist:
+ - Code analysis tools installed and configured
+ - Baseline quality metrics captured
+ - Quality gates defined and enforced
+ - Dependency scanning is operational
+ - IDE shows quality feedback inline
+ code:
+ focus: |
+ Address debt incrementally while delivering features. Document
+ intentional debt clearly.
+ readChecklist:
+ - Apply Kid Scout Rule (leave code better)
+ - Refactor while adding features
+ - Document new intentional debt
+ - Track debt in backlog
+ confirmChecklist:
+ - Debt work is visible in planning
+ - New debt is intentional and documented
+ - Code quality improved where touched
+ - Technical debt backlog updated
+ review:
+ focus: |
+ Validate debt reduction and ensure new debt is intentional
+ and documented.
+ readChecklist:
+ - Review debt reduction progress
+ - Verify new debt is documented
+ - Check debt backlog currency
+ - Assess overall technical health
+ confirmChecklist:
+ - Debt reduction validated
+ - New debt justified and documented
+ - Backlog is current
+ - Metrics track debt trends
+ deploy:
+ focus: |
+ Deploy debt reduction changes and verify improvements
+ in production.
+ readChecklist:
+ - Deploy refactored code to production
+ - Verify no regressions from debt work
+ - Monitor system health after changes
+ - Update debt backlog and metrics
+ confirmChecklist:
+ - Debt reduction deployed successfully
+ - No regressions detected
+ - System health maintained or improved
+ - Debt backlog updated
+ toolReferences:
+ - name: SonarQube
+ url: https://docs.sonarsource.com/sonarqube/latest/
+ simpleIcon: sonarqubeserver
+ description: Code quality and security analysis platform
+ useWhen:
+ Measuring technical debt, tracking quality metrics, or identifying
+ code smells at scale
+ - name: Dependabot
+ url: https://docs.github.com/en/code-security/dependabot
+ simpleIcon: dependabot
+ description: Automated dependency updates for GitHub repositories
+ useWhen: Automating dependency updates and reducing dependency debt
+ instructions: |
+ ## Step 1: Identify and Document Debt
+
+ Use a consistent format in code comments and issues. Include
+ a tracking ID, description, impact estimate, effort size,
+ and owner.
+
+ ## Step 2: Prioritize Using Impact/Effort Matrix
+
+ Categorize each debt item by impact (high/low) and effort
+ (high/low). Do high-impact/low-effort items now. Plan
+ high-impact/high-effort items for sprints. Apply Kid Scout
+ Rule to low-impact/low-effort items. Defer or accept
+ low-impact/high-effort items.
+
+ ## Step 3: Decide Accept, Defer, or Address
+
+ Accept debt when time-to-market is critical AND you have a
+ payback plan, requirements are uncertain, or code is
+ short-lived. Never accept debt in security-sensitive paths,
+ core functionality, or high-change-frequency areas.
+
+ ## Step 4: Track Debt Metrics
+
+ Configure SonarQube quality gates: Technical Debt Ratio
+ below 5%, no new code smells on changed files,
+ maintainability rating A or better.
+
+ ## Step 5: Reduce Debt Incrementally
+
+ Apply the Kid Scout Rule: leave code better than you found
+ it. Refactor adjacent to feature work. For large legacy
+ systems, use the Strangler Fig pattern to route traffic
+ incrementally until legacy is unused.
+ installScript: |
+ set -e
+ npm install -D sonarqube-scanner
+ npx sonar-scanner --version || true
+ implementationReference: |
+ ## Debt Documentation Format
+
+ ```markdown
+ TODO(DEBT-123): Extract common validation logic
+ Impact: Slows feature work ~2h/week
+ Effort: M (1-2 days)
+ Owner: @team-platform
+ ```
+
+ ## Impact/Effort Matrix
+
+ | Impact | Effort | Action |
+ |--------|--------|------------------|
+ | High | Low | Do now |
+ | High | High | Plan for sprint |
+ | Low | Low | Kid Scout Rule |
+ | Low | High | Defer or accept |
+
+ ## Accept vs Reject Debt
+
+ **Accept when:**
+ - Time-to-market is critical AND you have a payback plan
+ - Requirements are uncertain (prototype/experiment)
+ - Code is short-lived (migration, one-off script)
+
+ **Never accept in:**
+ - Security-sensitive code paths
+ - Core system functionality
+ - High-change-frequency areas
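+
+ ## Scanner Config
+
+ Step 4's quality gates are enforced server-side; the scanner from the
+ install script still needs a project config (a sketch — the project
+ key, host URL, and exclusions are assumptions):
+
+ ```properties
+ # sonar-project.properties (hypothetical values)
+ sonar.projectKey=my-app
+ sonar.sources=src
+ sonar.host.url=https://sonarqube.example.com
+ sonar.exclusions=**/node_modules/**,**/dist/**
+ ```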
1658
+
1659
+ ## Strangler Fig Pattern
1660
+
1661
+ ```
1662
+ Request -> Router -> [New Service | Legacy] -> Response
1663
+ ```
1664
+ Route traffic incrementally until legacy is unused.
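+
+ ## Dependabot Config
+
+ The onboard stage's dependency update automation can be enabled with a
+ minimal config (a sketch — the npm ecosystem and weekly cadence are
+ assumptions):
+
+ ```yaml
+ # .github/dependabot.yml
+ version: 2
+ updates:
+   - package-ecosystem: "npm"
+     directory: "/"
+     schedule:
+       interval: "weekly"
+ ```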
+
+ ## Verification
+
+ - Debt backlog is current and prioritized
+ - New intentional debt has documented justification
+ - SonarQube metrics trend in the right direction
+ - Team can articulate why specific debt was accepted