@forwardimpact/map 0.11.1 → 0.13.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (59)
  1. package/README.md +1 -1
  2. package/bin/fit-map.js +91 -34
  3. package/package.json +14 -4
  4. package/schema/json/capability.schema.json +33 -0
  5. package/schema/json/discipline.schema.json +2 -6
  6. package/schema/rdf/capability.ttl +48 -0
  7. package/schema/rdf/discipline.ttl +6 -19
  8. package/src/index-generator.js +67 -38
  9. package/src/index.js +10 -25
  10. package/src/loader.js +407 -559
  11. package/src/schema-validation.js +327 -307
  12. package/src/validation.js +54 -0
  13. package/examples/behaviours/_index.yaml +0 -8
  14. package/examples/behaviours/outcome_ownership.yaml +0 -43
  15. package/examples/behaviours/polymathic_knowledge.yaml +0 -41
  16. package/examples/behaviours/precise_communication.yaml +0 -39
  17. package/examples/behaviours/relentless_curiosity.yaml +0 -37
  18. package/examples/behaviours/systems_thinking.yaml +0 -40
  19. package/examples/capabilities/_index.yaml +0 -8
  20. package/examples/capabilities/business.yaml +0 -205
  21. package/examples/capabilities/delivery.yaml +0 -1001
  22. package/examples/capabilities/people.yaml +0 -68
  23. package/examples/capabilities/reliability.yaml +0 -349
  24. package/examples/capabilities/scale.yaml +0 -1672
  25. package/examples/copilot-setup-steps.yaml +0 -25
  26. package/examples/devcontainer.yaml +0 -21
  27. package/examples/disciplines/_index.yaml +0 -6
  28. package/examples/disciplines/data_engineering.yaml +0 -68
  29. package/examples/disciplines/engineering_management.yaml +0 -61
  30. package/examples/disciplines/software_engineering.yaml +0 -68
  31. package/examples/drivers.yaml +0 -202
  32. package/examples/framework.yaml +0 -73
  33. package/examples/levels.yaml +0 -115
  34. package/examples/questions/behaviours/outcome_ownership.yaml +0 -228
  35. package/examples/questions/behaviours/polymathic_knowledge.yaml +0 -275
  36. package/examples/questions/behaviours/precise_communication.yaml +0 -248
  37. package/examples/questions/behaviours/relentless_curiosity.yaml +0 -248
  38. package/examples/questions/behaviours/systems_thinking.yaml +0 -238
  39. package/examples/questions/capabilities/business.yaml +0 -107
  40. package/examples/questions/capabilities/delivery.yaml +0 -101
  41. package/examples/questions/capabilities/people.yaml +0 -106
  42. package/examples/questions/capabilities/reliability.yaml +0 -105
  43. package/examples/questions/capabilities/scale.yaml +0 -104
  44. package/examples/questions/skills/architecture_design.yaml +0 -115
  45. package/examples/questions/skills/cloud_platforms.yaml +0 -105
  46. package/examples/questions/skills/code_quality.yaml +0 -162
  47. package/examples/questions/skills/data_modeling.yaml +0 -107
  48. package/examples/questions/skills/devops.yaml +0 -111
  49. package/examples/questions/skills/full_stack_development.yaml +0 -118
  50. package/examples/questions/skills/sre_practices.yaml +0 -113
  51. package/examples/questions/skills/stakeholder_management.yaml +0 -116
  52. package/examples/questions/skills/team_collaboration.yaml +0 -106
  53. package/examples/questions/skills/technical_writing.yaml +0 -110
  54. package/examples/self-assessments.yaml +0 -64
  55. package/examples/stages.yaml +0 -191
  56. package/examples/tracks/_index.yaml +0 -5
  57. package/examples/tracks/platform.yaml +0 -47
  58. package/examples/tracks/sre.yaml +0 -46
  59. package/examples/vscode-settings.yaml +0 -21
@@ -1,1672 +0,0 @@
- # yaml-language-server: $schema=https://www.forwardimpact.team/schema/json/capability.schema.json
-
- id: scale
- name: Scale
- emojiIcon: 📐
- ordinalRank: 7
- description: |
- Building systems that grow gracefully.
- Encompasses architecture, code quality, testing, performance,
- and technical decision-making.
- professionalResponsibilities:
- awareness:
- You follow established architectural patterns and coding standards with
- guidance from senior engineers
- foundational:
- You contribute to scalable designs, write quality code with appropriate
- tests, and understand architectural trade-offs
- working:
- You design scalable components, make sound architectural decisions, ensure
- code quality, and review others' designs
- practitioner:
- You lead architectural decisions for complex systems across teams, establish
- quality standards for your area, mentor engineers on architecture, and own
- technical debt strategy
- expert:
- You define technical standards across the business unit, guide enterprise
- architecture, are recognized externally for architectural expertise, and
- drive innovation
- managementResponsibilities:
- awareness:
- You understand technical architecture decisions and their resource and
- timeline implications
- foundational:
- You support team technical decisions, ensure alignment with architectural
- standards, and escalate technical risks
- working:
- You facilitate architectural discussions, manage technical debt
- prioritization, champion quality, and balance technical investment with
- delivery
- practitioner:
- You drive technical excellence across teams, establish quality standards for
- your area, own cross-team technical direction, and advise on architecture
- trade-offs
- expert:
- You shape technical strategy across the business unit, guide enterprise
- architecture governance, and represent technical priorities at executive
- level
- skills:
- - id: architecture_design
- name: Architecture & Design
- human:
- description:
- Ability to design software systems that are scalable, maintainable, and
- fit for purpose. In the AI era, this includes designing systems that
- effectively leverage AI capabilities while maintaining human oversight.
- proficiencyDescriptions:
- awareness:
- You understand basic architectural concepts (separation of concerns,
- modularity, coupling) and can read architecture diagrams. You follow
- established patterns with guidance.
- foundational:
- You explain and apply common patterns (MVC, microservices,
- event-driven) to familiar problems. You contribute to design
- discussions and identify when existing patterns don't fit.
- working:
- You design components and services independently for moderate
- complexity. You make appropriate trade-off decisions, document design
- rationale, and consider AI integration points in your designs.
- practitioner:
- You design complex multi-component systems end-to-end, evaluate
- architectural options for large initiatives across teams, guide
- technical decisions for your area, and mentor engineers on
- architecture. You balance elegance with delivery needs.
- expert:
- You define architecture standards and patterns across the business
- unit. You innovate on approaches to large-scale challenges, shape
- AI-integrated system design, and are recognized externally as an
- architecture authority.
- agent:
- name: architecture-design
- description: |
- Guide for designing software systems and making architectural
- decisions.
- useWhen: |
- Asked to design a system, evaluate architecture options, or make
- structural decisions about code organization.
- stages:
- specify:
- focus: |
- Define system requirements and constraints before design.
- Clarify functional and non-functional requirements.
- readChecklist:
- - Document functional requirements and use cases
- - Identify non-functional requirements (scale, latency,
- availability)
- - Document system constraints and integration points
- - Identify stakeholders and their concerns
- - Mark ambiguities with [NEEDS CLARIFICATION]
- confirmChecklist:
- - Functional requirements are documented
- - Non-functional requirements are specified
- - Constraints are identified
- - Stakeholder concerns are understood
- plan:
- focus: |
- Understand requirements and identify key architectural decisions.
- Document trade-offs and design rationale.
- readChecklist:
- - Clarify functional and non-functional requirements
- - Identify constraints (existing systems, team skills, timeline)
- - Document key decisions and trade-offs
- - Design for anticipated change
- confirmChecklist:
- - Requirements are clearly understood
- - Key decisions are documented with rationale
- - Trade-offs are explicit
- - Failure modes are considered
- onboard:
- focus: |
- Set up the development environment for the planned
- architecture. Install frameworks, configure project
- structure, and verify tooling.
- readChecklist:
- - Install planned frameworks and dependencies
- - Create project structure matching architecture design
- - Configure Mermaid rendering for architecture docs
- - Set up ADR directory for decision records
- - Configure linter and formatter for the project
- confirmChecklist:
- - Project structure reflects architectural boundaries
- - All planned frameworks installed and importable
- - Documentation tooling renders diagrams correctly
- - Linter and formatter configured and passing
- - Build system compiles without errors
- code:
- focus: |
- Implement architecture with clear boundaries and interfaces.
- Ensure components can evolve independently.
- readChecklist:
- - Define clear interfaces between components
- - Implement with appropriate patterns
- - Document design decisions in code
- - Test architectural boundaries
- confirmChecklist:
- - Dependencies are minimal and explicit
- - Interfaces are well-defined
- - Design patterns are documented
- - Architecture tests pass
- review:
- focus: |
- Validate architecture meets requirements and is maintainable.
- Ensure scalability and security are addressed.
- readChecklist:
- - Verify architecture meets requirements
- - Review for scalability concerns
- - Check security implications
- - Validate documentation completeness
- confirmChecklist:
- - Scalability requirements are addressed
- - Security implications are reviewed
- - Architecture is documented
- - Design is maintainable
- deploy:
- focus: |
- Deploy architecture and verify it performs as designed
- in production environment.
- readChecklist:
- - Deploy system components to production
- - Verify architectural boundaries work under load
- - Monitor performance against requirements
- - Document any operational learnings
- confirmChecklist:
- - System deployed successfully
- - Performance meets requirements
- - Monitoring confirms design assumptions
- - Operational procedures are documented
- toolReferences:
- - name: Mermaid
- url: https://mermaid.js.org/intro/
- simpleIcon: mermaid
- description: Text-based diagramming for architecture documentation
- useWhen: Creating architecture diagrams in markdown or documentation
- - name: ADR Tools
- url: https://adr.github.io/
- simpleIcon: task
- description: Architecture Decision Records tooling
- useWhen: Documenting architectural decisions and trade-offs
- instructions: |
- ## Step 1: Identify Key Decisions
-
- Architecture is decisions that are hard to change later.
- Before coding, document: data storage (SQL vs NoSQL),
- service boundaries (monolith vs services), and communication
- patterns (sync HTTP vs async events).
-
- ## Step 2: Visualize with Mermaid Diagrams
-
- Create architecture diagrams in markdown for version control.
- Use C4 diagram types for different levels of detail (context,
- container, component).
-
- ## Step 3: Document Trade-offs with ADRs
-
- Create an Architecture Decision Record for each significant
- choice. Include status, context, decision, and consequences
- (both positive and negative).
-
- ## Step 4: Structure Code with Clear Boundaries
-
- Organize code so each domain is independent. Use api, service,
- repository, and types layers within each module. Cross-cutting
- concerns go in a shared directory.
-
- ## Step 5: Define Explicit Interfaces
-
- Define contracts at domain boundaries. Hide implementations
- behind interfaces so modules can evolve independently.
- installScript: |
- set -e
- npm install -g @mermaid-js/mermaid-cli
- npm install -g adr-log
- command -v mmdc
- implementationReference: |
- ## Mermaid Architecture Diagram
-
- ```mermaid
- graph TB
- subgraph "API Layer"
- API[API Gateway]
- end
- subgraph "Domain Modules"
- Users[Users Module]
- Orders[Orders Module]
- end
- subgraph "Data Layer"
- DB[(PostgreSQL)]
- end
- API --> Users
- API --> Orders
- Users --> DB
- Orders --> DB
- ```
-
- ## ADR Template
-
- ```markdown
- # ADR 001: Start with Modular Monolith
-
- ## Status
- Accepted
-
- ## Context
- We need to deliver quickly but maintain future flexibility.
-
- ## Decision
- Start with a modular monolith with clear domain boundaries.
-
- ## Consequences
- - (+) Faster initial development
- - (+) Easier refactoring while boundaries evolve
- - (-) Must enforce boundaries through code review
- ```
-
- ## Module Structure
-
- ```
- src/
- modules/
- users/
- api.ts
- service.ts
- repository.ts
- types.ts
- orders/
- api.ts
- service.ts
- repository.ts
- types.ts
- shared/
- ```
-
- ## Interface at Domain Boundary
-
- ```javascript
- // Define contracts at domain boundaries
- class OrderServiceImpl {
- constructor(repo) { this.repo = repo; }
- async create(order) { return this.repo.save(order); }
- async get(id) { return this.repo.findById(id); }
- }
- ```
-
- ## Verification
-
- Your architecture is sound when:
- - Each module can be understood without reading others
- - Dependencies flow inward (api -> service -> repository)
- - ADRs explain "why" for each major decision
- - Diagrams render correctly in your documentation
-
- ## When to Extract Services
-
- Extract only when you have evidence:
- - Independent scaling is required (measured, not guessed)
- - Different deployment cadences are blocking teams
- - Team boundaries naturally align with service boundaries
- - id: cloud_platforms
- name: Cloud Platforms
- human:
- description:
- Working effectively with cloud infrastructure (AWS, Azure, GCP)
- proficiencyDescriptions:
- awareness:
- You understand cloud computing concepts (IaaS, PaaS, SaaS) and can use
- cloud services through consoles and defined interfaces with guidance.
- foundational:
- You deploy applications to cloud platforms and use common services
- (compute, storage, databases, queues). You understand cloud pricing
- and basic security configuration.
- working:
- You design cloud-native solutions, manage infrastructure as code,
- implement security best practices, and make informed service
- selections. You troubleshoot cloud-specific issues.
- practitioner:
- You architect multi-region, highly available solutions across teams.
- You optimize for cost and performance, lead cloud migrations for your
- area, and mentor engineers on cloud architecture patterns.
- expert:
- You shape cloud strategy across the business unit. You solve
- large-scale cloud challenges, define cloud governance, and are
- recognized as an authority on cloud architecture.
- agent:
- name: cloud-platforms
- description: Guide for working with cloud infrastructure and services.
- useWhen: |
- Deploying to cloud, selecting cloud services, configuring
- infrastructure, or solving cloud-specific challenges.
- stages:
- specify:
- focus: |
- Define cloud infrastructure requirements and constraints.
- Clarify availability, security, and cost expectations.
- readChecklist:
- - Document availability and reliability requirements
- - Identify security and compliance constraints
- - Specify cost budget and constraints
- - Define performance requirements (latency, throughput)
- - Mark ambiguities with [NEEDS CLARIFICATION]
- confirmChecklist:
- - Availability requirements are documented
- - Security requirements are specified
- - Cost constraints are defined
- - Performance requirements are clear
- plan:
- focus: |
- Select appropriate cloud services and design for availability,
- security, and cost efficiency.
- readChecklist:
- - Identify service requirements
- - Select appropriate cloud services
- - Plan for high availability
- - Consider security and cost
- confirmChecklist:
- - Service selection matches requirements
- - Availability approach planned
- - Security model defined
- - Cost controls considered
- onboard:
- focus: |
- Set up cloud development environment. Configure CLI tools,
- authenticate with cloud provider, and verify infrastructure
- tooling works.
- readChecklist:
- - Install cloud CLI tools (AWS CLI, Terraform)
- - Configure cloud credentials and authentication
- - Initialize Terraform workspace and providers
- - Verify cloud account access and permissions
- - Set up environment variables for cloud configuration
- confirmChecklist:
- - Cloud CLI authenticated and working
- - Terraform initialized with correct providers
- - IAM permissions verified for planned resources
- - Environment variables configured securely
- - Infrastructure as code directory structure created
- code:
- focus: |
- Implement cloud infrastructure with security best practices.
- Use infrastructure as code for reproducibility.
- readChecklist:
- - Define infrastructure as code
- - Configure security groups and IAM
- - Set up monitoring and alerting
- - Implement deployment automation
- confirmChecklist:
- - Multi-AZ deployment for availability
- - Security groups properly configured
- - IAM follows least privilege
- - Data encrypted at rest and in transit
- - Infrastructure defined as code
- review:
- focus: |
- Validate security, availability, and operational readiness.
- Ensure cost controls are in place.
- readChecklist:
- - Verify security configuration
- - Test availability and failover
- - Review cost projections
- - Validate monitoring coverage
- confirmChecklist:
- - Security review completed
- - Monitoring and alerting in place
- - Cost controls established
- - Operational runbooks exist
- deploy:
- focus: |
- Deploy cloud infrastructure and verify production readiness.
- Confirm failover and monitoring work correctly.
- readChecklist:
- - Deploy infrastructure to production
- - Verify multi-AZ failover works
- - Confirm monitoring and alerting are operational
- - Validate cost tracking is in place
- confirmChecklist:
- - Infrastructure deployed successfully
- - Failover tested in production
- - Monitoring is operational
- - Cost tracking is active
- toolReferences:
- - name: Terraform
- url: https://developer.hashicorp.com/terraform/docs
- simpleIcon: terraform
- description:
- Infrastructure as code tool for provisioning cloud resources
- useWhen: Defining and managing cloud infrastructure as code
- - name: AWS Lambda
- url: https://docs.aws.amazon.com/lambda/
- simpleIcon: task
- description: Serverless compute service for event-driven applications
- useWhen: Building event-driven functions without managing servers
- - name: Amazon EventBridge
- url: https://docs.aws.amazon.com/eventbridge/
- simpleIcon: task
- description: Serverless event bus for application integration
- useWhen: Building event-driven architectures or integrating AWS services
- - name: Amazon DynamoDB
- url: https://docs.aws.amazon.com/dynamodb/
- simpleIcon: task
- description:
- Serverless NoSQL database with single-digit millisecond latency
- useWhen:
- Building serverless applications requiring fast key-value or document
- storage
- - name: AWS Step Functions
- url: https://docs.aws.amazon.com/step-functions/
- simpleIcon: task
- description:
- Serverless workflow orchestration for coordinating AWS services
- useWhen:
- Orchestrating multi-step serverless workflows with error handling,
- retries, and state management
- instructions: |
- ## Step 1: Choose Your Architecture Pattern
-
- Event-driven serverless works best when workload is
- unpredictable or spiky, you want pay-per-use pricing,
- and components can operate independently.
-
- ## Step 2: Define Infrastructure with Terraform
-
- Use Terraform to define cloud resources declaratively.
- Define Lambda functions, DynamoDB tables, and IAM roles
- in HCL. Apply with terraform init, plan, apply.
-
- ## Step 3: Define the Event Flow
-
- Wire EventBridge rules to trigger Lambda functions based
- on event patterns. Define source, detail-type, and target.
-
- ## Step 4: Implement Lambda Functions
-
- Initialize SDK clients outside the handler for reuse across
- invocations. Keep handlers focused on a single responsibility.
-
- ## Step 5: Configure IAM (Least Privilege)
-
- Grant only the specific actions needed (e.g., dynamodb:GetItem,
- dynamodb:PutItem) on the specific resource ARN. Never use
- wildcard permissions.
-
- ## Step 6: Orchestrate with Step Functions
-
- Use Step Functions to coordinate multi-step workflows. Define
- state machines in Amazon States Language. Use Step Functions
- for workflows that need error handling, retries, parallel
- execution, or human approval gates.
- installScript: |
- set -e
- brew tap hashicorp/tap
- brew install hashicorp/tap/terraform
- terraform --version
- command -v aws
- implementationReference: |
- ## Terraform Lambda + DynamoDB
-
- ```hcl
- terraform {
- required_providers {
- aws = { source = "hashicorp/aws", version = "~> 5.0" }
- }
- }
-
- resource "aws_lambda_function" "process_order" {
- filename = "lambda.zip"
- function_name = "process-order"
- role = aws_iam_role.lambda_role.arn
- handler = "handler.process_order"
- runtime = "python3.11"
- environment {
- variables = { TABLE_NAME = aws_dynamodb_table.orders.name }
- }
- }
-
- resource "aws_dynamodb_table" "orders" {
- name = "orders"
- billing_mode = "PAY_PER_REQUEST"
- hash_key = "pk"
- range_key = "sk"
- attribute { name = "pk", type = "S" }
- attribute { name = "sk", type = "S" }
- }
- ```
-
- ## EventBridge Rule
-
- ```hcl
- resource "aws_cloudwatch_event_rule" "order_created" {
- name = "order-created"
- event_pattern = jsonencode({
- source = ["orders"]
- detail-type = ["OrderCreated"]
- })
- }
-
- resource "aws_cloudwatch_event_target" "process_order" {
- rule = aws_cloudwatch_event_rule.order_created.name
- target_id = "ProcessOrder"
- arn = aws_lambda_function.process_order.arn
- }
- ```
-
- ## Lambda Handler
-
- ```python
- import os, boto3
-
- dynamodb = boto3.resource('dynamodb')
- table = dynamodb.Table(os.environ['TABLE_NAME'])
-
- def process_order(event, context):
- order = event['detail']
- table.put_item(Item={
- 'pk': f"ORDER#{order['id']}",
- 'sk': 'METADATA',
- 'status': 'processing',
- **order
- })
- return {'statusCode': 200}
- ```
-
- ## IAM Least Privilege
-
- ```hcl
- resource "aws_iam_role_policy" "lambda_dynamodb" {
- name = "dynamodb-access"
- role = aws_iam_role.lambda_role.id
- policy = jsonencode({
- Version = "2012-10-17"
- Statement = [{
- Effect = "Allow"
- Action = ["dynamodb:GetItem", "dynamodb:PutItem"]
- Resource = aws_dynamodb_table.orders.arn
- }]
- })
- }
- ```
-
- ## Verification
-
- ```bash
- terraform apply
- aws events put-events --entries '[{"Source":"orders","DetailType":"OrderCreated","Detail":"{\"id\":\"123\"}"}]'
- ```
-
- Check CloudWatch Logs for function execution.
-
- ## Step Functions State Machine
-
- ```hcl
- resource "aws_sfn_state_machine" "order_workflow" {
- name = "order-processing"
- role_arn = aws_iam_role.sfn_role.arn
-
- definition = jsonencode({
- StartAt = "ValidateOrder"
- States = {
- ValidateOrder = {
- Type = "Task"
- Resource = aws_lambda_function.validate_order.arn
- Next = "ProcessPayment"
- Retry = [{
- ErrorEquals = ["States.TaskFailed"]
- IntervalSeconds = 2
- MaxAttempts = 3
- BackoffRate = 2.0
- }]
- Catch = [{
- ErrorEquals = ["States.ALL"]
- Next = "OrderFailed"
- }]
- }
- ProcessPayment = {
- Type = "Task"
- Resource = aws_lambda_function.process_payment.arn
- Next = "FulfillOrder"
- }
- FulfillOrder = {
- Type = "Task"
- Resource = aws_lambda_function.fulfill_order.arn
- End = true
- }
- OrderFailed = {
- Type = "Fail"
- Cause = "Order processing failed"
- }
- }
- })
- }
- ```
-
- ## When to Use Step Functions
-
- | Use Case | Approach |
- |----------|----------|
- | Simple event → action | EventBridge + Lambda |
- | Multi-step with retries | Step Functions |
- | Long-running workflows | Step Functions (Express or Standard) |
- | Parallel fan-out | Step Functions Parallel state |
- - id: code_quality
- name: Code Quality & Review
- human:
- description:
- Writing and reviewing clean, maintainable, tested, and well-documented
- code. In the AI era, code review becomes more important than code
- generation—every line must be understood and verified regardless of its
- source.
- proficiencyDescriptions:
- awareness:
- You follow team coding conventions and style guides with guidance. You
- understand why code quality matters and can run linters and tests
- others have written.
- foundational:
- You write readable, well-structured code. You use linting tools, write
- basic unit tests, and participate constructively in code reviews—both
- giving and receiving feedback.
- working:
- You produce consistently high-quality, well-tested code. You review
- AI-generated code critically and never ship code you don't fully
- understand. You identify edge cases and ensure adequate test coverage.
- practitioner:
- You establish and enforce quality standards across teams in your area.
- You mentor engineers on effective code review, ensure verification
- depth for AI-assisted development, and drive testing strategies.
- expert:
- You shape coding standards and quality practices across the business
- unit. You champion code review as a critical engineering skill, define
- AI-assisted development guidelines, and are recognized for quality
- engineering.
- agent:
- name: code-quality-review
- description: |
- Guide for reviewing code quality, identifying issues, and suggesting
- improvements.
- useWhen: |
- Asked to review code, check for best practices, or conduct code
- reviews.
- stages:
- specify:
- focus: |
- Define code quality requirements and review criteria.
- Clarify standards and acceptance criteria for the change.
- readChecklist:
- - Identify applicable coding standards
- - Document quality acceptance criteria
- - Specify test coverage requirements
- - Define review depth based on risk level
- - Mark ambiguities with [NEEDS CLARIFICATION]
- confirmChecklist:
- - Coding standards are identified
- - Quality criteria are documented
- - Test requirements are specified
- - Review approach matches risk level
- plan:
- focus: |
- Understand code review scope and establish review criteria.
- Consider what quality standards apply.
- readChecklist:
- - Identify code review scope
- - Understand applicable standards
- - Plan review approach
- - Consider risk level
- confirmChecklist:
- - Review scope is clear
- - Standards are understood
- - Review approach is planned
- - Risk level is assessed
- onboard:
- focus: |
- Set up code quality tooling. Configure linters, formatters,
- testing frameworks, and pre-commit hooks for the project.
- readChecklist:
- - Install linter (ESLint for JS/TS, Ruff for Python)
- - Install formatter (Prettier for JS/TS, Ruff for Python)
- - Configure linter rules matching project standards
- - Set up pre-commit hooks (husky/lint-staged or pre-commit)
- - Install testing framework (Node.js test runner, pytest)
- - Configure editor settings for format-on-save
- confirmChecklist:
- - Linter runs without configuration errors
- - Formatter produces consistent output
- - Pre-commit hooks catch style violations
- - Test runner discovers and runs existing tests
- - Editor integration works (format-on-save, inline errors)
- - CI pipeline includes quality checks
- code:
- focus: |
- Write clean, maintainable, tested code. Follow project
- conventions and ensure adequate coverage.
- readChecklist:
- - Write readable, well-structured code
- - Add appropriate tests
- - Follow project conventions
- - Document non-obvious logic
- confirmChecklist:
- - Code compiles and passes all tests
- - Changes are covered by tests
- - Code follows project conventions
- - No unnecessary complexity
- review:
- focus: |
- Verify correctness, maintainability, and adherence to
- standards. Ensure no code is shipped that isn't understood.
- readChecklist:
- - Verify code does what it claims
- - Check test coverage
- - Review for maintainability
- - Confirm style compliance
- confirmChecklist:
- - No obvious security vulnerabilities
- - Error handling is appropriate
- - Documentation updated if needed
- - No code you don't fully understand
- deploy:
- focus: |
- Merge and deploy reviewed code. Verify quality checks pass
- in production pipeline.
- readChecklist:
- - Merge approved changes
- - Verify CI pipeline passes
- - Monitor for issues after deployment
- - Document any lessons learned
- confirmChecklist:
- - Code merged successfully
- - CI pipeline passes all checks
- - No regressions detected
- - Deployment verified
- toolReferences:
- - name: ESLint
- url: https://eslint.org/docs/latest/
- simpleIcon: eslint
- description: Pluggable JavaScript/TypeScript linting utility
- useWhen:
- Enforcing code style and catching errors in JavaScript/TypeScript
- projects
- - name: Ruff
- url: https://docs.astral.sh/ruff/
- simpleIcon: ruff
- description: Extremely fast Python linter and formatter
- useWhen: Enforcing code style and catching errors in Python projects
- - name: Prettier
- url: https://prettier.io/docs/en/
- simpleIcon: prettier
- description: Opinionated code formatter
- useWhen:
- Enforcing consistent code formatting across JavaScript/TypeScript
- codebases
- - name: SonarQube
- url: https://docs.sonarsource.com/sonarqube/latest/
- simpleIcon: sonarqubeserver
- description: Code quality and security analysis platform
- useWhen:
- Analyzing code quality metrics, identifying code smells, or tracking
- quality gates
- - name: Playwright
- url: https://playwright.dev/docs/intro
- simpleIcon: playwright
- description: End-to-end testing framework for web applications
- useWhen:
- Writing and running end-to-end tests to verify full application
- behavior across browsers
- instructions: |
- ## Step 1: Set Up Linting
-
- Configure ESLint for JavaScript/TypeScript using flat config.
- For Python, configure Ruff in pyproject.toml with appropriate
- rule selections (E, F, I at minimum).
-
- ## Step 2: Configure Formatting
-
- Set up Prettier for JS/TS with consistent settings. Enable
- format-on-save in your editor. Ruff handles Python formatting.
-
- ## Step 3: Add Pre-commit Hooks
-
- Install husky and lint-staged to run linting and formatting
- automatically on staged files before each commit.
-
- ## Step 4: Set Up Quality Gates (Optional)
-
- Configure SonarQube to analyze code quality metrics. Set
- quality gates for technical debt ratio, code smells, and
831
- maintainability rating.
832
-
833
- ## Step 5: Add End-to-End Tests
834
-
835
- Use Playwright to write E2E tests that verify full application
836
- behavior in a real browser. Test critical user flows end-to-end
837
- to catch integration issues that unit tests miss.
838
- installScript: |
839
- set -e
840
- npm install -D eslint prettier husky lint-staged
841
- pip install ruff
842
- npx playwright install --with-deps chromium
843
- npx eslint --version
844
- ruff --version
845
- npx playwright --version
846
- implementationReference: |
847
- ## ESLint Flat Config
848
-
849
- ```javascript
850
- // eslint.config.js
851
- export default [
852
- { rules: { "no-unused-vars": "error", "no-console": "warn" } }
853
- ];
854
- ```
855
-
856
- ## Ruff Config
857
-
858
- ```toml
859
- # pyproject.toml
860
- [tool.ruff]
861
- line-length = 88
862
- select = ["E", "F", "I"]
863
- ```
864
-
865
- ## Prettier Config
866
-
867
- ```json
868
- { "semi": true, "singleQuote": true, "trailingComma": "es5" }
869
- ```
870
-
871
- ## Pre-commit Hooks
872
-
873
- ```bash
874
- npx husky init
875
- echo "npx lint-staged" > .husky/pre-commit
876
- ```
877
-
878
- ```json
879
- "lint-staged": {
880
- "*.{js,ts}": ["eslint --fix", "prettier --write"],
881
- "*.py": ["ruff check --fix", "ruff format"]
882
- }
883
- ```
884
-
885
- ## Code Review Checklist
886
-
887
- 1. **Correctness** — Does it do what it claims?
888
- 2. **Tests** — Are changes covered? Edge cases?
889
- 3. **Readability** — Can you understand it in 6 months?
890
- 4. **No surprises** — Side effects documented?
891
-
892
- ## Playwright E2E Tests
893
-
894
- ```javascript
895
- // tests/app.spec.js
896
- import { test, expect } from '@playwright/test';
897
-
898
- test('homepage loads and displays content', async ({ page }) => {
899
- await page.goto('/');
900
- await expect(page.locator('h1')).toBeVisible();
901
- });
902
-
903
- test('user can submit form', async ({ page }) => {
904
- await page.goto('/form');
905
- await page.fill('[name="email"]', 'test@example.com');
906
- await page.click('button[type="submit"]');
907
- await expect(page.locator('.success')).toBeVisible();
908
- });
909
- ```
910
-
911
- ```javascript
912
- // playwright.config.js
913
- import { defineConfig } from '@playwright/test';
914
- export default defineConfig({
915
- testDir: './tests',
916
- use: { baseURL: 'http://localhost:3000' },
917
- webServer: { command: 'npm run dev', port: 3000 },
918
- });
919
- ```
920
-
921
- ## Verification
922
-
923
- Run locally before commit:
924
- ```bash
925
- npx eslint . && npx prettier --check . && npm test
926
- ```
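-
- ## CI Quality Checks
-
- The confirm checklist above expects quality checks in CI. One way to
- wire that up is a dedicated GitHub Actions job; this is a sketch, and
- the workflow and job names are illustrative:
-
- ```yaml
- # .github/workflows/quality.yml
- name: Quality
- on: [push, pull_request]
- jobs:
-   quality:
-     runs-on: ubuntu-latest
-     steps:
-       - uses: actions/checkout@v4
-       - uses: actions/setup-node@v4
-         with: { node-version: 20, cache: npm }
-       - run: npm ci
-       - run: npx eslint . && npx prettier --check .
-       - run: npm test
- ```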
- - id: data_modeling
- name: Data Modeling
- human:
- description:
- Designing data structures and database schemas that support application
- needs
- proficiencyDescriptions:
- awareness:
- You understand the difference between relational and non-relational
- data stores. You can create basic schemas from specifications with
- guidance.
- foundational:
- You design normalized schemas for straightforward use cases and
- understand indexing basics. You write efficient queries for common
- patterns.
- working:
- You create efficient data models that balance normalization with query
- performance. You optimize queries, handle schema migrations safely,
- and choose appropriate storage technologies.
- practitioner:
- You design complex data architectures spanning multiple systems across
- teams. You make strategic trade-offs between consistency, performance,
- and maintainability. You mentor engineers in your area on data
- modeling best practices.
- expert:
- You define data modeling standards across the business unit. You
- handle extreme scale and complex distributed data challenges, innovate
- on approaches, and are recognized as a data architecture authority.
- agent:
- name: data-modeling
- description: |
- Guide for designing database schemas, data structures, and data
- architectures.
- useWhen: |
- Designing tables, optimizing queries, or making decisions about data
- storage technologies.
- stages:
- specify:
- focus: |
- Define data requirements and access patterns.
- Clarify schema requirements before designing.
- readChecklist:
- - Document data entities and relationships
- - Identify query patterns and access requirements
- - Specify consistency and performance requirements
- - Define data retention and compliance needs
- - Mark ambiguities with [NEEDS CLARIFICATION]
- confirmChecklist:
- - Data entities are documented
- - Query patterns are identified
- - Performance requirements are specified
- - Compliance needs are clear
- plan:
- focus: |
- Understand data requirements and select appropriate storage
- technology. Plan schema with query patterns in mind.
- readChecklist:
- - Identify data requirements and access patterns
- - Select appropriate storage technology
- - Plan normalization approach
- - Design indexing strategy
- confirmChecklist:
- - Requirements understood
- - Appropriate storage technology selected
- - Schema design planned
- - Query patterns identified
- onboard:
- focus: |
- Set up the database environment. Install ORM/query tools,
- configure database connections, and verify migration
- tooling works.
- readChecklist:
- - Install database client and ORM (Prisma, SQLAlchemy)
- - Configure database connection in .env file
- - Start local database (Supabase, PostgreSQL, etc.)
- - Initialize migration tooling and verify it connects
- - Create database user with appropriate privileges
- confirmChecklist:
- - Database running locally and accepting connections
- - ORM/client configured and connected
- - Migration tooling initialized and working
- - Database credentials stored securely in .env
- - Schema management directory structure created
- code:
- focus: |
- Implement schema with appropriate normalization and indexes.
- Plan safe migrations for existing data.
- readChecklist:
- - Create schema with appropriate normalization
- - Add indexes for query patterns
- - Implement safe migration plan
- - Document data model
- confirmChecklist:
- - Schema normalized appropriately
- - Indexes support query patterns
- - Migration plan is safe
- - Backward compatibility maintained
- review:
- focus: |
- Validate schema meets requirements and migrations are safe.
- Ensure data integrity is maintained.
- readChecklist:
- - Test query performance
- - Verify migration safety
- - Check data integrity
- - Review documentation
- confirmChecklist:
- - Query performance validated
- - Migration tested on production-like data
- - Data integrity verified
- - Documentation complete
- deploy:
- focus: |
- Deploy schema changes to production safely.
- Verify data integrity and query performance.
- readChecklist:
- - Run migrations in production
- - Verify data integrity after migration
- - Monitor query performance
- - Confirm rollback plan works
- confirmChecklist:
- - Migration completed successfully
- - Data integrity verified
- - Performance meets requirements
- - Rollback procedure tested
- toolReferences:
- - name: PostgreSQL
- url: https://www.postgresql.org/docs/
- simpleIcon: postgresql
- description: Advanced open source relational database
- useWhen:
- Building applications requiring ACID transactions and complex queries
- - name: Prisma
- url: https://www.prisma.io/docs/
- simpleIcon: prisma
- description:
- Type-safe ORM for Node.js and TypeScript which works well with
- Supabase
- useWhen:
- Building type-safe database access layers in Node.js applications
- instructions: |
- ## Step 1: Create a Prisma User in Supabase
-
- In the Supabase SQL Editor, create a dedicated Prisma user
- with appropriate privileges. Grant usage, create, and all
- permissions on the public schema.
-
- ## Step 2: Initialize Prisma Project
-
- Install Prisma and TypeScript dependencies. Run prisma init
- to scaffold the schema and .env file.
-
- ## Step 3: Configure Supabase Connection
-
- Get your Supavisor Session pooler string from the Supabase
- Dashboard. Use port 5432 for session mode (migrations).
- For serverless, also configure transaction mode (port 6543).
-
- ## Step 4: Define Your Schema
-
- Define models with relationships, indexes, and constraints
- in prisma/schema.prisma. Use uuid for IDs and add timestamps.
-
- ## Step 5: Run Migrations
-
- Run prisma migrate dev to create tables and generate the
- client. Verify the tables in the Supabase Dashboard.
-
- ## Step 6: Use the Prisma Client
-
- Import PrismaClient and use type-safe queries. Always
- disconnect in finally blocks and handle errors.
- installScript: |
- set -e
- npm install prisma @prisma/client typescript ts-node --save-dev
- npx prisma init
- npx prisma --version
- implementationReference: |
- ## Supabase Prisma User Setup
-
- ```sql
- create user "prisma" with password 'your_secure_password' bypassrls createdb;
- grant "prisma" to "postgres";
- grant usage on schema public to prisma;
- grant create on schema public to prisma;
- grant all on all tables in schema public to prisma;
- grant all on all routines in schema public to prisma;
- grant all on all sequences in schema public to prisma;
- ```
-
- ## Connection Strings
-
- ```env
- # Session mode (migrations)
- DATABASE_URL="postgres://prisma.[REF]:[PASS]@[REGION].pooler.supabase.com:5432/postgres"
-
- # Transaction mode (serverless)
- DATABASE_URL="postgres://prisma.[REF]:[PASS]@[REGION].pooler.supabase.com:6543/postgres?pgbouncer=true"
- DIRECT_URL="postgres://prisma.[REF]:[PASS]@[REGION].pooler.supabase.com:5432/postgres"
- ```
-
- ## Schema Example
-
- ```prisma
- generator client {
-   provider = "prisma-client-js"
- }
-
- datasource db {
-   provider  = "postgresql"
-   url       = env("DATABASE_URL")
-   directUrl = env("DIRECT_URL")
- }
-
- model User {
-   id        String   @id @default(uuid())
-   email     String   @unique
-   name      String?
-   posts     Post[]
-   createdAt DateTime @default(now())
- }
-
- model Post {
-   id        String   @id @default(uuid())
-   title     String
-   content   String?
-   published Boolean  @default(false)
-   author    User?    @relation(fields: [authorId], references: [id])
-   authorId  String?
-   createdAt DateTime @default(now())
- }
- ```
-
- ## Prisma Client Usage
-
- ```javascript
- import { PrismaClient } from '@prisma/client';
- const prisma = new PrismaClient();
-
- const user = await prisma.user.create({
-   data: {
-     email: 'alice@example.com',
-     name: 'Alice',
-     posts: { create: { title: 'Hello World' } }
-   },
-   include: { posts: true }
- });
- ```
-
- ## Verification
-
- - Migrations apply without errors
- - Tables visible in Supabase Dashboard
- - Queries return expected data
- - Connection pooling works in production
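-
- ## Production Migrations
-
- The deploy stage above runs migrations in production; prisma migrate
- deploy applies already-committed migrations without generating new
- ones. A CI job sketch, assuming a DATABASE_URL repository secret
- holding the session-mode connection string:
-
- ```yaml
- migrate:
-   runs-on: ubuntu-latest
-   environment: production
-   steps:
-     - uses: actions/checkout@v4
-     - run: npm ci
-     - run: npx prisma migrate deploy
-       env:
-         DATABASE_URL: ${{ secrets.DATABASE_URL }}
- ```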
- - id: devops
- name: DevOps & CI/CD
- human:
- description:
- Building and maintaining deployment pipelines, infrastructure, and
- operational practices
- proficiencyDescriptions:
- awareness:
- You understand CI/CD concepts (build, test, deploy) and can trigger
- and monitor pipelines others have built. You follow deployment
- procedures.
- foundational:
- You configure basic CI/CD pipelines, understand containerization, and
- can troubleshoot common build and deployment failures.
- working:
- You build complete CI/CD pipelines end-to-end, manage infrastructure
- as code, implement monitoring, and design deployment strategies for
- your services.
- practitioner:
- You design deployment strategies for complex multi-service systems
- across teams, optimize pipeline performance and reliability, define
- DevOps practices for your area, and mentor engineers on
- infrastructure.
- expert:
- You shape DevOps culture and practices across the business unit. You
- introduce innovative approaches to deployment and infrastructure,
- solve large-scale DevOps challenges, and are recognized externally.
- agent:
- name: devops-cicd
- description: |
- Guide for building CI/CD pipelines, managing infrastructure as code,
- and implementing deployment best practices.
- useWhen: |
- Setting up pipelines, containerizing applications, or configuring
- infrastructure.
- stages:
- specify:
- focus: |
- Define CI/CD and infrastructure requirements.
- Clarify deployment strategy and operational needs.
- readChecklist:
- - Document deployment frequency requirements
- - Identify rollback and recovery requirements
- - Specify monitoring and alerting needs
- - Define security and compliance constraints
- - Mark ambiguities with [NEEDS CLARIFICATION]
- confirmChecklist:
- - Deployment requirements are documented
- - Recovery requirements are specified
- - Monitoring needs are identified
- - Compliance constraints are clear
- plan:
- focus: |
- Plan CI/CD pipeline architecture and infrastructure requirements.
- Consider deployment strategies and monitoring needs.
- readChecklist:
- - Define pipeline stages (build, test, deploy)
- - Identify infrastructure requirements
- - Plan deployment strategy (rolling, blue-green, canary)
- - Consider monitoring and alerting needs
- - Plan secret management approach
- confirmChecklist:
- - Pipeline architecture is documented
- - Deployment strategy is chosen and justified
- - Infrastructure requirements are identified
- - Monitoring approach is defined
- onboard:
- focus: |
- Set up CI/CD and infrastructure tooling. Install build
- tools, configure container runtime, and verify pipeline
- connectivity.
- readChecklist:
- - Install container runtime (Colima/Docker)
- - Install build tool (Nixpacks) and verify it works
- - Configure GitHub Actions workflow directory structure
- - Set up repository secrets for CI/CD pipeline
- - Configure container registry authentication
- - Verify local container builds succeed
- confirmChecklist:
- - Container runtime running and responsive
- - Build tool creates images from project source
- - GitHub Actions workflow files are valid YAML
- - Repository secrets configured for deployment
- - Container registry push/pull works
- - Local build-test-run cycle completes successfully
- code:
- focus: |
- Implement CI/CD pipelines and infrastructure as code. Follow
- best practices for containerization and deployment automation.
- readChecklist:
- - Configure CI/CD pipeline stages
- - Implement infrastructure as code (Terraform)
- - Create Dockerfiles with security best practices
- - Set up monitoring and alerting
- - Configure secret management
- - Implement deployment automation
- confirmChecklist:
- - Pipeline runs on every commit
- - Tests run before deployment
- - Deployments are automated
- - Infrastructure is version controlled
- - Secrets are managed securely
- - Monitoring is in place
- review:
- focus: |
- Verify pipeline reliability, security, and operational readiness.
- Ensure rollback procedures work and documentation is complete.
- readChecklist:
- - Verify pipeline runs successfully end-to-end
- - Test rollback procedures
- - Review security configurations
- - Validate monitoring and alerts
- - Check documentation completeness
- confirmChecklist:
- - Pipeline is tested and reliable
- - Rollback procedure is documented and tested
- - Alerts are configured and tested
- - Runbooks exist for common issues
- deploy:
- focus: |
- Deploy pipeline and infrastructure changes to production.
- Verify operational readiness.
- readChecklist:
- - Deploy pipeline configuration to production
- - Verify deployment workflows work correctly
- - Confirm monitoring and alerting are operational
- - Run deployment through the new pipeline
- confirmChecklist:
- - Pipeline deployed and operational
- - Workflows tested in production
- - Monitoring confirms healthy operation
- - First deployment through pipeline succeeded
- toolReferences:
- - name: Nixpacks
- url: https://nixpacks.com/docs
- simpleIcon: nixos
- description:
- Auto-detecting build system that creates optimized container images
- useWhen: Building container images without writing Dockerfiles
- - name: GitHub Actions
- url: https://docs.github.com/en/actions
- simpleIcon: githubactions
- description: CI/CD and automation platform for GitHub repositories
- useWhen: Automating build, test, and deployment pipelines
- - name: Colima
- url: https://github.com/abiosoft/colima
- simpleIcon: docker
- description: Container runtime for macOS with Docker-compatible CLI
- useWhen:
- Running containers locally, building images, or containerizing
- applications
- instructions: |
- ## Step 1: Build with Nixpacks
-
- Nixpacks auto-detects your project type and creates optimized
- container images without requiring a Dockerfile. For custom
- builds, create a nixpacks.toml with build phases and a start
- command.
-
- ## Step 2: Create GitHub Actions Workflow
-
- Define a CI/CD workflow with test and build-and-push jobs.
- Run tests on every push and PR. Build and push container
- images to GitHub Container Registry on main merges only.
- Tag images with the commit SHA for traceability.
-
- ## Step 3: Configure Deployment
-
- Add a deploy job that triggers after a successful image push.
- Use rolling deployment with automatic rollback on failure.
- Upgrade to blue-green only when you need instant rollback
- for high-traffic services.
- installScript: |
- set -e
- curl -sSL https://nixpacks.com/install.sh | bash
- command -v nixpacks
- command -v docker || command -v colima
- implementationReference: |
- ## Nixpacks Build
-
- ```bash
- nixpacks build . --name app
- docker run --rm app
- ```
-
- ## Nixpacks Config
-
- ```toml
- # nixpacks.toml
- [phases.build]
- cmds = ["npm run build"]
-
- [start]
- cmd = "node dist/index.js"
- ```
-
- ## GitHub Actions CI/CD
-
- ```yaml
- name: CI/CD
- on:
-   push:
-     branches: [main]
-   pull_request:
-     branches: [main]
-
- env:
-   REGISTRY: ghcr.io/${{ github.repository }}
-
- jobs:
-   test:
-     runs-on: ubuntu-latest
-     steps:
-       - uses: actions/checkout@v4
-       - uses: actions/setup-node@v4
-         with:
-           node-version: 20
-           cache: npm
-       - run: npm ci
-       - run: npm test
-
-   build-and-push:
-     needs: test
-     runs-on: ubuntu-latest
-     permissions:
-       contents: read
-       packages: write
-     steps:
-       - uses: actions/checkout@v4
-       - uses: docker/login-action@v3
-         with:
-           registry: ghcr.io
-           username: ${{ github.actor }}
-           password: ${{ secrets.GITHUB_TOKEN }}
-       - name: Install Nixpacks
-         run: curl -sSL https://nixpacks.com/install.sh | bash
-       - name: Build and push
-         if: github.ref == 'refs/heads/main'
-         run: |
-           nixpacks build . --name app
-           docker tag app ${{ env.REGISTRY }}:${{ github.sha }}
-           docker push ${{ env.REGISTRY }}:${{ github.sha }}
- ```
-
- ## Rolling Deployment
-
- ```yaml
- deploy:
-   needs: build-and-push
-   if: github.ref == 'refs/heads/main'
-   runs-on: ubuntu-latest
-   environment: production
-   steps:
-     - name: Deploy
-       run: |
-         kubectl set image deployment/app app=${{ env.REGISTRY }}:${{ github.sha }}
-         kubectl rollout status deployment/app --timeout=5m || \
-           (kubectl rollout undo deployment/app && exit 1)
- ```
-
- ## Verification
-
- ```bash
- nixpacks build . --name app
- docker run --rm app
- git push origin main
- # Check Actions tab for pipeline status
- ```
-
- Your pipeline is working when:
- - Tests run on every PR
- - Images are pushed only on main merges
- - Deployments complete within 10 minutes
- - Failed deployments automatically roll back
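-
- Automatic rollback only works if the rollout can tell healthy pods
- from broken ones, which depends on a readiness probe. A sketch of a
- deployment excerpt; the container name, image, path, and port are
- illustrative:
-
- ```yaml
- # deployment.yaml (excerpt)
- containers:
-   - name: app
-     image: ghcr.io/your-org/app:latest
-     readinessProbe:
-       httpGet:
-         path: /healthz
-         port: 3000
-       initialDelaySeconds: 5
-       periodSeconds: 10
- ```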
- - id: technical_debt_management
- name: Technical Debt Management
- human:
- description:
- Identifying, prioritizing, and addressing technical debt strategically.
- Accepts technical debt when it enables faster business value; explicitly
- leaves generalization to platform teams when appropriate.
- proficiencyDescriptions:
- awareness:
- You recognize obvious technical debt (duplicated code, missing tests,
- outdated dependencies) and flag issues to the team.
- foundational:
- You document technical debt with context and business impact,
- contribute to prioritization discussions, and address debt in code you
- touch.
- working:
- You prioritize debt systematically based on risk and impact, balance
- debt reduction with feature work, and make pragmatic trade-offs. You
- know when to take on debt intentionally.
- practitioner:
- You create debt reduction strategies across teams in your area,
- influence roadmap decisions to include debt work, and teach engineers
- when to accept debt for speed vs when to invest in quality.
- expert:
- You shape approaches to technical debt across the business unit. You
- create frameworks others adopt, balance large-scale technical health
- with delivery velocity, and are recognized for strategic debt
- management.
- agent:
- name: technical-debt-management
- description: |
- Guide for identifying, prioritizing, and addressing technical debt.
- useWhen: |
- Assessing code quality issues, planning refactoring work, or making
- build vs fix decisions.
- stages:
- specify:
- focus: |
- Define technical debt scope and acceptance criteria.
- Clarify impact and urgency of debt items.
- readChecklist:
- - Document debt items and their business impact
- - Define acceptance criteria for debt resolution
- - Specify constraints (time, risk tolerance)
- - Identify dependencies and affected systems
- - Mark ambiguities with [NEEDS CLARIFICATION]
- confirmChecklist:
- - Debt items are documented
- - Business impact is assessed
- - Acceptance criteria are defined
- - Constraints are clear
- plan:
- focus: |
- Assess technical debt and prioritize based on impact and effort.
- Decide whether to accept, defer, or address debt.
- readChecklist:
- - Identify and document technical debt
- - Assess impact and effort for each item
- - Prioritize using impact/effort matrix
- - Decide accept, defer, or address
- confirmChecklist:
- - Debt is documented with context
- - Impact and effort are assessed
- - Prioritization criteria are clear
- - Decision is documented
- onboard:
- focus: |
- Set up code analysis and quality measurement tools.
- Configure SonarQube, dependency scanning, and establish
- baseline metrics.
- readChecklist:
- - Install code analysis tools (SonarQube scanner, Dependabot)
- - Configure quality gates and analysis rules
- - Run initial scan to establish baseline metrics
- - Set up dependency update automation
- - Configure IDE integration for quality feedback
- confirmChecklist:
- - Code analysis tools installed and configured
- - Baseline quality metrics captured
- - Quality gates defined and enforced
- - Dependency scanning is operational
- - IDE shows quality feedback inline
- code:
- focus: |
- Address debt incrementally while delivering features. Document
- intentional debt clearly.
- readChecklist:
- - Apply the Kid Scout Rule (leave code better)
- - Refactor while adding features
- - Document new intentional debt
- - Track debt in backlog
- confirmChecklist:
- - Debt work is visible in planning
- - New debt is intentional and documented
- - Code quality improved where touched
- - Technical debt backlog updated
- review:
- focus: |
- Validate debt reduction and ensure new debt is intentional
- and documented.
- readChecklist:
- - Review debt reduction progress
- - Verify new debt is documented
- - Check debt backlog currency
- - Assess overall technical health
- confirmChecklist:
- - Debt reduction validated
- - New debt justified and documented
- - Backlog is current
- - Metrics track debt trends
- deploy:
- focus: |
- Deploy debt reduction changes and verify improvements
- in production.
- readChecklist:
- - Deploy refactored code to production
- - Verify no regressions from debt work
- - Monitor system health after changes
- - Update debt backlog and metrics
- confirmChecklist:
- - Debt reduction deployed successfully
- - No regressions detected
- - System health maintained or improved
- - Debt backlog updated
- toolReferences:
- - name: SonarQube
- url: https://docs.sonarsource.com/sonarqube/latest/
- simpleIcon: sonarqubeserver
- description: Code quality and security analysis platform
- useWhen:
- Measuring technical debt, tracking quality metrics, or identifying
- code smells at scale
- - name: Dependabot
- url: https://docs.github.com/en/code-security/dependabot
- simpleIcon: dependabot
- description: Automated dependency updates for GitHub repositories
- useWhen: Automating dependency updates and reducing dependency debt
- instructions: |
- ## Step 1: Identify and Document Debt
-
- Use a consistent format in code comments and issues. Include
- a tracking ID, description, impact estimate, effort size,
- and owner.
-
- ## Step 2: Prioritize Using an Impact/Effort Matrix
-
- Categorize each debt item by impact (high/low) and effort
- (high/low). Do high-impact/low-effort items now. Plan
- high-impact/high-effort items for sprints. Apply the Kid Scout
- Rule to low-impact/low-effort items. Defer or accept
- low-impact/high-effort items.
-
- ## Step 3: Decide Accept, Defer, or Address
-
- Accept debt when time-to-market is critical AND you have a
- payback plan, requirements are uncertain, or code is
- short-lived. Never accept debt in security-sensitive paths,
- core functionality, or high-change-frequency areas.
-
- ## Step 4: Track Debt Metrics
-
- Configure SonarQube quality gates: Technical Debt Ratio
- below 5%, no new code smells on changed files, and a
- maintainability rating of A or better.
-
- ## Step 5: Reduce Debt Incrementally
-
- Apply the Kid Scout Rule: leave code better than you found
- it. Refactor adjacent to feature work. For large legacy
- systems, use the Strangler Fig pattern to route traffic
- incrementally until the legacy system is unused.
- installScript: |
- set -e
- npm install -D sonarqube-scanner
- npx sonar-scanner --version || true
- implementationReference: |
- ## Debt Documentation Format
-
- ```markdown
- TODO(DEBT-123): Extract common validation logic
- Impact: Slows feature work ~2h/week
- Effort: M (1-2 days)
- Owner: @team-platform
- ```
-
- ## Impact/Effort Matrix
-
- | Impact | Effort | Action |
- |--------|--------|------------------|
- | High | Low | Do now |
- | High | High | Plan for sprint |
- | Low | Low | Kid Scout Rule |
- | Low | High | Defer or accept |
-
- ## Accept vs Reject Debt
-
- **Accept when:**
- - Time-to-market is critical AND you have a payback plan
- - Requirements are uncertain (prototype/experiment)
- - Code is short-lived (migration, one-off script)
-
- **Never accept in:**
- - Security-sensitive code paths
- - Core system functionality
- - High-change-frequency areas
-
- ## Strangler Fig Pattern
-
- ```
- Request -> Router -> [New Service | Legacy] -> Response
- ```
- Route traffic incrementally until the legacy system is unused.
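
The router's decision logic is simple enough to sketch. This is a hypothetical helper, not part of any library above, that picks a backend from the list of path prefixes already migrated to the new service:

```javascript
// Decide which backend serves a request during a strangler migration.
// migratedPrefixes lists path prefixes already handled by the new service.
function routeTarget(path, migratedPrefixes) {
  const migrated = migratedPrefixes.some(
    (p) => path === p || path.startsWith(p + '/')
  );
  return migrated ? 'new-service' : 'legacy';
}

// Example: only /orders has been migrated so far.
console.log(routeTarget('/orders/42', ['/orders'])); // 'new-service'
console.log(routeTarget('/billing', ['/orders']));   // 'legacy'
```

Growing `migratedPrefixes` one slice at a time is the incremental routing the pattern describes.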
-
- ## Verification
-
- - Debt backlog is current and prioritized
- - New intentional debt has documented justification
- - SonarQube metrics trend in the right direction
- - Team can articulate why specific debt was accepted
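-
- ## Dependabot Config
-
- Dependabot is listed as a tool above but needs a config file to run.
- A minimal sketch; the ecosystems match the npm and pip usage in this
- document, and the interval is illustrative:
-
- ```yaml
- # .github/dependabot.yml
- version: 2
- updates:
-   - package-ecosystem: npm
-     directory: "/"
-     schedule:
-       interval: weekly
-   - package-ecosystem: pip
-     directory: "/"
-     schedule:
-       interval: weekly
- ```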