@forwardimpact/schema 0.8.3 → 0.9.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,8 +1,9 @@
1
1
  # yaml-language-server: $schema=https://www.forwardimpact.team/schema/json/capability.schema.json
2
2
 
3
+ id: delivery
3
4
  name: Delivery
4
5
  emojiIcon: 🚀
5
- ordinalRank: 1
6
+ ordinalRank: 3
6
7
  description: |
7
8
  Building and shipping solutions that solve real problems.
8
9
  Encompasses full-stack development, data integration, problem discovery,
@@ -41,144 +42,239 @@ managementResponsibilities:
41
42
  Shape delivery culture across the business unit, lead strategic delivery
42
43
  transformations, and represent delivery commitments at executive level
43
44
  skills:
44
- - id: architecture_design
45
- name: Architecture & Design
45
+ - id: data_integration
46
+ name: Data Integration
46
47
  human:
47
48
  description:
48
- Ability to design software systems that are scalable, maintainable, and
49
- fit for purpose. In the AI era, this includes designing systems that
50
- effectively leverage AI capabilities while maintaining human oversight.
49
+ Gaining access to enterprise data, cleaning messy real-world datasets,
50
+ and making information usable for decision-making—often with
51
+ inconsistent formats, missing values, and undocumented schemas. The
52
+ heart of embedded engineering work.
51
53
  levelDescriptions:
52
54
  awareness:
53
- You understand basic architectural concepts (separation of concerns,
54
- modularity, coupling) and can read architecture diagrams. You follow
55
- established patterns with guidance.
55
+ You understand how data flows through systems and can use existing
56
+ pipelines, APIs, and data sources with guidance. You know to ask about
57
+ data quality.
56
58
  foundational:
57
- You explain and apply common patterns (MVC, microservices,
58
- event-driven) to familiar problems. You contribute to design
59
- discussions and identify when existing patterns don't fit.
59
+ You create simple data transformations and handle common formats (CSV,
60
+ JSON, SQL). You identify and report data quality issues and understand
61
+ basic ETL concepts.
60
62
  working:
61
- You design components and services independently for moderate
62
- complexity. You make appropriate trade-off decisions, document design
63
- rationale, and consider AI integration points in your designs.
63
+ You integrate multiple data sources independently, clean messy
64
+ datasets, handle inconsistent formats and missing values, and document
65
+ data lineage. You troubleshoot integration failures.
64
66
  practitioner:
65
- You design complex multi-component systems end-to-end, evaluate
66
- architectural options for large initiatives across teams, guide
67
- technical decisions for your area, and mentor engineers on
68
- architecture. You balance elegance with delivery needs.
67
+ You navigate complex enterprise data landscapes across teams, build
68
+ relationships to gain data access, handle undocumented schemas through
69
+ investigation, and build robust, maintainable integration solutions.
70
+ You mentor engineers in your area on data integration challenges.
69
71
  expert:
70
- You define architecture standards and patterns across the business
71
- unit. You innovate on approaches to large-scale challenges, shape
72
- AI-integrated system design, and are recognized externally as an
73
- architecture authority.
72
+ You define data integration patterns and best practices across the
73
+ business unit. You architect large-scale data flows, solve the most
74
+ complex integration challenges, and are the authority on enterprise
75
+ data integration.
74
76
  agent:
75
- name: architecture-design
76
- description:
77
- Guide for designing software systems and making architectural decisions.
77
+ name: data-integration
78
+ description: |
79
+ Guide for integrating data from multiple sources, cleaning messy
80
+ datasets, and handling data quality issues.
78
81
  useWhen: |
79
- Asked to design a system, evaluate architecture options, or make
80
- structural decisions about code organization.
82
+ Working with enterprise data, ETL pipelines, or data transformation
83
+ tasks.
81
84
  stages:
82
85
  specify:
83
86
  focus: |
84
- Define system requirements and constraints before design.
85
- Clarify functional and non-functional requirements.
87
+ Define data integration requirements and acceptance criteria.
88
+ Clarify data sources, formats, and quality expectations.
86
89
  readChecklist:
87
- - Document functional requirements and use cases
88
- - Identify non-functional requirements (scale, latency,
89
- availability)
90
- - Document system constraints and integration points
91
- - Identify stakeholders and their concerns
90
+ - Identify source and target data systems
91
+ - Document data format and schema requirements
92
+ - Define data quality acceptance criteria
93
+ - Clarify data freshness and latency requirements
92
94
  - Mark ambiguities with [NEEDS CLARIFICATION]
93
95
  confirmChecklist:
94
- - Functional requirements are documented
95
- - Non-functional requirements are specified
96
- - Constraints are identified
97
- - Stakeholder concerns are understood
96
+ - Data sources are identified and accessible
97
+ - Data format requirements are documented
98
+ - Quality criteria are defined
99
+ - Latency requirements are clear
98
100
  plan:
99
- focus: Understanding requirements and designing solutions
101
+ focus: |
102
+ Plan data integration approach. Identify sources, assess quality,
103
+ and plan transformation logic.
100
104
  readChecklist:
101
- - Gather context about existing systems and constraints
102
- - Clarify non-functional requirements (scale, latency, availability)
103
- - Identify key decisions that are hard to change later
104
- - Evaluate trade-offs between architectural options
105
- - Document approach with rationale
105
+ - Identify data sources and access requirements
106
+ - Assess data quality and completeness
107
+ - Plan transformation logic and validation
108
+ - Document data lineage approach
106
109
  confirmChecklist:
107
- - Requirements are clearly understood
108
- - Key decisions are documented with rationale
109
- - Trade-offs are explicit
110
- - Dependencies are identified
110
+ - Data sources are identified
111
+ - Data formats are understood
112
+ - Data quality requirements are defined
113
+ - Transformation logic is planned
111
114
  onboard:
112
115
  focus: |
113
- Set up the development environment for the planned
114
- architecture. Install frameworks, configure project
115
- structure, and verify tooling.
116
+ Set up the data integration environment. Install data
117
+ processing tools, configure data source access, and verify
118
+ connectivity to all required systems.
116
119
  readChecklist:
117
- - Install planned frameworks and dependencies
118
- - Create project structure matching architecture design
119
- - Configure Mermaid rendering for architecture docs
120
- - Set up ADR directory for decision records
121
- - Configure linter and formatter for the project
120
+ - Install data tools (DuckDB, Polars, Great Expectations)
121
+ - Configure database connections and API credentials
122
+ - Verify access to all identified data sources
123
+ - Set up virtual environment and pin dependency versions
124
+ - Create .env file with connection strings and credentials
122
125
  confirmChecklist:
123
- - Project structure reflects architectural boundaries
124
- - All planned frameworks installed and importable
125
- - Documentation tooling renders diagrams correctly
126
- - Linter and formatter configured and passing
127
- - Build system compiles without errors
126
+ - All data processing libraries installed and importable
127
+ - Data source connections verified and working
128
+ - Credentials stored securely in .env (not committed to git)
129
+ - Sample queries run successfully against each data source
130
+ - Virtual environment is reproducible (requirements.txt or
131
+ pyproject.toml)
128
132
  code:
129
- focus: Implementing architecture faithfully while adapting to reality
133
+ focus: |
134
+ Implement data transformations with robust quality checks
135
+ and error handling for messy real-world data.
130
136
  readChecklist:
131
- - Verify implementation aligns with design decisions
132
- - Implement interfaces and boundaries before internals
133
- - Document any deviations from design with rationale
134
- - Keep architecture documentation in sync with implementation
137
+ - Implement data extraction and loading
138
+ - Handle data quality issues (nulls, formats, duplicates)
139
+ - Create transformation logic
140
+ - Add validation and error handling
141
+ - Document data lineage
135
142
  confirmChecklist:
136
- - Implementation matches documented design
137
- - Deviations documented with rationale
138
- - Failure modes are considered and handled
139
- - Security implications are reviewed
143
+ - Data transformations produce expected output
144
+ - Basic validation exists for input data
145
+ - Data formats are handled correctly
146
+ - Error handling exists for malformed data
147
+ - Pipeline is idempotent
140
148
  review:
141
- focus: Verifying architecture implementation and documentation
149
+ focus: |
150
+ Validate data quality, transformation correctness, and
151
+ operational readiness.
142
152
  readChecklist:
143
- - Compare implementation to design documentation
144
- - Verify all decisions were followed or documented
145
- - Assess maintainability and extensibility
146
- - Ensure architecture enables future changes
153
+ - Verify data quality checks
154
+ - Test with edge cases and malformed data
155
+ - Review error handling coverage
156
+ - Validate documentation completeness
147
157
  confirmChecklist:
148
- - Design docs reflect actual implementation
149
- - Architecture decisions validated in practice
150
- - Scalability requirements addressed
158
+ - Data quality checks are implemented
159
+ - Edge cases are handled
160
+ - Data lineage is documented
161
+ - Failures are logged and alertable
151
162
  deploy:
152
163
  focus: |
153
- Deploy architecture and verify it performs as designed
154
- in production environment.
164
+ Deploy data pipeline to production and verify data flow.
165
+ Monitor for data quality and latency issues.
155
166
  readChecklist:
156
- - Deploy system components to production
157
- - Verify architectural boundaries work under load
158
- - Monitor performance against requirements
159
- - Document any operational learnings
167
+ - Deploy pipeline configuration
168
+ - Verify data flows end-to-end in production
169
+ - Monitor data quality metrics
170
+ - Confirm alerting is operational
160
171
  confirmChecklist:
161
- - System deployed successfully
162
- - Performance meets requirements
163
- - Monitoring confirms design assumptions
164
- - Operational procedures are documented
172
+ - Pipeline deployed successfully
173
+ - Data flowing in production
174
+ - Quality metrics within thresholds
175
+ - Alerting verified working
176
+ toolReferences:
177
+ - name: DuckDB
178
+ url: https://duckdb.org/docs/
179
+ simpleIcon: duckdb
180
+ description: In-process analytical database
181
+ useWhen: Querying CSV/Parquet files with SQL or quick data exploration
182
+ - name: Polars
183
+ url: https://docs.pola.rs/
184
+ simpleIcon: polars
185
+ description: Fast DataFrame library with lazy evaluation
186
+ useWhen: Transforming and cleaning large datasets programmatically
187
+ - name: Great Expectations
188
+ url: https://docs.greatexpectations.io/
189
+ simpleIcon: python
190
+ description: Data validation and profiling framework
191
+ useWhen: Validating data quality and creating data documentation
192
+ instructions: |
193
+ ## Step 1: Explore the Source Data
194
+
195
+ Use DuckDB to quickly inspect files without loading into memory.
196
+ Check schema, data types, row counts, and null distributions.
197
+
198
+ ## Step 2: Transform with Polars
199
+
200
+ Use lazy evaluation for large datasets: filter, fill nulls,
201
+ parse dates, and aggregate. Collect only when the query plan
202
+ is complete. Write cleaned data to Parquet.
203
+
204
+ ## Step 3: Validate Data Quality
205
+
206
+ Define expectations with Great Expectations: not-null checks,
207
+ uniqueness constraints, value ranges. Run validation and
208
+ check results.
209
+
210
+ ## Step 4: Export to Target Format
211
+
212
+ Use DuckDB COPY or Polars write methods to export transformed
213
+ data to the target format and location.
214
+ installScript: |
215
+ set -e
216
+ pip install duckdb polars great-expectations
217
+ python -c "import duckdb, polars, great_expectations"
165
218
  implementationReference: |
166
- ## Common Patterns
167
-
168
- ### Service Architecture
169
- - **Microservices**: Independent deployment, clear boundaries
170
- - **Monolith**: Simpler deployment, easier refactoring
171
- - **Modular monolith**: Boundaries within single deployment
172
-
173
- ### Data Patterns
174
- - **Event sourcing**: Full audit trail, complex queries
175
- - **CQRS**: Separate read and write models
176
- - **Repository pattern**: Abstract data access
177
-
178
- ### Communication Patterns
179
- - **REST**: Synchronous, request-response
180
- - **Event-driven**: Asynchronous, loose coupling
181
- - **gRPC**: Efficient, strongly typed
219
+ ## SQL Exploration
220
+
221
+ ```sql
222
+ SELECT * FROM read_csv('data.csv') LIMIT 10;
223
+ DESCRIBE SELECT * FROM read_csv('data.csv');
224
+ SELECT COUNT(*), COUNT(id), COUNT(email) FROM read_csv('data.csv');
225
+ ```
226
+
227
+ ## Polars Transformation
228
+
229
+ ```python
230
+ import polars as pl
231
+
232
+ df = (
233
+ pl.scan_csv("source_data.csv")
234
+ .filter(pl.col("status") == "active")
235
+ .with_columns(
236
+ pl.col("value").fill_null(0),
237
+ pl.col("date").str.to_date("%Y-%m-%d")
238
+ )
239
+ .group_by("category")
240
+ .agg(pl.col("value").sum())
241
+ .collect()
242
+ )
243
+ df.write_parquet("cleaned_data.parquet")
244
+ ```
245
+
246
+ ## Data Quality Validation
247
+
248
+ ```python
249
+ import great_expectations as gx
250
+
251
+ context = gx.get_context()
252
+ validator = context.sources.pandas_default.read_csv("cleaned_data.csv")
253
+ validator.expect_column_values_to_not_be_null("id")
254
+ validator.expect_column_values_to_be_unique("id")
255
+ validator.expect_column_values_to_be_between("age", 0, 120)
256
+ results = validator.validate()
+ assert results.success, "Data quality checks failed"
257
+ ```
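+
+ ## Export to Target Format
+
+ A minimal sketch of the Step 4 export using DuckDB COPY. The file
+ names (`cleaned_data.parquet`, `output.parquet`, `output.csv`) are
+ illustrative and assume the Polars step above wrote
+ `cleaned_data.parquet`.
+
+ ```sql
+ COPY (SELECT * FROM read_parquet('cleaned_data.parquet'))
+ TO 'output.parquet' (FORMAT PARQUET);
+ COPY (SELECT * FROM read_parquet('cleaned_data.parquet'))
+ TO 'output.csv' (HEADER, DELIMITER ',');
+ ```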
258
+
259
+ ## Verification
260
+
261
+ Your pipeline is working when:
262
+ - Source data loads without errors
263
+ - Transformation produces expected row counts
264
+ - Data quality checks pass
265
+ - Output file is readable and contains expected data
266
+
267
+ ```python
268
+ result = pl.read_parquet("output.parquet")
269
+ assert len(result) > 0, "Output should have rows"
270
+ ```
271
+
272
+ ## Common Pitfalls
273
+
274
+ - **Data leakage**: Using future data in training sets
275
+ - **Silent nulls**: Empty strings vs NULL vs placeholder values (see the sketch below)
276
+ - **Schema drift**: Columns change without warning
277
+ - **Encoding issues**: UTF-8 vs Latin-1 in CSV files
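+
+ A small sketch for the silent-null pitfall, assuming Polars and a
+ hypothetical `email` column; adapt the column names and placeholder
+ values to your dataset.
+
+ ```python
+ import polars as pl
+
+ # Normalize empty strings and placeholder values to real nulls
+ df = pl.read_csv("source_data.csv").with_columns(
+ pl.when(pl.col("email").is_in(["", "N/A", "null"]))
+ .then(pl.lit(None))
+ .otherwise(pl.col("email"))
+ .alias("email")
+ )
+ ```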
182
278
  - id: full_stack_development
183
279
  name: Full-Stack Development
184
280
  human:
@@ -213,11 +309,12 @@ skills:
213
309
  polymathic engineering.
214
310
  agent:
215
311
  name: full-stack-development
216
- description:
217
- Guide for building complete solutions across the full technology stack.
312
+ description: |
313
+ Guide for building complete solutions across the full technology
314
+ stack.
218
315
  useWhen: |
219
- Implementing features spanning frontend, backend, database, and
220
- infrastructure layers.
316
+ Asked to implement features spanning frontend, backend, database,
317
+ and infrastructure layers.
221
318
  stages:
222
319
  specify:
223
320
  focus: |
@@ -235,60 +332,68 @@ skills:
235
332
  - Integration points are identified
236
333
  - Non-functional requirements are clear
237
334
  plan:
238
- focus: Designing the complete solution across layers
335
+ focus: |
336
+ Design the full-stack solution architecture. Define API
337
+ contracts and plan layer interactions.
239
338
  readChecklist:
240
- - Define the API contract between frontend and backend
241
- - Design database schema to support the feature
339
+ - Define the API contract first
340
+ - Plan frontend and backend responsibilities
341
+ - Design database schema
242
342
  - Plan infrastructure requirements
243
- - Identify cross-layer dependencies
244
343
  confirmChecklist:
245
344
  - API contract is defined
246
- - Database schema is designed
247
- - Infrastructure needs identified
248
- - Layer boundaries are clear
345
+ - Layer responsibilities are clear
346
+ - Database schema is planned
347
+ - Infrastructure approach is decided
249
348
  onboard:
250
349
  focus: |
251
- Set up the full-stack development environment.
252
- Install dependencies for each layer and verify
253
- connectivity between layers.
350
+ Set up the full-stack development environment. Install
351
+ frameworks, configure services, set up database, and verify
352
+ the development server runs.
254
353
  readChecklist:
255
- - Install frontend dependencies (e.g., npm install)
256
- - Install backend dependencies and runtime
257
- - Start local database and verify connection
258
- - Configure environment variables (.env files)
259
- - Verify API layer can connect to database
260
- - Set up linter and formatter for the project
354
+ - Install project dependencies (npm install, pip install)
355
+ - Configure environment variables in .env.local or .env
356
+ - Start local database and apply schema/migrations
357
+ - Configure linter, formatter, and pre-commit hooks
358
+ - Set up GitHub tokens for API access if needed
359
+ - Verify development server starts without errors
261
360
  confirmChecklist:
262
- - Frontend dev server starts without errors
263
- - Backend API responds to health checks
264
- - Database is running and seeded with test data
265
- - Environment variables configured for all layers
266
- - Linter and formatter configured and passing
267
- - Build commands succeed for all layers
361
+ - All dependencies installed and versions locked
362
+ - Environment variables configured for local development
363
+ - Database running locally with schema applied
364
+ - Linter and formatter pass on existing code
365
+ - Development server starts and responds to requests
366
+ - CI pipeline configuration is valid
268
367
  code:
269
- focus: Building vertically across all layers
368
+ focus: |
369
+ Build vertically—complete one feature end-to-end before
370
+ starting another. This validates assumptions early.
270
371
  readChecklist:
271
- - Implement backend API endpoints
272
- - Build frontend components and integrate with API
273
- - Set up database migrations and queries
274
- - Configure infrastructure as code
275
- - Test across layer boundaries
372
+ - Implement API endpoints
373
+ - Build frontend integration
374
+ - Create database schema and queries
375
+ - Configure infrastructure as needed
376
+ - Test across layers
276
377
  confirmChecklist:
277
378
  - Frontend connects to backend correctly
278
379
  - Database schema supports the feature
279
380
  - Error handling spans all layers
280
381
  - Feature works end-to-end
382
+ - Deployment is automated
281
383
  review:
282
- focus: Verifying integration across the stack
384
+ focus: |
385
+ Verify integration across layers and ensure deployment
386
+ readiness.
283
387
  readChecklist:
284
- - Test complete user flows end-to-end
285
- - Verify error handling at each layer
286
- - Check deployment pipeline works
287
- - Validate monitoring and logging
388
+ - Test integration across all layers
389
+ - Verify error handling end-to-end
390
+ - Check deployment configuration
391
+ - Review documentation
288
392
  confirmChecklist:
289
- - End-to-end tests pass
290
- - Deployment is automated
291
- - Cross-layer errors are handled gracefully
393
+ - Integration tests pass
394
+ - Deployment verified
395
+ - Documentation is complete
396
+ - Feature is production-ready
292
397
  deploy:
293
398
  focus: |
294
399
  Deploy full-stack feature to production and verify end-to-end
@@ -305,39 +410,583 @@ skills:
305
410
  - No errors in monitoring
306
411
  - Performance meets requirements
307
412
  toolReferences:
308
- - name: Terraform
309
- url: https://developer.hashicorp.com/terraform/docs
310
- simpleIcon: terraform
311
- description: Infrastructure as code tool
312
- useWhen: Provisioning and managing cloud infrastructure as code
313
- - name: CloudFormation
314
- url: https://docs.aws.amazon.com/cloudformation/
315
- description: AWS infrastructure as code service
316
- useWhen: Managing cloud infrastructure using declarative templates
413
+ - name: Supabase
414
+ url: https://supabase.com/docs
415
+ simpleIcon: supabase
416
+ description: Open source Firebase alternative with PostgreSQL
417
+ useWhen:
418
+ Building applications with PostgreSQL, auth, and real-time features
419
+ - name: Next.js
420
+ url: https://nextjs.org/docs
421
+ simpleIcon: nextdotjs
422
+ description: React framework for full-stack web applications
423
+ useWhen:
424
+ Building React applications with server-side rendering or API routes
425
+ - name: GitHub Actions
426
+ url: https://docs.github.com/en/actions
427
+ simpleIcon: githubactions
428
+ description: CI/CD and automation platform
429
+ useWhen: Automating builds, tests, and deployments
430
+ - name: Nixpacks
431
+ url: https://nixpacks.com/docs
432
+ simpleIcon: nixos
433
+ description: Build tool that auto-detects and builds applications
434
+ useWhen: Auto-building and deploying applications to containers
317
435
  - name: Colima
318
436
  url: https://github.com/abiosoft/colima
319
437
  simpleIcon: docker
320
- description: Container runtime for macOS with Docker-compatible CLI
438
+ description:
439
+ Lightweight container runtime for macOS with Docker-compatible CLI
321
440
  useWhen:
322
- Running containers locally, building images, or containerizing
323
- applications
441
+ Running containers locally for development, building images, or
442
+ testing containerized apps
443
+ instructions: |
444
+ ## Step 1: Configure Environment
445
+
446
+ Get connection details from `supabase status`. Create `.env.local`
447
+ with Supabase URL and anon key. Create the Supabase client module.
448
+
449
+ ## Step 2: Create Database Schema
450
+
451
+ Create a migration with `supabase migration new`, define the
452
+ SQL schema with RLS enabled, and apply with `supabase db push`.
453
+
454
+ ## Step 3: Build API Routes
455
+
456
+ Create Next.js API routes for GET and POST operations using
457
+ the Supabase client.
458
+
459
+ ## Step 4: Build Frontend
460
+
461
+ Create a React component that fetches from the API and renders
462
+ data. Start with a simple list display.
463
+
464
+ ## Step 5: Deploy
465
+
466
+ Use Nixpacks to auto-detect and build the image. Run it
467
+ locally with Colima's Docker-compatible runtime to verify
468
+ before deploying to production.
469
+ installScript: |
470
+ set -e
471
+ brew install colima
472
+ colima start
473
+ brew install supabase/tap/supabase || npm install -g supabase
474
+ npx create-next-app@latest my-app --typescript
475
+ cd my-app
476
+ supabase init
477
+ supabase start
478
+ npm install @supabase/supabase-js
479
+ colima status
324
480
  implementationReference: |
325
- ## Technology Stack
481
+ ## Supabase Client Setup
482
+
483
+ ```typescript
484
+ // lib/supabase.ts
485
+ import { createClient } from '@supabase/supabase-js'
486
+
487
+ export const supabase = createClient(
488
+ process.env.NEXT_PUBLIC_SUPABASE_URL!,
489
+ process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
490
+ )
491
+ ```
492
+
493
+ ## Database Schema
494
+
495
+ ```sql
496
+ CREATE TABLE items (
497
+ id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
498
+ name TEXT NOT NULL,
499
+ description TEXT,
500
+ created_at TIMESTAMPTZ DEFAULT NOW()
501
+ );
502
+ ALTER TABLE items ENABLE ROW LEVEL SECURITY;
503
+ ```
504
+
505
+ ## API Route
506
+
507
+ ```typescript
508
+ // app/api/items/route.ts
509
+ import { supabase } from '@/lib/supabase'
510
+ import { NextResponse } from 'next/server'
511
+
512
+ export async function GET() {
513
+ const { data, error } = await supabase.from('items').select('*')
514
+ if (error) return NextResponse.json({ error }, { status: 500 })
515
+ return NextResponse.json(data)
516
+ }
517
+ ```
518
+
519
+ ## Frontend Component
520
+
521
+ ```typescript
522
+ // app/page.tsx
523
+ 'use client'
524
+ import { useEffect, useState } from 'react'
525
+
526
+ export default function Home() {
527
+ const [items, setItems] = useState([])
528
+ useEffect(() => {
529
+ fetch('/api/items').then(r => r.json()).then(setItems)
530
+ }, [])
531
+ return (
532
+ <main>
533
+ <h1>Items</h1>
534
+ <ul>{items.map((item: any) => <li key={item.id}>{item.name}</li>)}</ul>
535
+ </main>
536
+ )
537
+ }
538
+ ```
539
+
540
+ ## Verification
541
+
542
+ Your full-stack app is working when:
543
+ - `npm run dev` starts without errors
544
+ - Frontend loads at http://localhost:3000
545
+ - API responds at http://localhost:3000/api/items
546
+ - Data persists in database (check Supabase Studio at http://localhost:54323)
547
+
548
+ ## Common Pitfalls
549
+
550
+ - **Missing env vars**: Supabase client fails silently
551
+ - **RLS without policies**: Queries return empty results
552
+ - **Type mismatch**: Generate types with `supabase gen types typescript`
553
+ - **Migration order**: Migrations apply alphabetically by filename
554
+
555
+ ## Local Container Testing with Colima
556
+
557
+ ```bash
558
+ # Start Colima (lightweight Docker-compatible runtime)
559
+ colima start
560
+
561
+ # Build with Nixpacks and run locally
562
+ nixpacks build . --name my-app
563
+ docker run --rm -p 3000:3000 --env-file .env.local my-app
564
+
565
+ # Verify app responds
566
+ curl http://localhost:3000
567
+ ```
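+
+ ## CI with GitHub Actions
+
+ A hypothetical minimal workflow for the "CI pipeline configuration is
+ valid" check; assumes a standard create-next-app project where
+ `npm run lint` is defined.
+
+ ```yaml
+ # .github/workflows/ci.yml
+ name: CI
+ on: [push, pull_request]
+ jobs:
+   lint:
+     runs-on: ubuntu-latest
+     steps:
+       - uses: actions/checkout@v4
+       - uses: actions/setup-node@v4
+         with:
+           node-version: 20
+       - run: npm ci
+       - run: npm run lint
+ ```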
568
+ - id: problem_discovery
569
+ name: Problem Discovery
570
+ human:
571
+ description:
572
+ Navigating undefined problem spaces to uncover real requirements through
573
+ observation and immersion. Where most engineers expect specifications,
574
+ FDEs embrace ambiguity—starting with open questions like "How can we
575
+ accelerate patient recruitment?" rather than detailed requirements
576
+ documents.
577
+ levelDescriptions:
578
+ awareness:
579
+ You recognize that initial requirements are often incomplete. You ask
580
+ clarifying questions when you encounter gaps and don't make
581
+ assumptions.
582
+ foundational:
583
+ You actively seek context beyond initial requirements, interview
584
+ stakeholders to understand "why" behind requests, and document
585
+ discovered constraints and assumptions.
586
+ working:
587
+ You navigate ambiguous problem spaces independently. You discover
588
+ requirements through observation and user shadowing, reframe problems
589
+ to find higher-value solutions, and distinguish symptoms from root
590
+ causes.
591
+ practitioner:
592
+ You seek out undefined problems rather than avoiding them. You embed
593
+ with users to discover latent needs, coach engineers in your area on
594
+ problem discovery techniques, and turn ambiguity into clear problem
595
+ statements.
596
+ expert:
597
+ You shape approaches to problem discovery across the business unit.
598
+ You are recognized for transforming ambiguous situations into clear
599
+ opportunities, influence how teams engage with business problems, and
600
+ are the go-to person for the most undefined challenges.
601
+ agent:
602
+ name: problem-discovery
603
+ description: |
604
+ Guide for navigating undefined problem spaces and uncovering real
605
+ requirements.
606
+ useWhen: |
607
+ Facing ambiguous requests, exploring user needs, or translating vague
608
+ asks into clear problem statements.
609
+ stages:
610
+ specify:
611
+ focus: |
612
+ Explore the problem space and document what is known.
613
+ Surface ambiguities and unknowns before attempting solutions.
614
+ readChecklist:
615
+ - Document the initial problem statement as understood
616
+ - List stakeholders and their perspectives
617
+ - Identify what is known vs unknown
618
+ - Document assumptions that need validation
619
+ - Mark all ambiguities with [NEEDS CLARIFICATION]
620
+ confirmChecklist:
621
+ - Initial problem statement is documented
622
+ - Stakeholders are identified
623
+ - Known vs unknown is explicit
624
+ - Assumptions are listed for validation
625
+ plan:
626
+ focus: |
627
+ Embrace ambiguity and explore the problem space. Understand
628
+ context deeply before proposing solutions.
629
+ readChecklist:
630
+ - Ask open-ended questions about goals and context
631
+ - Identify stakeholders and their needs
632
+ - Discover constraints and prior attempts
633
+ - Distinguish symptoms from root causes
634
+ - Write clear problem statement
635
+ confirmChecklist:
636
+ - Understand who has the problem
637
+ - Success criteria are clear
638
+ - Root cause identified, not just symptoms
639
+ - Constraints and assumptions documented
640
+ - Problem statement is validated
641
+ onboard:
642
+ focus: |
643
+ Set up the environment for solution implementation.
644
+ Install required tools, configure access to relevant
645
+ systems, and prepare workspace for development.
646
+ readChecklist:
647
+ - Install project dependencies from plan requirements
648
+ - Configure access to relevant data sources and APIs
649
+ - Set up environment variables and credentials
650
+ - Verify access to stakeholder communication channels
651
+ - Create workspace structure for documentation and code
652
+ confirmChecklist:
653
+ - All planned tools and dependencies are installed
654
+ - API keys and credentials are configured securely
655
+ - Workspace structure supports the planned approach
656
+ - Access to all required systems is verified
657
+ - Development environment matches plan requirements
658
+ code:
659
+ focus: |
660
+ Implement solution while staying connected to the original
661
+ problem. Validate assumptions as you build.
662
+ readChecklist:
663
+ - Build incrementally to validate understanding
664
+ - Check in with stakeholders frequently
665
+ - Adjust as new information emerges
666
+ - Document discovered requirements
667
+ confirmChecklist:
668
+ - Solution addresses the validated problem
669
+ - Stakeholder feedback is incorporated
670
+ - Discovered requirements are documented
671
+ - Scope boundaries are maintained
672
+ review:
673
+ focus: |
674
+ Verify solution addresses the real problem and stakeholders
675
+ agree on success.
676
+ readChecklist:
677
+ - Validate with original stakeholders
678
+ - Confirm problem is addressed
679
+ - Document learnings for future reference
680
+ confirmChecklist:
681
+ - Stakeholders confirm problem is solved
682
+ - Success criteria are met
683
+ - Learnings are documented
684
+ deploy:
685
+ focus: |
686
+ Release solution and verify it addresses the real problem
687
+ in production context.
688
+ readChecklist:
689
+ - Deploy solution to production
690
+ - Gather stakeholder feedback on live solution
691
+ - Monitor for unexpected usage patterns
692
+ - Document discovered requirements for future iterations
693
+ confirmChecklist:
694
+ - Solution is deployed
695
+ - Stakeholders have validated in production
696
+ - Usage patterns match expectations
697
+ - Learnings are captured
698
+ instructions: |
699
+ ## Discovery Process
700
+
701
+ ### 1. Embrace Ambiguity
702
+ - Don't rush to solutions
703
+ - Resist the urge to fill gaps with assumptions
704
+ - Ask open-ended questions
705
+ - Seek to understand context deeply
706
+
707
+ ### 2. Understand the Context
708
+ - Who are the stakeholders?
709
+ - What triggered this request?
710
+ - What has been tried before?
711
+ - What constraints exist?
712
+ - What does success look like?
713
+
714
+ ### 3. Find the Real Problem
715
+ - Ask "why" repeatedly (5 Whys technique)
716
+ - Distinguish wants from needs
717
+ - Identify root causes vs symptoms
718
+ - Challenge initial framing
719
+
720
+ ### 4. Validate Understanding
721
+ - Restate the problem in your own words
722
+ - Confirm with stakeholders
723
+ - Check for hidden assumptions
724
+ - Identify what's still unknown
725
+ implementationReference: |
726
+ ## Key Questions
727
+
728
+ ### Understanding Goals
729
+ - What outcome are you trying to achieve?
730
+ - How will you know if this succeeds?
731
+ - What happens if we do nothing?
732
+ - What's the deadline and why?
733
+
734
+ ### Understanding Context
735
+ - Who uses this and how?
736
+ - What's the current workaround?
737
+ - What constraints must we work within?
738
+ - What has been tried before?
739
+
740
+ ### Understanding Scope
741
+ - What's in scope vs out of scope?
742
+ - What's the minimum viable solution?
743
+ - What could we cut if needed?
744
+ - What can't we compromise on?
745
+
746
+ ## Problem Statement Template
747
+
748
+ A good problem statement answers:
749
+ - **Who** has this problem?
750
+ - **What** is the problem they face?
751
+ - **Why** does it matter?
752
+ - **When/Where** does it occur?
753
+ - **How** is it currently handled?
754
+
755
+ Format: "[User type] needs [capability] because [reason], but currently [obstacle]."
756
+
757
+ ## Common Pitfalls
758
+
759
+ - **Solutioning too early**: Jumping to "how" before understanding "what"
760
+ - **Taking requests literally**: Building what was asked, not what's needed
761
+ - **Assuming completeness**: Believing initial requirements are complete
762
+ - **Ignoring context**: Missing business or user context
763
+ - **Single perspective**: Only talking to one stakeholder
764
+ - id: rapid_prototyping
765
+ name: Rapid Prototyping & Validation
766
+ human:
767
+ description:
768
+ Building working solutions quickly to validate ideas and build trust
769
+ through delivery. Credibility comes from showing real software in days,
770
+ not months—demonstrating value before polishing details. "Working
771
+ solutions delivered in days" is the FDE standard.
772
+ levelDescriptions:
773
+ awareness:
774
+ You understand the value of prototypes for learning quickly. You can
775
+ create simple demos and mockups with guidance.
776
+ foundational:
777
+ You build functional prototypes to validate ideas, prioritize core
778
+ functionality over polish, and iterate based on user feedback. You
779
+ know the difference between prototype and production code.
780
+ working:
781
+ You deliver working solutions rapidly (days not weeks). You use
782
+ prototypes to build stakeholder trust, know when to stop prototyping
783
+ and start productionizing, and balance speed with appropriate quality.
784
+ practitioner:
785
+ You lead rapid delivery initiatives across teams in your area, coach
786
+ on prototype-first approaches, establish trust through consistent fast
787
+ delivery, and define clear criteria for prototype-to-production
788
+ transitions.
789
+ expert:
790
+ You shape culture around rapid validation and iterative delivery
791
+ across the business unit. You are recognized for transformative fast
792
+ delivery, define standards for prototype-to-production, and exemplify
793
+ the "deliver in days" mindset.
794
+ agent:
795
+ name: rapid-prototyping
796
+ description: |
797
+ Guide for building working prototypes quickly to validate ideas and
798
+ demonstrate feasibility.
799
+ useWhen: |
800
+ Asked to build a quick demo, proof of concept, MVP, or prototype
801
+ something rapidly.
802
+ stages:
803
+ specify:
804
+ focus: |
805
+ Define what the prototype must demonstrate and success criteria.
806
+ Scope ruthlessly—prototypes are for learning, not production.
807
+ readChecklist:
808
+ - Identify the key question or hypothesis to validate
809
+ - Document minimum acceptable demonstration
810
+ - Define what success looks like for this prototype
811
+ - Explicitly mark what is out of scope
812
+ - Mark any ambiguities with [NEEDS CLARIFICATION]
813
+ confirmChecklist:
814
+ - Key question to answer is clear
815
+ - Minimum viable demonstration is defined
816
+ - Success criteria are explicit
817
+ - Out of scope items are documented
818
+ plan:
819
+ focus: |
820
+ Define what the prototype needs to demonstrate and set
821
+ success criteria. Scope ruthlessly for speed.
822
+ readChecklist:
823
+ - Define the key question to answer
824
+ - Scope to minimum viable demonstration
825
+ - Identify what can be hardcoded or skipped
826
+ - Set time box for delivery
827
+ confirmChecklist:
828
+ - Success criteria are defined
829
+ - Scope is minimal and focused
830
+ - Time box is agreed
831
+ - It's clear this is a prototype
832
+ onboard:
833
+ focus: |
834
+ Set up the prototyping environment as fast as possible.
835
+ Use scaffolding tools, install minimal dependencies,
836
+ and get to a running state quickly.
837
+ readChecklist:
838
+ - Scaffold project using template or CLI tool
839
+ - Install only essential dependencies
840
+ - Configure minimal environment variables
841
+ - Start development server and verify it runs
842
+ - Skip non-essential tooling (linters, CI) for speed
843
+ confirmChecklist:
844
+ - Project scaffolded and running locally
845
+ - Core dependencies installed
846
+ - Development server responds to requests
847
+ - Ready to start building visible output immediately
848
+ code:
849
+ focus: |
850
+ Build the simplest thing that demonstrates the concept.
851
+ Prioritize visible progress over backend elegance.
852
+ readChecklist:
853
+ - Start with visible UI/output
854
+ - Hardcode values that would normally be configurable
855
+ - Skip edge cases that won't appear in demo
856
+ - Show progress frequently
857
+ - Document shortcuts taken
858
+ confirmChecklist:
859
+ - Core concept is demonstrable
860
+ - Happy path works end-to-end
861
+ - Known limitations are documented
862
+ - Stakeholders can interact with it
863
+ review:
864
+ focus: |
865
+ Validate prototype answers the original question. Decide
866
+ whether to iterate, productionize, or abandon.
867
+ readChecklist:
868
+ - Demo to stakeholders
869
+ - Gather feedback on the concept
870
+ - Decide next steps
871
+ - Document learnings
872
+ confirmChecklist:
873
+ - Stakeholders have seen the prototype
874
+ - Original question is answered
875
+ - Next steps are decided
876
+ - Learnings are captured
877
+ deploy:
878
+ focus: |
879
+ Make prototype accessible to stakeholders for evaluation.
880
+ Prototypes may not need production deployment.
881
+ readChecklist:
882
+ - Deploy to accessible environment (staging or demo)
883
+ - Share access with stakeholders
884
+ - Gather hands-on feedback
885
+ - Decide on next phase (iterate, productionize, or abandon)
886
+ confirmChecklist:
887
+ - Prototype is accessible to stakeholders
888
+ - Feedback has been gathered
889
+ - Decision on next steps is made
890
+ - Learnings are documented
891
+ toolReferences:
892
+ - name: Supabase
893
+ url: https://supabase.com/docs
894
+ simpleIcon: supabase
895
+ description: Open source Firebase alternative with PostgreSQL
896
+ useWhen: Instant PostgreSQL database with auth for rapid prototypes
897
+ - name: Next.js
898
+ url: https://nextjs.org/docs
899
+ simpleIcon: nextdotjs
900
+ description: React framework for full-stack web applications
901
+ useWhen: Scaffolding a full-stack prototype with server-side rendering
902
+ - name: Nixpacks
903
+ url: https://nixpacks.com/docs
904
+ simpleIcon: nixos
905
+ description: Build tool that auto-detects and builds applications
906
+ useWhen: Deploying prototypes to containers without writing Dockerfiles
907
+ instructions: |
908
+ ## Step 1: Define What to Demonstrate
909
+
910
+ Before writing code, answer: What question does this prototype
911
+ answer? What's the minimum to demonstrate the concept? What can
912
+ be hardcoded or skipped? When will you stop?
913
+
914
+ ## Step 2: Start with Visible Output
915
+
916
+ Build the UI first—stakeholders need to see something.
917
+ Hardcode data initially so you have working output in minutes.
918
+
919
+ ## Step 3: Add Real Data When Needed
920
+
921
+ Only add a database when the UI needs real data. Use Supabase
922
+ Studio to create tables directly (skip migrations for prototypes).
923
+
924
+ ## Step 4: Document Shortcuts
925
+
926
+ Add a README section listing what was skipped and what's needed
927
+ to productionize. This prevents confusion later.
928
+ installScript: |
929
+ set -e
930
+ npx create-next-app@latest my-prototype --typescript
931
+ cd my-prototype
932
+ supabase init
933
+ supabase start
934
+ npm run dev
935
+ implementationReference: |
936
+ ## Start with Hardcoded UI
937
+
938
+ ```typescript
939
+ // app/page.tsx
940
+ export default function Home() {
941
+ const items = [
942
+ { id: 1, name: 'Demo Item 1' },
943
+ { id: 2, name: 'Demo Item 2' },
944
+ ]
945
+ return (
946
+ <main style={{ padding: '2rem' }}>
947
+ <h1>Prototype Demo</h1>
948
+ <ul>{items.map(item => <li key={item.id}>{item.name}</li>)}</ul>
949
+ </main>
950
+ )
951
+ }
952
+ ```
953
+
954
+ ## Replace with Real Data
955
+
956
+ ```typescript
957
+ import { supabase } from '@/lib/supabase'
958
+ const { data: items } = await supabase.from('items').select('*')
959
+ ```
960
+
961
+ ## Document Shortcuts
962
+
963
+ ```markdown
964
+ ## Prototype Limitations
965
+ This is a prototype for [purpose]. Not production-ready.
966
+
967
+ **Shortcuts taken:**
968
+ - No authentication
969
+ - Hardcoded configuration in code
970
+ - No error handling for edge cases
971
+
972
+ **To productionize:**
973
+ - Add authentication
974
+ - Move config to environment variables
975
+ - Add proper error handling
976
+ ```
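+
+ ## Share a Running Prototype
+
+ A short sketch of the deploy stage, assuming Nixpacks and a running
+ Docker-compatible runtime such as Colima (mirrors the full-stack
+ skill's container flow):
+
+ ```bash
+ nixpacks build . --name my-prototype
+ docker run --rm -p 3000:3000 --env-file .env.local my-prototype
+ curl http://localhost:3000
+ ```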
326
977
 
327
- ### Primary Languages
328
- - **JavaScript/TypeScript**: Frontend and Node.js backend
329
- - **Python**: Backend APIs and data processing
978
+ ## Acceptable vs Required
330
979
 
331
- ### Infrastructure
332
- - **Terraform**: Cloud infrastructure as code
333
- - **CloudFormation**: AWS-specific infrastructure
334
- - **Docker**: Containerization
980
+ | Acceptable to Skip | Still Required |
981
+ |-------------------|----------------|
982
+ | Authentication | Core functionality works |
983
+ | Error handling | Happy path is reliable |
984
+ | Migrations | It's clear this is a prototype |
985
+ | Tests | Limitations are documented |
335
986
 
336
- ## Layer Responsibilities
987
+ ## Common Pitfalls
337
988
 
338
- | Layer | Responsibilities |
339
- |-------|-----------------|
340
- | Frontend | UI/UX, client validation, API integration |
341
- | Backend | Business logic, auth, external services |
342
- | Database | Persistence, queries, migrations |
343
- | Infrastructure | Deployment, scaling, monitoring |
989
+ - **Over-engineering**: Adding features "while you're at it"
990
+ - **No stopping point**: Polishing what you might throw away
991
+ - **Unclear purpose**: Building without knowing what question to answer
992
+ - **Hidden shortcuts**: Not documenting what was skipped