@brainforge/core 3.1.3 → 3.1.5
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/dist/index.js +1567 -0
- package/package.json +1 -1
package/dist/index.js
CHANGED
@@ -3300,6 +3300,1573 @@ Systems fail. This post-mortem focuses on fixing systems, not assigning blame. T
     ],
     tags: ["incident", "postmortem", "sre", "general"],
     version: "1.0.0"
+  },
+  // ── Backend (continued) ───────────────────────────────────────────────────
+  {
+    id: "backend/graphql-api",
+    name: "GraphQL API",
+    category: "backend",
+    description: "Design a GraphQL schema with resolvers and authentication",
+    template: `Design a GraphQL API for {{project.name}} in {{project.language}}{{#if project.framework}} using {{project.framework}}{{/if}}.
+
+Domain: {{domain}}
+Types: {{types}}
+
+Produce:
+1. Schema definition (SDL) \u2014 type definitions, queries, mutations{{#if subscriptions}}, subscriptions{{/if}}
+2. Resolver implementations \u2014 with DataLoader batching for N+1 prevention
+3. Authentication: directive-based (@auth(role: "admin")) applied to protected fields/mutations
+4. Input validation: use input types, validate in resolvers, return UserError union
+5. Error handling: distinguish client errors (UserError) from server errors
+
+Pagination: use Relay-style cursor connections for all list fields.
+{{#if (eq project.type "student")}}- Add comments explaining resolver execution and the N+1 problem{{/if}}`,
+    contextSchema: [
+      { key: "domain", label: "Domain (what it manages)", type: "text", placeholder: "E-commerce: products, orders, customers" },
+      { key: "types", label: "Main types", type: "text", placeholder: "User, Product, Order, LineItem" },
+      { key: "subscriptions", label: "Include subscriptions?", type: "boolean", default: false }
+    ],
+    tags: ["graphql", "api", "backend"],
+    version: "1.0.0"
+  },
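The GraphQL template above calls for Relay-style cursor connections on list fields. A minimal sketch of opaque cursor encoding and connection building, assuming an already-sorted page of items (the helper names are illustrative, not part of this package):

```javascript
// Opaque cursors: base64-encode the sort key so clients can't depend on its format.
function encodeCursor(id) {
  return Buffer.from(`cursor:${id}`, "utf8").toString("base64");
}

function decodeCursor(cursor) {
  const decoded = Buffer.from(cursor, "base64").toString("utf8");
  if (!decoded.startsWith("cursor:")) throw new Error("Invalid cursor");
  return decoded.slice("cursor:".length);
}

// Build a Relay-style connection. Fetch first + 1 rows upstream so the
// extra row tells us whether a next page exists.
function toConnection(items, { first }) {
  const edges = items.slice(0, first).map((node) => ({
    node,
    cursor: encodeCursor(node.id),
  }));
  return {
    edges,
    pageInfo: {
      hasNextPage: items.length > first,
      endCursor: edges.length ? edges[edges.length - 1].cursor : null,
    },
  };
}
```

A resolver would decode the incoming `after` cursor to a sort key, query rows past it, then hand the page to `toConnection`.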
+  {
+    id: "backend/microservice-scaffold",
+    name: "Microservice Scaffold",
+    category: "backend",
+    description: "Scaffold a production-ready microservice with standard cross-cutting concerns",
+    template: `Scaffold a microservice named "{{serviceName}}" in {{project.language}}{{#if project.framework}} using {{project.framework}}{{/if}}.
+
+Responsibility: {{responsibility}}
+Communication: {{communication}}
+
+Include:
+1. Project structure: src/{routes,services,repositories,events,config,middleware}
+2. Configuration: env-var driven, validated on startup (fail fast if required vars missing)
+3. Health endpoints: GET /health (liveness) and GET /ready (readiness with dependency checks)
+4. Structured logging: JSON logs with correlation ID propagated from incoming headers
+5. Graceful shutdown: drain in-flight requests, close DB connections, deregister from service registry
+6. {{communication}} integration \u2014 consuming and publishing events/calls to other services
+7. Dockerfile with multi-stage build (build \u2192 runtime, non-root user, minimal image)
+
+Cross-cutting: request ID middleware, error handler, timeout middleware ({{timeout}} default).`,
+    contextSchema: [
+      { key: "serviceName", label: "Service name", type: "text", placeholder: "payment-service" },
+      { key: "responsibility", label: "What this service owns", type: "text", placeholder: "Payment processing, refunds, and transaction history" },
+      { key: "communication", label: "Communication style", type: "select", options: ["REST (sync)", "gRPC (sync)", "Message queue (async)", "Mixed"], default: "REST (sync)" },
+      { key: "timeout", label: "Default request timeout", type: "text", default: "30s" }
+    ],
+    tags: ["microservice", "architecture", "backend", "devops"],
+    version: "1.0.0"
+  },
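Point 2 of the scaffold template asks for env-var configuration that fails fast on startup. A minimal sketch of that pattern (function name and shape are illustrative, not from this package):

```javascript
// Fail fast: refuse to boot when any required env var is missing, and report
// all missing keys at once rather than one per restart.
function loadConfig(env, requiredKeys) {
  const missing = requiredKeys.filter((key) => !env[key]);
  if (missing.length > 0) {
    throw new Error(`Missing required config: ${missing.join(", ")}`);
  }
  return Object.fromEntries(requiredKeys.map((key) => [key, env[key]]));
}
```

At startup this would be called once with `process.env`; an exception here stops the process before it can accept traffic with broken config.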
+  {
+    id: "backend/webhook-handler",
+    name: "Webhook Handler",
+    category: "backend",
+    description: "Implement an incoming webhook endpoint with HMAC signature verification",
+    template: `Implement an incoming webhook handler for {{provider}} in {{project.language}}{{#if project.framework}} / {{project.framework}}{{/if}}.
+
+Events to handle: {{events}}
+
+Requirements:
+1. Signature verification \u2014 validate HMAC-SHA256 signature from {{signatureHeader}} header before any processing
+2. Idempotency \u2014 use the event ID from the payload; skip duplicate events (store processed IDs for {{idempotencyWindow}})
+3. Immediate 200 response \u2014 acknowledge within 5 seconds; do real work asynchronously via a job queue
+4. Retry safety \u2014 all handlers must be idempotent (safe to call multiple times with same payload)
+5. Event routing \u2014 dispatch to the correct handler based on event type field
+6. Dead letter handling \u2014 after 3 failed processing attempts, move to dead letter queue and alert
+
+Payload parsing: validate against the expected schema before processing; on schema mismatch, log the event for manual review and still return 200 to the provider (a 4xx would only trigger pointless retries).`,
+    contextSchema: [
+      { key: "provider", label: "Webhook provider", type: "text", placeholder: "Stripe, GitHub, Twilio" },
+      { key: "events", label: "Events to handle", type: "text", placeholder: "payment.succeeded, payment.failed, subscription.updated" },
+      { key: "signatureHeader", label: "Signature header name", type: "text", placeholder: "Stripe-Signature", default: "X-Hub-Signature-256" },
+      { key: "idempotencyWindow", label: "Idempotency window", type: "text", default: "24 hours" }
+    ],
+    tags: ["webhook", "integration", "security", "backend"],
+    version: "1.0.0"
+  },
+  {
+    id: "backend/background-job",
+    name: "Background Job Worker",
+    category: "backend",
+    description: "Implement a background job queue with retry, concurrency, and monitoring",
+    template: `Implement a background job system for {{project.name}} in {{project.language}}{{#if project.framework}} / {{project.framework}}{{/if}}.
+
+Queue backend: {{queueBackend}}
+Jobs to implement: {{jobs}}
+Concurrency: {{concurrency}} workers
+
+For each job:
+1. Job definition: typed payload interface, validate on enqueue
+2. Handler: process logic, idempotent, return success or throw to trigger retry
+3. Retry policy: exponential backoff (1s \u2192 5s \u2192 30s \u2192 2m \u2192 10m), max {{maxRetries}} attempts
+4. Dead letter queue: after max retries, move to DLQ with full context for manual inspection
+5. Progress tracking: update job status (pending/running/done/failed) in the queue store
+
+Worker process:
+- Graceful shutdown: finish current job, stop polling, exit
+- Concurrency: process up to {{concurrency}} jobs in parallel
+- Observability: log job start/end with duration and job ID
+
+Also provide: an enqueue helper, a job status query endpoint, and a DLQ viewer.`,
+    contextSchema: [
+      { key: "queueBackend", label: "Queue backend", type: "select", options: ["BullMQ (Redis)", "pg-boss (PostgreSQL)", "RabbitMQ", "AWS SQS"], default: "BullMQ (Redis)" },
+      { key: "jobs", label: "Jobs to implement", type: "text", placeholder: "send-email, resize-image, generate-report, sync-crm" },
+      { key: "concurrency", label: "Worker concurrency", type: "text", default: "5" },
+      { key: "maxRetries", label: "Max retry attempts", type: "text", default: "5" }
+    ],
+    tags: ["queue", "jobs", "async", "backend"],
+    version: "1.0.0"
+  },
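The retry policy in the job template (1s → 5s → 30s → 2m → 10m, capped attempts) can be sketched as a schedule lookup with jitter; the 0–10% jitter factor is an assumption of this sketch, added so many failed jobs don't retry in lockstep:

```javascript
// Delay schedule from the template, in milliseconds.
const RETRY_SCHEDULE_MS = [1_000, 5_000, 30_000, 120_000, 600_000];

// attempt is 1-based: after attempt 1 fails, wait RETRY_SCHEDULE_MS[0].
// `random` is injectable so tests can pin the jitter.
function retryDelayMs(attempt, random = Math.random) {
  const base = RETRY_SCHEDULE_MS[Math.min(attempt - 1, RETRY_SCHEDULE_MS.length - 1)];
  return Math.round(base * (1 + 0.1 * random()));
}

// After maxRetries attempts the job goes to the DLQ instead of retrying.
function shouldRetry(attempt, maxRetries) {
  return attempt < maxRetries;
}
```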
+  {
+    id: "backend/health-check",
+    name: "Health Check Endpoints",
+    category: "backend",
+    description: "Implement liveness and readiness health check endpoints",
+    template: `Implement health check endpoints for {{project.name}} in {{project.language}}{{#if project.framework}} / {{project.framework}}{{/if}}.
+
+Dependencies to check: {{dependencies}}
+
+Endpoints:
+1. GET /health \u2014 liveness probe (is the process alive?)
+- Always returns 200 if the process is running
+- Response: { status: "ok", uptime: number, version: string }
+
+2. GET /ready \u2014 readiness probe (is the service ready to accept traffic?)
+- Checks each dependency: {{dependencies}}
+- Returns 200 if ALL checks pass, 503 if any fail
+- Response: { status: "ready"|"degraded", checks: { [dep]: { status, latencyMs } } }
+
+3. GET /metrics \u2014 Prometheus-compatible metrics (optional)
+- Expose: request count, error rate, response time histogram, queue depth
+
+Design rules:
+- Health checks must complete within 1 second
+- DB check: run a cheap query (SELECT 1), not a full table scan
+- Cache check: SET then GET a test key
+- Never expose sensitive config or internal error details in responses`,
+    contextSchema: [
+      { key: "dependencies", label: "Dependencies to check", type: "text", placeholder: "PostgreSQL database, Redis cache, external payment API" }
+    ],
+    tags: ["health", "monitoring", "kubernetes", "backend"],
+    version: "1.0.0"
+  },
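The /ready contract described above (200 only when every dependency check passes, 503 otherwise) reduces to a small aggregation step; a sketch, assuming each check has already produced `{ status, latencyMs }`:

```javascript
// Fold per-dependency results into the /ready response shape from the template.
// checks: { [dep]: { status: "ok" | "fail", latencyMs: number } }
function readiness(checks) {
  const allOk = Object.values(checks).every((c) => c.status === "ok");
  return {
    httpStatus: allOk ? 200 : 503,
    body: { status: allOk ? "ready" : "degraded", checks },
  };
}
```

The route handler would run the dependency probes (each under the 1-second budget), then serialize `body` with the computed `httpStatus`.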
+  // ── Frontend (continued) ──────────────────────────────────────────────────
+  {
+    id: "frontend/dark-mode",
+    name: "Dark Mode Toggle",
+    category: "frontend",
+    description: "Implement dark mode with system preference detection and persistence",
+    template: `Implement dark mode for {{project.name}} in {{project.language}}{{#if project.framework}} using {{project.framework}}{{/if}}.
+
+Styling system: {{stylingSystem}}
+
+Requirements:
+1. System preference detection: respect prefers-color-scheme on first visit
+2. User override: toggle button that overrides system preference
+3. Persistence: store user's choice in localStorage (key: "color-scheme")
+4. No flash: inject theme class on <html> before React hydrates (add a small blocking script)
+5. Smooth transition: CSS transition on background-color and color (200ms ease)
+
+Implementation:
+- useColorScheme() hook \u2014 returns { scheme, toggle, setScheme }
+- ThemeProvider component \u2014 reads from localStorage, applies class to document root
+- Toggle button component with sun/moon icon and accessible aria-label
+- CSS custom properties for all theme tokens (--bg-primary, --text-primary, etc.)
+
+{{#if (eq stylingSystem "Tailwind")}}- Use Tailwind's \`dark:\` variant with class strategy (darkMode: 'class' in config){{/if}}`,
+    contextSchema: [
+      { key: "stylingSystem", label: "Styling system", type: "select", options: ["Tailwind", "CSS Modules", "Styled Components", "Plain CSS"], default: "Tailwind" }
+    ],
+    tags: ["dark-mode", "theme", "accessibility", "frontend"],
+    version: "1.0.0"
+  },
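The precedence the dark-mode template describes (stored user override wins, otherwise the system preference) is the core of the blocking script and the hook; a framework-free sketch with hypothetical helper names:

```javascript
// Effective scheme: a stored "color-scheme" value overrides the system
// preference; anything else (null, garbage) falls back to the system.
function resolveScheme(stored, systemPrefersDark) {
  if (stored === "dark" || stored === "light") return stored;
  return systemPrefersDark ? "dark" : "light";
}

// Toggle flips whatever is currently effective.
function nextScheme(current) {
  return current === "dark" ? "light" : "dark";
}
```

In the blocking script, `stored` comes from `localStorage.getItem("color-scheme")` and `systemPrefersDark` from `matchMedia("(prefers-color-scheme: dark)").matches`, and the result is applied as a class on `<html>` before hydration.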
+  {
+    id: "frontend/search-ui",
+    name: "Search UI",
+    category: "frontend",
+    description: "Build a search input with debouncing, loading states, and result display",
+    template: `Build a search UI component for {{project.name}} in {{project.language}}{{#if project.framework}} using {{project.framework}}{{/if}}.
+
+What is being searched: {{searchTarget}}
+Search mode: {{searchMode}}
+
+Component features:
+1. Input with debounce ({{debounceMs}}ms) \u2014 don't fire on every keystroke
+2. Loading indicator while fetching results
+3. Result list: display {{resultShape}} with keyboard navigation (\u2191\u2193 arrows, Enter to select, Escape to close)
+4. Empty state: "No results for '{{query}}'" with helpful suggestion
+5. Error state: network failure message with retry button
+6. Highlight matched terms in results using mark tags
+7. Accessible: role="combobox", aria-expanded, aria-autocomplete, aria-activedescendant
+
+{{#if (eq searchMode "client-side")}}- Load all data once, filter in-memory using fuzzy matching{{/if}}
+{{#if (eq searchMode "server-side")}}- Call GET /search?q= on each debounced input change; cancel previous request (AbortController){{/if}}
+
+Include useSearch() hook that encapsulates state, debounce, and fetch logic.`,
+    contextSchema: [
+      { key: "searchTarget", label: "What to search", type: "text", placeholder: "Products, users, or documentation articles" },
+      { key: "searchMode", label: "Search mode", type: "select", options: ["server-side", "client-side"], default: "server-side" },
+      { key: "debounceMs", label: "Debounce delay (ms)", type: "text", default: "300" },
+      { key: "resultShape", label: "Result display shape", type: "text", placeholder: "title + description + category badge" }
+    ],
+    tags: ["search", "debounce", "ux", "frontend"],
+    version: "1.0.0"
+  },
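Feature 6 above (highlight matched terms with mark tags) has a subtle ordering requirement: the source text must be HTML-escaped before the match is wrapped, or a result containing markup becomes an injection vector. A sketch:

```javascript
function escapeHtml(s) {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}

// Escape the text first, then wrap case-insensitive matches of the
// (regex-escaped) query in <mark> tags.
function highlight(text, query) {
  const safe = escapeHtml(text);
  if (!query) return safe;
  const pattern = escapeHtml(query).replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  return safe.replace(new RegExp(pattern, "gi"), (m) => `<mark>${m}</mark>`);
}
```

The query is escaped the same way as the text so a query like `<` still matches its escaped form in the result string.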
+  {
+    id: "frontend/modal-dialog",
+    name: "Modal / Dialog",
+    category: "frontend",
+    description: "Build an accessible modal dialog with focus trap and keyboard support",
+    template: `Build a modal/dialog component for {{project.name}} in {{project.language}}{{#if project.framework}} using {{project.framework}}{{/if}}.
+
+Modal type: {{modalType}}
+
+Accessibility requirements (WAI-ARIA Dialog pattern):
+1. role="dialog" with aria-modal="true" and aria-labelledby pointing to title
+2. Focus trap: Tab cycles only within modal; Shift+Tab in reverse
+3. Initial focus: first focusable element or close button on open
+4. Escape key closes; clicking overlay closes (configurable)
+5. Return focus to trigger element on close
+6. Prevent body scroll when open (add overflow: hidden to body)
+
+Component API:
+- <Modal open={bool} onClose={() => void} title="...">children</Modal>
+- useModal() hook \u2014 returns { open, close, isOpen } for programmatic control
+- Render in a Portal (document.body) to avoid z-index issues
+
+Animation: fade in (150ms) + scale up from 95% on open; reverse on close.
+
+Variants: {{modalType}} \u2014 provide appropriate default size and layout.`,
+    contextSchema: [
+      { key: "modalType", label: "Modal type", type: "select", options: ["confirmation dialog", "form dialog", "image/media viewer", "full-screen overlay"], default: "confirmation dialog" }
+    ],
+    tags: ["modal", "dialog", "accessibility", "frontend"],
+    version: "1.0.0"
+  },
+  {
+    id: "frontend/dashboard-widget",
+    name: "Dashboard Widget",
+    category: "frontend",
+    description: "Build a reusable dashboard widget with loading, error, and empty states",
+    template: `Build a reusable dashboard widget for {{project.name}} in {{project.language}}{{#if project.framework}} using {{project.framework}}{{/if}}.
+
+Widget purpose: {{purpose}}
+Data source: {{dataSource}}
+
+Widget structure:
+1. Widget shell: title, subtitle, optional action button (top-right), border/shadow
+2. Loading state: skeleton loader matching the content shape (not a spinner)
+3. Error state: friendly message + retry button; log error details to console only
+4. Empty state: illustration + clear call-to-action
+5. Content area: {{purpose}}
+
+Data fetching:
+- useWidget(endpoint) hook \u2014 handles fetch, polling (every {{pollInterval}}), and error state
+- SWR or React Query pattern: stale-while-revalidate for instant UI on revisit
+- Optimistic updates for any write actions within the widget
+
+Responsiveness: widget collapses gracefully on small screens; configurable width (1/3, 1/2, full).`,
+    contextSchema: [
+      { key: "purpose", label: "What the widget shows", type: "text", placeholder: "Revenue chart, recent orders table, user activity feed" },
+      { key: "dataSource", label: "Data endpoint", type: "text", placeholder: "GET /api/stats/revenue" },
+      { key: "pollInterval", label: "Auto-refresh interval", type: "text", default: "60 seconds" }
+    ],
+    tags: ["dashboard", "widget", "data-fetching", "frontend"],
+    version: "1.0.0"
+  },
+  {
+    id: "frontend/notifications",
+    name: "Notifications Center",
+    category: "frontend",
+    description: "Build an in-app notification system with a bell icon and notification list",
+    template: `Build a notifications center for {{project.name}} in {{project.language}}{{#if project.framework}} using {{project.framework}}{{/if}}.
+
+Notification types: {{types}}
+Delivery: {{delivery}}
+
+Components:
+1. NotificationBell \u2014 icon with unread count badge; count caps at 99+
+2. NotificationDropdown \u2014 opens below bell, lists recent {{maxVisible}} notifications
+3. NotificationItem \u2014 icon/avatar, title, body (truncated at 2 lines), relative timestamp ("2 min ago"), mark-read on click
+
+State management (useNotifications hook):
+- Fetch initial list from {{delivery}} on mount
+- Mark as read (single or all)
+- Unread count derived from list state
+- Optimistic mark-read (update UI immediately, revert on API error)
+
+{{#if (eq delivery "WebSocket")}}- Subscribe to WebSocket channel; prepend new notifications as they arrive; cap list at 50{{/if}}
+{{#if (eq delivery "polling")}}- Poll GET /notifications?unread=true every 30 seconds{{/if}}
+
+Accessibility: role="region" aria-label="Notifications", focus management in dropdown.`,
+    contextSchema: [
+      { key: "types", label: "Notification types", type: "text", placeholder: "system alert, user mention, task assigned, payment received" },
+      { key: "delivery", label: "Delivery mechanism", type: "select", options: ["WebSocket", "polling", "Server-Sent Events"], default: "WebSocket" },
+      { key: "maxVisible", label: "Max visible in dropdown", type: "text", default: "10" }
+    ],
+    tags: ["notifications", "real-time", "ux", "frontend"],
+    version: "1.0.0"
+  },
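Two small display rules from the notifications template, the 99+ badge cap and the relative timestamp, are easy to get off by one; a sketch (breakpoints for the timestamp wording are this sketch's assumption, not the package's):

```javascript
// Unread badge: exact count up to 99, then "99+".
function badgeText(unread) {
  return unread > 99 ? "99+" : String(unread);
}

// Relative timestamp for NotificationItem ("2 min ago" style).
function relativeTime(thenMs, nowMs) {
  const s = Math.max(0, Math.floor((nowMs - thenMs) / 1000));
  if (s < 60) return "just now";
  if (s < 3600) return `${Math.floor(s / 60)} min ago`;
  if (s < 86400) return `${Math.floor(s / 3600)} h ago`;
  return `${Math.floor(s / 86400)} d ago`;
}
```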
+  // ── Database (continued) ──────────────────────────────────────────────────
+  {
+    id: "database/transaction",
+    name: "Database Transaction",
+    category: "database",
+    description: "Implement a multi-step database transaction with rollback and savepoints",
+    template: `Implement a database transaction for {{project.name}} in {{project.language}}.
+
+Operation: {{operation}}
+Steps: {{steps}}
+Database: {{database}}
+
+Design:
+1. Wrap all steps in a single transaction \u2014 all succeed or all roll back
+2. Use savepoints for partial rollback within the transaction if {{useSavepoints}}
+3. Isolation level: {{isolationLevel}} \u2014 explain why this level is appropriate
+4. Optimistic locking: add version column to detect concurrent updates; throw ConflictError on mismatch
+5. Retry logic: on serialization failure (deadlock/conflict), retry up to 3 times with jitter
+
+Transaction helper:
+\`\`\`
+withTransaction(async (tx) => {
+  // all DB calls use tx, not the global pool
+  const result = await step1(tx);
+  await step2(tx, result);
+  return finalResult;
+});
+\`\`\`
+
+Provide the transaction wrapper, each step as a separate function, and a test that verifies rollback behavior.`,
+    contextSchema: [
+      { key: "operation", label: "What this transaction does", type: "text", placeholder: "Transfer funds between two accounts" },
+      { key: "steps", label: "Steps in the transaction", type: "text", placeholder: "1. Debit source, 2. Credit destination, 3. Insert audit log" },
+      { key: "database", label: "Database", type: "select", options: ["PostgreSQL", "MySQL", "SQLite", "MongoDB"], default: "PostgreSQL" },
+      { key: "isolationLevel", label: "Isolation level", type: "select", options: ["READ COMMITTED", "REPEATABLE READ", "SERIALIZABLE"], default: "READ COMMITTED" },
+      { key: "useSavepoints", label: "Use savepoints?", type: "boolean", default: false }
+    ],
+    tags: ["transaction", "database", "consistency", "backend"],
+    version: "1.0.0"
+  },
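Point 4 of the transaction template (optimistic locking via a version column) can be illustrated in miniature, independent of any driver; this sketch models the version check the UPDATE would perform, with hypothetical names:

```javascript
class ConflictError extends Error {}

// Apply an update only if the stored version still matches what the caller
// read; a concurrent writer will have bumped it, which surfaces as a conflict.
function applyVersionedUpdate(row, expectedVersion, changes) {
  if (row.version !== expectedVersion) {
    throw new ConflictError(`version ${row.version} != expected ${expectedVersion}`);
  }
  return { ...row, ...changes, version: row.version + 1 };
}
```

In SQL terms this is `UPDATE ... SET version = version + 1 WHERE id = $1 AND version = $2`, with a zero-row result treated as the conflict.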
+  {
+    id: "database/full-text-search",
+    name: "Full-Text Search",
+    category: "database",
+    description: "Implement full-text search with ranking and filtering",
+    template: `Implement full-text search for {{entity}} in {{project.language}}.
+
+Fields to search: {{fields}}
+Database / search engine: {{engine}}
+Filters: {{filters}}
+
+Implementation:
+1. Index setup: create a full-text index on {{fields}}
+{{#if (eq engine "PostgreSQL")}}- Use tsvector columns + GIN index; trgm extension for fuzzy matching{{/if}}
+{{#if (eq engine "Elasticsearch")}}- Define mapping with text fields, analyzers, and custom tokenizers{{/if}}
+{{#if (eq engine "Meilisearch")}}- Configure searchable attributes, ranking rules, and stop words{{/if}}
+
+2. Search function:
+- Accept: query string, filters ({{filters}}), pagination (page/limit), sort
+- Return: { hits: Entity[], total: number, took: number }
+- Highlight matched terms in results
+
+3. Ranking: boost results where query matches title > body > tags
+4. Stemming/synonyms: handle plural forms and common synonyms
+5. Typo tolerance: fuzzy match for queries > 4 characters
+
+Provide index migration/setup script and a search service class.`,
+    contextSchema: [
+      { key: "entity", label: "Entity to search", type: "text", placeholder: "Articles, products, or users" },
+      { key: "fields", label: "Searchable fields", type: "text", placeholder: "title, description, tags, author" },
+      { key: "engine", label: "Search engine", type: "select", options: ["PostgreSQL", "Elasticsearch", "Meilisearch", "Typesense"], default: "PostgreSQL" },
+      { key: "filters", label: "Filterable attributes", type: "text", placeholder: "category, status, dateRange, authorId" }
+    ],
+    tags: ["search", "database", "full-text", "backend"],
+    version: "1.0.0"
+  },
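Point 3 above (boost title over body over tags) is a field-weighting scheme; in PostgreSQL it would map to `setweight` labels A/B/C on the tsvector. A toy in-memory version, with weights that are this sketch's assumption:

```javascript
// Illustrative field boosts: a title hit outranks a body hit outranks a tag hit.
const FIELD_BOOST = { title: 3, body: 2, tags: 1 };

// Sum boosts for every field whose text contains the query (case-insensitive).
function score(doc, query) {
  const q = query.toLowerCase();
  let total = 0;
  for (const [field, boost] of Object.entries(FIELD_BOOST)) {
    const value = String(doc[field] ?? "").toLowerCase();
    if (value.includes(q)) total += boost;
  }
  return total;
}
```

Sorting hits by `score` descending yields the title-first ordering the template asks for; a real engine would add term frequency and normalization on top.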
+  {
+    id: "database/indexing-strategy",
+    name: "Index Strategy",
+    category: "database",
+    description: "Design a database indexing strategy for a table with query pattern analysis",
+    template: `Design an indexing strategy for the {{tableName}} table in {{project.language}}.
+
+Columns: {{columns}}
+Common queries: {{queries}}
+Database: {{database}}
+
+Analyze each query pattern and recommend indexes:
+
+1. For each query, identify:
+- Which columns appear in WHERE, ORDER BY, JOIN ON, and GROUP BY
+- Selectivity of each filter column (high = good index candidate)
+- Read vs write ratio (more writes \u2192 fewer indexes)
+
+2. Index recommendations:
+- Single-column indexes for high-selectivity equality filters
+- Composite indexes: column order = equality first, then range, then ORDER BY
+- Covering indexes for read-heavy queries (include all SELECT columns)
+- Partial indexes for filtered subsets (WHERE deleted_at IS NULL)
+
+3. Anti-patterns to avoid:
+- Duplicate indexes
+- Indexes on low-cardinality columns (boolean, status with 3 values)
+- Over-indexing write-heavy tables
+
+Provide CREATE INDEX statements with comments explaining each decision, and EXPLAIN ANALYZE examples.`,
+    contextSchema: [
+      { key: "tableName", label: "Table name", type: "text", placeholder: "orders" },
+      { key: "columns", label: "Table columns", type: "text", placeholder: "id, user_id, status, created_at, total, deleted_at" },
+      { key: "queries", label: "Common query patterns", type: "text", placeholder: "Get orders by user (sorted by date), filter by status, sum totals by month" },
+      { key: "database", label: "Database", type: "select", options: ["PostgreSQL", "MySQL", "SQLite"], default: "PostgreSQL" }
+    ],
+    tags: ["indexing", "database", "performance", "sql"],
+    version: "1.0.0"
+  },
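The composite-index rule above (equality columns first, then range, then ORDER BY) is mechanical enough to sketch as a helper; this is a heuristic illustration, not part of the package:

```javascript
// Order composite index columns by the template's rule: equality filters,
// then range filters, then ORDER BY columns, deduplicated in that priority.
function compositeIndexColumns({ equality = [], range = [], orderBy = [] }) {
  const seen = new Set();
  const out = [];
  for (const col of [...equality, ...range, ...orderBy]) {
    if (!seen.has(col)) {
      seen.add(col);
      out.push(col);
    }
  }
  return out;
}
```

For "orders by user, newest first" (`WHERE user_id = $1 ORDER BY created_at DESC`) this yields `(user_id, created_at)`, the shape the query planner can walk without a sort.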
+  {
+    id: "database/backup-restore",
+    name: "Backup & Restore",
+    category: "database",
+    description: "Design and implement a database backup and restore procedure",
+    template: `Design a backup and restore procedure for {{project.name}}.
+
+Database: {{database}}
+Environment: {{environment}}
+Recovery objectives: RPO = {{rpo}}, RTO = {{rto}}
+
+Backup strategy:
+1. Full backup: schedule and retention (how many to keep)
+2. Incremental/WAL archiving: continuous capture of changes between full backups
+3. Storage: {{storage}} \u2014 encrypted at rest, versioned, cross-region copy
+4. Verification: after each backup, run a restore test to a throwaway instance and verify row counts
+
+Restore procedure (step-by-step runbook):
+1. Stop application traffic (maintenance mode)
+2. Identify target backup (point-in-time or specific snapshot)
+3. Restore to staging first \u2014 verify data integrity
+4. Promote to production
+5. Run smoke tests before re-enabling traffic
+
+Automation:
+- Backup script with notifications on failure (email/Slack)
+- Restore script with dry-run mode
+- Scheduled job (cron or cloud scheduler)
+- Alert: backup not received in > 25 hours
+
+RPO of {{rpo}} means: maximum {{rpo}} of data loss is acceptable.
+RTO of {{rto}} means: service must be restored within {{rto}}.`,
+    contextSchema: [
+      { key: "database", label: "Database", type: "select", options: ["PostgreSQL", "MySQL", "MongoDB", "SQLite"], default: "PostgreSQL" },
+      { key: "environment", label: "Environment", type: "select", options: ["AWS RDS", "GCP Cloud SQL", "self-hosted VPS", "Supabase"], default: "AWS RDS" },
+      { key: "rpo", label: "Recovery Point Objective", type: "text", default: "1 hour" },
+      { key: "rto", label: "Recovery Time Objective", type: "text", default: "4 hours" },
+      { key: "storage", label: "Backup storage", type: "select", options: ["AWS S3", "GCS", "Azure Blob", "local disk"], default: "AWS S3" }
+    ],
+    tags: ["backup", "disaster-recovery", "database", "devops"],
+    version: "1.0.0"
+  },
|
|
3768
|
+
  // ── Testing (continued) ───────────────────────────────────────────────────
  {
    id: "testing/snapshot-test",
    name: "Snapshot Testing",
    category: "testing",
    description: "Set up snapshot testing for UI components or API responses",
    template: `Set up snapshot testing for {{project.name}} in {{project.language}}{{#if project.framework}} using {{project.framework}}{{/if}}.

Snapshot target: {{target}}
Framework: {{testFramework}}

Strategy:
1. What to snapshot: stable, deterministic output \u2014 component render trees, API response shapes, serialized objects
2. What NOT to snapshot: timestamps, UUIDs, random values \u2014 use matchers (expect.any(String)) for these
3. Snapshot size: keep snapshots small and focused; large snapshots become meaningless noise

For UI components ({{target}}):
- Render with representative props (default, loading, error, empty states)
- Use inline snapshots for small outputs (< 20 lines)
- Use external .snap files for larger renders

For API responses:
- Snapshot the shape, not the values \u2014 use asymmetric matchers
- Test: response has correct keys and value types

Maintenance rules:
- Update snapshots intentionally: \`--updateSnapshot\` flag with a review of the diff
- Delete obsolete snapshots immediately
- CI fails on unintentional snapshot changes`,
    contextSchema: [
      { key: "target", label: "What to snapshot", type: "select", options: ["React/Vue components", "API response shapes", "serialized objects", "CLI output"], default: "React/Vue components" },
      { key: "testFramework", label: "Test framework", type: "select", options: ["Jest", "Vitest", "pytest", "Go testing"], default: "Vitest" }
    ],
    tags: ["snapshot", "testing", "regression"],
    version: "1.0.0"
  },
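The "snapshot the shape, not the values" rule above can be sketched without any test framework. This is a minimal illustration, not framework API: `scrubVolatile` and the field names are hypothetical, standing in for whatever volatile fields a real response carries.

```javascript
// Minimal sketch: normalize volatile fields before snapshotting an API response.
// scrubVolatile and the field names ("id", "createdAt") are illustrative only.
function scrubVolatile(obj) {
  const out = {};
  for (const [key, value] of Object.entries(obj)) {
    if (key === "id" || key === "createdAt") {
      // Replace unstable values with their type name so the snapshot stays deterministic.
      out[key] = `<${typeof value}>`;
    } else {
      out[key] = value;
    }
  }
  return out;
}

const response = { id: "b7f9c2", createdAt: "2024-05-01T12:00:00Z", status: "active" };
console.log(JSON.stringify(scrubVolatile(response)));
// {"id":"<string>","createdAt":"<string>","status":"active"}
```

The scrubbed object serializes identically across runs, so the stored snapshot only changes when the stable fields do.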
  {
    id: "testing/performance-test",
    name: "Performance Test Plan",
    category: "testing",
    description: "Design a performance and load testing strategy with target thresholds",
    template: `Design a performance test plan for {{project.name}}.

Target endpoint/flow: {{target}}
Expected load: {{expectedLoad}}
Tool: {{tool}}

Test types:
1. **Baseline test** \u2014 single user, measure raw response time and resource usage
2. **Load test** \u2014 ramp to {{expectedLoad}} over 5 minutes, hold for 10 minutes, ramp down
3. **Stress test** \u2014 push to 2x {{expectedLoad}} to find the breaking point
4. **Soak test** \u2014 run at 70% capacity for 1 hour to detect memory leaks

Pass/fail thresholds:
- p50 response time \u2264 {{p50Target}}
- p95 response time \u2264 {{p95Target}}
- p99 response time \u2264 {{p99Target}}
- Error rate < 0.1%
- CPU usage < 80% at target load
- Memory growth < 5% per hour (soak test)

Monitoring during test:
- Application metrics: request rate, error rate, response time histogram
- Infrastructure: CPU, memory, disk I/O, network throughput
- Database: query time, connection pool usage, slow query log

Provide a k6/Locust/JMeter script and a results analysis template.`,
    contextSchema: [
      { key: "target", label: "What to test", type: "text", placeholder: "POST /api/checkout flow" },
      { key: "expectedLoad", label: "Expected peak load", type: "text", placeholder: "500 concurrent users" },
      { key: "tool", label: "Load testing tool", type: "select", options: ["k6", "Locust", "Artillery", "JMeter", "Apache Bench"], default: "k6" },
      { key: "p50Target", label: "p50 target", type: "text", default: "200ms" },
      { key: "p95Target", label: "p95 target", type: "text", default: "500ms" },
      { key: "p99Target", label: "p99 target", type: "text", default: "1s" }
    ],
    tags: ["performance", "load-testing", "testing", "sre"],
    version: "1.0.0"
  },
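The p50/p95/p99 thresholds in this template rest on simple percentile math. A minimal sketch using the nearest-rank method (one common convention; load-testing tools may interpolate differently):

```javascript
// Nearest-rank percentile over a list of latency samples, in milliseconds.
// Not tied to any particular load-testing tool's implementation.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank: ceil(p/100 * N), converted to a 0-based array index.
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

const latenciesMs = [120, 95, 210, 180, 640, 150, 130, 170, 110, 990];
console.log(percentile(latenciesMs, 50)); // 150
console.log(percentile(latenciesMs, 95)); // 990
```

Note how a single outlier (990 ms) dominates the tail percentiles while leaving p50 untouched — which is exactly why the template gates on p95/p99 rather than the median alone.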
  {
    id: "testing/contract-test",
    name: "Contract Test",
    category: "testing",
    description: "Write consumer-driven contract tests between two services",
    template: `Write consumer-driven contract tests between {{consumer}} and {{provider}} in {{project.language}}.

Integration point: {{integrationPoint}}
Tool: {{tool}}

Consumer side ({{consumer}}):
1. Define the contract: what the consumer expects from the provider (request + response shape)
2. Write consumer test: mock the provider using the contract, verify consumer handles responses correctly
3. Publish contract to a pact broker or shared file

Provider side ({{provider}}):
1. Load the consumer contract
2. Run provider verification: replay each consumer interaction against the real provider
3. Verify provider meets every expectation in the contract
4. Publish verification result to the pact broker

Contract includes:
- Request: method, path, headers, body schema
- Response: status code, headers, body schema (use type matchers, not exact values)

CI integration:
- Consumer: generate and publish pact on PR
- Provider: verify pacts on PR (can-i-deploy check before merge)
- Break the build if a provider change violates a consumer contract`,
    contextSchema: [
      { key: "consumer", label: "Consumer service", type: "text", placeholder: "frontend-app or order-service" },
      { key: "provider", label: "Provider service", type: "text", placeholder: "user-api or payment-service" },
      { key: "integrationPoint", label: "What they integrate on", type: "text", placeholder: "GET /users/:id and POST /payments" },
      { key: "tool", label: "Contract testing tool", type: "select", options: ["Pact", "Spring Cloud Contract", "Dredd", "manual JSON schema"], default: "Pact" }
    ],
    tags: ["contract-testing", "microservices", "testing", "integration"],
    version: "1.0.0"
  },
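The "manual JSON schema" option in the tool list can be sketched in a few lines: the consumer states the response shape it relies on as a field-to-type map, and both sides assert against the same map. The contract fields and mock response here are hypothetical, not from any real service.

```javascript
// A hand-rolled contract: the consumer declares which fields it reads and
// their expected JS types. Type matchers, not exact values, per the template.
const userContract = { id: "number", email: "string", isActive: "boolean" };

function matchesContract(body, contract) {
  return Object.entries(contract).every(
    ([field, type]) => typeof body[field] === type
  );
}

// Consumer side: the mocked provider response must itself satisfy the contract,
// otherwise the consumer test is validating against a shape the provider never sends.
const mockResponse = { id: 42, email: "a@example.com", isActive: true };
console.log(matchesContract(mockResponse, userContract)); // true
```

Provider-side verification is the same check run against a real response body; tools like Pact add interaction replay and broker bookkeeping on top of this core idea.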
  {
    id: "testing/mutation-test",
    name: "Mutation Testing",
    category: "testing",
    description: "Set up mutation testing to measure the true quality of your test suite",
    template: `Set up mutation testing for {{project.name}} in {{project.language}}.

Target: {{target}}
Tool: {{tool}}

What mutation testing reveals:
- Tests that pass even when code is broken (weak assertions)
- Lines with no meaningful coverage (tests that don't assert)
- Logic faults that your tests can't detect

Setup:
1. Install and configure {{tool}} for the target source files
2. Run baseline: all tests must pass before mutating
3. Set a minimum mutation score threshold: {{threshold}}% \u2014 fail CI if score drops below

Interpreting results:
- **Killed mutant**: your tests detected the code change \u2192 good
- **Survived mutant**: the code changed but all tests still passed \u2192 weak test
- **Timeout mutant**: mutant caused infinite loop \u2192 mark as killed

For each survived mutant, write a new assertion that kills it. Focus on:
- Boundary conditions (off-by-one errors)
- Boolean operators (AND vs OR, negation)
- Arithmetic operators (+ vs -, * vs /)
- Return value mutations (returning null, empty string, 0)

Exclude from mutation: generated code, type definitions, constants, logging.`,
    contextSchema: [
      { key: "target", label: "Source to mutate", type: "text", placeholder: "src/services/ and src/utils/" },
      { key: "tool", label: "Mutation testing tool", type: "select", options: ["Stryker (JS/TS)", "mutmut (Python)", "pitest (Java)", "go-mutesting (Go)"], default: "Stryker (JS/TS)" },
      { key: "threshold", label: "Minimum mutation score (%)", type: "text", default: "80" }
    ],
    tags: ["mutation-testing", "test-quality", "testing"],
    version: "1.0.0"
  },
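The survived-mutant case above can be shown in miniature. `isAdult` is a made-up example function; the mutant is the classic boundary mutation (`>=` flipped to `>`) that tools like Stryker generate:

```javascript
// A boundary mutant in miniature: the mutant flips >= to >.
const isAdult = (age) => age >= 18;       // original
const isAdultMutant = (age) => age > 18;  // what the mutation tool would produce

// Weak assertion: checks a value far from the boundary, so it passes
// against BOTH versions — the mutant survives.
if (isAdult(30) !== isAdultMutant(30)) throw new Error("unreachable: both return true");

// Killing assertion: probes the boundary itself, where the two versions disagree.
console.log(isAdult(18), isAdultMutant(18)); // true false
```

A test asserting `isAdult(18) === true` kills this mutant; a suite that only ever tests age 30 would report 100% line coverage and still miss the bug.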
  // ── DevOps (continued) ────────────────────────────────────────────────────
  {
    id: "devops/kubernetes",
    name: "Kubernetes Manifest",
    category: "devops",
    description: "Generate production-ready Kubernetes deployment manifests",
    template: `Generate Kubernetes manifests for {{project.name}}.

Service type: {{serviceType}}
Replicas: {{replicas}}
Resources: CPU {{cpuRequest}}/{{cpuLimit}}, Memory {{memRequest}}/{{memLimit}}

Manifests to produce:
1. **Namespace** \u2014 isolate the application
2. **Deployment** \u2014 {{replicas}} replicas, rolling update strategy (maxSurge: 1, maxUnavailable: 0)
3. **Service** \u2014 {{serviceType}} to expose the deployment
4. **ConfigMap** \u2014 non-sensitive configuration
5. **Secret** \u2014 sensitive values (base64 encoded, reference env vars from here)
6. **HorizontalPodAutoscaler** \u2014 scale on CPU > 70%, min {{replicas}} / max {{maxReplicas}} pods
7. **PodDisruptionBudget** \u2014 minAvailable: {{minAvailable}} during cluster maintenance

Pod spec requirements:
- Liveness probe: GET /health, initialDelaySeconds: 30, periodSeconds: 10
- Readiness probe: GET /ready, initialDelaySeconds: 10, periodSeconds: 5
- Resource requests and limits (always set both)
- Security context: runAsNonRoot: true, readOnlyRootFilesystem: true
- Image pull policy: Always (for mutable tags like :latest)

Provide all manifests in a single YAML file separated by ---`,
    contextSchema: [
      { key: "serviceType", label: "Kubernetes service type", type: "select", options: ["ClusterIP", "LoadBalancer", "NodePort"], default: "ClusterIP" },
      { key: "replicas", label: "Initial replica count", type: "text", default: "2" },
      { key: "maxReplicas", label: "Max replicas (HPA)", type: "text", default: "10" },
      { key: "minAvailable", label: "Min available (PDB)", type: "text", default: "1" },
      { key: "cpuRequest", label: "CPU request", type: "text", default: "100m" },
      { key: "cpuLimit", label: "CPU limit", type: "text", default: "500m" },
      { key: "memRequest", label: "Memory request", type: "text", default: "128Mi" },
      { key: "memLimit", label: "Memory limit", type: "text", default: "512Mi" }
    ],
    tags: ["kubernetes", "k8s", "devops", "infrastructure"],
    version: "1.0.0"
  },
  {
    id: "devops/terraform",
    name: "Terraform Module",
    category: "devops",
    description: "Write a reusable Terraform module for cloud infrastructure",
    template: `Write a Terraform module for {{resource}} on {{cloud}}.

Module purpose: {{purpose}}
Environment: {{environment}}

Module structure:
\`\`\`
modules/{{moduleName}}/
  main.tf \u2014 resource definitions
  variables.tf \u2014 input variables with types, descriptions, and defaults
  outputs.tf \u2014 output values consumed by other modules
  versions.tf \u2014 required_providers with version constraints
  README.md \u2014 usage example with all variables documented
\`\`\`

Requirements:
1. All resource names include var.environment for env isolation (dev/staging/prod)
2. Tags on every resource: Environment, Project, ManagedBy=terraform
3. No hardcoded values \u2014 everything configurable via variables
4. Use data sources to look up existing resources (VPCs, AMIs) by tag/name, not ID
5. Remote state: provide example backend config (S3 + DynamoDB lock)
6. Sensitive outputs: mark sensitive = true; never output secrets as plain text

Provide: the module files, a usage example in a root module, and a terraform.tfvars.example.`,
    contextSchema: [
      { key: "resource", label: "Resource to create", type: "text", placeholder: "RDS PostgreSQL instance, ECS service, S3 + CloudFront CDN" },
      { key: "cloud", label: "Cloud provider", type: "select", options: ["AWS", "GCP", "Azure", "DigitalOcean"], default: "AWS" },
      { key: "purpose", label: "Module purpose", type: "text", placeholder: "Managed PostgreSQL database with automated backups and read replica" },
      { key: "environment", label: "Target environment", type: "text", default: "staging" },
      { key: "moduleName", label: "Module name", type: "text", placeholder: "rds-postgres" }
    ],
    tags: ["terraform", "iac", "infrastructure", "devops"],
    version: "1.0.0"
  },
  {
    id: "devops/secrets-management",
    name: "Secrets Management",
    category: "devops",
    description: "Design a secrets management strategy for a multi-environment application",
    template: `Design a secrets management strategy for {{project.name}}.

Environments: {{environments}}
Secrets to manage: {{secrets}}
Secrets store: {{store}}

Strategy:
1. **Secret classification**
   - Tier 1 (critical): database passwords, JWT signing keys, payment API keys \u2192 {{store}}
   - Tier 2 (sensitive): third-party API keys, webhook secrets \u2192 {{store}} or env vars
   - Tier 3 (config): feature flags, URLs, timeouts \u2192 environment variables or config files

2. **Rotation**
   - Automated rotation for DB credentials ({{store}} + DB user rotation)
   - Rotation schedule: every 90 days; zero-downtime via dual-secret pattern
   - Alert on secrets within 14 days of expiry

3. **Access control**
   - Each service gets its own IAM role/service account \u2014 least privilege
   - Developers never see production secrets; use separate dev secrets
   - Audit log: every secret access logged with caller identity and timestamp

4. **CI/CD integration**
   - Reference secrets by name in CI config, never copy values into CI env vars
   - Dynamic secrets for CI builds (short-lived, scoped to the build)

5. **Emergency procedures**
   - Runbook: how to rotate a compromised secret in < 15 minutes
   - Break-glass access with full audit trail

Never commit secrets to git. Scan with truffleHog or gitleaks in pre-commit hooks.`,
    contextSchema: [
      { key: "environments", label: "Environments", type: "text", default: "development, staging, production" },
      { key: "secrets", label: "Secrets to manage", type: "text", placeholder: "DB password, JWT secret, Stripe key, SendGrid API key" },
      { key: "store", label: "Secrets store", type: "select", options: ["AWS Secrets Manager", "HashiCorp Vault", "GCP Secret Manager", "Azure Key Vault", "Doppler"], default: "AWS Secrets Manager" }
    ],
    tags: ["secrets", "security", "devops", "infrastructure"],
    version: "1.0.0"
  },
  {
    id: "devops/blue-green-deploy",
    name: "Blue-Green Deployment",
    category: "devops",
    description: "Design a blue-green deployment strategy for zero-downtime releases",
    template: `Design a blue-green deployment strategy for {{project.name}}.

Infrastructure: {{infrastructure}}
Traffic routing: {{router}}

Setup:
- **Blue** environment: current production (receives 100% traffic)
- **Green** environment: new version (receives 0% traffic during preparation)

Deployment procedure:
1. Deploy new version to **green** (parallel to live blue)
2. Run smoke tests against green (health check, critical paths)
3. Warm up green: pre-load caches, warm JIT
4. Switch traffic: {{router}} routes 100% to green \u2192 blue goes idle
5. Monitor green for 15 minutes (error rate, latency, business metrics)
6. **Rollback path**: switch router back to blue \u2014 instant, no re-deploy
7. Decommission blue after 24-hour hold period

Database migrations:
- Must be backward-compatible (blue and green run simultaneously during switch)
- Use expand-contract pattern: add new column \u2192 deploy \u2192 backfill \u2192 remove old column
- Never rename or drop columns in the same deploy as the app change

Monitoring gates (auto-rollback if triggered):
- Error rate > 1% for 2 consecutive minutes
- p99 latency > 2\xD7 baseline
- Failed health checks on > 1 instance

Provide: infrastructure-as-code snippets for {{infrastructure}} and a deployment runbook.`,
    contextSchema: [
      { key: "infrastructure", label: "Infrastructure", type: "select", options: ["AWS ECS", "Kubernetes", "AWS Elastic Beanstalk", "Heroku", "bare VMs"], default: "Kubernetes" },
      { key: "router", label: "Traffic router", type: "select", options: ["AWS ALB", "NGINX", "Kubernetes Ingress", "Cloudflare"], default: "Kubernetes Ingress" }
    ],
    tags: ["deployment", "blue-green", "zero-downtime", "devops"],
    version: "1.0.0"
  },
  {
    id: "devops/ci-pipeline",
    name: "CI/CD Pipeline",
    category: "devops",
    description: "Design a full CI/CD pipeline from commit to production",
    template: `Design a CI/CD pipeline for {{project.name}} using {{ciTool}}.

Language: {{project.language}}
Deploy target: {{deployTarget}}

Pipeline stages:
1. **Lint & Format** \u2014 fail fast on style issues (< 1 min)
2. **Build** \u2014 compile, bundle, generate artifacts
3. **Unit tests** \u2014 isolated, mocked, fast (< 3 min)
4. **Integration tests** \u2014 real DB + services (Docker Compose in CI)
5. **Security scan** \u2014 SAST (Semgrep/SonarQube), dependency audit (npm audit/pip-audit), secret scan (gitleaks)
6. **Build image** \u2014 Docker multi-stage, tag with git SHA
7. **Push to registry** \u2014 push image, update image tag in IaC repo
8. **Deploy to staging** \u2014 automatic on main branch merge
9. **Smoke test** \u2014 hit critical endpoints on staging
10. **Deploy to production** \u2014 manual approval gate (or auto if smoke tests pass)
11. **Post-deploy verification** \u2014 run synthetic monitors for 5 minutes

Branch strategy:
- Feature branches: run stages 1-5
- Main branch: run all stages through staging deploy
- Tags (v*): deploy to production after manual approval

Provide the full {{ciTool}} configuration file with all stages.`,
    contextSchema: [
      { key: "ciTool", label: "CI/CD tool", type: "select", options: ["GitHub Actions", "GitLab CI", "CircleCI", "Jenkins", "Bitbucket Pipelines"], default: "GitHub Actions" },
      { key: "deployTarget", label: "Deploy target", type: "select", options: ["Kubernetes (kubectl)", "AWS ECS", "Heroku", "Fly.io", "VPS (SSH)"], default: "Kubernetes (kubectl)" }
    ],
    tags: ["ci-cd", "pipeline", "devops", "automation"],
    version: "1.0.0"
  },
  // ── Academic (continued) ──────────────────────────────────────────────────
  {
    id: "academic/graph-traversal",
    name: "Graph Traversal",
    category: "academic",
    description: "Implement and explain BFS and DFS graph traversal algorithms",
    template: `Implement BFS and DFS graph traversal for {{project.name}} in {{project.language}}.

Graph type: {{graphType}}
Application: {{application}}

Implement both algorithms:

**Breadth-First Search (BFS):**
- Use a queue (FIFO) \u2014 process nodes level by level
- Time: O(V+E), Space: O(V)
- Use when: shortest path in unweighted graph, level-order traversal

**Depth-First Search (DFS):**
- Use a stack (explicit or call stack for recursion)
- Time: O(V+E), Space: O(V) \u2014 but recursion risks stack overflow on deep graphs
- Use when: cycle detection, topological sort, connected components

Applications for {{application}}:
- Explain which algorithm is more appropriate and why
- Implement the specialized variant (shortest path BFS / topological sort DFS)

Include:
- Graph representation: adjacency list (space-efficient for sparse graphs)
- Visited set to handle cycles in undirected graphs
- Traced path reconstruction (parent map)
- Unit tests with: disconnected graph, cyclic graph, single-node graph, empty graph

{{#if (eq project.type "student")}}- Add detailed comments explaining queue/stack state at each step{{/if}}`,
    contextSchema: [
      { key: "graphType", label: "Graph type", type: "select", options: ["undirected unweighted", "directed unweighted", "directed weighted", "DAG"], default: "undirected unweighted" },
      { key: "application", label: "Real-world application", type: "text", placeholder: "Social network friend suggestions, file system traversal, dependency resolution" }
    ],
    tags: ["graphs", "bfs", "dfs", "algorithms", "academic"],
    version: "1.0.0"
  },
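The shortest-path BFS variant with parent-map path reconstruction that this template asks for can be sketched as follows, on an adjacency-list graph (the sample graph is illustrative):

```javascript
// BFS shortest path on an unweighted adjacency-list graph,
// reconstructing the path via a parent map.
function bfsShortestPath(graph, start, goal) {
  const parent = new Map([[start, null]]); // parent map doubles as the visited set
  const queue = [start];
  while (queue.length > 0) {
    const node = queue.shift(); // FIFO: nodes are processed level by level
    if (node === goal) {
      // Walk the parent pointers back from goal to start.
      const path = [];
      for (let cur = goal; cur !== null; cur = parent.get(cur)) path.unshift(cur);
      return path;
    }
    for (const next of graph[node] ?? []) {
      if (!parent.has(next)) {
        parent.set(next, node);
        queue.push(next);
      }
    }
  }
  return null; // goal unreachable (e.g., disconnected graph)
}

const graph = { A: ["B", "C"], B: ["D"], C: ["D", "E"], D: ["F"], E: ["F"], F: [] };
console.log(bfsShortestPath(graph, "A", "F")); // [ 'A', 'B', 'D', 'F' ]
```

Because BFS explores in increasing distance from the start, the first time the goal is dequeued its parent chain is guaranteed to be a shortest path — the property DFS does not provide.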
  {
    id: "academic/dynamic-programming",
    name: "Dynamic Programming",
    category: "academic",
    description: "Solve a problem using dynamic programming with memoization and tabulation",
    template: `Solve {{problem}} using dynamic programming in {{project.language}}.

Approach: {{approach}}

Step-by-step DP methodology:
1. **Identify overlapping subproblems**: show a recursion tree that reveals repeated calls
2. **Define the state**: what does dp[i] (or dp[i][j]) represent?
3. **State transition (recurrence)**: write the formula relating dp[i] to smaller subproblems
4. **Base cases**: smallest subproblems that can be answered directly
5. **Iteration order**: which direction to fill the table (top-down or bottom-up)

Implement both:
- **Top-down (memoization)**: recursive + cache (dict/array), natural to write
- **Bottom-up (tabulation)**: iterative, fills DP table from base cases up

Complexity analysis:
- Brute force: O(?) time
- With DP: O(?) time, O(?) space
- Space optimization (if applicable): reduce to O(1) or O(n) by only keeping needed rows

Problem: {{problem}}
Concrete example walkthrough: trace through a small input step by step, showing the DP table.

{{#if (eq project.type "student")}}- Explain why greedy fails for this problem (if applicable){{/if}}`,
    contextSchema: [
      { key: "problem", label: "Problem to solve", type: "text", placeholder: "Longest common subsequence, 0/1 knapsack, coin change, edit distance" },
      { key: "approach", label: "Starting approach", type: "select", options: ["top-down (memoization)", "bottom-up (tabulation)", "both"], default: "both" }
    ],
    tags: ["dynamic-programming", "memoization", "algorithms", "academic"],
    version: "1.0.0"
  },
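Both DP styles in this template can be sketched on coin change (one of the placeholder problems): the state dp[i] is the fewest coins summing to amount i, with recurrence dp[i] = 1 + min over coins c of dp[i - c].

```javascript
// Top-down (memoization): recursion + a cache keyed by remaining amount.
function minCoinsMemo(coins, amount, memo = new Map()) {
  if (amount === 0) return 0;       // base case
  if (amount < 0) return Infinity;  // invalid branch
  if (memo.has(amount)) return memo.get(amount); // overlapping subproblem hit
  let best = Infinity;
  for (const c of coins) best = Math.min(best, 1 + minCoinsMemo(coins, amount - c, memo));
  memo.set(amount, best);
  return best;
}

// Bottom-up (tabulation): fill dp[0..amount] from the base case up.
function minCoinsTable(coins, amount) {
  const dp = new Array(amount + 1).fill(Infinity);
  dp[0] = 0; // base case
  for (let i = 1; i <= amount; i++) {
    for (const c of coins) {
      if (c <= i) dp[i] = Math.min(dp[i], dp[i - c] + 1); // recurrence
    }
  }
  return dp[amount];
}

console.log(minCoinsMemo([1, 5, 11], 15), minCoinsTable([1, 5, 11], 15)); // 3 3
```

The coin set [1, 5, 11] at amount 15 also illustrates the greedy-failure point: greedy takes 11 + 1 + 1 + 1 + 1 (five coins), while DP finds 5 + 5 + 5 (three).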
  {
    id: "academic/binary-tree",
    name: "Binary Tree Operations",
    category: "academic",
    description: "Implement binary tree traversals, insertion, deletion, and balancing",
    template: `Implement binary tree operations for {{project.name}} in {{project.language}}.

Tree type: {{treeType}}

Implement:
1. **Node structure**: value, left, right pointers (generic/typed)
2. **Traversals** (recursive + iterative versions):
   - In-order: Left \u2192 Root \u2192 Right (gives sorted sequence for BST)
   - Pre-order: Root \u2192 Left \u2192 Right (useful for serialization)
   - Post-order: Left \u2192 Right \u2192 Root (useful for deletion, expression evaluation)
   - Level-order (BFS): use a queue, returns levels as array of arrays

3. **BST operations** (if {{treeType}} is BST):
   - insert(value): maintain BST property, O(log n) average
   - search(value): return node or null
   - delete(value): handle 3 cases (leaf, one child, two children \u2014 use in-order successor)
   - isValid(): verify BST property using min/max bounds

4. **Tree properties**:
   - height(): max depth from root to leaf
   - size(): total node count
   - isBalanced(): height difference \u2264 1 for all subtrees

{{#if (eq treeType "AVL tree")}}- Implement AVL rotations (LL, RR, LR, RL) to maintain balance after insert/delete{{/if}}

Include: edge cases (empty tree, single node, skewed tree), and complexity analysis for each operation.`,
    contextSchema: [
      { key: "treeType", label: "Tree type", type: "select", options: ["Binary Search Tree (BST)", "AVL tree", "general binary tree"], default: "Binary Search Tree (BST)" }
    ],
    tags: ["binary-tree", "bst", "data-structures", "academic"],
    version: "1.0.0"
  },
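The core BST claim in this template — that an in-order walk of a valid BST yields the keys in sorted order — can be sketched with just insert and the in-order traversal:

```javascript
// Minimal BST: insert plus recursive in-order traversal (Left → Root → Right).
class Node {
  constructor(value) { this.value = value; this.left = null; this.right = null; }
}

function insert(root, value) {
  if (root === null) return new Node(value); // empty spot: place the node here
  if (value < root.value) root.left = insert(root.left, value);
  else root.right = insert(root.right, value);
  return root;
}

function inOrder(root, out = []) {
  if (root === null) return out; // empty-tree edge case
  inOrder(root.left, out);       // Left
  out.push(root.value);          // Root
  inOrder(root.right, out);      // Right
  return out;
}

let root = null;
for (const v of [7, 3, 9, 1, 5]) root = insert(root, v);
console.log(inOrder(root)); // [ 1, 3, 5, 7, 9 ]
```

Insertion order [7, 3, 9, 1, 5] produces a bushy tree, but any insertion order of the same keys gives the same in-order output — which is exactly what an isValid() check can exploit.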
  {
    id: "academic/hash-table",
    name: "Hash Table Design",
    category: "academic",
    description: "Design and implement a hash table with collision handling",
    template: `Design and implement a hash table for {{project.name}} in {{project.language}}.

Collision strategy: {{collisionStrategy}}
Key type: {{keyType}}

Implementation:
1. **Hash function**:
   - For strings: polynomial rolling hash (djb2 or FNV-1a)
   - For integers: Knuth's multiplicative hashing
   - Properties: deterministic, uniform distribution, fast to compute

2. **Collision handling \u2014 {{collisionStrategy}}**:
{{#if (eq collisionStrategy "chaining")}}   - Each bucket holds a linked list of entries
   - Lookup: hash \u2192 bucket \u2192 linear scan of chain
   - Worst case: O(n) with bad hash function; O(1) average with good hash + load factor < 0.7{{/if}}
{{#if (eq collisionStrategy "open addressing")}}   - All entries stored in the array; probe on collision
   - Probe sequence: linear (h+1, h+2...) or quadratic (h+1\xB2, h+2\xB2...) or double hashing
   - Deletion: use tombstone markers, not null (null would break probe chains){{/if}}

3. **Load factor and resizing**:
   - Threshold: resize when load factor > 0.7 (chaining) or > 0.5 (open addressing)
   - Resize: allocate 2\xD7 array, re-hash all entries (do NOT copy \u2014 old indices are invalid)

4. **API**: get(key), set(key, value), delete(key), has(key), size()

Include: analysis of average vs worst-case complexity, and demonstration of clustering effect.

{{#if (eq project.type "student")}}- Show why a bad hash function (e.g., always returning 0) degrades to O(n){{/if}}`,
    contextSchema: [
      { key: "collisionStrategy", label: "Collision handling", type: "select", options: ["chaining", "open addressing"], default: "chaining" },
      { key: "keyType", label: "Key type", type: "select", options: ["string", "integer", "generic/any"], default: "string" }
    ],
    tags: ["hash-table", "data-structures", "hashing", "academic"],
    version: "1.0.0"
  },
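The chaining strategy with a djb2 string hash can be sketched as below. Resizing is deliberately omitted to keep the example short — a real implementation would grow the bucket array past the load-factor threshold the template sets.

```javascript
// djb2 polynomial rolling hash for strings: h = h*33 + charCode, kept unsigned.
function djb2(key) {
  let h = 5381;
  for (let i = 0; i < key.length; i++) {
    h = (h * 33 + key.charCodeAt(i)) >>> 0; // >>> 0 forces unsigned 32-bit
  }
  return h;
}

// Chaining: each bucket is an array of [key, value] entries.
class ChainedHashMap {
  constructor(capacity = 8) {
    this.buckets = Array.from({ length: capacity }, () => []);
  }
  set(key, value) {
    const chain = this.buckets[djb2(key) % this.buckets.length];
    const entry = chain.find((e) => e[0] === key);
    if (entry) entry[1] = value; // key exists: update in place
    else chain.push([key, value]);
  }
  get(key) {
    const chain = this.buckets[djb2(key) % this.buckets.length];
    const entry = chain.find((e) => e[0] === key);
    return entry ? entry[1] : undefined; // linear scan of the chain
  }
}

const map = new ChainedHashMap();
map.set("alpha", 1);
map.set("beta", 2);
map.set("alpha", 3); // overwrite
console.log(map.get("alpha"), map.get("beta"), map.get("gamma")); // 3 2 undefined
```

Swapping djb2 for a degenerate hash (say, always returning 0) sends every key into one chain, and `get` degrades to the O(n) linear scan the template warns about.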
  // ── Review (continued) ────────────────────────────────────────────────────
  {
    id: "review/dependency-audit",
    name: "Dependency Audit",
    category: "review",
    description: "Audit project dependencies for security vulnerabilities, outdated packages, and bloat",
    template: `Audit the dependencies of {{project.name}} ({{project.language}}).

Scope: {{scope}}

Run these checks and report findings:

1. **Security vulnerabilities**
   - Run: \`npm audit --json\` / \`pip-audit\` / \`cargo audit\` / \`bundle audit\`
   - Classify: Critical/High/Medium/Low
   - For each Critical/High: identify the vulnerable package, the attack vector, and whether it's a direct or transitive dependency
   - Action: upgrade, replace, or accept risk with justification

2. **Outdated packages**
   - Run: \`npm outdated\` / \`pip list --outdated\`
   - Classify: major update (breaking), minor (new features), patch (bug fixes)
   - Prioritize: packages with known CVEs or that are no longer maintained

3. **Unused dependencies**
   - Check: packages in package.json not imported anywhere in source
   - Tools: \`depcheck\` (JS), \`deptry\` (Python)

4. **Bundle size impact** (for frontend):
   - Largest packages by install size
   - Suggest lighter alternatives (e.g., date-fns vs moment, nanoid vs uuid)

5. **License compatibility**
   - Flag: GPL licenses in a commercial project, unlicensed packages

Provide: prioritized action list with effort estimates.`,
    contextSchema: [
      { key: "scope", label: "What to audit", type: "select", options: ["all dependencies", "production only", "direct dependencies only"], default: "all dependencies" }
    ],
    tags: ["dependencies", "security", "audit", "review"],
    version: "1.0.0"
  },
  {
    id: "review/license-check",
    name: "License Compatibility Review",
    category: "review",
    description: "Audit open-source licenses in the dependency tree for legal compatibility",
    template: `Review open-source license compatibility for {{project.name}}.

Project license: {{projectLicense}}
Use case: {{useCase}}

License compatibility matrix for {{projectLicense}} / {{useCase}}:

| License | Compatible? | Notes |
|---------|-------------|-------|
| MIT | \u2705 Yes | Permissive, only requires attribution |
| Apache 2.0 | \u2705 Yes | Permissive, requires NOTICE file |
| BSD 2/3-Clause | \u2705 Yes | Permissive |
| ISC | \u2705 Yes | Equivalent to simplified MIT |
| LGPL v2/v3 | \u26A0\uFE0F Conditional | OK if dynamically linked; static link requires source disclosure |
| GPL v2/v3 | \u274C No (commercial) | Copyleft \u2014 derivative work must also be GPL |
| AGPL | \u274C No | Strongest copyleft \u2014 SaaS use triggers disclosure |
| CC0 | \u2705 Yes | Public domain dedication |
| Unlicensed | \u274C No | All rights reserved by default |

Scan steps:
1. Run \`license-checker --json\` (npm) or \`pip-licenses\` (Python) to list all licenses
2. Flag all non-permissive licenses for legal review
3. For flagged packages: find an MIT-licensed alternative or seek a commercial license

Output: a CSV of all dependencies with their license, and a list requiring legal review.`,
    contextSchema: [
      { key: "projectLicense", label: "Your project's license", type: "select", options: ["MIT", "Apache 2.0", "Proprietary/Commercial", "GPL v3", "AGPL"], default: "Proprietary/Commercial" },
      { key: "useCase", label: "Use case", type: "select", options: ["commercial SaaS", "open-source library", "internal tool", "academic project"], default: "commercial SaaS" }
    ],
    tags: ["license", "legal", "compliance", "review"],
    version: "1.0.0"
  },
|
|
4358
|
+
{
|
|
4359
|
+
id: "review/documentation-review",
|
|
4360
|
+
name: "Documentation Review",
|
|
4361
|
+
category: "review",
|
|
4362
|
+
description: "Audit project documentation for completeness, accuracy, and clarity",
|
|
4363
|
+
template: `Review the documentation for {{project.name}}.
|
|
4364
|
+
|
|
4365
|
+
Documentation types to review: {{docTypes}}
|
|
4366
|
+
|
|
4367
|
+
For each document, evaluate:
|
|
4368
|
+
|
|
4369
|
+
**Completeness checklist:**
|
|
4370
|
+
- [ ] README: project purpose, installation, quick-start example, configuration, contributing guide
|
|
4371
|
+
- [ ] API docs: every public endpoint documented (method, path, auth, request shape, response shape, error codes)
|
|
4372
|
+
- [ ] Architecture: system diagram, component responsibilities, data flow, external dependencies
|
|
4373
|
+
- [ ] Deployment: step-by-step guide for each environment, rollback instructions
|
|
4374
|
+
- [ ] Runbooks: top 5 most likely incidents with diagnosis and resolution steps
|
|
4375
|
+
|
|
4376
|
+
**Quality checks:**
|
|
4377
|
+
- Accuracy: does it match the current code? (spot-check 5 random code samples against docs)
|
|
4378
|
+
- Clarity: would a new team member understand this in < 30 minutes?
|
|
4379
|
+
- Code examples: do they actually run? Are they up to date?
|
|
4380
|
+
- Broken links: check all internal and external links
|
|
4381
|
+
- Outdated screenshots or UI references
|
|
4382
|
+
|
|
4383
|
+
**Scoring (1-5 per dimension):**
|
|
4384
|
+
- Coverage: what % of the codebase is documented?
|
|
4385
|
+
- Accuracy: how many outdated/incorrect claims were found?
|
|
4386
|
+
- Clarity: readability and structure
|
|
4387
|
+
- Maintainability: is it easy to keep up to date?
|
|
4388
|
+
|
|
4389
|
+
Output: prioritized list of gaps with suggested content for the top 3.`,
|
|
4390
|
+
contextSchema: [
|
|
4391
|
+
{ key: "docTypes", label: "Documentation to review", type: "text", placeholder: "README, API docs, architecture diagrams, deployment guide" }
|
|
4392
|
+
],
|
|
4393
|
+
tags: ["documentation", "review", "quality"],
|
|
4394
|
+
version: "1.0.0"
|
|
4395
|
+
},
|
|
4396
|
+
{
|
|
4397
|
+
id: "review/load-testing-plan",
|
|
4398
|
+
name: "Load Testing Review",
|
|
4399
|
+
category: "review",
|
|
4400
|
+
description: "Review and improve an existing load testing setup",
|
|
4401
|
+
template: `Review the load testing setup for {{project.name}}.
|
|
4402
|
+
|
|
4403
|
+
Existing setup: {{existingSetup}}
|
|
4404
|
+
Target SLOs: {{slos}}
|
|
4405
|
+
|
|
4406
|
+
Review dimensions:
|
|
4407
|
+
|
|
4408
|
+
1. **Test scenario coverage**
|
|
4409
|
+
- Does the load test reflect real user behavior (not just single endpoints)?
|
|
4410
|
+
- Is the think time (pause between requests) realistic?
|
|
4411
|
+
- Does it cover concurrent sessions, not just concurrent requests?
|
|
4412
|
+
|
|
4413
|
+
2. **Ramp-up strategy**
|
|
4414
|
+
- Is traffic introduced gradually (step-up) or all at once (spike)?
|
|
4415
|
+
- Does it include a warm-up period to fill caches?
|
|
4416
|
+
|
|
4417
|
+
3. **Assertions and thresholds**
|
|
4418
|
+
- Are pass/fail thresholds defined against SLOs: {{slos}}?
|
|
4419
|
+
- Does it assert on error rate (not just response time)?
|
|
4420
|
+
|
|
4421
|
+
4. **Data realism**
|
|
4422
|
+
- Does each virtual user use unique data (not the same user ID)?
|
|
4423
|
+
- Is the request distribution realistic (20% POST, 80% GET)?
|
|
4424
|
+
|
|
4425
|
+
5. **Monitoring correlation**
|
|
4426
|
+
- Can you correlate load test timeline with APM/metrics data?
|
|
4427
|
+
- Are resource metrics (CPU, memory, DB connections) captured during the test?
|
|
4428
|
+
|
|
4429
|
+
6. **Findings and recommendations**
|
|
4430
|
+
- List what's good about the existing setup
|
|
4431
|
+
- List gaps and specific fixes
|
|
4432
|
+
|
|
4433
|
+
Provide: an improved test script and a results analysis checklist.`,
|
|
4434
|
+
contextSchema: [
|
|
4435
|
+
{ key: "existingSetup", label: "Existing setup description", type: "text", placeholder: "k6 script hitting /api/products with 100 VUs for 5 minutes" },
|
|
4436
|
+
{ key: "slos", label: "Target SLOs", type: "text", placeholder: "p95 < 500ms, error rate < 0.1%, 99.9% availability" }
|
|
4437
|
+
],
|
|
4438
|
+
tags: ["load-testing", "slo", "performance", "review"],
|
|
4439
|
+
version: "1.0.0"
|
|
4440
|
+
},
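As an illustrative sketch of dimensions 2 and 3 above (assuming k6 and its built-in `http_req_duration` / `http_req_failed` metrics; not part of this package), SLOs such as "p95 < 500ms, error rate < 0.1%" map onto threshold expressions that fail the test run automatically:

```javascript
// Hypothetical k6 options object: SLO-backed thresholds plus a gradual ramp-up.
const options = {
  stages: [
    { duration: "2m", target: 50 },  // warm-up: fills caches before measuring
    { duration: "5m", target: 100 }, // steady state at target concurrency
    { duration: "2m", target: 0 },   // ramp down
  ],
  thresholds: {
    http_req_duration: ["p(95)<500"], // SLO: p95 latency under 500ms
    http_req_failed: ["rate<0.001"],  // SLO: error rate under 0.1%
  },
};
```

With thresholds in place, CI can gate on the load test's exit code instead of a human reading the report.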
      // ── General (continued) ───────────────────────────────────────────────────
      {
        id: "general/feature-flag",
        name: "Feature Flag",
        category: "general",
        description: "Implement a feature flag system with gradual rollout and kill switch",
        template: `Implement a feature flag system for {{project.name}} in {{project.language}}{{#if project.framework}} using {{project.framework}}{{/if}}.

Flag backend: {{flagBackend}}
Flags to implement: {{flags}}

Design:
1. **Flag definition**: name, description, enabled (bool), rollout % (0-100), targeting rules
2. **Evaluation**: isEnabled(flagName, context?) \u2014 context includes userId, environment, attributes
3. **Rollout strategies**:
- All/none: simple on/off
- Percentage: hash(userId + flagName) % 100 < rolloutPct \u2014 consistent per user
- Targeted: specific user IDs, email domains, or custom attributes

4. **Code integration**:
\`\`\`
if (await flags.isEnabled('new-checkout', { userId })) {
  // new path
} else {
  // old path
}
\`\`\`

5. **Kill switch**: disable a flag immediately without a deploy (set rollout to 0%)
6. **Flag cleanup**: when a flag reaches 100% and is stable, delete it and the old code path \u2014 track flag age to enforce cleanup

{{#if (eq flagBackend "environment variables")}}- Simple approach: flags as env vars; redeploy required to change{{/if}}
{{#if (eq flagBackend "database")}}- Store in DB table: flags(name, config JSON, updated_at); cache with 60s TTL{{/if}}
{{#if (eq flagBackend "LaunchDarkly / GrowthBook")}}- Use SDK; evaluate server-side; stream flag changes without redeploy{{/if}}

Include: useFeatureFlag(name) React hook and a flag admin endpoint.`,
        contextSchema: [
          { key: "flagBackend", label: "Flag storage", type: "select", options: ["environment variables", "database", "LaunchDarkly / GrowthBook", "Redis"], default: "database" },
          { key: "flags", label: "Flags to implement", type: "text", placeholder: "new-checkout-flow, ai-recommendations, dark-mode-v2" }
        ],
        tags: ["feature-flags", "rollout", "experimentation", "general"],
        version: "1.0.0"
      },
      {
        id: "general/ab-testing",
        name: "A/B Test Plan",
        category: "general",
        description: "Design an A/B test with statistical validity and analysis plan",
        template: `Design an A/B test for {{project.name}}.

Hypothesis: {{hypothesis}}
Primary metric: {{primaryMetric}}

Test design:

**1. Hypothesis**
- Control (A): {{control}}
- Variant (B): {{variant}}
- Expected lift: {{expectedLift}}

**2. Statistical requirements**
- Significance level: \u03B1 = 0.05 (5% false positive rate)
- Statistical power: 1 - \u03B2 = 0.80 (80% chance to detect a real effect)
- Minimum detectable effect: {{expectedLift}}
- Required sample size: [calculate using power analysis for {{primaryMetric}} baseline]

**3. Randomization**
- Unit of randomization: {{randomizationUnit}} (must be consistent across sessions)
- Traffic split: 50/50 control/variant
- Segments: new vs returning, mobile vs desktop (analyze as segments, not exclusion filters)

**4. Duration**
- Run for at least 2 full weeks (captures weekly seasonality)
- Stop early only for safety issues, not promising results (avoids peeking bias)

**5. Analysis**
- Primary metric: {{primaryMetric}} \u2014 use t-test or z-test for proportions
- Guardrail metrics: metrics that must not regress (page load time, error rate)
- Segment analysis: check if effect is consistent across device, region, user age

**6. Decision criteria**
- Ship if: significant lift in primary metric AND no guardrail regression
- Iterate if: directional positive but not significant (run longer or redesign)
- Kill if: negative impact on primary or guardrail metric`,
        contextSchema: [
          { key: "hypothesis", label: "What you believe will improve", type: "text", placeholder: "Changing the CTA button color from gray to blue will increase clicks" },
          { key: "control", label: "Control (current state)", type: "text", placeholder: 'Gray "Sign up" button' },
          { key: "variant", label: "Variant (change to test)", type: "text", placeholder: 'Blue "Get started free" button' },
          { key: "primaryMetric", label: "Primary metric", type: "text", placeholder: "Sign-up conversion rate" },
          { key: "expectedLift", label: "Minimum expected lift", type: "text", default: "5%" },
          { key: "randomizationUnit", label: "Randomization unit", type: "select", options: ["user ID", "session ID", "device ID", "IP address"], default: "user ID" }
        ],
        tags: ["ab-testing", "experimentation", "analytics", "general"],
        version: "1.0.0"
      },
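The "[calculate using power analysis]" step in section 2 can be sketched with the standard two-proportion normal approximation (alpha = 0.05 two-sided, power = 0.80); this is illustrative, not a substitute for a proper power calculator.

```javascript
// Approximate sample size per arm for detecting a relative lift on a
// conversion-rate metric (normal approximation, equal allocation).
function sampleSizePerArm(baselineRate, relativeLift) {
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + relativeLift);
  const zAlpha = 1.96;  // two-sided alpha = 0.05
  const zBeta = 0.8416; // power = 0.80
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (p2 - p1) ** 2);
}
```

A 5% baseline with a 5% minimum relative lift needs on the order of 120k users per arm; small lifts on small baselines are expensive to detect.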
      {
        id: "general/logging-strategy",
        name: "Logging Strategy",
        category: "general",
        description: "Design a structured logging strategy with levels, fields, and sampling",
        template: `Design a structured logging strategy for {{project.name}} in {{project.language}}{{#if project.framework}} using {{project.framework}}{{/if}}.

Log destination: {{destination}}

**Log levels (use strictly):**
- DEBUG: detailed internal state \u2014 disabled in production
- INFO: business-significant events (user registered, order placed, job completed)
- WARN: unexpected state that the system recovered from (retry succeeded, fallback used)
- ERROR: failure requiring investigation (exception with stack trace, failed job, broken integration)
- FATAL: process is about to exit

**Required fields on every log line:**
\`\`\`json
{
  "timestamp": "ISO 8601",
  "level": "info",
  "service": "{{project.name}}",
  "traceId": "propagated from incoming request",
  "message": "human-readable summary",
  ... event-specific fields
}
\`\`\`

**What to log:**
- \u2705 Request in/out (method, path, status, duration, userId)
- \u2705 External API calls (provider, endpoint, status, latency)
- \u2705 Database queries > 500ms (query, params redacted, duration)
- \u2705 Background job lifecycle (started, completed, failed, duration)
- \u274C Never log: passwords, tokens, full credit card numbers, PII (log user ID instead)

**Sampling:**
- 100% for ERROR and FATAL
- 10% for INFO in high-traffic paths (adjust via env var)
- 1% for DEBUG (only in staging)

**Alerting rules** (set up in {{destination}}):
- ERROR rate > 1% of requests \u2192 page oncall
- FATAL \u2192 immediate page

Provide: logger setup code, middleware integration, and redaction utility.`,
        contextSchema: [
          { key: "destination", label: "Log destination", type: "select", options: ["Datadog", "CloudWatch", "Elasticsearch (ELK)", "Grafana Loki", "stdout (self-hosted)"], default: "Datadog" }
        ],
        tags: ["logging", "observability", "structured-logs", "general"],
        version: "1.0.0"
      },
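One possible shape for the redaction utility the template asks for (the key list and function name are assumptions): walk the log payload and blank sensitive fields before serialization.

```javascript
// Recursively replace sensitive values so they never reach the log sink.
const SENSITIVE_KEYS = new Set(["password", "token", "authorization", "cardNumber", "ssn"]);

function redact(value) {
  if (value === null || typeof value !== "object") return value;
  if (Array.isArray(value)) return value.map(redact);
  const out = {};
  for (const [key, v] of Object.entries(value)) {
    out[key] = SENSITIVE_KEYS.has(key) ? "[REDACTED]" : redact(v);
  }
  return out;
}
```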
      {
        id: "general/data-migration",
        name: "Data Migration Plan",
        category: "general",
        description: "Plan and implement a safe data migration with rollback",
        template: `Plan a data migration for {{project.name}}.

Migration: {{migrationDescription}}
Data size: {{dataSize}}
Downtime allowed: {{downtime}}

Migration strategy: {{#if (eq downtime "zero")}}online (zero-downtime){{else}}offline (maintenance window){{/if}}

**Pre-migration checklist:**
- [ ] Create a full backup and verify restore works
- [ ] Run migration against a production-sized staging copy
- [ ] Measure migration duration on staging to estimate production time
- [ ] Write and test the rollback script
- [ ] Define success criteria (row count, checksum, smoke tests)
- [ ] Notify affected users/teams of maintenance window (if any)

**Migration steps:**
1. Deploy new code that supports BOTH old and new schema (expand phase)
2. Run the data migration script
3. Verify: row counts match, checksums pass, smoke tests green
4. Deploy code that uses ONLY the new schema (contract phase)
5. Drop old columns/tables (after 1 week of clean operation)

**Rollback plan:**
- Trigger: any verification step fails
- Action: restore from backup OR apply reverse migration script
- RTO target: < {{rollbackRTO}}

**Large dataset strategy ({{dataSize}}):**
- Batch processing: migrate in chunks of 1000 rows with a brief sleep between batches
- Track progress: store last-processed ID; resume on failure without re-processing
- Monitor: DB replication lag, table lock waits, query performance during migration`,
        contextSchema: [
          { key: "migrationDescription", label: "What is being migrated", type: "text", placeholder: "Normalize user addresses from a single text field to a structured address table" },
          { key: "dataSize", label: "Approximate data size", type: "text", placeholder: "10M rows, ~50GB" },
          { key: "downtime", label: "Downtime tolerance", type: "select", options: ["zero", "30 minutes", "2 hours", "weekend window"], default: "zero" },
          { key: "rollbackRTO", label: "Rollback time target", type: "text", default: "15 minutes" }
        ],
        tags: ["migration", "database", "data", "general"],
        version: "1.0.0"
      },
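The batch/checkpoint strategy above can be sketched like this (synchronous and in-memory for brevity; a real migration would await the DB driver and sleep between chunks, and all names here are illustrative):

```javascript
// Resumable batching: record the last-processed id after each chunk so a
// crash restarts from the checkpoint instead of row zero.
function migrateInBatches(fetchBatch, processRow, checkpoint, batchSize = 1000) {
  let lastId = checkpoint.lastId ?? 0;
  for (;;) {
    const rows = fetchBatch(lastId, batchSize); // rows with id > lastId, ordered by id
    if (rows.length === 0) break;
    rows.forEach(processRow);
    lastId = rows[rows.length - 1].id;
    checkpoint.lastId = lastId; // persist progress before the next chunk
  }
  return lastId;
}
```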
      {
        id: "general/sla-definition",
        name: "SLA / SLO / SLI Definition",
        category: "general",
        description: "Define service level indicators, objectives, and agreements for a service",
        template: `Define SLIs, SLOs, and SLAs for {{project.name}}.

Service type: {{serviceType}}
User expectations: {{userExpectations}}

**SLI \u2192 SLO \u2192 SLA hierarchy:**
- **SLI** (what you measure): the metric
- **SLO** (your internal target): the threshold you aim to meet
- **SLA** (the promise to customers): the threshold with consequences if missed

**Recommended SLIs for {{serviceType}}:**

| SLI | Measurement | Good event definition |
|-----|-------------|----------------------|
| Availability | % of successful requests | HTTP status < 500 |
| Latency | p95 response time | p95 < 500ms |
| Error rate | % of requests with errors | Error rate < 0.1% |
| Throughput | Requests per second | \u2265 {{minThroughput}} RPS |

**SLO targets:**
| SLI | SLO | Error budget (30-day window) |
|-----|-----|------------------------------|
| Availability | 99.9% | 43.2 minutes downtime/month |
| Latency (p95) | 99% of requests < 500ms | 1% slow requests allowed |
| Error rate | < 0.1% | 1 in 1000 requests may fail |

**Error budget policy:**
- Over 50% budget remaining: deploy freely, invest in features
- 25-50% remaining: reduce deploy frequency, increase testing
- Under 25% remaining: freeze non-essential deploys, focus on reliability

**SLA consequences** (if SLO is breached for customers):
- 99.9% \u2192 99.5%: 10% service credit
- < 99.5%: 25% service credit

Provide: monitoring queries (PromQL/DataDog) for each SLI, and an error budget burn rate alert.`,
        contextSchema: [
          { key: "serviceType", label: "Service type", type: "select", options: ["HTTP API", "background job processor", "data pipeline", "real-time service"], default: "HTTP API" },
          { key: "userExpectations", label: "User expectations", type: "text", placeholder: "Users expect < 1 second response times and 24/7 availability" },
          { key: "minThroughput", label: "Minimum throughput (RPS)", type: "text", default: "100" }
        ],
        tags: ["sla", "slo", "sli", "reliability", "sre", "general"],
        version: "1.0.0"
      },
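The error-budget column follows from simple arithmetic: 99.9% availability over a 30-day window leaves 0.1% of 43,200 minutes, i.e. 43.2 minutes. As a sketch (function name illustrative):

```javascript
// Downtime allowed per window for a given availability SLO.
function errorBudgetMinutes(sloPct, windowDays = 30) {
  const totalMinutes = windowDays * 24 * 60; // 43,200 for a 30-day window
  return totalMinutes * (1 - sloPct / 100);
}
```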
      {
        id: "general/technical-spec",
        name: "Technical Specification",
        category: "general",
        description: "Write a technical specification document for a new feature or system",
        template: `Write a technical specification for {{feature}} in {{project.name}}.

Author: {{author}}
Status: Draft

---

# Technical Spec: {{feature}}

## Problem Statement
{{problem}}

## Goals
- [Primary goal: what this feature achieves]
- [Secondary goal]

## Non-Goals
- [Explicitly what this spec does NOT cover]
- [Related ideas intentionally deferred]

## Background & Context
[Why now? What changed to make this necessary? Link to PRD/product brief]

## Proposed Solution

### High-Level Design
[System diagram or description of the approach]

### Data Model Changes
[New tables, columns, or schema changes with migration plan]

### API Changes
[New or modified endpoints with request/response shapes]

### Key Algorithms / Logic
[Pseudocode or description of non-obvious logic]

### Error Handling
[How errors are surfaced to users and logged internally]

## Alternative Approaches Considered
| Approach | Pros | Cons | Reason rejected |
|----------|------|------|-----------------|

## Dependencies
- [Other teams, services, or external systems required]

## Risks & Mitigations
| Risk | Likelihood | Impact | Mitigation |
|------|-----------|--------|------------|

## Implementation Plan
| Phase | What | Estimated effort |
|-------|------|-----------------|
| 1 | Schema migration | 1 day |

## Open Questions
- [ ] [Question that needs answering before implementation]

## Sign-off
| Role | Name | Status |
|------|------|--------|
| Engineering | | Pending |
| Product | | Pending |`,
        contextSchema: [
          { key: "feature", label: "Feature or system name", type: "text", placeholder: "Real-time Collaboration Engine" },
          { key: "problem", label: "Problem being solved", type: "text", placeholder: "Users cannot collaborate on documents simultaneously without conflicts" },
          { key: "author", label: "Author name", type: "text", placeholder: "Your name" }
        ],
        tags: ["spec", "technical-design", "documentation", "general"],
        version: "1.0.0"
      },
      {
        id: "general/capacity-planning",
        name: "Capacity Planning",
        category: "general",
        description: "Plan infrastructure capacity for projected growth",
        template: `Perform capacity planning for {{project.name}}.

Current load: {{currentLoad}}
Growth projection: {{growthProjection}}
Time horizon: {{horizon}}

**Step 1: Baseline measurements**
Measure current resource utilization at peak:
- CPU usage: ___% average, ___% peak
- Memory usage: ___MB average, ___MB peak
- Database connections: ___/{{maxConnections}} pool used
- Storage growth: ___GB/month
- Network: ___Mbps in/out

**Step 2: Project future load**
If current load is {{currentLoad}} and growth is {{growthProjection}}:
- Load at 3 months: [calculate]
- Load at 6 months: [calculate]
- Load at {{horizon}}: [calculate]

**Step 3: Identify bottlenecks (which resource hits limit first?)**
For each resource, estimate when it saturates at projected growth:
| Resource | Current | Limit | Time to saturation |
|----------|---------|-------|-------------------|

**Step 4: Mitigation options**
For each bottleneck, provide 3 options ranked by effort:
1. Quick fix (< 1 week): scale vertically, tune config
2. Medium term (1-3 months): architectural change (add cache, read replica, CDN)
3. Long term (3-6 months): re-architect (sharding, microservices, rewrite hot path)

**Step 5: Cost projection**
| Scenario | Current cost | Projected cost at {{horizon}} |
|----------|-------------|-------------------------------|

Recommendation: the option with the best cost/effort/risk trade-off for the next {{horizon}}.`,
        contextSchema: [
          { key: "currentLoad", label: "Current peak load", type: "text", placeholder: "500 req/s, 10k daily active users" },
          { key: "growthProjection", label: "Growth rate", type: "text", placeholder: "20% month-over-month" },
          { key: "horizon", label: "Planning horizon", type: "select", options: ["3 months", "6 months", "1 year", "2 years"], default: "6 months" },
          { key: "maxConnections", label: "Current DB connection pool limit", type: "text", default: "100" }
        ],
        tags: ["capacity", "scaling", "infrastructure", "planning", "general"],
        version: "1.0.0"
      },
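Step 2's projections are compound growth; a quick sketch (illustrative helper, not part of the package):

```javascript
// Peak load after `months` of steady month-over-month percentage growth.
function projectedLoad(currentLoad, monthlyGrowthPct, months) {
  return currentLoad * Math.pow(1 + monthlyGrowthPct / 100, months);
}
```

500 req/s growing 20% month-over-month is roughly 1,493 req/s at six months and 4,458 req/s at one year; capacity cliffs arrive faster than linear intuition suggests.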
      {
        id: "general/team-workflow",
        name: "Team Workflow & Git Strategy",
        category: "general",
        description: "Define a team git branching strategy and development workflow",
        template: `Define a team workflow and git branching strategy for {{project.name}}.

Team size: {{teamSize}}
Release cadence: {{releaseCadence}}
Workflow: {{workflow}}

## Branching Strategy: {{workflow}}

{{#if (eq workflow "GitHub Flow")}}
**GitHub Flow** (simple, continuous deployment):
- \`main\` is always deployable
- Feature branches: \`feature/short-description\` \u2014 branch from main, merge back to main
- No long-lived branches; PRs merged within 1-2 days
- Deploy on every merge to main
- Best for: teams shipping continuously, SaaS products
{{/if}}
{{#if (eq workflow "Git Flow")}}
**Git Flow** (structured, release-based):
- \`main\`: production code only, tagged at each release
- \`develop\`: integration branch, always ahead of main
- \`feature/*\`: branch from develop, merge to develop
- \`release/*\`: branch from develop when ready, merge to main AND develop
- \`hotfix/*\`: branch from main, merge to main AND develop
- Best for: versioned products, multiple supported releases
{{/if}}

## Conventions

**Branch names**: \`type/ticket-short-description\`
Types: feat, fix, chore, refactor, docs, test

**Commit messages** (Conventional Commits):
\`\`\`
feat(auth): add OAuth2 login with Google
fix(checkout): prevent double-charge on network retry
docs(api): update authentication endpoint examples
\`\`\`

**PR rules:**
- Max 400 lines changed (larger = split the PR)
- At least {{requiredReviewers}} approving review(s) before merge
- CI must be green (lint, tests, security scan)
- Squash merge to keep main history clean

**Code review SLA:** respond within {{reviewSLA}}; merge within 48 hours of approval.

Provide: a .github/PULL_REQUEST_TEMPLATE.md and branch protection rules configuration.`,
        contextSchema: [
          { key: "teamSize", label: "Team size", type: "select", options: ["1-3 (solo/duo)", "4-8 (small team)", "9-20 (medium team)", "20+ (large team)"], default: "4-8 (small team)" },
          { key: "releaseCadence", label: "Release cadence", type: "select", options: ["continuous (multiple per day)", "weekly", "bi-weekly sprint", "monthly", "quarterly"], default: "weekly" },
          { key: "workflow", label: "Branching workflow", type: "select", options: ["GitHub Flow", "Git Flow"], default: "GitHub Flow" },
          { key: "requiredReviewers", label: "Required reviewers", type: "text", default: "1" },
          { key: "reviewSLA", label: "Review response SLA", type: "text", default: "4 hours" }
        ],
        tags: ["workflow", "git", "team", "process", "general"],
        version: "1.0.0"
      }
    ];