@sylphx/flow 2.11.0 → 2.12.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (27)
  1. package/CHANGELOG.md +6 -0
  2. package/assets/slash-commands/review-account-security.md +16 -23
  3. package/assets/slash-commands/review-admin.md +17 -32
  4. package/assets/slash-commands/review-auth.md +16 -17
  5. package/assets/slash-commands/review-billing.md +16 -25
  6. package/assets/slash-commands/review-code-quality.md +18 -19
  7. package/assets/slash-commands/review-data-architecture.md +19 -18
  8. package/assets/slash-commands/review-database.md +19 -15
  9. package/assets/slash-commands/review-delivery.md +19 -30
  10. package/assets/slash-commands/review-discovery.md +19 -15
  11. package/assets/slash-commands/review-growth.md +15 -32
  12. package/assets/slash-commands/review-i18n.md +15 -28
  13. package/assets/slash-commands/review-ledger.md +19 -14
  14. package/assets/slash-commands/review-observability.md +16 -18
  15. package/assets/slash-commands/review-operability.md +16 -24
  16. package/assets/slash-commands/review-performance.md +15 -21
  17. package/assets/slash-commands/review-pricing.md +17 -22
  18. package/assets/slash-commands/review-privacy.md +17 -28
  19. package/assets/slash-commands/review-pwa.md +15 -18
  20. package/assets/slash-commands/review-referral.md +16 -25
  21. package/assets/slash-commands/review-security.md +20 -28
  22. package/assets/slash-commands/review-seo.md +22 -33
  23. package/assets/slash-commands/review-storage.md +18 -15
  24. package/assets/slash-commands/review-support.md +18 -20
  25. package/assets/slash-commands/review-trust-safety.md +42 -0
  26. package/assets/slash-commands/review-uiux.md +14 -24
  27. package/package.json +1 -1
package/CHANGELOG.md CHANGED
@@ -1,5 +1,11 @@
  # @sylphx/flow

+ ## 2.12.0 (2025-12-17)
+
+ ### ✨ Features
+
+ - **commands:** add trust-safety and fill SSOT gaps ([fe67913](https://github.com/SylphxAI/flow/commit/fe67913db8183ae6c16070825f61744cf44acafa))
+
  ## 2.11.0 (2025-12-17)

  ### ✨ Features
package/assets/slash-commands/review-account-security.md CHANGED
@@ -1,6 +1,6 @@
  ---
  name: review-account-security
- description: Review account security - sessions, devices, MFA, security events
+ description: Review account security - sessions, MFA, devices, security events
  agent: coder
  ---

@@ -12,37 +12,30 @@ agent: coder
  * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
  * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
  * **Single-pass delivery**: no deferrals; deliver a complete assessment.
- * **Explore beyond the spec**: identify improvements for user safety and threat detection.
+ * **Explore beyond the spec**: identify threats users can't protect themselves from.

  ## Tech Stack

  * **Auth**: better-auth
  * **Framework**: Next.js

- ## Review Scope
+ ## Non-Negotiables

- ### Membership and Account Security
+ * Session/device visibility and revocation must exist
+ * All security-sensitive actions must be server-enforced and auditable
+ * Account recovery must require step-up verification

- * Membership is entitlement-driven and server-enforced.
- * Provide a dedicated **Account Security** surface.
- * **Account Security minimum acceptance**:
- * Session/device visibility and revocation
- * MFA/passkey management
- * Linked identity provider management
- * Key security event visibility (and export where applicable)
- * All server-enforced and auditable
+ ## Context

- ### Recovery Governance
+ Account security is about giving users control over their own safety. Users should be able to see what's accessing their account, remove suspicious sessions, and understand when something unusual happens.

- * Account recovery flow secure
- * Support-assisted recovery with strict audit logging
- * Step-up verification for sensitive actions
+ But it's also about protecting users from threats they don't know about. Compromised credentials, session hijacking, social engineering attacks on support — these require proactive detection, not just user vigilance.

- ## Key Areas to Explore
+ ## Driving Questions

- * What visibility do users have into their active sessions and devices?
- * How robust is the MFA implementation and enrollment flow?
- * What security events are logged and how accessible are they to users?
- * How does the account recovery flow prevent social engineering attacks?
- * What step-up authentication exists for sensitive actions?
- * How are compromised accounts detected and handled?
+ * Can a user tell if someone else has access to their account?
+ * What happens when an account is compromised, and how fast can we detect and respond?
+ * How does the recovery flow prevent social engineering attacks?
+ * What security events should trigger user notification?
+ * Where are we relying on user vigilance when we should be detecting threats?
+ * What would a truly paranoid user want that we don't offer?
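The new non-negotiables here are concrete enough to sketch. Below is a minimal, self-contained shape for server-enforced session visibility and revocation with an audit trail; the in-memory store and all names are illustrative stand-ins, not better-auth's actual APIs (better-auth ships its own session management).

```ts
// Sketch only: a durable session table and audit log replace these in practice.
type Session = { id: string; userId: string; device: string; lastSeenAt: Date };
type AuditEvent = { actorId: string; action: string; subjectId: string; at: Date };

const sessionStore = new Map<string, Session>(); // stand-in for the session table
const auditLog: AuditEvent[] = []; // stand-in for a durable audit log

// Visibility: a user can enumerate every session on their own account.
export function listSessions(userId: string): Session[] {
  return [...sessionStore.values()].filter((s) => s.userId === userId);
}

// Revocation: server-enforced ownership check, then an audited delete.
export function revokeSession(userId: string, sessionId: string): void {
  const session = sessionStore.get(sessionId);
  if (!session || session.userId !== userId) {
    throw new Error("Session not found or not owned by caller");
  }
  sessionStore.delete(sessionId);
  // Non-negotiable: every security-sensitive action is auditable (who/when/what).
  auditLog.push({
    actorId: userId,
    action: "session.revoke",
    subjectId: sessionId,
    at: new Date(),
  });
}
```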
package/assets/slash-commands/review-admin.md CHANGED
@@ -1,6 +1,6 @@
  ---
  name: review-admin
- description: Review admin - RBAC, bootstrap, config management, feature flags
+ description: Review admin - RBAC, bootstrap, audit, operational tools
  agent: coder
  ---

@@ -12,7 +12,7 @@ agent: coder
  * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
  * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
  * **Single-pass delivery**: no deferrals; deliver a complete assessment.
- * **Explore beyond the spec**: identify operational improvements and safety enhancements.
+ * **Explore beyond the spec**: identify operational gaps and safety improvements.

  ## Tech Stack

@@ -20,39 +20,24 @@ agent: coder
  * **API**: tRPC
  * **Database**: Neon (Postgres)

- ## Review Scope
+ ## Non-Negotiables

- ### Admin Platform (Operational-Grade)
+ * Admin bootstrap must use secure allowlist, not file seeding; must be permanently disabled after first admin
+ * All privilege grants must be audited (who/when/why)
+ * Actions affecting money/access/security require step-up controls
+ * Secrets must never be exposed through admin UI

- * Baseline: RBAC (least privilege), audit logs, feature flags governance, optional impersonation with safeguards and auditing.
+ ## Context

- ### Admin Bootstrap (Critical)
+ The admin platform is where operational power lives — and where operational mistakes happen. A well-designed admin reduces the chance of human error while giving operators the tools they need to resolve issues quickly.

- * Admin bootstrap must not rely on file seeding:
- * Use a secure, auditable **first-login allowlist** for the initial SUPER_ADMIN.
- * Permanently disable bootstrap after completion.
- * All privilege grants must be server-enforced and recorded in the audit log.
- * The allowlist must be managed via secure configuration (environment/secret store), not code or DB seeding.
+ Consider: what does an operator need at 3am when something is broken? What would prevent an admin from accidentally destroying data? How do we know if someone is misusing admin access?

- ### Configuration Management (Mandatory)
+ ## Driving Questions

- * All **non-secret** product-level configuration must be manageable via admin (server-enforced), with validation and change history.
- * Secrets/credentials are environment-managed only; admin may expose safe readiness/health visibility, not raw secrets.
-
- ### Admin Analytics and Reporting (Mandatory)
-
- * Provide comprehensive dashboards/reports for business, growth, billing, referral, support, and security/abuse signals, governed by RBAC.
-
- ### Admin Operational Management (Mandatory)
-
- * Tools for user/account management, entitlements/access management, lifecycle actions, and issue resolution workflows.
- * Actions affecting access, money/credits, or security posture require appropriate step-up controls and must be fully auditable.
-
- ## Key Areas to Explore
-
- * How secure is the admin bootstrap process?
- * What RBAC gaps exist that could lead to privilege escalation?
- * How comprehensive is the audit logging for sensitive operations?
- * What admin workflows are missing or painful?
- * How does impersonation work and what safeguards exist?
- * What visibility do admins have into system health and issues?
+ * What would an operator need during an incident that doesn't exist today?
+ * Where could an admin accidentally cause serious damage?
+ * How would we detect if admin access was compromised or misused?
+ * What repetitive admin tasks should be automated?
+ * Where is audit logging missing or insufficient?
+ * What would make the admin experience both safer and faster?
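The bootstrap rule in the new non-negotiables (secure allowlist, permanently disabled after the first admin, never seeded from code or DB) implies roughly the shape below. A hedged sketch: `ADMIN_BOOTSTRAP_ALLOWLIST`, `findAdminCount`, and `grantRole` are hypothetical names, not exports of this package.

```ts
// Placeholders for real data-access and RBAC functions in the target codebase.
declare function findAdminCount(): Promise<number>;
declare function grantRole(
  email: string,
  role: "SUPER_ADMIN",
  meta: { reason: string }
): Promise<void>;

// Allowlist comes from secure configuration (environment/secret store),
// never from code or DB seeding. Env var name is an assumption.
const allowlist = (process.env.ADMIN_BOOTSTRAP_ALLOWLIST ?? "")
  .split(",")
  .map((e) => e.trim().toLowerCase())
  .filter(Boolean);

export async function maybeBootstrapAdmin(email: string): Promise<boolean> {
  // One-shot: bootstrap is permanently disabled once any admin exists.
  if ((await findAdminCount()) > 0) return false;
  if (!allowlist.includes(email.toLowerCase())) return false;
  // The grant itself is server-enforced and audited (who/when/why).
  await grantRole(email, "SUPER_ADMIN", { reason: "first-login bootstrap" });
  return true;
}
```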
package/assets/slash-commands/review-auth.md CHANGED
@@ -1,6 +1,6 @@
  ---
  name: review-auth
- description: Review authentication - SSO providers, passkeys, verification, sign-in
+ description: Review authentication - sign-in, SSO, passkeys, verification
  agent: coder
  ---

@@ -12,31 +12,30 @@ agent: coder
  * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
  * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
  * **Single-pass delivery**: no deferrals; deliver a complete assessment.
- * **Explore beyond the spec**: identify improvements for security, usability, and reliability.
+ * **Explore beyond the spec**: identify security gaps and UX friction in auth flows.

  ## Tech Stack

  * **Auth**: better-auth
  * **Framework**: Next.js

- ## Review Scope
+ ## Non-Negotiables

- ### Identity, Verification, and Sign-in
+ * All authorization decisions must be server-enforced (no client-trust)
+ * Email verification required for high-impact capabilities
+ * If SSO provider secrets are missing, hide the option (no broken UI)

- * SSO providers (minimum): **Google, Apple, Facebook, Microsoft, GitHub** (prioritize by audience).
- * If provider env/secrets are missing, **hide** the login option (no broken/disabled UI).
- * Allow linking multiple providers and safe unlinking; server-enforced and abuse-protected.
- * Passkeys (WebAuthn) are first-class with secure enrollment/usage/recovery.
+ ## Context

- ### Verification Requirements
+ Authentication is the front door to every user's data. It needs to be both secure and frictionless — a difficult balance. Users abandon products with painful sign-in flows, but weak auth leads to compromised accounts.

- * **Email verification is mandatory** baseline for high-impact capabilities.
- * **Phone verification is optional** and used as risk-based step-up (anti-abuse, higher-trust flows, recovery); consent-aware and data-minimizing.
+ Consider the entire auth journey: first sign-up, return visits, account linking, recovery flows. Where is there unnecessary friction? Where are there security gaps? What would make auth both more secure AND easier?

- ## Key Areas to Explore
+ ## Driving Questions

- * How does the current auth implementation compare to best practices?
- * What security vulnerabilities exist in the sign-in flows?
- * How can the user experience be improved while maintaining security?
- * What edge cases are not handled (account recovery, provider outages, etc.)?
- * How does session management handle concurrent devices?
+ * What's the sign-in experience for a first-time user vs. returning user?
+ * Where do users get stuck or abandon the auth flow?
+ * What happens when a user loses access to their primary auth method?
+ * How does the system handle auth provider outages gracefully?
+ * What would passwordless-first auth look like here?
+ * Where is auth complexity hiding bugs or security issues?
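The "hide missing providers" rule is mechanical enough to sketch: derive the enabled provider list from the presence of secrets, on the server, so the client never renders a broken or disabled button. The provider set and env-var names below are illustrative assumptions, not this package's configuration.

```ts
// Map each SSO provider to the env vars it needs. Names are assumptions.
const providerEnv = {
  google: ["GOOGLE_CLIENT_ID", "GOOGLE_CLIENT_SECRET"],
  github: ["GITHUB_CLIENT_ID", "GITHUB_CLIENT_SECRET"],
  apple: ["APPLE_CLIENT_ID", "APPLE_CLIENT_SECRET"],
} as const;

export type Provider = keyof typeof providerEnv;

// Computed server-side: the login page only ever sees fully configured
// providers, so a missing secret hides the option instead of breaking it.
export function enabledProviders(): Provider[] {
  return (Object.keys(providerEnv) as Provider[]).filter((p) =>
    providerEnv[p].every((name) => Boolean(process.env[name]))
  );
}
```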
package/assets/slash-commands/review-billing.md CHANGED
@@ -1,6 +1,6 @@
  ---
  name: review-billing
- description: Review billing - Stripe integration, webhooks, state machine
+ description: Review billing - Stripe integration, webhooks, subscription state
  agent: coder
  ---

@@ -12,41 +12,32 @@ agent: coder
  * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
  * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
  * **Single-pass delivery**: no deferrals; deliver a complete assessment.
- * **Explore beyond the spec**: identify revenue leakage and billing reliability improvements.
+ * **Explore beyond the spec**: identify revenue leakage and reliability improvements.

  ## Tech Stack

  * **Payments**: Stripe
  * **Workflows**: Upstash Workflows + QStash

- ## Review Scope
+ ## Non-Negotiables

- ### Billing and Payments (Stripe)
+ * Webhook signature must be verified (reject unverifiable events)
+ * Stripe event ID must be used for idempotency
+ * Webhooks must handle out-of-order delivery
+ * No dual-write: billing truth comes from Stripe events only
+ * UI must only display states derivable from server-truth

- * Support subscriptions and one-time payments as product needs require.
- * **Billing state machine follows mapping requirements**; UI must only surface explainable, non-ambiguous states aligned to server-truth.
- * Tax/invoicing and refund/dispute handling must be behaviorally consistent with product UX and entitlement state.
+ ## Context

- ### Webhook Requirements (High-Risk)
+ Billing is where trust meets money. A bug here isn't just annoying — it's a financial and legal issue. Users must always see accurate state, and the system must never lose or duplicate charges.

- * Webhooks must be idempotent, retry-safe, out-of-order safe, auditable
- * Billing UI reflects server-truth state without ambiguity
- * **Webhook trust is mandatory**: webhook origin must be verified (signature verification and replay resistance)
- * The Stripe **event id** must be used as the idempotency and audit correlation key
- * Unverifiable events must be rejected and must trigger alerting
- * **Out-of-order behavior must be explicit**: all webhook handlers must define and enforce a clear out-of-order strategy
+ Beyond correctness, consider the user experience of billing. Is the upgrade path frictionless? Are failed payments handled gracefully? Does the dunning process recover revenue or just annoy users?

- ### State Machine
+ ## Driving Questions

- * Define mapping: **Stripe state → internal subscription state → entitlements**
- * Handle: trial, past_due, unpaid, canceled, refund, dispute
- * UI only shows interpretable, non-ambiguous states
-
- ## Key Areas to Explore
-
- * How robust is the webhook handling for all Stripe events?
  * What happens when webhooks arrive out of order?
- * How does the UI handle ambiguous billing states?
- * What revenue leakage exists (failed renewals, dunning, etc.)?
+ * Where could revenue leak (failed renewals, unhandled states)?
+ * What billing states are confusing to users?
  * How are disputes and chargebacks handled end-to-end?
- * What is the testing strategy for billing edge cases?
+ * If Stripe is temporarily unavailable, what breaks?
+ * What would make the billing experience genuinely excellent?
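The webhook non-negotiables pin the handler shape down tightly. A minimal sketch assuming the official `stripe` SDK: the in-memory `processedEventIds` set stands in for a durable idempotency table, and re-fetching current state from Stripe is one common out-of-order strategy, not necessarily the one this codebase uses.

```ts
import Stripe from "stripe";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);
const processedEventIds = new Set<string>(); // stand-in for a durable store

// Placeholder for the codebase's real state-machine application step.
declare function applyInternalState(
  subscriptionId: string,
  stripeStatus: Stripe.Subscription.Status
): void;

export async function handleWebhook(rawBody: string, signature: string) {
  let event: Stripe.Event;
  try {
    // Non-negotiable: reject anything whose signature can't be verified.
    event = stripe.webhooks.constructEvent(
      rawBody,
      signature,
      process.env.STRIPE_WEBHOOK_SECRET!
    );
  } catch {
    throw new Error("Unverifiable webhook rejected"); // should also alert
  }

  // Non-negotiable: the Stripe event ID is the idempotency key, so replays
  // and retries become no-ops.
  if (processedEventIds.has(event.id)) return;
  processedEventIds.add(event.id);

  // One explicit out-of-order strategy: never trust arrival order; re-fetch
  // the authoritative current state and derive internal state from that
  // snapshot rather than from the (possibly stale) event payload.
  if (event.type.startsWith("customer.subscription.")) {
    const sub = event.data.object as Stripe.Subscription;
    const current = await stripe.subscriptions.retrieve(sub.id);
    applyInternalState(sub.id, current.status);
  }
}
```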
package/assets/slash-commands/review-code-quality.md CHANGED
@@ -1,6 +1,6 @@
  ---
  name: review-code-quality
- description: Review code quality - linting, TypeScript, testing, CI
+ description: Review code quality - architecture, types, testing, maintainability
  agent: coder
  ---

@@ -12,7 +12,7 @@ agent: coder
  * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
  * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
  * **Single-pass delivery**: no deferrals; deliver a complete assessment.
- * **Explore beyond the spec**: identify maintainability issues and technical debt.
+ * **Explore beyond the spec**: identify code that works but shouldn't exist in its current form.

  ## Tech Stack

@@ -21,24 +21,23 @@ agent: coder
  * **Testing**: Bun test
  * **Language**: TypeScript (strict)

- ## Review Scope
+ ## Non-Negotiables

- ### Non-Negotiable Engineering Principles
+ * No TODOs, hacks, or workarounds in production code
+ * Strict TypeScript with end-to-end type safety (DB → API → UI)
+ * No dead or unused code

- * No workarounds, hacks, or TODOs.
- * Feature-first with clean architecture; designed for easy extension; no "god files".
- * Type-first, strict end-to-end correctness (**DB → API → UI**).
- * Serverless-first and server-first; edge-compatible where feasible without sacrificing correctness, security, or observability.
- * Mobile-first responsive design; desktop-second.
- * Precise naming; remove dead/unused code.
- * Upgrade all packages to latest stable; avoid deprecated patterns.
+ ## Context

- ## Key Areas to Explore
+ Code quality isn't about following rules — it's about making the codebase a place where good work is easy and bad work is hard. High-quality code is readable, testable, and changeable. Low-quality code fights you on every change.

- * What areas of the codebase have the most technical debt?
- * Where are types weak or using `any` inappropriately?
- * What test coverage gaps exist for critical paths?
- * What code patterns are inconsistent across the codebase?
- * What dependencies are outdated or have known vulnerabilities?
- * Where do "god files" or overly complex modules exist?
- * What naming inconsistencies make the code harder to understand?
+ Don't just look for rule violations. Look for code that technically works but is confusing, fragile, or painful to modify. Look for patterns that will cause bugs. Look for complexity that doesn't need to exist.
+
+ ## Driving Questions
+
+ * What code would you be embarrassed to show a senior engineer?
+ * Where is complexity hiding that makes the codebase hard to understand?
+ * What would break if someone new tried to make changes here?
+ * Where are types lying (`as any`, incorrect generics, missing null checks)?
+ * What test coverage gaps exist for code that really matters?
+ * If we could rewrite one part of this codebase, what would have the highest impact?
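"Where are types lying" has a canonical shape worth keeping in mind during review. An illustrative example, not from this codebase: the cast silences the compiler while the runtime value can still be undefined.

```ts
type User = { id: string; email: string };

// Lying type: the signature promises a User, but find() may return undefined.
// The cast hides the absence case from every caller.
export function findUserUnsafe(users: User[], id: string): User {
  return users.find((u) => u.id === id) as User;
}

// Honest type: absence is in the signature, so callers must handle it.
export function findUser(users: User[], id: string): User | undefined {
  return users.find((u) => u.id === id);
}
```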
package/assets/slash-commands/review-data-architecture.md CHANGED
@@ -1,6 +1,6 @@
  ---
  name: review-data-architecture
- description: Review data architecture - boundaries, consistency model, server enforcement
+ description: Review data architecture - boundaries, consistency, state machines
  agent: coder
  ---

@@ -12,7 +12,7 @@ agent: coder
  * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
  * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
  * **Single-pass delivery**: no deferrals; deliver a complete assessment.
- * **Explore beyond the spec**: identify improvements for consistency, reliability, and scalability.
+ * **Explore beyond the spec**: identify architectural weaknesses that will cause problems at scale.

  ## Tech Stack

@@ -21,23 +21,24 @@ agent: coder
  * **Database**: Neon (Postgres)
  * **ORM**: Drizzle

- ## Review Scope
+ ## Non-Negotiables

- ### Boundaries, Server Enforcement, and Consistency Model (Hard Requirement)
+ * All authorization must be server-enforced (no client-trust)
+ * No dual-write: billing/entitlement truth comes from Stripe events only
+ * UI must never contradict server-truth
+ * High-value mutations must have audit trail (who/when/why/before/after)

- * Define clear boundaries: domain rules, use-cases, integrations, UI.
- * All authorization/entitlements are **server-enforced**; no client-trust.
- * Runtime constraints (serverless/edge) must be explicit and validated.
- * **Consistency model is mandatory for high-value state**: for billing, entitlements, ledger, admin privilege grants, and security posture, the system must define and enforce an explicit consistency model (source-of-truth, allowed delay windows, retry/out-of-order handling, and acceptable eventual consistency bounds).
- * **Billing and access state machine is mandatory**: define and validate the mapping **Stripe state → internal subscription state → entitlements**, including trial, past_due, unpaid, canceled, refund, and dispute outcomes. UI must only present interpretable, non-ambiguous states derived from server-truth.
- * **No dual-write (hard requirement)**: subscription/payment truth must be derived from Stripe-driven events; internal systems must not directly rewrite billing truth or authorize entitlements based on non-Stripe truth, except for explicitly defined admin remediation flows that are fully server-enforced and fully audited.
- * **Server-truth is authoritative**: UI state must never contradict server-truth. Where asynchronous confirmation exists, UI must represent that state unambiguously and remain explainable.
- * **Auditability chain is mandatory** for any high-value mutation: who/when/why, before/after state, and correlation to the triggering request/job/webhook must be recorded and queryable.
+ ## Context

- ## Key Areas to Explore
+ Data architecture determines what's possible and what's painful. Good architecture makes new features easy; bad architecture makes everything hard. The question isn't "does it work today?" but "will it work when requirements change?"

- * How well are domain boundaries defined and enforced?
- * Where does client-side trust leak into authorization decisions?
- * What consistency guarantees exist and are they sufficient?
- * How does the system handle eventual consistency edge cases?
- * What would break if a webhook is processed out of order?
+ Consider the boundaries between domains, the flow of data through the system, and the consistency guarantees at each step. Where are implicit assumptions that will break? Where is complexity hidden that will cause bugs?
+
+ ## Driving Questions
+
+ * If we were designing this from scratch, what would be different?
+ * Where will the current architecture break as the product scales?
+ * What implicit assumptions are waiting to cause bugs?
+ * How do we know when state is inconsistent, and how do we recover?
+ * Where is complexity hiding that makes the system hard to reason about?
+ * What architectural decisions are we avoiding that we shouldn't?
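The mandated mapping (Stripe state → internal subscription state → entitlements) is small enough to express as a pure, exhaustively checked function. The Stripe statuses below are real; the internal states and the entitlement rule are illustrative assumptions.

```ts
type StripeStatus =
  | "trialing" | "active" | "past_due" | "unpaid"
  | "canceled" | "incomplete" | "incomplete_expired" | "paused";

type InternalState = "trial" | "active" | "grace" | "suspended" | "inactive";

export function toInternalState(status: StripeStatus): InternalState {
  switch (status) {
    case "trialing": return "trial";
    case "active": return "active";
    case "past_due": return "grace";     // still entitled, dunning in progress
    case "unpaid": return "suspended";   // entitlements revoked
    case "canceled":
    case "incomplete":
    case "incomplete_expired":
    case "paused":
      return "inactive";
    default: {
      // Exhaustiveness guard: a new Stripe status fails typecheck, not runtime.
      const _never: never = status;
      return _never;
    }
  }
}

// Entitlements derive from internal state only, never from raw UI guesses.
export const entitled = (s: InternalState) =>
  s === "trial" || s === "active" || s === "grace";
```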
package/assets/slash-commands/review-database.md CHANGED
@@ -1,6 +1,6 @@
  ---
  name: review-database
- description: Review database - Drizzle migrations, schema drift, CI gates
+ description: Review database - schema, migrations, performance, reliability
  agent: coder
  ---

@@ -8,30 +8,34 @@ agent: coder
  ## Mandate

- * Perform a **deep, thorough review** of database architecture in this codebase.
+ * Perform a **deep, thorough review** of the database in this codebase.
  * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
  * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
  * **Single-pass delivery**: no deferrals; deliver a complete assessment.
- * **Explore beyond the spec**: identify improvements for performance, reliability, and maintainability.
+ * **Explore beyond the spec**: identify schema problems that will hurt at scale.

  ## Tech Stack

  * **Database**: Neon (Postgres)
  * **ORM**: Drizzle

- ## Review Scope
+ ## Non-Negotiables

- ### Drizzle Migrations (Non-Negotiable)
+ * Migration files must exist, be complete, and be committed
+ * CI must fail if schema changes aren't represented by migrations
+ * No schema drift between environments

- * Migration files must exist, be complete, and be committed.
- * Deterministic, reproducible, environment-safe; linear/auditable history; no drift.
- * CI must fail if schema changes are not represented by migrations.
+ ## Context

- ## Key Areas to Explore
+ The database schema is the foundation everything else is built on. A bad schema creates friction for every feature built on top of it. Schema changes are expensive and risky — it's worth getting the design right.

- * Is the schema well-designed for the domain requirements?
- * Are there missing indexes that could improve query performance?
- * How are database connections pooled and managed?
- * What is the backup and disaster recovery strategy?
- * Are there any N+1 query problems or inefficient access patterns?
- * How does the schema handle soft deletes, auditing, and data lifecycle?
+ Consider not just "does the schema work?" but "does this schema make the right things easy?" Are the relationships correct? Are we storing data in ways that will be painful to query? Are we missing constraints that would prevent bugs?
+
+ ## Driving Questions
+
+ * If we were designing the schema from scratch, what would be different?
+ * Where are missing indexes causing slow queries we haven't noticed yet?
+ * What data relationships are awkward or incorrectly modeled?
+ * How does the schema handle data lifecycle (soft deletes, archival, retention)?
+ * What constraints are missing that would prevent invalid state?
+ * Where will the current schema hurt at 10x or 100x scale?
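"Constraints that prevent invalid state" tend to look like this in Drizzle. A sketch with hypothetical table and column names, not this codebase's actual schema: the unique index makes the invalid state unrepresentable at the database layer, and the plain index covers the foreign key that gets filtered constantly.

```ts
import {
  index,
  pgTable,
  text,
  timestamp,
  uniqueIndex,
} from "drizzle-orm/pg-core";

export const members = pgTable(
  "members",
  {
    id: text("id").primaryKey(),
    orgId: text("org_id").notNull(),
    email: text("email").notNull(),
    deletedAt: timestamp("deleted_at"), // soft delete, part of data lifecycle
  },
  (t) => ({
    // Invalid state prevented in the DB: one membership per email per org.
    uniqMember: uniqueIndex("members_org_email_uq").on(t.orgId, t.email),
    // Index the column every membership lookup filters by.
    orgIdx: index("members_org_idx").on(t.orgId),
  })
);
```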
package/assets/slash-commands/review-delivery.md CHANGED
@@ -1,10 +1,10 @@
  ---
  name: review-delivery
- description: Review delivery gates - release blocking checks, verification
+ description: Review delivery - CI gates, automated verification, release safety
  agent: coder
  ---

- # Delivery Gates Review
+ # Delivery Review

  ## Mandate

@@ -12,7 +12,7 @@
  * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
  * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
  * **Single-pass delivery**: no deferrals; deliver a complete assessment.
- * **Explore beyond the spec**: identify gaps in automated verification and release safety.
+ * **Explore beyond the spec**: identify what could go wrong in production that we're not catching.

  ## Tech Stack

@@ -21,37 +21,26 @@
  * **Linting**: Biome
  * **Platform**: Vercel

- ## Review Scope
+ ## Non-Negotiables

- ### Delivery Gates and Completion
+ * All release gates must be automated (manual verification doesn't count)
+ * Build must fail-fast on missing required configuration
+ * CI must block on: lint, typecheck, tests, build
+ * `/en/*` must redirect (no duplicate content)
+ * Security headers (CSP, HSTS) must be verified by tests
+ * Consent gating must be verified by tests

- CI must block merges/deploys when failing:
+ ## Context

- * Code Quality: Biome lint/format, strict TS typecheck, unit + E2E tests, build
- * Data Integrity: Migration integrity checks, no schema drift
- * i18n: Missing translation keys fail build, `/en/*` redirects, hreflang correct
- * Performance: Budget verification, Core Web Vitals thresholds, regression detection
- * Security: CSP/HSTS/headers verified, CSRF protection tested
- * Consent: Analytics/marketing consent gating verified
+ Delivery gates are the last line of defense before code reaches users. Every manual verification step is a gate that will eventually fail. Every untested assumption is a bug waiting to ship.

- ### Automation Requirement
+ The question isn't "what tests do we have?" but "what could go wrong that we wouldn't catch?" Think about the deploy that breaks production at 2am — what would have prevented it?

- **All gates above must be enforced by automated tests or mechanized checks (non-manual); manual verification does not satisfy release gates.**
+ ## Driving Questions

- ### Configuration Gates
-
- * Build/startup must fail-fast when required configuration/secrets are missing or invalid.
-
- ### Operability Gates
-
- * Observability and alerting configured for critical anomalies
- * Workflow dead-letter handling is operable and supports controlled replay
-
- ## Key Areas to Explore
-
- * What release gates are missing or insufficient?
+ * What could ship to production that shouldn't?
  * Where does manual verification substitute for automation?
- * How fast is the CI pipeline and what slows it down?
- * What flaky tests exist and how do they affect reliability?
- * How does the deployment process handle rollbacks?
- * What post-deployment verification exists?
+ * What flaky tests are training people to ignore failures?
+ * How fast is the feedback loop, and what slows it down?
+ * If a deploy breaks production, how fast can we detect and roll back?
+ * What's the worst thing that shipped recently that tests should have caught?
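The "verified by tests" gates translate directly into Bun's test runner. A sketch: `BASE_URL`, the sample path, and the exact header assertions are assumptions to tune per deployment.

```ts
import { expect, test } from "bun:test";

const BASE_URL = process.env.BASE_URL ?? "http://localhost:3000";

// Security-header gate: automated, so it can block a release in CI.
test("responses carry CSP and HSTS headers", async () => {
  const res = await fetch(BASE_URL);
  expect(res.headers.get("content-security-policy")).toBeTruthy();
  expect(res.headers.get("strict-transport-security")).toContain("max-age=");
});

// /en/* gate: must redirect rather than serve duplicate content.
test("/en/* redirects instead of serving duplicate content", async () => {
  const res = await fetch(`${BASE_URL}/en/pricing`, { redirect: "manual" });
  expect([301, 302, 307, 308]).toContain(res.status);
});
```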
package/assets/slash-commands/review-discovery.md CHANGED
@@ -1,6 +1,6 @@
  ---
  name: review-discovery
- description: Review discovery - competitive research, features, pricing opportunities
+ description: Review discovery - competitive research, opportunities, market positioning
  agent: coder
  ---

@@ -8,27 +8,31 @@ agent: coder
  ## Mandate

- * Perform a **deep, thorough review** to discover opportunities in this codebase.
+ * Perform a **deep, thorough review** to discover opportunities for this product.
  * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
  * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
  * **Single-pass delivery**: no deferrals; deliver a complete assessment.
- * **Explore beyond the spec**: this review IS the exploration—think broadly and creatively.
+ * **This review IS exploration**: think broadly and creatively about what could be.

- ## Review Scope
+ ## Tech Stack

- ### Review Requirements: Explore Beyond the Spec
+ * Research across the product's full stack as needed

- * **Feature design review**: define success criteria, journeys, state model, auth/entitlements, instrumentation; propose competitiveness improvements within constraints.
- * **Pricing/monetization review**: packaging/entitlements, lifecycle semantics, legal/tax/invoice implications; propose conversion/churn improvements within constraints.
- * **Competitive research**: features, extensibility, guidance patterns, pricing/packaging norms; convert insights into testable acceptance criteria.
+ ## Non-Negotiables

- ## Key Areas to Explore
+ * None; this is pure exploration

- * What features are competitors offering that we lack?
- * What pricing models are common in the market and how do we compare?
- * What UX patterns are users expecting based on industry standards?
+ ## Context
+
+ Discovery is about finding what's missing, what's possible, and what would make the product significantly more competitive. It's not about fixing bugs — it's about identifying opportunities that don't yet exist.
+
+ Look at competitors, market trends, user expectations, and technological possibilities. What would make users choose this product over alternatives? What capabilities would unlock new business models?
+
+ ## Driving Questions
+
+ * What are competitors doing that we're not?
+ * What do users expect based on industry standards that we lack?
  * What integrations would add significant value?
- * What automation opportunities exist to reduce manual work?
- * What self-service capabilities are users asking for?
+ * What pricing models are common in the market and how do we compare?
  * What technical capabilities could enable new business models?
- * Where are the biggest gaps between current state and market expectations?
+ * What would make this product a category leader rather than a follower?