@sylphx/flow 2.9.0 → 2.11.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (35)
  1. package/CHANGELOG.md +25 -0
  2. package/assets/slash-commands/review-account-security.md +48 -0
  3. package/assets/slash-commands/review-admin.md +58 -0
  4. package/assets/slash-commands/review-auth.md +42 -0
  5. package/assets/slash-commands/review-billing.md +52 -0
  6. package/assets/slash-commands/review-code-quality.md +44 -0
  7. package/assets/slash-commands/review-data-architecture.md +43 -0
  8. package/assets/slash-commands/review-database.md +37 -0
  9. package/assets/slash-commands/review-delivery.md +57 -0
  10. package/assets/slash-commands/review-discovery.md +34 -0
  11. package/assets/slash-commands/review-growth.md +57 -0
  12. package/assets/slash-commands/review-i18n.md +54 -0
  13. package/assets/slash-commands/review-ledger.md +38 -0
  14. package/assets/slash-commands/review-observability.md +43 -0
  15. package/assets/slash-commands/review-operability.md +50 -0
  16. package/assets/slash-commands/review-performance.md +47 -0
  17. package/assets/slash-commands/review-pricing.md +45 -0
  18. package/assets/slash-commands/review-privacy.md +56 -0
  19. package/assets/slash-commands/review-pwa.md +43 -0
  20. package/assets/slash-commands/review-referral.md +50 -0
  21. package/assets/slash-commands/review-security.md +54 -0
  22. package/assets/slash-commands/review-seo.md +51 -0
  23. package/assets/slash-commands/review-storage.md +38 -0
  24. package/assets/slash-commands/review-support.md +43 -0
  25. package/assets/slash-commands/review-uiux.md +49 -0
  26. package/package.json +1 -1
  27. package/assets/slash-commands/saas-admin.md +0 -123
  28. package/assets/slash-commands/saas-auth.md +0 -78
  29. package/assets/slash-commands/saas-billing.md +0 -68
  30. package/assets/slash-commands/saas-discovery.md +0 -135
  31. package/assets/slash-commands/saas-growth.md +0 -94
  32. package/assets/slash-commands/saas-i18n.md +0 -66
  33. package/assets/slash-commands/saas-platform.md +0 -87
  34. package/assets/slash-commands/saas-review.md +0 -178
  35. package/assets/slash-commands/saas-security.md +0 -108
package/CHANGELOG.md CHANGED
@@ -1,5 +1,30 @@
  # @sylphx/flow
 
+ ## 2.11.0 (2025-12-17)
+
+ ### ✨ Features
+
+ - **commands:** add tech stack and replace checklists with exploration questions ([0772b1d](https://github.com/SylphxAI/flow/commit/0772b1d788ddf2074729d04e67ef3d94b138de10))
+
+ ## 2.10.0 (2025-12-17)
+
+ Replace saas-* commands with 24 focused /review-* commands.
+
+ Each domain now has a dedicated review command with mandate to delegate to multiple workers for parallel research.
+
+ Categories:
+ - Core Architecture (4): data-architecture, database, ledger, storage
+ - Identity & Security (4): auth, account-security, security, privacy
+ - Billing & Commerce (3): billing, pricing, referral
+ - Frontend & Experience (5): uiux, i18n, seo, pwa, performance
+ - Operations & Management (4): admin, observability, operability, support
+ - Growth & Research (2): growth, discovery
+ - Quality & Delivery (2): code-quality, delivery
+
+ ### ✨ Features
+
+ - **commands:** replace saas-* with 24 focused /review-* commands ([c2f9eb9](https://github.com/SylphxAI/flow/commit/c2f9eb9bad695714be08645163480b46b9286b99))
+
  ## 2.9.0 (2025-12-17)
 
  Add comprehensive SaaS review command suite with parallel worker delegation.
package/assets/slash-commands/review-account-security.md ADDED
@@ -0,0 +1,48 @@
+ ---
+ name: review-account-security
+ description: Review account security - sessions, devices, MFA, security events
+ agent: coder
+ ---
+
+ # Account Security Review
+
+ ## Mandate
+
+ * Perform a **deep, thorough review** of account security in this codebase.
+ * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
+ * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
+ * **Single-pass delivery**: no deferrals; deliver a complete assessment.
+ * **Explore beyond the spec**: identify improvements for user safety and threat detection.
+
+ ## Tech Stack
+
+ * **Auth**: better-auth
+ * **Framework**: Next.js
+
+ ## Review Scope
+
+ ### Membership and Account Security
+
+ * Membership is entitlement-driven and server-enforced.
+ * Provide a dedicated **Account Security** surface.
+ * **Account Security minimum acceptance**:
+   * Session/device visibility and revocation
+   * MFA/passkey management
+   * Linked identity provider management
+   * Key security event visibility (and export where applicable)
+   * All server-enforced and auditable
+
+ ### Recovery Governance
+
+ * Account recovery flow secure
+ * Support-assisted recovery with strict audit logging
+ * Step-up verification for sensitive actions
+
+ ## Key Areas to Explore
+
+ * What visibility do users have into their active sessions and devices?
+ * How robust is the MFA implementation and enrollment flow?
+ * What security events are logged and how accessible are they to users?
+ * How does the account recovery flow prevent social engineering attacks?
+ * What step-up authentication exists for sensitive actions?
+ * How are compromised accounts detected and handled?
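The step-up requirement above (recent strong verification before sensitive actions, with auditing) can be illustrated with a small guard. This is a hedged sketch: the session shape and audit helper are assumptions, not better-auth or package APIs.

```ts
// Hedged sketch of a step-up check for sensitive account-security actions.
const STEP_UP_WINDOW_MS = 5 * 60 * 1000; // require re-verification within 5 minutes

export interface SessionInfo {
  userId: string;
  // When the user last completed a strong verification (password, passkey, or MFA).
  lastVerifiedAt: Date;
}

export async function requireStepUp(session: SessionInfo, action: string): Promise<void> {
  const age = Date.now() - session.lastVerifiedAt.getTime();
  if (age > STEP_UP_WINDOW_MS) {
    // The caller maps this error to a re-authentication prompt in the UI.
    throw new Error("step_up_required");
  }
  // Sensitive actions are always recorded server-side, whatever the outcome of the action.
  await auditLog({ userId: session.userId, action, at: new Date() });
}

// Placeholder for an append-only audit sink.
declare function auditLog(entry: { userId: string; action: string; at: Date }): Promise<void>;
```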
package/assets/slash-commands/review-admin.md ADDED
@@ -0,0 +1,58 @@
+ ---
+ name: review-admin
+ description: Review admin - RBAC, bootstrap, config management, feature flags
+ agent: coder
+ ---
+
+ # Admin Platform Review
+
+ ## Mandate
+
+ * Perform a **deep, thorough review** of the admin platform in this codebase.
+ * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
+ * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
+ * **Single-pass delivery**: no deferrals; deliver a complete assessment.
+ * **Explore beyond the spec**: identify operational improvements and safety enhancements.
+
+ ## Tech Stack
+
+ * **Framework**: Next.js
+ * **API**: tRPC
+ * **Database**: Neon (Postgres)
+
+ ## Review Scope
+
+ ### Admin Platform (Operational-Grade)
+
+ * Baseline: RBAC (least privilege), audit logs, feature flags governance, optional impersonation with safeguards and auditing.
+
+ ### Admin Bootstrap (Critical)
+
+ * Admin bootstrap must not rely on file seeding:
+   * Use a secure, auditable **first-login allowlist** for the initial SUPER_ADMIN.
+   * Permanently disable bootstrap after completion.
+   * All privilege grants must be server-enforced and recorded in the audit log.
+   * The allowlist must be managed via secure configuration (environment/secret store), not code or DB seeding.
+
+ ### Configuration Management (Mandatory)
+
+ * All **non-secret** product-level configuration must be manageable via admin (server-enforced), with validation and change history.
+ * Secrets/credentials are environment-managed only; admin may expose safe readiness/health visibility, not raw secrets.
+
+ ### Admin Analytics and Reporting (Mandatory)
+
+ * Provide comprehensive dashboards/reports for business, growth, billing, referral, support, and security/abuse signals, governed by RBAC.
+
+ ### Admin Operational Management (Mandatory)
+
+ * Tools for user/account management, entitlements/access management, lifecycle actions, and issue resolution workflows.
+ * Actions affecting access, money/credits, or security posture require appropriate step-up controls and must be fully auditable.
+
+ ## Key Areas to Explore
+
+ * How secure is the admin bootstrap process?
+ * What RBAC gaps exist that could lead to privilege escalation?
+ * How comprehensive is the audit logging for sensitive operations?
+ * What admin workflows are missing or painful?
+ * How does impersonation work and what safeguards exist?
+ * What visibility do admins have into system health and issues?
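One way to satisfy the first-login allowlist requirement is sketched below. It assumes a comma-separated `SUPER_ADMIN_ALLOWLIST` environment variable and placeholder persistence helpers; it is not the package's implementation.

```ts
// Sketch: first-login allowlist bootstrap for the initial SUPER_ADMIN.
const allowlist = (process.env.SUPER_ADMIN_ALLOWLIST ?? "")
  .split(",")
  .map((email) => email.trim().toLowerCase())
  .filter(Boolean);

export async function maybeBootstrapSuperAdmin(user: { id: string; email: string }): Promise<void> {
  // Once any SUPER_ADMIN exists, bootstrap is permanently disabled.
  if (await superAdminExists()) return;
  if (!allowlist.includes(user.email.toLowerCase())) return;

  // The grant happens server-side only and is recorded in the audit log.
  await grantRole(user.id, "SUPER_ADMIN");
  await auditLog({
    actor: "system:bootstrap",
    action: "grant_super_admin",
    subject: user.id,
    at: new Date().toISOString(),
  });
}

// Placeholders for the real persistence layer.
declare function superAdminExists(): Promise<boolean>;
declare function grantRole(userId: string, role: "SUPER_ADMIN"): Promise<void>;
declare function auditLog(entry: Record<string, unknown>): Promise<void>;
```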
package/assets/slash-commands/review-auth.md ADDED
@@ -0,0 +1,42 @@
+ ---
+ name: review-auth
+ description: Review authentication - SSO providers, passkeys, verification, sign-in
+ agent: coder
+ ---
+
+ # Authentication Review
+
+ ## Mandate
+
+ * Perform a **deep, thorough review** of authentication in this codebase.
+ * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
+ * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
+ * **Single-pass delivery**: no deferrals; deliver a complete assessment.
+ * **Explore beyond the spec**: identify improvements for security, usability, and reliability.
+
+ ## Tech Stack
+
+ * **Auth**: better-auth
+ * **Framework**: Next.js
+
+ ## Review Scope
+
+ ### Identity, Verification, and Sign-in
+
+ * SSO providers (minimum): **Google, Apple, Facebook, Microsoft, GitHub** (prioritize by audience).
+ * If provider env/secrets are missing, **hide** the login option (no broken/disabled UI).
+ * Allow linking multiple providers and safe unlinking; server-enforced and abuse-protected.
+ * Passkeys (WebAuthn) are first-class with secure enrollment/usage/recovery.
+
+ ### Verification Requirements
+
+ * **Email verification is mandatory** baseline for high-impact capabilities.
+ * **Phone verification is optional** and used as risk-based step-up (anti-abuse, higher-trust flows, recovery); consent-aware and data-minimizing.
+
+ ## Key Areas to Explore
+
+ * How does the current auth implementation compare to best practices?
+ * What security vulnerabilities exist in the sign-in flows?
+ * How can the user experience be improved while maintaining security?
+ * What edge cases are not handled (account recovery, provider outages, etc.)?
+ * How does session management handle concurrent devices?
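The "hide providers whose secrets are missing" rule can be expressed as a small helper that derives the enabled provider list from the environment. The variable names below are conventional assumptions; the real names depend on the project's better-auth configuration.

```ts
// Sketch: only fully configured SSO providers are offered; everything else is
// omitted from the sign-in UI rather than rendered as a broken or disabled button.
const providerEnv = {
  google: ["GOOGLE_CLIENT_ID", "GOOGLE_CLIENT_SECRET"],
  apple: ["APPLE_CLIENT_ID", "APPLE_CLIENT_SECRET"],
  facebook: ["FACEBOOK_CLIENT_ID", "FACEBOOK_CLIENT_SECRET"],
  microsoft: ["MICROSOFT_CLIENT_ID", "MICROSOFT_CLIENT_SECRET"],
  github: ["GITHUB_CLIENT_ID", "GITHUB_CLIENT_SECRET"],
} as const;

export type SsoProvider = keyof typeof providerEnv;

export function enabledProviders(): SsoProvider[] {
  return (Object.keys(providerEnv) as SsoProvider[]).filter((provider) =>
    providerEnv[provider].every((name) => Boolean(process.env[name]))
  );
}
```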
package/assets/slash-commands/review-billing.md ADDED
@@ -0,0 +1,52 @@
+ ---
+ name: review-billing
+ description: Review billing - Stripe integration, webhooks, state machine
+ agent: coder
+ ---
+
+ # Billing Review
+
+ ## Mandate
+
+ * Perform a **deep, thorough review** of billing and payments in this codebase.
+ * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
+ * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
+ * **Single-pass delivery**: no deferrals; deliver a complete assessment.
+ * **Explore beyond the spec**: identify revenue leakage and billing reliability improvements.
+
+ ## Tech Stack
+
+ * **Payments**: Stripe
+ * **Workflows**: Upstash Workflows + QStash
+
+ ## Review Scope
+
+ ### Billing and Payments (Stripe)
+
+ * Support subscriptions and one-time payments as product needs require.
+ * **Billing state machine follows mapping requirements**; UI must only surface explainable, non-ambiguous states aligned to server-truth.
+ * Tax/invoicing and refund/dispute handling must be behaviorally consistent with product UX and entitlement state.
+
+ ### Webhook Requirements (High-Risk)
+
+ * Webhooks must be idempotent, retry-safe, out-of-order safe, auditable
+ * Billing UI reflects server-truth state without ambiguity
+ * **Webhook trust is mandatory**: webhook origin must be verified (signature verification and replay resistance)
+ * The Stripe **event id** must be used as the idempotency and audit correlation key
+ * Unverifiable events must be rejected and must trigger alerting
+ * **Out-of-order behavior must be explicit**: all webhook handlers must define and enforce a clear out-of-order strategy
+
+ ### State Machine
+
+ * Define mapping: **Stripe state → internal subscription state → entitlements**
+ * Handle: trial, past_due, unpaid, canceled, refund, dispute
+ * UI only shows interpretable, non-ambiguous states
+
+ ## Key Areas to Explore
+
+ * How robust is the webhook handling for all Stripe events?
+ * What happens when webhooks arrive out of order?
+ * How does the UI handle ambiguous billing states?
+ * What revenue leakage exists (failed renewals, dunning, etc.)?
+ * How are disputes and chargebacks handled end-to-end?
+ * What is the testing strategy for billing edge cases?
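A minimal sketch of a webhook handler that follows the contract above: verify the signature, key idempotency and audit correlation on the Stripe event id, and re-read current state from Stripe rather than trusting event arrival order. The persistence and alerting helpers are placeholders, not code from this package.

```ts
import Stripe from "stripe";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

export async function POST(req: Request): Promise<Response> {
  const payload = await req.text();
  const signature = req.headers.get("stripe-signature");

  let event: Stripe.Event;
  try {
    // Signature verification and replay resistance; unverifiable events are rejected.
    event = stripe.webhooks.constructEvent(payload, signature!, process.env.STRIPE_WEBHOOK_SECRET!);
  } catch {
    await alertOps("stripe_webhook_unverifiable");
    return new Response("invalid signature", { status: 400 });
  }

  // Idempotency: the event id is the dedupe and audit correlation key.
  if (await alreadyProcessed(event.id)) return new Response("ok", { status: 200 });

  if (event.type.startsWith("customer.subscription.")) {
    const sub = event.data.object as Stripe.Subscription;
    // Out-of-order strategy: fetch the latest subscription from Stripe so a stale
    // event can never overwrite newer server-truth.
    const latest = await stripe.subscriptions.retrieve(sub.id);
    await syncSubscription(latest, event.id);
  }

  await markProcessed(event.id);
  return new Response("ok", { status: 200 });
}

// Placeholders for persistence, entitlement sync, and alerting.
declare function alreadyProcessed(eventId: string): Promise<boolean>;
declare function markProcessed(eventId: string): Promise<void>;
declare function syncSubscription(sub: Stripe.Subscription, eventId: string): Promise<void>;
declare function alertOps(kind: string): Promise<void>;
```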
package/assets/slash-commands/review-code-quality.md ADDED
@@ -0,0 +1,44 @@
+ ---
+ name: review-code-quality
+ description: Review code quality - linting, TypeScript, testing, CI
+ agent: coder
+ ---
+
+ # Code Quality Review
+
+ ## Mandate
+
+ * Perform a **deep, thorough review** of code quality in this codebase.
+ * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
+ * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
+ * **Single-pass delivery**: no deferrals; deliver a complete assessment.
+ * **Explore beyond the spec**: identify maintainability issues and technical debt.
+
+ ## Tech Stack
+
+ * **Runtime**: Bun
+ * **Linting/Formatting**: Biome
+ * **Testing**: Bun test
+ * **Language**: TypeScript (strict)
+
+ ## Review Scope
+
+ ### Non-Negotiable Engineering Principles
+
+ * No workarounds, hacks, or TODOs.
+ * Feature-first with clean architecture; designed for easy extension; no "god files".
+ * Type-first, strict end-to-end correctness (**DB → API → UI**).
+ * Serverless-first and server-first; edge-compatible where feasible without sacrificing correctness, security, or observability.
+ * Mobile-first responsive design; desktop-second.
+ * Precise naming; remove dead/unused code.
+ * Upgrade all packages to latest stable; avoid deprecated patterns.
+
+ ## Key Areas to Explore
+
+ * What areas of the codebase have the most technical debt?
+ * Where are types weak or using `any` inappropriately?
+ * What test coverage gaps exist for critical paths?
+ * What code patterns are inconsistent across the codebase?
+ * What dependencies are outdated or have known vulnerabilities?
+ * Where do "god files" or overly complex modules exist?
+ * What naming inconsistencies make the code harder to understand?
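The "type-first, strict end-to-end (DB → API → UI)" principle can be illustrated with Drizzle and tRPC type inference, where the schema is the single source of types. Table and procedure names here are hypothetical.

```ts
import { pgTable, text, timestamp } from "drizzle-orm/pg-core";
import { initTRPC } from "@trpc/server";
import type { inferRouterOutputs } from "@trpc/server";

export const users = pgTable("users", {
  id: text("id").primaryKey(),
  email: text("email").notNull(),
  createdAt: timestamp("created_at").notNull().defaultNow(),
});

// The DB row type is inferred from the schema, never hand-written.
export type User = typeof users.$inferSelect;

const t = initTRPC.create();
export const appRouter = t.router({
  me: t.procedure.query(async (): Promise<User> => {
    return fetchCurrentUser(); // placeholder data access
  }),
});

// The UI consumes the same types via router inference, so DB → API → UI stays in sync.
export type AppOutputs = inferRouterOutputs<typeof appRouter>;

declare function fetchCurrentUser(): Promise<User>;
```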
package/assets/slash-commands/review-data-architecture.md ADDED
@@ -0,0 +1,43 @@
+ ---
+ name: review-data-architecture
+ description: Review data architecture - boundaries, consistency model, server enforcement
+ agent: coder
+ ---
+
+ # Data Architecture Review
+
+ ## Mandate
+
+ * Perform a **deep, thorough review** of data architecture in this codebase.
+ * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
+ * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
+ * **Single-pass delivery**: no deferrals; deliver a complete assessment.
+ * **Explore beyond the spec**: identify improvements for consistency, reliability, and scalability.
+
+ ## Tech Stack
+
+ * **API**: tRPC
+ * **Framework**: Next.js
+ * **Database**: Neon (Postgres)
+ * **ORM**: Drizzle
+
+ ## Review Scope
+
+ ### Boundaries, Server Enforcement, and Consistency Model (Hard Requirement)
+
+ * Define clear boundaries: domain rules, use-cases, integrations, UI.
+ * All authorization/entitlements are **server-enforced**; no client-trust.
+ * Runtime constraints (serverless/edge) must be explicit and validated.
+ * **Consistency model is mandatory for high-value state**: for billing, entitlements, ledger, admin privilege grants, and security posture, the system must define and enforce an explicit consistency model (source-of-truth, allowed delay windows, retry/out-of-order handling, and acceptable eventual consistency bounds).
+ * **Billing and access state machine is mandatory**: define and validate the mapping **Stripe state → internal subscription state → entitlements**, including trial, past_due, unpaid, canceled, refund, and dispute outcomes. UI must only present interpretable, non-ambiguous states derived from server-truth.
+ * **No dual-write (hard requirement)**: subscription/payment truth must be derived from Stripe-driven events; internal systems must not directly rewrite billing truth or authorize entitlements based on non-Stripe truth, except for explicitly defined admin remediation flows that are fully server-enforced and fully audited.
+ * **Server-truth is authoritative**: UI state must never contradict server-truth. Where asynchronous confirmation exists, UI must represent that state unambiguously and remain explainable.
+ * **Auditability chain is mandatory** for any high-value mutation: who/when/why, before/after state, and correlation to the triggering request/job/webhook must be recorded and queryable.
+
+ ## Key Areas to Explore
+
+ * How well are domain boundaries defined and enforced?
+ * Where does client-side trust leak into authorization decisions?
+ * What consistency guarantees exist and are they sufficient?
+ * How does the system handle eventual consistency edge cases?
+ * What would break if a webhook is processed out of order?
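The mandated mapping (Stripe state → internal subscription state → entitlements) might be centralized in one pure module along these lines. The internal state and entitlement names are assumptions; the point is that entitlements derive only from Stripe-driven server-truth.

```ts
import type Stripe from "stripe";

type InternalSubscriptionState =
  | "trialing"
  | "active"
  | "grace_period" // past_due: access retained while dunning runs
  | "suspended"    // unpaid: access revoked, account retained
  | "canceled";

export function toInternalState(status: Stripe.Subscription.Status): InternalSubscriptionState {
  switch (status) {
    case "trialing":
      return "trialing";
    case "active":
      return "active";
    case "past_due":
      return "grace_period";
    case "unpaid":
      return "suspended";
    default:
      // canceled, incomplete, incomplete_expired, paused: no access is granted.
      return "canceled";
  }
}

export function entitlementsFor(state: InternalSubscriptionState): { premium: boolean } {
  // Only explainable states grant access; suspended and canceled never do.
  return { premium: state === "trialing" || state === "active" || state === "grace_period" };
}
```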
package/assets/slash-commands/review-database.md ADDED
@@ -0,0 +1,37 @@
+ ---
+ name: review-database
+ description: Review database - Drizzle migrations, schema drift, CI gates
+ agent: coder
+ ---
+
+ # Database Review
+
+ ## Mandate
+
+ * Perform a **deep, thorough review** of database architecture in this codebase.
+ * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
+ * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
+ * **Single-pass delivery**: no deferrals; deliver a complete assessment.
+ * **Explore beyond the spec**: identify improvements for performance, reliability, and maintainability.
+
+ ## Tech Stack
+
+ * **Database**: Neon (Postgres)
+ * **ORM**: Drizzle
+
+ ## Review Scope
+
+ ### Drizzle Migrations (Non-Negotiable)
+
+ * Migration files must exist, be complete, and be committed.
+ * Deterministic, reproducible, environment-safe; linear/auditable history; no drift.
+ * CI must fail if schema changes are not represented by migrations.
+
+ ## Key Areas to Explore
+
+ * Is the schema well-designed for the domain requirements?
+ * Are there missing indexes that could improve query performance?
+ * How are database connections pooled and managed?
+ * What is the backup and disaster recovery strategy?
+ * Are there any N+1 query problems or inefficient access patterns?
+ * How does the schema handle soft deletes, auditing, and data lifecycle?
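A drift gate of the kind required above can be a small CI script: regenerate migrations and fail if anything under the migrations folder changes. The `drizzle-kit generate` invocation and `drizzle/` output path are assumptions to adjust to the project's actual setup.

```ts
import { execSync } from "node:child_process";

function run(cmd: string): string {
  return execSync(cmd, { stdio: ["ignore", "pipe", "inherit"] }).toString().trim();
}

// If the committed migrations already describe the schema, generation is a no-op.
run("bunx drizzle-kit generate");

// Any new or modified file under ./drizzle means schema changes lack a committed migration.
const drift = run("git status --porcelain -- drizzle");
if (drift) {
  console.error("Schema drift detected; commit the generated migrations:\n" + drift);
  process.exit(1);
}
console.log("Migrations are in sync with the schema.");
```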
package/assets/slash-commands/review-delivery.md ADDED
@@ -0,0 +1,57 @@
+ ---
+ name: review-delivery
+ description: Review delivery gates - release blocking checks, verification
+ agent: coder
+ ---
+
+ # Delivery Gates Review
+
+ ## Mandate
+
+ * Perform a **deep, thorough review** of delivery gates in this codebase.
+ * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
+ * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
+ * **Single-pass delivery**: no deferrals; deliver a complete assessment.
+ * **Explore beyond the spec**: identify gaps in automated verification and release safety.
+
+ ## Tech Stack
+
+ * **CI**: GitHub Actions
+ * **Testing**: Bun test
+ * **Linting**: Biome
+ * **Platform**: Vercel
+
+ ## Review Scope
+
+ ### Delivery Gates and Completion
+
+ CI must block merges/deploys when failing:
+
+ * Code Quality: Biome lint/format, strict TS typecheck, unit + E2E tests, build
+ * Data Integrity: Migration integrity checks, no schema drift
+ * i18n: Missing translation keys fail build, `/en/*` redirects, hreflang correct
+ * Performance: Budget verification, Core Web Vitals thresholds, regression detection
+ * Security: CSP/HSTS/headers verified, CSRF protection tested
+ * Consent: Analytics/marketing consent gating verified
+
+ ### Automation Requirement
+
+ **All gates above must be enforced by automated tests or mechanized checks (non-manual); manual verification does not satisfy release gates.**
+
+ ### Configuration Gates
+
+ * Build/startup must fail-fast when required configuration/secrets are missing or invalid.
+
+ ### Operability Gates
+
+ * Observability and alerting configured for critical anomalies
+ * Workflow dead-letter handling is operable and supports controlled replay
+
+ ## Key Areas to Explore
+
+ * What release gates are missing or insufficient?
+ * Where does manual verification substitute for automation?
+ * How fast is the CI pipeline and what slows it down?
+ * What flaky tests exist and how do they affect reliability?
+ * How does the deployment process handle rollbacks?
+ * What post-deployment verification exists?
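The configuration gate ("fail-fast when required configuration/secrets are missing or invalid") is often implemented as a schema the app parses at build or startup. The sketch below assumes zod, which is not listed in the stack above, and uses illustrative variable names.

```ts
import { z } from "zod";

const envSchema = z.object({
  DATABASE_URL: z.string().url(),
  STRIPE_SECRET_KEY: z.string().min(1),
  STRIPE_WEBHOOK_SECRET: z.string().min(1),
  NEXT_PUBLIC_POSTHOG_KEY: z.string().min(1),
});

// Importing this module at startup throws a readable report of every missing or
// invalid variable, so a misconfigured deploy fails immediately, not at request time.
export const env = envSchema.parse(process.env);
```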
package/assets/slash-commands/review-discovery.md ADDED
@@ -0,0 +1,34 @@
+ ---
+ name: review-discovery
+ description: Review discovery - competitive research, features, pricing opportunities
+ agent: coder
+ ---
+
+ # Discovery Review
+
+ ## Mandate
+
+ * Perform a **deep, thorough review** to discover opportunities in this codebase.
+ * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
+ * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
+ * **Single-pass delivery**: no deferrals; deliver a complete assessment.
+ * **Explore beyond the spec**: this review IS the exploration—think broadly and creatively.
+
+ ## Review Scope
+
+ ### Review Requirements: Explore Beyond the Spec
+
+ * **Feature design review**: define success criteria, journeys, state model, auth/entitlements, instrumentation; propose competitiveness improvements within constraints.
+ * **Pricing/monetization review**: packaging/entitlements, lifecycle semantics, legal/tax/invoice implications; propose conversion/churn improvements within constraints.
+ * **Competitive research**: features, extensibility, guidance patterns, pricing/packaging norms; convert insights into testable acceptance criteria.
+
+ ## Key Areas to Explore
+
+ * What features are competitors offering that we lack?
+ * What pricing models are common in the market and how do we compare?
+ * What UX patterns are users expecting based on industry standards?
+ * What integrations would add significant value?
+ * What automation opportunities exist to reduce manual work?
+ * What self-service capabilities are users asking for?
+ * What technical capabilities could enable new business models?
+ * Where are the biggest gaps between current state and market expectations?
package/assets/slash-commands/review-growth.md ADDED
@@ -0,0 +1,57 @@
+ ---
+ name: review-growth
+ description: Review growth - onboarding, viral mechanics, retention
+ agent: coder
+ ---
+
+ # Growth Review
+
+ ## Mandate
+
+ * Perform a **deep, thorough review** of growth systems in this codebase.
+ * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
+ * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
+ * **Single-pass delivery**: no deferrals; deliver a complete assessment.
+ * **Explore beyond the spec**: identify growth opportunities and conversion improvements.
+
+ ## Tech Stack
+
+ * **Analytics**: PostHog
+ * **Framework**: Next.js
+
+ ## Review Scope
+
+ ### Growth System (Onboarding, Share/Viral, Retention)
+
+ * The review must produce a coherent, measurable growth system for activation, sharing/virality, and retention, aligned with compliance and anti-abuse constraints.
+
+ ### Onboarding
+
+ * Onboarding must be:
+   * Outcome-oriented
+   * Localized
+   * Accessible
+   * Instrumented
+
+ ### Sharing/Virality
+
+ * Sharing/virality must be:
+   * Consent-aware
+   * Abuse-resistant
+   * Measurable end-to-end
+
+ ### Retention
+
+ * Retention must be:
+   * Intentionally engineered
+   * Monitored
+   * Protected against regressions
+
+ ## Key Areas to Explore
+
+ * What is the current activation rate and where do users drop off?
+ * How can time-to-value be reduced for new users?
+ * What viral mechanics exist and how effective are they?
+ * What retention patterns exist and what predicts churn?
+ * How does the product re-engage dormant users?
+ * What experiments could drive meaningful growth improvements?
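For the "instrumented" and "measurable end-to-end" requirements, activation events might be captured with PostHog along these lines; event names are illustrative, and initialization plus consent handling are assumed to live elsewhere.

```ts
// Minimal activation instrumentation sketch with posthog-js; posthog.init(...) is
// assumed to have already run, gated on the user's analytics consent.
import posthog from "posthog-js";

export type ActivationStep = "signed_up" | "completed_onboarding" | "reached_first_value";

export function captureActivation(step: ActivationStep, consented: boolean): void {
  // Consent gating per the compliance constraints: no consent, no event.
  if (!consented) return;
  posthog.capture(`activation_${step}`, { at: new Date().toISOString() });
}
```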
package/assets/slash-commands/review-i18n.md ADDED
@@ -0,0 +1,54 @@
+ ---
+ name: review-i18n
+ description: Review i18n - locales, routing, canonicalization, UGC
+ agent: coder
+ ---
+
+ # Internationalization Review
+
+ ## Mandate
+
+ * Perform a **deep, thorough review** of internationalization in this codebase.
+ * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
+ * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
+ * **Single-pass delivery**: no deferrals; deliver a complete assessment.
+ * **Explore beyond the spec**: identify improvements for coverage, quality, and user experience.
+
+ ## Tech Stack
+
+ * **i18n**: next-intl
+ * **Framework**: Next.js
+
+ ## Review Scope
+
+ ### Supported Locales
+
+ `en`, `zh-Hans`, `zh-Hant`, `es`, `ja`, `ko`, `de`, `fr`, `pt-BR`, `it`, `nl`, `pl`, `tr`, `id`, `th`, `vi`
+
+ ### URL Strategy: Prefix Except Default
+
+ * English is default and non-prefixed.
+ * `/en/*` must not exist; permanently redirect to non-prefixed equivalent.
+ * All non-default locales are `/<locale>/...`.
+
+ ### Globalization Rules
+
+ * Intl formatting for dates, numbers, currency
+ * Explicit fallback rules
+ * Missing translation keys must fail build
+ * No hardcoded user-facing strings outside localization
+
+ ### UGC Canonicalization
+
+ * Separate UI language from content language.
+ * Exactly one canonical URL per UGC resource determined by content language.
+ * No indexable locale-prefixed duplicates unless primary content is truly localized; otherwise redirect to canonical.
+ * Canonical/hreflang/sitemap must reflect only true localized variants.
+
+ ## Key Areas to Explore
+
+ * How complete and consistent are the translations across all locales?
+ * What user-facing strings are hardcoded and missing from localization?
+ * How does the routing handle edge cases (unknown locales, malformed URLs)?
+ * What is the translation workflow and how can it be improved?
+ * How does the system handle RTL languages if needed in the future?
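The `/en/*` rule can be illustrated with plain Next.js middleware that issues a permanent redirect to the unprefixed path. With next-intl the same behavior is typically configured via `localePrefix: "as-needed"`; this snippet only shows the required behavior, not the package's routing code.

```ts
import { NextRequest, NextResponse } from "next/server";

export function middleware(request: NextRequest) {
  const { pathname } = request.nextUrl;
  if (pathname === "/en" || pathname.startsWith("/en/")) {
    const url = request.nextUrl.clone();
    url.pathname = pathname.replace(/^\/en/, "") || "/";
    // 308 keeps the HTTP method and signals a permanent move for SEO.
    return NextResponse.redirect(url, 308);
  }
  return NextResponse.next();
}
```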
package/assets/slash-commands/review-ledger.md ADDED
@@ -0,0 +1,38 @@
+ ---
+ name: review-ledger
+ description: Review ledger - financial-grade balance system, immutable ledger
+ agent: coder
+ ---
+
+ # Ledger Review
+
+ ## Mandate
+
+ * Perform a **deep, thorough review** of any balance/credits/wallet system in this codebase.
+ * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
+ * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
+ * **Single-pass delivery**: no deferrals; deliver a complete assessment.
+ * **Explore beyond the spec**: identify improvements for accuracy, auditability, and reconciliation.
+
+ ## Tech Stack
+
+ * **Payments**: Stripe
+ * **Database**: Neon (Postgres)
+ * **ORM**: Drizzle
+
+ ## Review Scope
+
+ ### Financial-Grade Balance System (Only if "balance/credits/wallet" exists)
+
+ * Any balance concept must be implemented as an **immutable ledger** (append-only source of truth), not a mutable balance field.
+ * Deterministic precision (no floats), idempotent posting, concurrency safety, transactional integrity, and auditability are required.
+ * Monetary flows must be currency-based and reconcilable with Stripe; credits (if used) must be governed as non-cash entitlements.
+
+ ## Key Areas to Explore
+
+ * Is there a balance/credits system and how is it implemented?
+ * If mutable balances exist, what are the risks and how to migrate to immutable ledger?
+ * How does the system handle concurrent transactions?
+ * What is the reconciliation process with Stripe?
+ * How are edge cases handled (refunds, disputes, partial payments)?
+ * What audit trail exists for financial mutations?
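An append-only ledger of the kind required above could be shaped like the following Drizzle table: integer minor units (no floats), a unique idempotency key per posting, and balances derived by summation instead of a mutable balance column. Table and column names are assumptions.

```ts
import { pgTable, text, bigint, timestamp } from "drizzle-orm/pg-core";

export const ledgerEntries = pgTable("ledger_entries", {
  id: text("id").primaryKey(),
  accountId: text("account_id").notNull(),
  // Signed amount in minor units (e.g. cents); never a float.
  amountMinor: bigint("amount_minor", { mode: "bigint" }).notNull(),
  currency: text("currency").notNull(),
  // Idempotency and audit correlation key, e.g. a Stripe event id or job id.
  // The unique constraint turns a retried posting into a conflict, not a duplicate row.
  idempotencyKey: text("idempotency_key").notNull().unique(),
  reason: text("reason").notNull(),
  createdAt: timestamp("created_at").notNull().defaultNow(),
});

// Balances are derived (SUM of amount_minor per account); rows are never updated or deleted.
```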
package/assets/slash-commands/review-observability.md ADDED
@@ -0,0 +1,43 @@
+ ---
+ name: review-observability
+ description: Review observability - logs, Sentry, correlation IDs, alerting
+ agent: coder
+ ---
+
+ # Observability Review
+
+ ## Mandate
+
+ * Perform a **deep, thorough review** of observability in this codebase.
+ * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
+ * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
+ * **Single-pass delivery**: no deferrals; deliver a complete assessment.
+ * **Explore beyond the spec**: identify blind spots and debugging improvements.
+
+ ## Tech Stack
+
+ * **Error Tracking**: Sentry
+ * **Analytics**: PostHog
+ * **Platform**: Vercel
+
+ ## Review Scope
+
+ ### Observability and Alerting (Mandatory)
+
+ * Structured logs and correlation IDs must exist end-to-end (request/job/webhook) with consistent traceability
+ * Define critical-path SLO/SLI posture
+ * Define actionable alerts for:
+   * Webhook failures
+   * Ledger/entitlement drift
+   * Authentication attacks
+   * Abuse spikes
+   * Drift detection
+
+ ## Key Areas to Explore
+
+ * How easy is it to debug a production issue end-to-end?
+ * What blind spots exist where errors go unnoticed?
+ * How effective are the current alerts (signal vs noise)?
+ * What SLOs/SLIs are defined and are they meaningful?
+ * How does log correlation work across async boundaries?
+ * What dashboards exist and do they answer the right questions?
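End-to-end correlation IDs can be carried with Node's AsyncLocalStorage so every log line from a request, job, or webhook shares one id. The logger shape below is an assumption; Sentry/PostHog wiring would sit on top of it.

```ts
import { AsyncLocalStorage } from "node:async_hooks";
import { randomUUID } from "node:crypto";

const correlation = new AsyncLocalStorage<string>();

// Wrap each request/job/webhook entry point so all work inside shares one id.
export function withCorrelation<T>(fn: () => Promise<T>, id = randomUUID()): Promise<T> {
  return correlation.run(id, fn);
}

export function log(level: "info" | "warn" | "error", message: string, fields: Record<string, unknown> = {}): void {
  // Structured, single-line JSON so the platform's log search can filter by correlationId.
  console.log(
    JSON.stringify({
      level,
      message,
      correlationId: correlation.getStore() ?? "none",
      at: new Date().toISOString(),
      ...fields,
    })
  );
}

// Usage: await withCorrelation(() => handleStripeWebhook(event));
```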