@sylphx/flow 2.30.0 → 3.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (46)
  1. package/CHANGELOG.md +32 -0
  2. package/assets/agents/builder.md +32 -1
  3. package/assets/slash-commands/issues.md +46 -0
  4. package/assets/slash-commands/refactor.md +60 -0
  5. package/package.json +1 -1
  6. package/src/config/servers.ts +1 -1
  7. package/src/core/flow-executor.ts +22 -9
  8. package/assets/agents/coder.md +0 -128
  9. package/assets/agents/reviewer.md +0 -123
  10. package/assets/agents/writer.md +0 -120
  11. package/assets/rules/code-standards.md +0 -176
  12. package/assets/rules/core.md +0 -197
  13. package/assets/skills/abuse-prevention/SKILL.md +0 -33
  14. package/assets/skills/account-security/SKILL.md +0 -35
  15. package/assets/skills/admin/SKILL.md +0 -37
  16. package/assets/skills/appsec/SKILL.md +0 -35
  17. package/assets/skills/auth/SKILL.md +0 -34
  18. package/assets/skills/billing/SKILL.md +0 -35
  19. package/assets/skills/code-quality/SKILL.md +0 -34
  20. package/assets/skills/competitive-analysis/SKILL.md +0 -29
  21. package/assets/skills/data-modeling/SKILL.md +0 -34
  22. package/assets/skills/database/SKILL.md +0 -34
  23. package/assets/skills/delivery/SKILL.md +0 -36
  24. package/assets/skills/deployments/SKILL.md +0 -33
  25. package/assets/skills/growth/SKILL.md +0 -31
  26. package/assets/skills/i18n/SKILL.md +0 -35
  27. package/assets/skills/ledger/SKILL.md +0 -32
  28. package/assets/skills/observability/SKILL.md +0 -32
  29. package/assets/skills/performance/SKILL.md +0 -33
  30. package/assets/skills/pricing/SKILL.md +0 -32
  31. package/assets/skills/privacy/SKILL.md +0 -36
  32. package/assets/skills/pwa/SKILL.md +0 -36
  33. package/assets/skills/referral/SKILL.md +0 -30
  34. package/assets/skills/seo/SKILL.md +0 -40
  35. package/assets/skills/storage/SKILL.md +0 -33
  36. package/assets/skills/support/SKILL.md +0 -31
  37. package/assets/skills/uiux/SKILL.md +0 -40
  38. package/assets/slash-commands/cleanup.md +0 -59
  39. package/assets/slash-commands/continue.md +0 -94
  40. package/assets/slash-commands/continue2.md +0 -61
  41. package/assets/slash-commands/improve.md +0 -153
  42. package/assets/slash-commands/init.md +0 -34
  43. package/assets/slash-commands/polish.md +0 -87
  44. package/assets/slash-commands/quality.md +0 -181
  45. package/assets/slash-commands/release.md +0 -103
  46. package/assets/slash-commands/review.md +0 -44
package/assets/skills/database/SKILL.md
@@ -1,34 +0,0 @@
- ---
- name: database
- description: Database - schema, indexes, migrations. Use when working with databases.
- ---
-
- # Database Guideline
-
- ## Tech Stack
-
- * **Database**: Neon (Postgres)
- * **ORM**: Drizzle
- * **Migrations**: Drizzle Kit
-
- ## Non-Negotiables
-
- * All database access through Drizzle (no raw SQL unless necessary)
- * Migration files must exist, be complete, and be committed
- * CI must fail if schema changes aren't represented by migrations
- * No schema drift between environments
- * Drizzle schema is SSOT for database structure
-
- ## Context
-
- Database handles physical implementation — schema, indexes, migrations, query performance. Conceptual modeling (entities, relationships) lives in `data-modeling`.
-
- Drizzle is the SSOT for database access. Type-safe, end-to-end.
-
- ## Driving Questions
-
- * Is all database access through Drizzle?
- * Are migrations complete and committed?
- * What constraints are missing that would prevent invalid state?
- * Where are missing indexes causing slow queries?
- * What queries are N+1 or unbounded?
package/assets/skills/delivery/SKILL.md
@@ -1,36 +0,0 @@
- ---
- name: delivery
- description: Delivery - CI/CD, testing, releases. Use when improving pipelines.
- ---
-
- # Delivery Guideline
-
- ## Tech Stack
-
- * **CI**: GitHub Actions
- * **Testing**: Bun test
- * **Linting**: Biome
- * **Platform**: Vercel
-
- ## Non-Negotiables
-
- * All release gates must be automated (manual verification doesn't count)
- * Build must fail-fast on missing required configuration
- * CI must block on: lint, typecheck, tests, build
- * `/en/*` must redirect (no duplicate content)
- * Security headers must be verified by tests
- * Consent gating must be verified by tests
-
- ## Context
-
- Delivery handles pre-production — CI/CD pipeline, release gates, quality checks. Post-deployment operations (rollback, feature flags, runbooks) live in `deployments`.
-
- The question isn't "what tests do we have?" but "what could go wrong that we wouldn't catch?"
-
- ## Driving Questions
-
- * What could ship to production that shouldn't?
- * Where does manual verification substitute for automation?
- * What flaky tests are training people to ignore failures?
- * How fast is the feedback loop, and what slows it down?
- * What's the worst thing that shipped recently that tests should have caught?
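The removed delivery guideline requires builds to fail fast on missing required configuration. A minimal sketch of that pattern in TypeScript (the variable names are illustrative, not taken from @sylphx/flow):

```typescript
// Fail-fast configuration check: crash at startup/build, not at first use.
// The env var names below are hypothetical examples.
const REQUIRED_ENV = ["DATABASE_URL", "STRIPE_SECRET_KEY", "RESEND_API_KEY"];

function assertConfig(env: Record<string, string | undefined>): void {
  const missing = REQUIRED_ENV.filter((key) => !env[key]);
  if (missing.length > 0) {
    // Report every missing key at once so the gate fails a single time.
    throw new Error(`Missing required configuration: ${missing.join(", ")}`);
  }
}
```

Calling `assertConfig(process.env)` from the build entry point turns a misconfigured deploy into an immediate, automated failure rather than a runtime surprise.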
package/assets/skills/deployments/SKILL.md
@@ -1,33 +0,0 @@
- ---
- name: deployments
- description: Deployments - rollback, feature flags, ops tooling. Use when shipping to production.
- ---
-
- # Deployments Guideline
-
- ## Tech Stack
-
- * **Workflows**: Upstash Workflows + QStash
- * **Cache**: Upstash Redis
- * **Platform**: Vercel
-
- ## Non-Negotiables
-
- * Rollback plan must exist and be exercisable
- * Dead-letter handling must exist and be operable (visible, replayable)
- * Side-effects (email, billing, ledger) must be idempotent or safely re-entrant
- * Drift alerts must have remediation playbooks
-
- ## Context
-
- Deployments handles post-deployment operations — rollback, feature flags, runbooks, incident response. Pre-production CI/CD lives in `delivery`.
-
- When something goes wrong, can an operator fix it without deploying code?
-
- ## Driving Questions
-
- * What happens when a job fails permanently?
- * How would an operator know something is stuck?
- * Can failed workflows be safely replayed without duplicating side-effects?
- * What's the rollback plan if a deploy breaks something critical?
- * What runbooks exist, and what runbooks should exist but don't?
package/assets/skills/growth/SKILL.md
@@ -1,31 +0,0 @@
- ---
- name: growth
- description: Growth - onboarding, activation, retention. Use for growth features.
- ---
-
- # Growth Guideline
-
- ## Tech Stack
-
- * **Analytics**: PostHog
- * **Framework**: Next.js (with Turbopack)
-
- ## Non-Negotiables
-
- * Sharing/virality mechanics must be consent-aware
- * Growth instrumentation must not violate privacy constraints
-
- ## Context
-
- Growth isn't about tricks — it's about removing friction from value delivery. Users who quickly experience value stay; users who don't, leave.
-
- The review should consider: what's preventing users from reaching their "aha moment" faster? What would make them want to share? What brings them back? These aren't features to add — they're fundamental product questions.
-
- ## Driving Questions
-
- * Where do users drop off before experiencing value?
- * What would cut time-to-value in half?
- * Why would a user tell someone else about this product?
- * What brings users back after their first session?
- * What signals predict churn before it happens?
- * What would a 10x better onboarding look like?
package/assets/skills/i18n/SKILL.md
@@ -1,35 +0,0 @@
- ---
- name: i18n
- description: Internationalization - localization, translations. Use when adding languages.
- ---
-
- # i18n Guideline
-
- ## Tech Stack
-
- * **i18n**: next-intl
- * **Framework**: Next.js (with Turbopack)
-
- ## Non-Negotiables
-
- * All i18n via next-intl (no custom implementation)
- * `/en/*` must not exist (permanently redirect to non-prefixed)
- * Missing translation keys must fail build
- * No hardcoded user-facing strings outside localization
- * Translation bundles must be split by namespace (no monolithic files)
- * Server Components for translations wherever possible
- * Client bundles must not include server-only translations
-
- ## Context
-
- Internationalization isn't just translation — it's making the product feel native to each market. Bad i18n signals users are second-class citizens. Good i18n is invisible.
-
- next-intl is the SSOT for i18n. No custom implementations.
-
- ## Driving Questions
-
- * Is next-intl handling all translations?
- * Are bundles split by namespace?
- * What would make the product feel native to non-English users?
- * Where do translations feel awkward?
- * How large are client-side translation bundles?
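The i18n guideline mandates that missing translation keys fail the build. One way such a check could look, sketched in plain TypeScript (the key names, locales, and function are illustrative, not from the package or from next-intl):

```typescript
// Build-time completeness check: every locale must define every key the
// default locale defines. Keys and locales below are hypothetical examples.
type Messages = Record<string, string>;

function missingKeys(base: Messages, locale: Messages): string[] {
  return Object.keys(base).filter((key) => !(key in locale));
}

const en: Messages = { "nav.home": "Home", "nav.pricing": "Pricing" };
const de: Messages = { "nav.home": "Startseite" };

const missing = missingKeys(en, de);
if (missing.length > 0) {
  // In CI this branch would exit non-zero and fail the build.
  console.error(`Missing translation keys: ${missing.join(", ")}`);
}
```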
package/assets/skills/ledger/SKILL.md
@@ -1,32 +0,0 @@
- ---
- name: ledger
- description: Financial ledger - transactions, audit trails. Use when tracking money.
- ---
-
- # Ledger Guideline
-
- ## Tech Stack
-
- * **Database**: Neon (Postgres)
- * **ORM**: Drizzle
-
- ## Non-Negotiables
-
- * Balances must be immutable ledger (append-only), not mutable fields
- * No floating-point for money (use deterministic precision)
- * All financial mutations must be idempotent
- * Every balance must be provable by replaying the ledger
-
- ## Context
-
- Ledger handles financial integrity — transaction history, balance correctness, audit trail. Payment processing lives in `billing`, pricing strategy lives in `pricing`.
-
- A bug that creates or destroys money is a serious incident. This must be bulletproof.
-
- ## Driving Questions
-
- * Does a balance/credits system exist, and is it implemented correctly?
- * Where could money be created or destroyed by a bug?
- * What happens during concurrent transactions?
- * How would we detect if balances drifted from reality?
- * Can we prove every balance by replaying the ledger?
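The ledger non-negotiables (append-only, integer precision, idempotent mutations, replayable balances) compose into a small pattern. A minimal in-memory sketch, with hypothetical names not taken from the package:

```typescript
// Append-only ledger sketch: balances are derived by replay, never stored
// as mutable fields. Amounts are integer cents, so no floating-point money.
interface LedgerEntry {
  idempotencyKey: string; // dedupes retried financial mutations
  amountCents: number;    // positive = credit, negative = debit
}

function append(ledger: LedgerEntry[], entry: LedgerEntry): LedgerEntry[] {
  // Idempotent: re-appending an already-seen key is a no-op, not a duplicate.
  if (ledger.some((e) => e.idempotencyKey === entry.idempotencyKey)) {
    return ledger;
  }
  return [...ledger, entry];
}

function balance(ledger: LedgerEntry[]): number {
  // Every balance is provable by replaying the ledger from the start.
  return ledger.reduce((sum, e) => sum + e.amountCents, 0);
}
```

Because the entries are immutable, a drift check is simply "replay and compare" against any cached balance.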
package/assets/skills/observability/SKILL.md
@@ -1,32 +0,0 @@
- ---
- name: observability
- description: Observability - logging, metrics, tracing. Use when adding monitoring.
- ---
-
- # Observability Guideline
-
- ## Tech Stack
-
- * **Error Tracking**: Sentry
- * **Analytics**: PostHog
- * **Platform**: Vercel
-
- ## Non-Negotiables
-
- * Correlation IDs must exist end-to-end (request → job → webhook)
- * Alerts must exist for critical failures
- * Errors must be actionable, not noise
-
- ## Context
-
- Observability is about answering questions when things go wrong. It's 3am, something is broken — can you figure out what happened? How fast?
-
- PII protection in logs is enforced via `privacy`. This skill focuses on making debugging effective.
-
- ## Driving Questions
-
- * If something breaks now, how would we find out?
- * What blind spots exist where errors go unnoticed?
- * How long to trace a request through the system?
- * What alerts exist, and do they fire correctly?
- * Where is noise training people to ignore alerts?
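The "correlation IDs end-to-end" requirement is the cheapest observability win: mint one ID at the edge and repeat it in every structured log line across request, job, and webhook. A tiny sketch under those assumptions (names hypothetical; real code would use `crypto.randomUUID()`):

```typescript
// Correlation-ID sketch: one ID threaded through every hop so a single
// search reconstructs the whole path of a request.
function newCorrelationId(): string {
  // Illustrative generator; production code would use crypto.randomUUID().
  return Date.now().toString(36) + "-" + Math.random().toString(36).slice(2, 10);
}

function logLine(correlationId: string, stage: string, message: string): string {
  // Structured JSON line: every stage repeats the same correlation ID.
  return JSON.stringify({ correlationId, stage, message });
}
```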
package/assets/skills/performance/SKILL.md
@@ -1,33 +0,0 @@
- ---
- name: performance
- description: Performance - Core Web Vitals, bundle size. Use when optimizing speed.
- ---
-
- # Performance Guideline
-
- ## Tech Stack
-
- * **Framework**: Next.js (with Turbopack)
- * **Platform**: Vercel
- * **Tooling**: Bun
-
- ## Non-Negotiables
-
- * Core Web Vitals must meet thresholds (LCP < 2.5s, CLS < 0.1, INP < 200ms)
- * Performance regressions must be detectable
- * JavaScript bundle size must be monitored and optimized
-
- ## Context
-
- Performance is a feature. Slow products feel broken, even when they're correct. Users don't read loading spinners — they leave. Every 100ms of latency costs engagement.
-
- Don't just measure — understand. Where does time go? What's blocking the critical path? What would make the product feel instant? Sometimes small architectural changes have bigger impact than optimization.
-
- ## Driving Questions
-
- * What makes the product feel slow to users?
- * Where are the biggest bottlenecks in the critical user journeys?
- * What's in the critical rendering path that shouldn't be?
- * How large is the JavaScript bundle, and what's bloating it?
- * What database queries are slow, and why?
- * If we could make one thing 10x faster, what would have the most impact?
package/assets/skills/pricing/SKILL.md
@@ -1,32 +0,0 @@
- ---
- name: pricing
- description: Pricing strategy - tiers, feature gating. Use when designing pricing.
- ---
-
- # Pricing Guideline
-
- ## Tech Stack
-
- * **Payments**: Stripe
- * **Database**: Neon (Postgres)
- * **ORM**: Drizzle
-
- ## Non-Negotiables
-
- * Platform is source of truth — Stripe syncs FROM platform, never reverse
- * All pricing/product configuration must be in code
- * Feature entitlements derived from platform state, not Stripe metadata
- * Pricing drift must be detectable and auto-correctable
- * No manual Stripe dashboard configuration
-
- ## Context
-
- Pricing owns strategy — what tiers exist, what features each tier gets, how upgrades work. Billing handles the payment mechanics. This separation allows switching payment processors without repricing.
-
- ## Driving Questions
-
- * Is all pricing defined in code?
- * How do we test pricing changes before going live?
- * What would make upgrading feel like an obvious decision?
- * How do we communicate value at each tier?
- * Can we A/B test pricing without Stripe changes?
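"All pricing in code, entitlements from platform state" could look something like the sketch below. The tier names, prices, and feature flags are invented for illustration and are not the package's actual configuration:

```typescript
// Pricing-in-code sketch: tiers and entitlements live in the repository and
// Stripe is synced FROM this object, never the reverse.
const TIERS = {
  free: { priceCents: 0, features: ["basic"] },
  pro: { priceCents: 1900, features: ["basic", "exports", "api"] },
} as const;

type Tier = keyof typeof TIERS;

function isEntitled(tier: Tier, feature: string): boolean {
  // Entitlements derive from this platform config, not Stripe metadata.
  return (TIERS[tier].features as readonly string[]).includes(feature);
}
```

Because the object is plain code, pricing changes are reviewable in a PR and testable before anything touches Stripe.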
package/assets/skills/privacy/SKILL.md
@@ -1,36 +0,0 @@
- ---
- name: privacy
- description: Privacy and data protection - GDPR, CCPA, consent. Use when handling user data.
- ---
-
- # Privacy Guideline
-
- ## Tech Stack
-
- * **Analytics**: PostHog
- * **Email**: Resend
- * **Tag Management**: GTM (marketing only)
- * **Observability**: Sentry
-
- ## Non-Negotiables
-
- * Analytics and marketing must not fire before user consent
- * PII must not leak into logs, Sentry, PostHog, or third-party services
- * Account deletion must propagate to all third-party processors
- * Marketing tags (GTM, Google Ads) must not load without consent
- * Conversion tracking must be server-truth aligned, idempotent, and deduplicated
-
- ## Context
-
- Privacy isn't just compliance — it's trust. Users share data expecting it to be handled responsibly. Every log line, every analytics event, every third-party integration is a potential privacy leak.
-
- The review should verify that actual behavior matches stated policy. If the privacy policy says "we don't track without consent," does the code actually enforce that? Mismatches are not just bugs — they're trust violations.
-
- ## Driving Questions
-
- * Does the consent implementation actually block tracking, or just record preference?
- * Where does PII leak that we haven't noticed?
- * If a user requests data deletion, what actually gets deleted vs. retained?
- * Does the privacy policy accurately reflect what the code actually does?
- * How would we handle a GDPR data subject access request today?
- * What data are we collecting that we don't actually need?
package/assets/skills/pwa/SKILL.md
@@ -1,36 +0,0 @@
- ---
- name: pwa
- description: PWA - native app parity, offline-first, engagement. Use when building installable web apps.
- ---
-
- # PWA Guideline
-
- ## Tech Stack
-
- * **Framework**: Next.js (with Turbopack)
- * **Service Worker**: Serwist (Next.js integration)
- * **Platform**: Vercel
-
- ## Non-Negotiables
-
- * Service worker must not cache personalized/sensitive/authorized content
- * Cache invalidation on deploy must be correct (no stale content)
- * Offline experience must be functional, not just a fallback page
- * Installation prompt must be contextual (after value demonstrated)
- * Complete manifest.webmanifest with all required fields
- * All required icons present (favicon, apple-touch-icon, PWA icons, maskable)
-
- ## Context
-
- PWA goal: users forget they're in a browser. The gap between "website" and "app" is measured in micro-interactions — haptics, gestures, offline resilience, and engagement hooks.
-
- A bad PWA is worse than no PWA. But a great PWA can match 90% of native with 10% of the effort.
-
- ## Driving Questions
-
- * What breaks when the user goes offline mid-action?
- * What would make users install this instead of bookmarking?
- * Which native gestures (swipe, pull, long-press) are missing?
- * What local notifications would drive daily engagement?
- * Where does the app feel "webby" instead of native?
- * What's the first thing users see when they open the installed app?
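The first PWA non-negotiable (never cache personalized/sensitive/authorized content) usually reduces to a cache-eligibility predicate inside the service worker's fetch handler. A sketch under those assumptions; the paths and the auth-cookie signal are hypothetical:

```typescript
// Cache-eligibility sketch for a service worker route filter: authorized or
// personalized responses are never written to the cache.
const NEVER_CACHE = [/^\/api\//, /^\/account/, /^\/billing/]; // example paths

function isCacheable(path: string, hasAuthCookie: boolean): boolean {
  if (hasAuthCookie) return false; // authorized content stays uncached
  return !NEVER_CACHE.some((re) => re.test(path));
}
```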
package/assets/skills/referral/SKILL.md
@@ -1,30 +0,0 @@
- ---
- name: referral
- description: Referral systems - referral programs, viral loops. Use for referrals.
- ---
-
- # Referral Guideline
-
- ## Tech Stack
-
- * **Analytics**: PostHog
- * **Database**: Neon (Postgres)
- * **ORM**: Drizzle
-
- ## Non-Negotiables
-
- * Referral attribution must be accurate and auditable
- * Rewards must be fraud-resistant (no self-referral, duplicate abuse)
- * Terms must be clear and enforced consistently
-
- ## Context
-
- Referrals turn happy users into growth engines. But poorly designed referral programs create fraud opportunities and erode trust. The best referral programs feel generous, not gameable.
-
- ## Driving Questions
-
- * What would make users genuinely want to refer others?
- * How do we prevent referral fraud without punishing legitimate users?
- * Is referral attribution working correctly across all channels?
- * What's the referral conversion rate and what affects it?
- * Are referral rewards actually motivating behavior?
package/assets/skills/seo/SKILL.md
@@ -1,40 +0,0 @@
- ---
- name: seo
- description: SEO - meta tags, structured data. Use when optimizing for search.
- ---
-
- # SEO Guideline
-
- ## Tech Stack
-
- * **Framework**: Next.js (with Turbopack, SSR-first)
-
- ## Non-Negotiables
-
- * Every page must have complete metadata (title, description, canonical, OG, Twitter Cards)
- * Canonical URLs must be correct (no duplicate content)
- * Complete favicon set (ico, svg, apple-touch-icon, PWA icons)
- * Structured data (JSON-LD) for rich results where applicable
- * Security headers configured (CSP, HSTS, X-Frame-Options, X-Content-Type-Options)
-
- ## Required Files
-
- * `robots.txt` — search engine crawling rules
- * `sitemap.xml` — page discovery for search engines
- * `llms.txt` — LLM/AI crawler guidance
- * `security.txt` — security vulnerability reporting (at `/.well-known/security.txt`)
- * `ads.txt` — if ads exist
- * `app-ads.txt` — if mobile ads exist
-
- ## Context
-
- SEO is about being found when people are looking for what you offer. Good SEO isn't tricks — it's making content genuinely useful and technically accessible to search engines and AI crawlers.
-
- ## Driving Questions
-
- * Is every page's head complete with all required meta tags?
- * Are Open Graph and Twitter Cards generating correct previews?
- * Is structured data present and validated?
- * Do all required files exist and pass validation?
- * Is the site accessible to both search engines and LLM crawlers?
- * Are security headers properly configured?
package/assets/skills/storage/SKILL.md
@@ -1,33 +0,0 @@
- ---
- name: storage
- description: File storage - uploads, CDN, blobs. Use when handling files.
- ---
-
- # Storage Guideline
-
- ## Tech Stack
-
- * **Storage**: Vercel Blob
- * **Framework**: Next.js (with Turbopack)
- * **Platform**: Vercel
-
- ## Non-Negotiables
-
- * All file uploads must be validated (type, size, content)
- * Signed URLs for private content access
- * No user-uploaded content served from main domain (XSS prevention)
- * File deletion must cascade with parent entity deletion
-
- ## Context
-
- File storage is deceptively complex. Users expect uploads to just work, but there are many ways for it to fail — large files, slow connections, wrong formats, malicious content.
-
- Vercel Blob is the SSOT for file storage. No custom implementations.
-
- ## Driving Questions
-
- * What happens when an upload fails halfway?
- * How are large files handled without blocking?
- * What file types are allowed and how is it enforced?
- * How long are files retained after entity deletion?
- * What's the storage cost and is it sustainable?
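"Validated (type, size, content)" in the storage guideline means checking more than the client-declared MIME type. A minimal sketch; the allowed types, size limit, and function name are illustrative assumptions:

```typescript
// Upload validation sketch: check declared type, size, and actual content
// (magic bytes), since a client-supplied MIME type alone is spoofable.
const ALLOWED = new Set(["image/png", "image/jpeg"]); // example allow-list
const MAX_BYTES = 5 * 1024 * 1024; // example 5 MB limit

function validateUpload(
  mime: string,
  sizeBytes: number,
  firstBytes: Uint8Array,
): string | null {
  if (!ALLOWED.has(mime)) return "unsupported type";
  if (sizeBytes > MAX_BYTES) return "file too large";
  // PNG files start with 0x89 0x50; reject declared-PNG content that doesn't.
  const looksPng = firstBytes[0] === 0x89 && firstBytes[1] === 0x50;
  if (mime === "image/png" && !looksPng) return "content does not match type";
  return null; // valid
}
```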
package/assets/skills/support/SKILL.md
@@ -1,31 +0,0 @@
- ---
- name: support
- description: Support - help center, tickets, docs. Use when building support.
- ---
-
- # Support Guideline
-
- ## Tech Stack
-
- * **Framework**: Next.js (with Turbopack)
- * **Email**: Resend
-
- ## Non-Negotiables
-
- * Self-service must be prioritized over contact forms
- * Support context must include user state (plan, usage, recent actions)
- * Response time expectations must be set and met
-
- ## Context
-
- Support is where trust is won or lost. Users who need help are often frustrated — the support experience either deepens their trust or confirms their fears.
-
- Great support isn't just fast responses — it's making users not need support in the first place. Every support ticket is a signal that something in the product could be clearer.
-
- ## Driving Questions
-
- * What are the most common support requests and can they be self-served?
- * What context does support need that they don't have?
- * Where does the product create confusion that leads to support tickets?
- * How would we handle a surge in support volume?
- * What would make users feel supported before they ask for help?
package/assets/skills/uiux/SKILL.md
@@ -1,40 +0,0 @@
- ---
- name: uiux
- description: UI/UX - design system, accessibility. Use when building interfaces.
- ---
-
- # UI/UX Guideline
-
- ## Tech Stack
-
- * **Framework**: Next.js (with Turbopack)
- * **Components**: Radix UI (mandatory)
- * **Styling**: Tailwind CSS
- * **Icons**: @iconify-icon/react
-
- ## Radix UI (Mandatory)
-
- If Radix has a primitive for it, you MUST use it. No exceptions. No custom implementations.
- Any custom implementation of something Radix already provides is a bug.
-
- ## Non-Negotiables
-
- * WCAG 2.1 AA compliance (keyboard navigation, screen readers, contrast)
- * Touch targets minimum 44x44px on mobile
- * Responsive design (mobile-first)
- * Loading states for all async operations
- * Error states must be actionable
- * No layout shift on content load
-
- ## Context
-
- UI/UX is about removing friction between users and value. Bad UX doesn't just feel bad — it costs conversion, retention, and trust.
-
- ## Driving Questions
-
- * Where do users get confused or frustrated?
- * What would a first-time user struggle with?
- * Where are loading states missing or unhelpful?
- * What accessibility issues exist?
- * Where does the UI feel inconsistent?
- * What would make the product feel more polished?
package/assets/slash-commands/cleanup.md
@@ -1,59 +0,0 @@
- ---
- name: cleanup
- description: Clean technical debt, refactor, optimize, and simplify codebase
- ---
-
- # Cleanup & Refactor
-
- Scan codebase for technical debt and code smells. Clean, refactor, optimize.
-
- ## Scope
-
- **Code Smells:**
- - Functions >20 lines → extract
- - Duplication (3+ occurrences) → DRY
- - Complexity (>3 nesting levels) → flatten
- - Unused code/imports/variables → remove
- - Commented code → delete
- - Magic numbers → named constants
- - Poor naming → clarify
-
- **Technical Debt:**
- - TODOs/FIXMEs → implement or delete
- - Deprecated APIs → upgrade
- - Outdated patterns → modernize
- - Performance bottlenecks → optimize
- - Memory leaks → fix
- - Lint warnings → resolve
-
- **Optimization:**
- - Algorithm complexity → reduce
- - N+1 queries → batch
- - Unnecessary re-renders → memoize
- - Large bundles → code split
- - Unused dependencies → remove
-
- ## Execution
-
- 1. **Scan** entire codebase systematically
- 2. **Prioritize** by impact (critical → major → minor)
- 3. **Clean** incrementally with atomic commits
- 4. **Test** after every change
- 5. **Report** what was cleaned and impact
-
- ## Commit Strategy
-
- One commit per logical cleanup:
- - `refactor(auth): extract validateToken function`
- - `perf(db): batch user queries to fix N+1`
- - `chore: remove unused imports and commented code`
-
- ## Exit Criteria
-
- - [ ] No code smells remain
- - [ ] All tests pass
- - [ ] Lint clean
- - [ ] Performance improved (measure before/after)
- - [ ] Technical debt reduced measurably
-
- Report: Lines removed, complexity reduced, performance gains.
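The removed cleanup command lists "N+1 queries → batch" as an optimization target. The shape of that refactor, sketched with hypothetical types (one lookup per row becomes one batched lookup over all IDs):

```typescript
// N+1 fix sketch: instead of resolving each post's author individually
// (N round trips), collect the IDs and resolve them in one batch.
type Post = { id: number; authorId: number };
type Author = { id: number; name: string };

function batchAuthors(
  posts: Post[],
  lookup: (ids: number[]) => Author[], // stand-in for one WHERE id IN (...) query
): Map<number, Author> {
  const ids = [...new Set(posts.map((p) => p.authorId))]; // dedupe first
  return new Map(lookup(ids).map((a) => [a.id, a])); // single round trip
}
```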