@sylphx/flow 2.10.0 → 2.12.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (27)
  1. package/CHANGELOG.md +12 -0
  2. package/assets/slash-commands/review-account-security.md +19 -29
  3. package/assets/slash-commands/review-admin.md +21 -32
  4. package/assets/slash-commands/review-auth.md +19 -25
  5. package/assets/slash-commands/review-billing.md +21 -25
  6. package/assets/slash-commands/review-code-quality.md +29 -49
  7. package/assets/slash-commands/review-data-architecture.md +26 -18
  8. package/assets/slash-commands/review-database.md +22 -21
  9. package/assets/slash-commands/review-delivery.md +25 -50
  10. package/assets/slash-commands/review-discovery.md +17 -28
  11. package/assets/slash-commands/review-growth.md +18 -40
  12. package/assets/slash-commands/review-i18n.md +18 -27
  13. package/assets/slash-commands/review-ledger.md +23 -20
  14. package/assets/slash-commands/review-observability.md +27 -41
  15. package/assets/slash-commands/review-operability.md +20 -32
  16. package/assets/slash-commands/review-performance.md +19 -34
  17. package/assets/slash-commands/review-pricing.md +19 -27
  18. package/assets/slash-commands/review-privacy.md +23 -28
  19. package/assets/slash-commands/review-pwa.md +22 -33
  20. package/assets/slash-commands/review-referral.md +27 -40
  21. package/assets/slash-commands/review-security.md +25 -32
  22. package/assets/slash-commands/review-seo.md +26 -46
  23. package/assets/slash-commands/review-storage.md +21 -21
  24. package/assets/slash-commands/review-support.md +27 -41
  25. package/assets/slash-commands/review-trust-safety.md +42 -0
  26. package/assets/slash-commands/review-uiux.md +25 -42
  27. package/package.json +1 -1
package/assets/slash-commands/review-pwa.md CHANGED
@@ -1,6 +1,6 @@
  ---
  name: review-pwa
- description: Review PWA - manifest, service worker, caching, push notifications
+ description: Review PWA - offline experience, installation, engagement
  agent: coder
  ---
 
@@ -12,40 +12,29 @@ agent: coder
  * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
  * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
  * **Single-pass delivery**: no deferrals; deliver a complete assessment.
+ * **Explore beyond the spec**: identify what would make the web experience feel native.
 
- ## Review Scope
+ ## Tech Stack
 
- ### PWA Requirements
+ * **Framework**: Next.js
+ * **Platform**: Vercel
 
- * Manifest file complete and valid
- * Service worker with explicit cache correctness
- * Push notifications using VAPID where applicable
-
- ### Service Worker Caching Boundary (Mandatory)
+ ## Non-Negotiables
 
  * Service worker must not cache personalized/sensitive/authorized content
- * Authenticated and entitlement-sensitive routes must have explicit cache-control and SW rules
- * Must be validated by tests to prevent stale or unauthorized state exposure
-
- ### PWA Best Practices
-
- * Installable (meets PWA criteria)
- * Offline fallback page
- * App icons (all sizes)
- * Splash screen configured
- * Theme color defined
- * Start URL configured
- * Display mode appropriate
- * Cache versioning strategy
- * Cache invalidation on deploy
-
- ## Verification Checklist
-
- - [ ] Valid manifest.json
- - [ ] Service worker registered
- - [ ] SW doesn't cache auth content
- - [ ] Cache-control headers correct
- - [ ] Push notifications work (if applicable)
- - [ ] Installable on mobile
- - [ ] Offline fallback exists
- - [ ] Cache invalidation tested
+ * Cache invalidation on deploy must be correct (no stale content)
+
+ ## Context
+
+ A PWA is an opportunity to deliver native-like experience without an app store. But a bad PWA is worse than no PWA — stale content, broken offline states, and confusing installation prompts erode trust.
+
+ Consider: what would make users want to install this? What should work offline? How do we handle the transition between online and offline gracefully?
+
+ ## Driving Questions
+
+ * Would users actually want to install this as an app? Why or why not?
+ * What should the offline experience be, and what is it today?
+ * What happens when users go offline in the middle of something important?
+ * How do we handle cache invalidation without breaking the experience?
+ * What push notification opportunities exist that we're not using?
+ * What would make the installed experience better than the browser experience?
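The service-worker caching boundary kept in the Non-Negotiables above is easy to state and easy to regress. One way to make it concrete is a single routing predicate consulted by the fetch handler; the route prefixes and the auth-cookie signal below are illustrative assumptions, not this package's actual configuration:

```typescript
// Sketch of a SW caching-boundary rule. The prefixes are hypothetical
// examples of personalized/authorized surfaces, not real routes.
const NEVER_CACHE_PREFIXES = ["/api/", "/account", "/admin", "/billing"];

type CachePolicy = "cache-first" | "network-only";

function swCachePolicy(pathname: string, hasAuthCookie: boolean): CachePolicy {
  // Personalized/authorized content must bypass the SW cache entirely.
  if (hasAuthCookie) return "network-only";
  if (NEVER_CACHE_PREFIXES.some((p) => pathname.startsWith(p))) return "network-only";
  // Public, static content is safe to cache.
  return "cache-first";
}
```

Keeping the rule as one pure function makes the boundary testable, which is what the removed "validated by tests" requirement was after.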
package/assets/slash-commands/review-referral.md CHANGED
@@ -1,6 +1,6 @@
  ---
  name: review-referral
- description: Review referral - attribution, anti-fraud, rewards, clawback
+ description: Review referral - attribution, rewards, fraud prevention
  agent: coder
  ---
 
@@ -12,43 +12,30 @@ agent: coder
  * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
  * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
  * **Single-pass delivery**: no deferrals; deliver a complete assessment.
+ * **Explore beyond the spec**: identify growth opportunities and fraud vectors.
 
- ## Review Scope
-
- ### Referral (Anti-Abuse Baseline Required)
-
- * Referral must be measurable, abuse-resistant, and governed:
- * Attribution semantics
- * Reward lifecycle governance (including revocation/clawbacks)
- * Anti-fraud measures
- * Admin reporting/audit
- * Localized and instrumented
-
- ### Referral Anti-Fraud Minimum Baseline (Mandatory)
-
- * Define a minimum set of risk signals and enforcement measures, including:
- * Velocity controls
- * Account/device linkage posture
- * Risk-tiered enforcement
- * Reward delay/hold/freeze
- * Clawback conditions
- * Auditable manual review/appeal posture where applicable
-
- ### Referral Best Practices
-
- * Clear attribution window
- * Reward triggers well-defined
- * Double-sided rewards (referrer + referee)
- * Fraud detection signals
- * Admin visibility into referral chains
- * Automated clawback on refund/chargeback
-
- ## Verification Checklist
-
- - [ ] Attribution tracking works
- - [ ] Reward lifecycle defined
- - [ ] Velocity controls implemented
- - [ ] Device/account linkage checked
- - [ ] Clawback on fraud/refund
- - [ ] Admin can review referrals
- - [ ] Localized referral messaging
+ ## Tech Stack
+
+ * **Analytics**: PostHog
+ * **Database**: Neon (Postgres)
+
+ ## Non-Negotiables
+
+ * Referral rewards must have clawback capability for fraud
+ * Attribution must be auditable (who referred whom, when, reward status)
+ * Velocity controls must exist to prevent abuse
+
+ ## Context
+
+ Referral programs can drive explosive growth or become fraud magnets. The best referral programs make sharing natural and rewarding. The worst become a liability when abusers exploit them.
+
+ Consider both sides: what makes users want to share? And what prevents bad actors from gaming the system? A referral program that's easy to abuse is worse than no referral program.
+
+ ## Driving Questions
+
+ * Why would a user share this product with someone they know?
+ * How easy is it for a bad actor to generate fake referrals?
+ * What fraud patterns exist that we haven't addressed?
+ * What is the actual ROI of the referral program?
+ * Where do users drop off in the referral/share flow?
+ * If we redesigned referrals from scratch, what would be different?
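The velocity-control non-negotiable above (and the removed "account/device linkage" baseline) can be sketched as a pure check over recent referral events; the 24-hour window and the cap of 5 are illustrative thresholds, not values from this package:

```typescript
// Hypothetical velocity control: cap rewarded referrals per account and
// per device fingerprint inside a rolling window. Thresholds are examples.
interface ReferralEvent {
  referrerId: string;
  deviceId: string;
  at: number; // epoch ms
}

function exceedsVelocity(
  events: ReferralEvent[],
  now: number,
  windowMs = 24 * 60 * 60 * 1000,
  maxPerWindow = 5,
): boolean {
  const recent = events.filter((e) => now - e.at <= windowMs);
  const byReferrer = new Map<string, number>();
  const byDevice = new Map<string, number>();
  for (const e of recent) {
    byReferrer.set(e.referrerId, (byReferrer.get(e.referrerId) ?? 0) + 1);
    byDevice.set(e.deviceId, (byDevice.get(e.deviceId) ?? 0) + 1);
  }
  // Trip the control if either the account or the device is over the cap,
  // so rotating devices (or rotating accounts) alone does not evade it.
  return [...byReferrer.values(), ...byDevice.values()].some((n) => n > maxPerWindow);
}
```

In practice a check like this would feed a risk-tiered response (delay/hold the reward) rather than a hard block, matching the removed "reward delay/hold/freeze" baseline.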
package/assets/slash-commands/review-security.md CHANGED
@@ -1,6 +1,6 @@
  ---
  name: review-security
- description: Review security - OWASP, CSP/HSTS, CSRF, anti-bot, rate limiting
+ description: Review security - OWASP, headers, authentication, secrets
  agent: coder
  ---
 
@@ -8,46 +8,39 @@ agent: coder
 
  ## Mandate
 
- * Perform a **deep, thorough review** of security controls in this codebase.
+ * Perform a **deep, thorough review** of security in this codebase.
  * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
  * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
  * **Single-pass delivery**: no deferrals; deliver a complete assessment.
+ * **Explore beyond the spec**: identify vulnerabilities and hardening opportunities not listed here.
 
- ## Review Scope
+ ## Tech Stack
 
- ### Security Baseline
+ * **Rate Limiting**: Upstash Redis
+ * **Framework**: Next.js
+ * **Platform**: Vercel
 
- * OWASP Top 10:2025 taxonomy; OWASP ASVS (L2/L3) verification baseline.
- * Password UX masked + temporary reveal; no plaintext passwords in logs/returns/storage/telemetry.
- * MFA for Admin/SUPER_ADMIN; step-up for high-risk.
- * Risk-based anti-bot for auth and high-cost endpoints; integrate rate limits + consent gating.
+ ## Non-Negotiables
 
- ### Baseline Controls
+ * OWASP Top 10:2025 vulnerabilities must be addressed
+ * CSP, HSTS, X-Frame-Options, X-Content-Type-Options headers must be present
+ * CSRF protection on state-changing requests
+ * No plaintext passwords in logs, returns, storage, or telemetry
+ * MFA required for Admin/SUPER_ADMIN roles
+ * Required configuration must fail-fast at build/startup if missing
+ * Secrets must not be hardcoded or committed
 
- * CSP/HSTS/headers
- * CSRF where applicable
- * Upstash-backed rate limiting
- * PII scrubbing
- * Supply-chain hygiene
- * Measurable security
+ ## Context
 
- ### Verification Requirements
+ Security isn't a feature — it's a foundational property. A single vulnerability can compromise everything else. The review should think like an attacker: where are the weak points? What would I exploit?
 
- * **Security controls must be verifiable**: CSP/HSTS/security headers and CSRF (where applicable) must be covered by automated checks or security tests and included in release gates.
+ Beyond fixing vulnerabilities, consider the security architecture holistically. Is defense-in-depth implemented? Are there single points of failure? Would you trust this system with your own data?
 
- ### Configuration and Secrets Governance
+ ## Driving Questions
 
- * Required configuration must fail-fast at build/startup
- * Strict environment isolation (dev/stage/prod)
- * Rotation and incident remediation posture must be auditable and exercisable
-
- ## Verification Checklist
-
- - [ ] OWASP Top 10:2025 addressed
- - [ ] CSP headers configured
- - [ ] HSTS enabled
- - [ ] CSRF protection where needed
- - [ ] Rate limiting implemented
- - [ ] No plaintext passwords anywhere
- - [ ] MFA for admin roles
- - [ ] Security headers tested in CI
+ * What would an attacker target first?
+ * Where is rate limiting missing or insufficient?
+ * What attack vectors exist in authentication flows?
+ * How are secrets managed and what's the rotation strategy?
+ * What happens when a secret is compromised — is incident response exercisable?
+ * Where does "security by obscurity" substitute for real controls?
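The required header set above can be expressed as data plus a fail-fast check, echoing the removed "covered by automated checks" and "fail-fast at build/startup" requirements. The policy values below are illustrative defaults, not this package's actual policy:

```typescript
// Baseline headers in the { key, value } shape Next.js custom headers use.
// Values are example policies for illustration only.
const securityHeaders = [
  { key: "Strict-Transport-Security", value: "max-age=63072000; includeSubDomains; preload" },
  { key: "Content-Security-Policy", value: "default-src 'self'; frame-ancestors 'none'" },
  { key: "X-Frame-Options", value: "DENY" },
  { key: "X-Content-Type-Options", value: "nosniff" },
];

// Fail-fast check: every non-negotiable header must be present and non-empty.
// Could run at startup or as a release-gate test.
function assertBaselineHeaders(headers: { key: string; value: string }[]): void {
  const required = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Frame-Options",
    "X-Content-Type-Options",
  ];
  for (const key of required) {
    const h = headers.find((x) => x.key.toLowerCase() === key.toLowerCase());
    if (!h || h.value.trim() === "") throw new Error(`missing security header: ${key}`);
  }
}
```

Encoding the list once and asserting against it keeps "headers must be present" from silently drifting as routes change.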
package/assets/slash-commands/review-seo.md CHANGED
@@ -1,6 +1,6 @@
  ---
  name: review-seo
- description: Review SEO - metadata, Open Graph, canonical, hreflang, sitemap
+ description: Review SEO - discoverability, metadata, search rankings
  agent: coder
  ---
 
@@ -12,49 +12,29 @@ agent: coder
  * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
  * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
  * **Single-pass delivery**: no deferrals; deliver a complete assessment.
+ * **Explore beyond the spec**: identify what would make this product dominate search results.
 
- ## Review Scope
-
- ### SEO Requirements
-
- * SEO-first + SSR-first for indexable/discovery
- * Required elements:
- * Metadata (title, description)
- * Open Graph tags
- * Favicon (all sizes)
- * Canonical URLs
- * hreflang + x-default
- * schema.org structured data
- * sitemap.xml
- * robots.txt
-
- ### SEO/i18n/Canonicalization Verification
-
- * `/en/*` non-existence (must redirect)
- * hreflang/x-default correct
- * Sitemap containing only true variants
- * UGC canonical redirects
- * Locale routing invariants
-
- ### SEO Best Practices
-
- * Unique titles per page
- * Meta descriptions present
- * Heading hierarchy (single H1)
- * Image alt text
- * Internal linking structure
- * Page speed optimization
- * Mobile-friendly
- * No duplicate content
-
- ## Verification Checklist
-
- - [ ] All pages have unique titles
- - [ ] Meta descriptions present
- - [ ] Open Graph tags complete
- - [ ] Canonical URLs correct
- - [ ] hreflang implemented
- - [ ] sitemap.xml exists and valid
- - [ ] robots.txt configured
- - [ ] schema.org markup present
- - [ ] SSR for indexable content
+ ## Tech Stack
+
+ * **Framework**: Next.js (SSR-first for indexable/discovery)
+
+ ## Non-Negotiables
+
+ * All pages must have metadata (title, description)
+ * Canonical URLs must be correct (no duplicate content)
+ * sitemap.xml and robots.txt must exist and be correct
+
+ ## Context
+
+ SEO is about being found when people are looking for what you offer. Good SEO isn't tricks — it's making content genuinely useful and technically accessible to search engines.
+
+ Consider: what queries should this product rank for? What content would genuinely serve those searchers? Is the technical foundation (metadata, structure, performance) supporting or hindering discoverability?
+
+ ## Driving Questions
+
+ * What queries should this product rank #1 for?
+ * What content gaps exist that competitors are filling?
+ * Where are we losing rankings due to technical issues?
+ * What structured data opportunities are we missing?
+ * How does Core Web Vitals performance affect our search rankings?
+ * What would make Google consider this site authoritative in our domain?
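The canonical/hreflang rules being condensed above (including the removed `/en/*` verification, i.e. the default locale lives at the root and the English prefix must redirect) can be sketched as one small helper; the base URL, locale list, and function name are assumptions for illustration:

```typescript
// Hypothetical helper building a canonical URL and hreflang alternates
// (including x-default) for a localized route. Domain/locales are examples.
const BASE_URL = "https://example.com";
const LOCALES = ["en", "de", "ja"] as const;
const DEFAULT_LOCALE = "en";

function alternatesFor(path: string): { canonical: string; languages: Record<string, string> } {
  const languages: Record<string, string> = {};
  for (const locale of LOCALES) {
    // Default locale is unprefixed; other locales get a path prefix,
    // so a /en/* URL never exists as a true variant.
    languages[locale] = locale === DEFAULT_LOCALE ? `${BASE_URL}${path}` : `${BASE_URL}/${locale}${path}`;
  }
  languages["x-default"] = `${BASE_URL}${path}`;
  return { canonical: `${BASE_URL}${path}`, languages };
}
```

Driving all alternates and the sitemap from one function keeps "sitemap containing only true variants" enforceable by construction.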
package/assets/slash-commands/review-storage.md CHANGED
@@ -1,6 +1,6 @@
  ---
  name: review-storage
- description: Review storage - Vercel Blob upload governance, intent-based uploads
+ description: Review storage - uploads, file handling, security
  agent: coder
  ---
 
@@ -12,30 +12,30 @@ agent: coder
  * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
  * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
  * **Single-pass delivery**: no deferrals; deliver a complete assessment.
+ * **Explore beyond the spec**: identify security risks and cost optimization opportunities.
 
- ## Review Scope
+ ## Tech Stack
 
- ### Vercel Blob Upload Governance (Hard Requirement)
+ * **Storage**: Vercel Blob
+ * **Platform**: Vercel
 
- * All uploads must be **intent-based and server-verified**.
- * The client must upload to Vercel Blob first using short-lived, server-issued authorization (e.g., signed upload URL / token), then call a server finalize endpoint to persist the resulting Blob URL/key.
- * The server must validate the Blob URL/key ownership and namespace, and must match it against the originating upload intent (who/what/where/expiry/constraints) before attaching it to any resource.
- * The system must support safe retries and idempotent finalize; expired/abandoned intents must be cleanable and auditable.
+ ## Non-Negotiables
 
- ### Storage Best Practices
+ * Uploads must be intent-based and server-verified (no direct client uploads to permanent storage)
+ * Server must validate blob ownership before attaching to resources
+ * Abandoned uploads must be cleanable
 
- * No direct client-to-storage writes without server authorization
- * Upload size limits enforced server-side
- * File type validation (not just extension, actual content)
- * Malware scanning if applicable
- * Cleanup of orphaned/abandoned uploads
- * CDN caching strategy defined
+ ## Context
 
- ## Verification Checklist
+ File uploads are a common attack vector. Users upload things you don't expect. Files live longer than you plan. Storage costs accumulate quietly. A well-designed upload system is secure, efficient, and maintainable.
 
- - [ ] Intent-based upload flow implemented
- - [ ] Server-issued short-lived authorization
- - [ ] Server validates blob ownership before attach
- - [ ] Idempotent finalize endpoint
- - [ ] Abandoned uploads cleanable
- - [ ] File type/size validation server-side
+ Consider: what could a malicious user upload? What happens to files when the referencing entity is deleted? How does storage cost scale with usage?
+
+ ## Driving Questions
+
+ * What could a malicious user do through the upload flow?
+ * What happens to orphaned files when entities are deleted?
+ * How much are we spending on storage, and is it efficient?
+ * What file types do we accept, and should we?
+ * How do we handle upload failures gracefully?
+ * What content validation exists (type, size, safety)?
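The removed text spells out the finalize step in detail: the server must match the uploaded blob against the originating intent (who/what/where/expiry/constraints) before attaching it to any resource. A minimal sketch of that validation, with field names that are illustrative rather than Vercel Blob's actual API:

```typescript
// Hypothetical upload-intent record issued by the server alongside the
// short-lived upload authorization. Field names are examples.
interface UploadIntent {
  id: string;
  userId: string;         // who the intent was issued to
  pathnamePrefix: string; // namespace authorized, e.g. "avatars/u1/"
  expiresAt: number;      // epoch ms
  maxBytes: number;
}

// Finalize-time check: reject anything the intent did not authorize.
function validateFinalize(
  intent: UploadIntent,
  req: { userId: string; pathname: string; sizeBytes: number; now: number },
): boolean {
  if (req.now > intent.expiresAt) return false;                      // expired intent
  if (req.userId !== intent.userId) return false;                    // ownership
  if (!req.pathname.startsWith(intent.pathnamePrefix)) return false; // namespace
  if (req.sizeBytes > intent.maxBytes) return false;                 // size constraint
  return true;
}
```

A real finalize endpoint would also be idempotent (safe retries) and would record expired/abandoned intents for cleanup, per the removed hard requirement.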
package/assets/slash-commands/review-support.md CHANGED
@@ -1,6 +1,6 @@
  ---
  name: review-support
- description: Review support - contact, communications, newsletter
+ description: Review support - help experience, communications, user satisfaction
  agent: coder
  ---
 
@@ -12,44 +12,30 @@ agent: coder
  * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
  * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
  * **Single-pass delivery**: no deferrals; deliver a complete assessment.
+ * **Explore beyond the spec**: identify what would make users feel genuinely supported.
 
- ## Review Scope
-
- ### Support and Communications
-
- * Support/Contact surface must be:
- * Discoverable
- * Localized
- * WCAG AA compliant
- * SEO-complete
- * Privacy-safe
- * Auditable where relevant
- * Newsletter subscription/preferences must be consent-aware
- * Unsubscribe enforcement must be reliable
-
- ### Support Best Practices
-
- * Clear contact methods
- * Response time expectations
- * Help center / FAQ
- * Ticket tracking
- * Escalation paths
- * Support-assisted account recovery (with audit)
-
- ### Communications
-
- * Transactional emails (Resend)
- * Marketing emails (consent-gated)
- * In-app notifications
- * Push notifications (consent-gated)
- * Email templates localized
-
- ## Verification Checklist
-
- - [ ] Contact page discoverable
- - [ ] Contact page localized
- - [ ] WCAG AA compliant
- - [ ] Newsletter consent-aware
- - [ ] Unsubscribe works reliably
- - [ ] Transactional emails work
- - [ ] Support recovery audited
+ ## Tech Stack
+
+ * **Email**: Resend
+ * **Framework**: Next.js
+
+ ## Non-Negotiables
+
+ * Unsubscribe must work reliably
+ * Support contact must be discoverable
+ * Newsletter/marketing must respect consent preferences
+
+ ## Context
+
+ Support is where user trust is won or lost. When something goes wrong, how easy is it to get help? When help arrives, does it actually solve the problem? Great support turns frustrated users into loyal advocates.
+
+ Consider the entire help-seeking journey: finding help, explaining the problem, getting resolution. Where is there friction? Where do users give up? What would make users feel genuinely cared for?
+
+ ## Driving Questions
+
+ * When users need help, can they find it easily?
+ * What problems do users have that they can't solve themselves?
+ * How long does it take to resolve a typical support request?
+ * What would reduce support volume without reducing user satisfaction?
+ * Where do users get stuck and give up on getting help?
+ * What would make the support experience genuinely delightful?
package/assets/slash-commands/review-trust-safety.md ADDED
@@ -0,0 +1,42 @@
+ ---
+ name: review-trust-safety
+ description: Review trust & safety - abuse prevention, moderation, user protection
+ agent: coder
+ ---
+
+ # Trust & Safety Review
+
+ ## Mandate
+
+ * Perform a **deep, thorough review** of trust and safety in this codebase.
+ * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
+ * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
+ * **Single-pass delivery**: no deferrals; deliver a complete assessment.
+ * **Explore beyond the spec**: identify abuse vectors before bad actors find them.
+
+ ## Tech Stack
+
+ * **Analytics**: PostHog
+ * **Database**: Neon (Postgres)
+ * **Workflows**: Upstash Workflows + QStash
+
+ ## Non-Negotiables
+
+ * All enforcement actions must be auditable (who/when/why)
+ * Appeals process must exist for affected users
+ * Graduated response levels must be defined (warn → restrict → suspend → ban)
+
+ ## Context
+
+ Trust & safety is about protecting users — from each other and from malicious actors. Every platform eventually attracts abuse. The question is whether you're prepared for it or scrambling to react.
+
+ Consider: what would a bad actor try to do? How would we detect it? How would we respond? What about the false positives — innocent users caught by automated systems? A good T&S system is effective against abuse AND fair to legitimate users.
+
+ ## Driving Questions
+
+ * What would a motivated bad actor try to do on this platform?
+ * How would we detect coordinated abuse or bot networks?
+ * What happens when automated moderation gets it wrong?
+ * How do affected users appeal decisions, and is it fair?
+ * What abuse patterns exist that we haven't addressed?
+ * What would make users trust that we're protecting them?
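The new file's Non-Negotiables name a graduated ladder (warn → restrict → suspend → ban) plus who/when/why auditability. A minimal sketch of both, with strike thresholds that are illustrative assumptions:

```typescript
// Graduated enforcement ladder. Thresholds are example values only.
type Enforcement = "warn" | "restrict" | "suspend" | "ban";

function nextAction(priorStrikes: number): Enforcement {
  if (priorStrikes <= 0) return "warn";     // first confirmed violation
  if (priorStrikes === 1) return "restrict";
  if (priorStrikes === 2) return "suspend";
  return "ban";
}

// Every action is recorded as who/when/why, and stays appealable,
// matching the auditability and appeals non-negotiables.
interface EnforcementRecord {
  userId: string;
  action: Enforcement;
  actorId: string;  // who decided (moderator or automated system)
  reason: string;   // why
  at: number;       // when (epoch ms)
  appealable: boolean;
}

function recordEnforcement(
  userId: string,
  priorStrikes: number,
  actorId: string,
  reason: string,
  at: number,
): EnforcementRecord {
  return { userId, action: nextAction(priorStrikes), actorId, reason, at, appealable: true };
}
```

Making the ladder a pure function of prior strikes keeps enforcement consistent and lets the appeal path reconstruct exactly why an action fired.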
package/assets/slash-commands/review-uiux.md CHANGED
@@ -1,6 +1,6 @@
  ---
  name: review-uiux
- description: Review UI/UX - design system, tokens, accessibility, guidance
+ description: Review UI/UX - design system, accessibility, user experience
  agent: coder
  ---
 
@@ -12,45 +12,28 @@ agent: coder
  * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
  * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
  * **Single-pass delivery**: no deferrals; deliver a complete assessment.
+ * **Explore beyond the spec**: if the current design needs fundamental rethinking, propose it.
 
- ## Review Scope
-
- ### UX, Design System, and Guidance
-
- * Design-system driven UI (tokens)
- * Dark/light theme support
- * WCAG AA compliance
- * CLS-safe (no layout shifts)
- * Responsive design (mobile-first, desktop-second)
- * Iconify for icons; no emoji in UI content
-
- ### Guidance Requirements
-
- * Guidance is mandatory for all user-facing features and monetization flows:
- * Discoverable
- * Clear
- * Dismissible with re-entry
- * Localized and measurable
- * Governed by eligibility and frequency controls
-
- ### Design System Best Practices
-
- * Consistent spacing scale
- * Typography scale defined
- * Color tokens (semantic, not hardcoded)
- * Component library documented
- * Motion/animation guidelines
- * Error state patterns
- * Loading state patterns
- * Empty state patterns
-
- ## Verification Checklist
-
- - [ ] Design tokens used consistently
- - [ ] Dark/light theme works
- - [ ] WCAG AA verified
- - [ ] No CLS issues
- - [ ] Mobile-first responsive
- - [ ] No emoji in UI
- - [ ] Guidance present on key flows
- - [ ] Guidance is dismissible + re-entrable
+ ## Tech Stack
+
+ * **Framework**: Next.js
+ * **Icons**: Iconify
+
+ ## Non-Negotiables
+
+ * WCAG AA accessibility compliance
+
+ ## Context
+
+ UI/UX determines how users perceive and interact with the product. A great UI isn't just "correct" — it's delightful, intuitive, and makes complex tasks feel simple.
+
+ The review should consider: does the current design truly serve users well? Or does it need fundamental rethinking? Small tweaks to a flawed design won't make it excellent. If redesign is warranted, propose it.
+
+ ## Driving Questions
+
+ * Where do users get confused, frustrated, or give up?
+ * If we were designing this from scratch today, what would be different?
+ * What would make this experience genuinely delightful, not just functional?
+ * How does the design system enable or constrain good UX?
+ * What are competitors doing that users might expect?
+ * Where is the UI just "okay" when it could be exceptional?
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@sylphx/flow",
- "version": "2.10.0",
+ "version": "2.12.0",
  "description": "One CLI to rule them all. Unified orchestration layer for Claude Code, OpenCode, Cursor and all AI development tools. Auto-detection, auto-installation, auto-upgrade.",
  "type": "module",
  "bin": {