@sylphx/flow 2.11.0 → 2.12.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (27)
  1. package/CHANGELOG.md +6 -0
  2. package/assets/slash-commands/review-account-security.md +16 -23
  3. package/assets/slash-commands/review-admin.md +17 -32
  4. package/assets/slash-commands/review-auth.md +16 -17
  5. package/assets/slash-commands/review-billing.md +16 -25
  6. package/assets/slash-commands/review-code-quality.md +18 -19
  7. package/assets/slash-commands/review-data-architecture.md +19 -18
  8. package/assets/slash-commands/review-database.md +19 -15
  9. package/assets/slash-commands/review-delivery.md +19 -30
  10. package/assets/slash-commands/review-discovery.md +19 -15
  11. package/assets/slash-commands/review-growth.md +15 -32
  12. package/assets/slash-commands/review-i18n.md +15 -28
  13. package/assets/slash-commands/review-ledger.md +19 -14
  14. package/assets/slash-commands/review-observability.md +16 -18
  15. package/assets/slash-commands/review-operability.md +16 -24
  16. package/assets/slash-commands/review-performance.md +15 -21
  17. package/assets/slash-commands/review-pricing.md +17 -22
  18. package/assets/slash-commands/review-privacy.md +17 -28
  19. package/assets/slash-commands/review-pwa.md +15 -18
  20. package/assets/slash-commands/review-referral.md +16 -25
  21. package/assets/slash-commands/review-security.md +20 -28
  22. package/assets/slash-commands/review-seo.md +22 -33
  23. package/assets/slash-commands/review-storage.md +18 -15
  24. package/assets/slash-commands/review-support.md +18 -20
  25. package/assets/slash-commands/review-trust-safety.md +42 -0
  26. package/assets/slash-commands/review-uiux.md +14 -24
  27. package/package.json +1 -1
package/assets/slash-commands/review-security.md CHANGED
@@ -1,6 +1,6 @@
  ---
  name: review-security
- description: Review security - OWASP, CSP/HSTS, CSRF, anti-bot, rate limiting
+ description: Review security - OWASP, headers, authentication, secrets
  agent: coder
  ---

@@ -8,11 +8,11 @@ agent: coder

  ## Mandate

- * Perform a **deep, thorough review** of security controls in this codebase.
+ * Perform a **deep, thorough review** of security in this codebase.
  * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
  * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
  * **Single-pass delivery**: no deferrals; deliver a complete assessment.
- * **Explore beyond the spec**: identify vulnerabilities and hardening opportunities.
+ * **Explore beyond the spec**: identify vulnerabilities and hardening opportunities not listed here.

  ## Tech Stack

@@ -20,35 +20,27 @@ agent: coder
  * **Framework**: Next.js
  * **Platform**: Vercel

- ## Review Scope
+ ## Non-Negotiables

- ### Security Baseline
+ * OWASP Top 10:2025 vulnerabilities must be addressed
+ * CSP, HSTS, X-Frame-Options, X-Content-Type-Options headers must be present
+ * CSRF protection on state-changing requests
+ * No plaintext passwords in logs, returns, storage, or telemetry
+ * MFA required for Admin/SUPER_ADMIN roles
+ * Required configuration must fail-fast at build/startup if missing
+ * Secrets must not be hardcoded or committed

- * OWASP Top 10:2025 taxonomy; OWASP ASVS (L2/L3) verification baseline.
- * Password UX masked + temporary reveal; no plaintext passwords in logs/returns/storage/telemetry.
- * MFA for Admin/SUPER_ADMIN; step-up for high-risk.
- * Risk-based anti-bot for auth and high-cost endpoints; integrate rate limits + consent gating.
+ ## Context

- ### Baseline Controls
+ Security isn't a feature — it's a foundational property. A single vulnerability can compromise everything else. The review should think like an attacker: where are the weak points? What would I exploit?

- * CSP/HSTS/headers
- * CSRF where applicable
- * Upstash-backed rate limiting
- * PII scrubbing
- * Supply-chain hygiene
- * Measurable security
+ Beyond fixing vulnerabilities, consider the security architecture holistically. Is defense-in-depth implemented? Are there single points of failure? Would you trust this system with your own data?

- ### Configuration and Secrets Governance
+ ## Driving Questions

- * Required configuration must fail-fast at build/startup
- * Strict environment isolation (dev/stage/prod)
- * Rotation and incident remediation posture must be auditable and exercisable
-
- ## Key Areas to Explore
-
- * What OWASP Top 10 vulnerabilities exist in the current implementation?
- * How comprehensive are the security headers (CSP, HSTS, etc.)?
+ * What would an attacker target first?
  * Where is rate limiting missing or insufficient?
- * How are secrets managed and what is the rotation strategy?
- * What attack vectors exist for the authentication flows?
- * How does the system detect and respond to security incidents?
+ * What attack vectors exist in authentication flows?
+ * How are secrets managed and what's the rotation strategy?
+ * What happens when a secret is compromised? Is incident response exercisable?
+ * Where does "security by obscurity" substitute for real controls?
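The header baseline named in the new security Non-Negotiables (CSP, HSTS, X-Frame-Options, X-Content-Type-Options) can be made concrete with a short sketch. This is illustrative code, not part of @sylphx/flow; `securityHeaders` is a hypothetical helper and the CSP value is a restrictive placeholder a real app would extend with its own sources:

```typescript
// Hypothetical helper producing the required header baseline.
type Header = { key: string; value: string };

function securityHeaders(): Header[] {
  return [
    // HSTS: force HTTPS for two years, including subdomains.
    { key: "Strict-Transport-Security", value: "max-age=63072000; includeSubDomains" },
    // CSP: restrictive placeholder; real apps add their own script/style/img sources.
    { key: "Content-Security-Policy", value: "default-src 'self'; frame-ancestors 'none'" },
    // Legacy complement to CSP frame-ancestors: refuse all framing.
    { key: "X-Frame-Options", value: "DENY" },
    // Stop browsers from MIME-sniffing responses.
    { key: "X-Content-Type-Options", value: "nosniff" },
  ];
}
```

In a Next.js app this would typically be returned from the async `headers()` hook in `next.config.js` with a catch-all `source` matcher, so every route carries the baseline the review checks for.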
package/assets/slash-commands/review-seo.md CHANGED
@@ -1,6 +1,6 @@
  ---
  name: review-seo
- description: Review SEO - metadata, Open Graph, canonical, hreflang, sitemap
+ description: Review SEO - discoverability, metadata, search rankings
  agent: coder
  ---

@@ -12,40 +12,29 @@ agent: coder
  * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
  * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
  * **Single-pass delivery**: no deferrals; deliver a complete assessment.
- * **Explore beyond the spec**: identify improvements for discoverability and search rankings.
+ * **Explore beyond the spec**: identify what would make this product dominate search results.

  ## Tech Stack

  * **Framework**: Next.js (SSR-first for indexable/discovery)

- ## Review Scope
-
- ### SEO Requirements
-
- * SEO-first + SSR-first for indexable/discovery
- * Required elements:
-   * Metadata (title, description)
-   * Open Graph tags
-   * Favicon (all sizes)
-   * Canonical URLs
-   * hreflang + x-default
-   * schema.org structured data
-   * sitemap.xml
-   * robots.txt
-
- ### SEO/i18n/Canonicalization Verification
-
- * `/en/*` non-existence (must redirect)
- * hreflang/x-default correct
- * Sitemap containing only true variants
- * UGC canonical redirects
- * Locale routing invariants
-
- ## Key Areas to Explore
-
- * How does the site perform in search engine results currently?
- * What pages are missing proper metadata or structured data?
- * How does the sitemap handle dynamic content and pagination?
- * Are there duplicate content issues or canonicalization problems?
- * What opportunities exist for featured snippets or rich results?
- * How does page load performance affect SEO rankings?
+ ## Non-Negotiables
+
+ * All pages must have metadata (title, description)
+ * Canonical URLs must be correct (no duplicate content)
+ * sitemap.xml and robots.txt must exist and be correct
+
+ ## Context
+
+ SEO is about being found when people are looking for what you offer. Good SEO isn't tricks — it's making content genuinely useful and technically accessible to search engines.
+
+ Consider: what queries should this product rank for? What content would genuinely serve those searchers? Is the technical foundation (metadata, structure, performance) supporting or hindering discoverability?
+
+ ## Driving Questions
+
+ * What queries should this product rank #1 for?
+ * What content gaps exist that competitors are filling?
+ * Where are we losing rankings due to technical issues?
+ * What structured data opportunities are we missing?
+ * How does Core Web Vitals performance affect our search rankings?
+ * What would make Google consider this site authoritative in our domain?
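The SEO Non-Negotiables above (title, description, correct canonical) are mechanically checkable invariants. A minimal sketch of such a check; `PageMeta` and `auditMeta` are illustrative names, not part of the package:

```typescript
// Illustrative page-metadata audit for the SEO Non-Negotiables.
interface PageMeta {
  title?: string;
  description?: string;
  canonical?: string;
}

// Returns a list of human-readable issues; empty list means the page passes.
function auditMeta(path: string, meta: PageMeta): string[] {
  const issues: string[] = [];
  if (!meta.title?.trim()) issues.push(`${path}: missing <title>`);
  if (!meta.description?.trim()) issues.push(`${path}: missing meta description`);
  // Canonical must be an absolute HTTPS URL to prevent duplicate-content splits.
  if (!meta.canonical?.startsWith("https://")) {
    issues.push(`${path}: missing or non-absolute canonical URL`);
  }
  return issues;
}
```

A review could run a check like this over every route emitted by the sitemap and report the offenders as findings rather than re-auditing pages by hand.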
package/assets/slash-commands/review-storage.md CHANGED
@@ -1,6 +1,6 @@
  ---
  name: review-storage
- description: Review storage - Vercel Blob upload governance, intent-based uploads
+ description: Review storage - uploads, file handling, security
  agent: coder
  ---

@@ -12,27 +12,30 @@ agent: coder
  * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
  * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
  * **Single-pass delivery**: no deferrals; deliver a complete assessment.
- * **Explore beyond the spec**: identify improvements for security, reliability, and cost efficiency.
+ * **Explore beyond the spec**: identify security risks and cost optimization opportunities.

  ## Tech Stack

  * **Storage**: Vercel Blob
  * **Platform**: Vercel

- ## Review Scope
+ ## Non-Negotiables

- ### Vercel Blob Upload Governance (Hard Requirement)
+ * Uploads must be intent-based and server-verified (no direct client uploads to permanent storage)
+ * Server must validate blob ownership before attaching to resources
+ * Abandoned uploads must be cleanable

- * All uploads must be **intent-based and server-verified**.
- * The client must upload to Vercel Blob first using short-lived, server-issued authorization (e.g., signed upload URL / token), then call a server finalize endpoint to persist the resulting Blob URL/key.
- * The server must validate the Blob URL/key ownership and namespace, and must match it against the originating upload intent (who/what/where/expiry/constraints) before attaching it to any resource.
- * The system must support safe retries and idempotent finalize; expired/abandoned intents must be cleanable and auditable.
+ ## Context

- ## Key Areas to Explore
+ File uploads are a common attack vector. Users upload things you don't expect. Files live longer than you plan. Storage costs accumulate quietly. A well-designed upload system is secure, efficient, and maintainable.

- * How are uploads currently implemented and do they follow intent-based pattern?
- * What security vulnerabilities exist in the upload flow?
- * How are abandoned/orphaned uploads handled?
- * What is the cost implication of current storage patterns?
- * How does the system handle large files and upload failures?
- * What content validation (type, size, malware) exists?
+ Consider: what could a malicious user upload? What happens to files when the referencing entity is deleted? How does storage cost scale with usage?
+
+ ## Driving Questions
+
+ * What could a malicious user do through the upload flow?
+ * What happens to orphaned files when entities are deleted?
+ * How much are we spending on storage, and is it efficient?
+ * What file types do we accept, and should we?
+ * How do we handle upload failures gracefully?
+ * What content validation exists (type, size, safety)?
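The intent-based pattern described in the removed governance text (issue intent, upload, server-verified finalize, cleanup) can be sketched end to end. This is an illustrative sketch with an in-memory store standing in for the database and Vercel Blob; every name here (`UploadIntent`, `createIntent`, `finalizeUpload`, `sweepExpired`) is an assumption, not the package's API:

```typescript
// Hypothetical intent record; a real system would persist this in the database.
interface UploadIntent {
  id: string;
  userId: string;
  expiresAt: number; // epoch ms
  finalized: boolean;
}

const intents = new Map<string, UploadIntent>();

// Step 1: the server issues a short-lived intent before any upload happens.
function createIntent(userId: string, ttlMs = 10 * 60_000): UploadIntent {
  const intent: UploadIntent = {
    id: `intent_${intents.size + 1}`,
    userId,
    expiresAt: Date.now() + ttlMs,
    finalized: false,
  };
  intents.set(intent.id, intent);
  return intent;
}

// Step 2: after the client uploads, it calls finalize; the server verifies
// ownership and expiry before attaching the blob to anything. Finalize is
// idempotent: repeating it for an already-finalized intent succeeds as a no-op.
function finalizeUpload(intentId: string, userId: string): boolean {
  const intent = intents.get(intentId);
  if (!intent || intent.userId !== userId) return false; // unknown or not owner
  if (Date.now() > intent.expiresAt) return false;       // expired intent
  intent.finalized = true;
  return true;
}

// Step 3: a cleanup job deletes expired, never-finalized intents (and would
// delete their orphaned blobs) so abandoned uploads stay cleanable.
function sweepExpired(now = Date.now()): number {
  let removed = 0;
  for (const [id, intent] of intents) {
    if (!intent.finalized && now > intent.expiresAt) {
      intents.delete(id);
      removed++;
    }
  }
  return removed;
}
```

The review questions above ("what happens to orphaned files", "how do we handle upload failures") map directly onto steps 2 and 3 of this flow.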
package/assets/slash-commands/review-support.md CHANGED
@@ -1,6 +1,6 @@
  ---
  name: review-support
- description: Review support - contact, communications, newsletter
+ description: Review support - help experience, communications, user satisfaction
  agent: coder
  ---

@@ -12,32 +12,30 @@ agent: coder
  * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
  * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
  * **Single-pass delivery**: no deferrals; deliver a complete assessment.
- * **Explore beyond the spec**: identify user satisfaction improvements and support efficiency gains.
+ * **Explore beyond the spec**: identify what would make users feel genuinely supported.

  ## Tech Stack

  * **Email**: Resend
  * **Framework**: Next.js

- ## Review Scope
+ ## Non-Negotiables

- ### Support and Communications
+ * Unsubscribe must work reliably
+ * Support contact must be discoverable
+ * Newsletter/marketing must respect consent preferences

- * Support/Contact surface must be:
-   * Discoverable
-   * Localized
-   * WCAG AA compliant
-   * SEO-complete
-   * Privacy-safe
-   * Auditable where relevant
- * Newsletter subscription/preferences must be consent-aware
- * Unsubscribe enforcement must be reliable
+ ## Context

- ## Key Areas to Explore
+ Support is where user trust is won or lost. When something goes wrong, how easy is it to get help? When help arrives, does it actually solve the problem? Great support turns frustrated users into loyal advocates.

- * How do users find help when they need it?
- * What self-service support options exist (FAQ, help center)?
- * How are support requests tracked and resolved?
- * What is the email deliverability and engagement rate?
- * How does the newsletter system handle bounces and complaints?
- * What support tools do agents have for helping users?
+ Consider the entire help-seeking journey: finding help, explaining the problem, getting resolution. Where is there friction? Where do users give up? What would make users feel genuinely cared for?
+
+ ## Driving Questions
+
+ * When users need help, can they find it easily?
+ * What problems do users have that they can't solve themselves?
+ * How long does it take to resolve a typical support request?
+ * What would reduce support volume without reducing user satisfaction?
+ * Where do users get stuck and give up on getting help?
+ * What would make the support experience genuinely delightful?
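Two of the Non-Negotiables above (reliable unsubscribe, consent-respecting marketing) reduce to a small default-deny invariant. An illustrative sketch with an in-memory map standing in for the real preference store; none of these names come from the package:

```typescript
// Hypothetical consent store: email -> currently subscribed?
const consent = new Map<string, boolean>();

function subscribe(email: string): void {
  consent.set(email, true);
}

// Unsubscribe always succeeds, even for unknown addresses (idempotent).
function unsubscribe(email: string): void {
  consent.set(email, false);
}

// Default-deny: no consent record means no marketing email.
function canSendMarketing(email: string): boolean {
  return consent.get(email) === true;
}
```

The key design choice is default-deny: a send path that checks `canSendMarketing` cannot accidentally mail an address that never opted in or that has opted out.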
package/assets/slash-commands/review-trust-safety.md ADDED
@@ -0,0 +1,42 @@
+ ---
+ name: review-trust-safety
+ description: Review trust & safety - abuse prevention, moderation, user protection
+ agent: coder
+ ---
+
+ # Trust & Safety Review
+
+ ## Mandate
+
+ * Perform a **deep, thorough review** of trust and safety in this codebase.
+ * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
+ * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
+ * **Single-pass delivery**: no deferrals; deliver a complete assessment.
+ * **Explore beyond the spec**: identify abuse vectors before bad actors find them.
+
+ ## Tech Stack
+
+ * **Analytics**: PostHog
+ * **Database**: Neon (Postgres)
+ * **Workflows**: Upstash Workflows + QStash
+
+ ## Non-Negotiables
+
+ * All enforcement actions must be auditable (who/when/why)
+ * Appeals process must exist for affected users
+ * Graduated response levels must be defined (warn → restrict → suspend → ban)
+
+ ## Context
+
+ Trust & safety is about protecting users — from each other and from malicious actors. Every platform eventually attracts abuse. The question is whether you're prepared for it or scrambling to react.
+
+ Consider: what would a bad actor try to do? How would we detect it? How would we respond? What about the false positives — innocent users caught by automated systems? A good T&S system is effective against abuse AND fair to legitimate users.
+
+ ## Driving Questions
+
+ * What would a motivated bad actor try to do on this platform?
+ * How would we detect coordinated abuse or bot networks?
+ * What happens when automated moderation gets it wrong?
+ * How do affected users appeal decisions, and is it fair?
+ * What abuse patterns exist that we haven't addressed?
+ * What would make users trust that we're protecting them?
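The graduated response ladder and audit requirement in the Non-Negotiables above can be captured in a few lines. An illustrative sketch, not the package's actual enforcement model; `escalate`, `auditLog`, and the `EnforcementAction` shape are assumptions:

```typescript
// The required ladder: warn -> restrict -> suspend -> ban.
const LEVELS = ["warn", "restrict", "suspend", "ban"] as const;
type Level = (typeof LEVELS)[number];

// Every action records who acted, when, and why (the auditability requirement).
interface EnforcementAction {
  userId: string;
  level: Level;
  actor: string;  // who
  at: string;     // when (ISO timestamp)
  reason: string; // why
}

const auditLog: EnforcementAction[] = [];

// Escalate one rung per prior action against the user; never jump straight
// to "ban", and cap at the top of the ladder.
function escalate(userId: string, actor: string, reason: string): Level {
  const prior = auditLog.filter((a) => a.userId === userId).length;
  const level = LEVELS[Math.min(prior, LEVELS.length - 1)];
  auditLog.push({ userId, level, actor, at: new Date().toISOString(), reason });
  return level;
}
```

An appeals path would then operate on `auditLog` entries, which is exactly why the who/when/why record is non-negotiable: without it there is nothing concrete to appeal.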
package/assets/slash-commands/review-uiux.md CHANGED
@@ -1,6 +1,6 @@
  ---
  name: review-uiux
- description: Review UI/UX - design system, tokens, accessibility, guidance
+ description: Review UI/UX - design system, accessibility, user experience
  agent: coder
  ---

@@ -12,38 +12,28 @@ agent: coder
  * **Delegate to multiple workers** to research different aspects in parallel; you act as the **final gate** to synthesize and verify quality.
  * Deliverables must be stated as **findings, gaps, and actionable recommendations**.
  * **Single-pass delivery**: no deferrals; deliver a complete assessment.
- * **Explore beyond the spec**: identify usability improvements and design inconsistencies.
+ * **Explore beyond the spec**: if the current design needs fundamental rethinking, propose it.

  ## Tech Stack

  * **Framework**: Next.js
  * **Icons**: Iconify

- ## Review Scope
+ ## Non-Negotiables

- ### UX, Design System, and Guidance
+ * WCAG AA accessibility compliance

- * Design-system driven UI (tokens)
- * Dark/light theme support
- * WCAG AA compliance
- * CLS-safe (no layout shifts)
- * Responsive design (mobile-first, desktop-second)
- * Iconify for icons; no emoji in UI content
+ ## Context

- ### Guidance Requirements
+ UI/UX determines how users perceive and interact with the product. A great UI isn't just "correct" — it's delightful, intuitive, and makes complex tasks feel simple.

- * Guidance is mandatory for all user-facing features and monetization flows:
-   * Discoverable
-   * Clear
-   * Dismissible with re-entry
-   * Localized and measurable
-   * Governed by eligibility and frequency controls
+ The review should consider: does the current design truly serve users well? Or does it need fundamental rethinking? Small tweaks to a flawed design won't make it excellent. If redesign is warranted, propose it.

- ## Key Areas to Explore
+ ## Driving Questions

- * How consistent is the design system across the application?
- * What accessibility issues exist and affect real users?
- * Where do users get confused or drop off in key flows?
- * How does the mobile experience compare to desktop?
- * What guidance/onboarding is missing for complex features?
- * How does the dark/light theme implementation handle edge cases?
+ * Where do users get confused, frustrated, or give up?
+ * If we were designing this from scratch today, what would be different?
+ * What would make this experience genuinely delightful, not just functional?
+ * How does the design system enable or constrain good UX?
+ * What are competitors doing that users might expect?
+ * Where is the UI just "okay" when it could be exceptional?
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@sylphx/flow",
-   "version": "2.11.0",
+   "version": "2.12.0",
    "description": "One CLI to rule them all. Unified orchestration layer for Claude Code, OpenCode, Cursor and all AI development tools. Auto-detection, auto-installation, auto-upgrade.",
    "type": "module",
    "bin": {