qa-workflow-cc 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (39)
  1. package/README.md +461 -0
  2. package/VERSION +1 -0
  3. package/bin/install.js +116 -0
  4. package/commands/qa/continue.md +77 -0
  5. package/commands/qa/full.md +149 -0
  6. package/commands/qa/init.md +105 -0
  7. package/commands/qa/resume.md +91 -0
  8. package/commands/qa/status.md +66 -0
  9. package/package.json +28 -0
  10. package/skills/qa/SKILL.md +420 -0
  11. package/skills/qa/references/continuation-format.md +58 -0
  12. package/skills/qa/references/exit-criteria.md +53 -0
  13. package/skills/qa/references/lifecycle.md +181 -0
  14. package/skills/qa/references/model-profiles.md +77 -0
  15. package/skills/qa/templates/agent-skeleton.md +733 -0
  16. package/skills/qa/templates/component-test.md +1088 -0
  17. package/skills/qa/templates/domain-research-queries.md +101 -0
  18. package/skills/qa/templates/domain-security-profiles.md +182 -0
  19. package/skills/qa/templates/e2e-test.md +1200 -0
  20. package/skills/qa/templates/nielsen-heuristics.md +274 -0
  21. package/skills/qa/templates/performance-benchmarks-base.md +321 -0
  22. package/skills/qa/templates/qa-report-template.md +271 -0
  23. package/skills/qa/templates/security-checklist-owasp.md +451 -0
  24. package/skills/qa/templates/stop-points/bootstrap-complete.md +36 -0
  25. package/skills/qa/templates/stop-points/certified.md +25 -0
  26. package/skills/qa/templates/stop-points/escalated.md +32 -0
  27. package/skills/qa/templates/stop-points/fix-ready.md +43 -0
  28. package/skills/qa/templates/stop-points/phase-transition.md +4 -0
  29. package/skills/qa/templates/stop-points/status-dashboard.md +32 -0
  30. package/skills/qa/templates/test-standards.md +652 -0
  31. package/skills/qa/templates/unit-test.md +998 -0
  32. package/skills/qa/templates/visual-regression.md +418 -0
  33. package/skills/qa/workflows/bootstrap.md +45 -0
  34. package/skills/qa/workflows/decision-gate.md +66 -0
  35. package/skills/qa/workflows/fix-execute.md +132 -0
  36. package/skills/qa/workflows/fix-plan.md +52 -0
  37. package/skills/qa/workflows/report-phase.md +64 -0
  38. package/skills/qa/workflows/test-phase.md +86 -0
  39. package/skills/qa/workflows/verify-phase.md +65 -0
+++ package/skills/qa/templates/nielsen-heuristics.md
@@ -0,0 +1,274 @@

# Nielsen's 10 Usability Heuristics — QA Evaluation Rubric

Reference: nngroup.com/articles/ten-usability-heuristics/

## Scoring Scale

| Score | Level | Definition |
|-------|-------|------------|
| 5 | Excellent | Exceeds expectations, delightful UX, no issues found |
| 4 | Good | Meets standards with minor gaps, no usability barriers |
| 3 | Acceptable | Functional, but with room for improvement; minor friction points |
| 2 | Below Standard | Noticeable UX issues affecting task completion |
| 1 | Poor | Significant usability problems preventing effective use |

---
## H1: Visibility of System Status

**Principle:** The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.

### What to Check

| Check | Expected Behavior |
|-------|-------------------|
| Loading states (skeleton/spinner) | All data fetches show loading indicators |
| Progress indicators | Multi-step operations show progress (upload, save) |
| Action feedback | Button presses, form submissions give immediate feedback |
| Real-time updates | Live data shows connection status (online/offline) |
| Error states | Failures show clear error with recovery option |
| Empty states | Empty collections show helpful CTA messaging |
| Network status | Offline detection with graceful degradation |

### Scoring Guide

- 5: All async operations show loading, all actions give immediate feedback, real-time status shown
- 4: Most operations show status, minor gaps (e.g., no progress on batch uploads)
- 3: Loading states exist but inconsistent, some actions lack feedback
- 2: Major operations show no loading state, user unsure if action worked
- 1: No loading indicators, no action feedback, user left guessing

---
## H2: Match Between System and Real World

**Principle:** The system should speak the user's language, with words, phrases, and concepts familiar to the user, rather than system-oriented terms.

### What to Check

| Check | Expected |
|-------|----------|
| Terminology | Domain-specific language familiar to target users |
| Workflow mapping | UI flow matches real-world process order |
| Error messages | Human-readable, not technical codes |
| Navigation labels | Plain language describing destinations |
| Date formats | Relative or locale-appropriate formatting |
| Currency/numbers | Properly formatted with locale conventions |
| Status labels | Business-meaningful state names |

### Scoring Guide

- 5: All terminology matches target user vocabulary, natural workflow mapping
- 4: Language mostly appropriate, occasional technical terms
- 3: Mix of user-friendly and technical language
- 2: Frequent use of developer jargon or database terms
- 1: Interface feels like a database admin tool

---
## H3: User Control and Freedom

**Principle:** Users often choose system functions by mistake and need a clearly marked "emergency exit" to leave the unwanted state without going through an extended dialogue.

### What to Check

| Check | Expected |
|-------|----------|
| Modal dismiss | All modals have X button AND backdrop/escape dismiss |
| Back navigation | Back button works on all screens |
| Undo actions | State changes can be reversed |
| Cancel operations | All forms have a Cancel option, always available |
| Destructive actions | Confirmation dialog before delete/archive |
| Form preservation | Leaving a partially filled form shows a warning |
| Filter/search reset | Clear-all-filters button when filters are active |

### Scoring Guide

- 5: Every action is reversible, clear exits everywhere, form state preserved
- 4: Most actions reversible, modals dismissible, minor gaps
- 3: Some irreversible actions without warning, back nav mostly works
- 2: Significant actions can't be undone, modal traps exist
- 1: Users get stuck frequently, no undo/exit paths

---
## H4: Consistency and Standards

**Principle:** Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.

### What to Check

| Check | Expected |
|-------|----------|
| Design system | Consistent use of component library across all screens |
| Color palette | Consistent color usage per design tokens |
| Typography | Consistent font sizes and weights |
| Spacing | Consistent use of spacing tokens |
| Visual style | Consistent border, shadow, and corner radius patterns |
| Button hierarchy | Primary/Secondary/Tertiary consistent across screens |
| Icon usage | Consistent icon library, same icon for same action |
| Navigation patterns | Same navigation behavior across all sections |

### Scoring Guide

- 5: Perfect design system adherence, visually cohesive, no inconsistencies
- 4: Mostly consistent, 1-2 minor deviations from the design system
- 3: Noticeable inconsistencies (mixed button styles, varying spacing)
- 2: Significant style mixing, feels like different apps
- 1: No consistent design system applied

---
## H5: Error Prevention

**Principle:** Even better than good error messages is a careful design which prevents a problem from occurring in the first place.

### What to Check

| Check | Expected |
|-------|----------|
| Form validation | Inline validation before submission |
| Required fields | Clear marking of required vs optional |
| Destructive confirmation | Confirmation dialog before delete/archive |
| Input constraints | Format masks, character limits enforced |
| Duplicate prevention | Warning when creating duplicate entries |
| Data loss prevention | Unsaved changes warning on navigation |
| Double-submit prevention | Disable buttons during submission |

### Scoring Guide

- 5: Comprehensive input validation, smart defaults, impossible to make critical errors
- 4: Good validation, most destructive actions confirmed, minor gaps
- 3: Basic validation exists, some destructive actions lack confirmation
- 2: Missing validation on important fields, easy to make mistakes
- 1: No input validation, destructive actions have no safeguards

---
## H6: Recognition Rather Than Recall

**Principle:** Minimize the user's memory load by making objects, actions, and options visible.

### What to Check

| Check | Expected |
|-------|----------|
| Status visualization | Visual indicators show state without reading text |
| Recent actions | Activity feed or history shows what happened |
| Search | Global or contextual search for finding items |
| Contextual actions | Action buttons visible on each item |
| Active filters | Active filters shown as dismissible chips/tags |
| Navigation state | Current location highlighted in nav |
| Breadcrumbs | Deep hierarchies show navigation path |

### Scoring Guide

- 5: All information visible at a glance, zero memorization required
- 4: Most information visible, occasional need to navigate for details
- 3: Key info visible but some important details require navigation
- 2: Users frequently need to remember IDs, names, or navigation paths
- 1: Heavy reliance on user memory, hidden functionality

---
## H7: Flexibility and Efficiency of Use

**Principle:** Accelerators — unseen by the novice user — may often speed up the interaction for the expert user.

### What to Check

| Check | Expected |
|-------|----------|
| Quick actions | Fast paths for common tasks |
| Shortcuts | Keyboard shortcuts or gestures for power users |
| Bulk operations | Select multiple items for batch actions |
| Search/filter | Quick filtering to reduce cognitive load |
| Customization | User-configurable views or preferences |
| Recent items | Quick access to recently viewed/edited items |
| Smart defaults | Pre-filled fields based on context |

### Scoring Guide

- 5: Multiple efficiency features, novice and expert workflows both smooth
- 4: Good shortcuts exist, most common tasks have fast paths
- 3: Basic efficiency features, some common tasks require too many steps
- 2: No shortcuts, every task requires full navigation
- 1: Excessive steps for common operations

---
## H8: Aesthetic and Minimalist Design

**Principle:** Dialogues should not contain information which is irrelevant or rarely needed.

### What to Check

| Check | Expected |
|-------|----------|
| Information density | Items show key info only, details on interaction |
| Visual hierarchy | Clear primary/secondary/tertiary content |
| White space | Adequate breathing room between elements |
| Progressive disclosure | Details hidden behind expand/tap |
| No visual clutter | Clean backgrounds, focused content |
| Meaningful visuals | Icons add meaning, not decoration |

### Scoring Guide

- 5: Clean, focused UI with perfect information hierarchy, nothing unnecessary
- 4: Mostly clean, minor visual clutter or unnecessary elements
- 3: Some screens cluttered, information hierarchy unclear in places
- 2: Significant visual noise, hard to focus on key information
- 1: Overwhelming amount of information, no clear hierarchy

---
## H9: Help Users Recognize, Diagnose, and Recover from Errors

**Principle:** Error messages should be expressed in plain language, precisely indicate the problem, and constructively suggest a solution.

### What to Check

| Check | Expected |
|-------|----------|
| Error messages | Plain language, not technical codes |
| Recovery actions | "Try again" button, "Go back" option |
| Form errors | Inline under the specific field that failed |
| Network errors | Friendly message with retry option |
| Auth errors | Clear session-expired message with re-login path |
| Empty results | Suggest adjusting search/filter criteria |
| Partial failures | Show what succeeded and offer retry for failures |

### Scoring Guide

- 5: All errors are user-friendly with clear recovery paths
- 4: Most errors handled gracefully, minor cases show generic messages
- 3: Error handling exists but messages sometimes unhelpful
- 2: Raw error codes or generic "Something went wrong" frequently
- 1: Errors cause blank screens, crashes, or incomprehensible messages

---
## H10: Help and Documentation

**Principle:** Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation.

### What to Check

| Check | Expected |
|-------|----------|
| Help pages | FAQ or help section accessible from the app |
| Onboarding | First-use guidance for new users |
| Tooltips | Hover/tap hints on non-obvious features |
| Contextual help | Explanation near complex features |
| Contact support | Easy way to reach help (phone, email, chat) |
| Empty state guidance | "Here's how to get started" messaging |

### Scoring Guide

- 5: Comprehensive help system, contextual guidance, self-serve documentation
- 4: Good help content, most features have explanation, minor gaps
- 3: Basic help exists (FAQ), some features lack explanation
- 2: Minimal help content, users left to figure things out
- 1: No help system, no guidance, users are on their own

---
## Aggregation

Calculate the overall UX score as the average of all 10 heuristic scores per app.

| App | H1 | H2 | H3 | H4 | H5 | H6 | H7 | H8 | H9 | H10 | **Avg** |
|-----|----|----|----|----|----|----|----|----|----|-----|---------|
| {app_name} | /5 | /5 | /5 | /5 | /5 | /5 | /5 | /5 | /5 | /5 | **/5** |

**Exit criteria target: >= 3.5 / 5.0 average across all apps.**
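
The aggregation rule lends itself to a small scoring helper. The sketch below is illustrative (the function names and score-dictionary shape are not part of this package); the 1-5 scale and the 3.5 exit threshold come from this rubric:

```python
# Illustrative helper for the aggregation rule above. The 1-5 scale and
# the 3.5 exit threshold come from the rubric; all names are hypothetical.

def average_ux_score(heuristic_scores: dict) -> float:
    """Average the ten heuristic scores (keys H1..H10) for one app."""
    expected = {f"H{i}" for i in range(1, 11)}
    if set(heuristic_scores) != expected:
        raise ValueError("expected exactly the keys H1..H10")
    if any(not 1 <= s <= 5 for s in heuristic_scores.values()):
        raise ValueError("every score must be on the 1-5 scale")
    return sum(heuristic_scores.values()) / 10


def meets_exit_criteria(apps: dict) -> bool:
    """Exit criteria: every app must average >= 3.5 / 5.0."""
    return all(average_ux_score(scores) >= 3.5 for scores in apps.values())
```

For example, an app scoring 4 on nine heuristics and 2 on H3 averages 3.8 and still passes, while an app scoring 3 across the board (3.0 average) fails.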
+++ package/skills/qa/templates/performance-benchmarks-base.md
@@ -0,0 +1,321 @@

# Performance Benchmarks (Generic Template)

Category B template for QA bootstrapping. Contains universal performance standards (Core Web Vitals, Lighthouse) with `{{variable}}` placeholders for project-specific targets.

## Variables

| Variable | Description | Example |
|----------|-------------|---------|
| `{{appTargets}}` | Per-app Lighthouse score targets table | Web App >= 90, Mobile PWA >= 80 |
| `{{apiEndpoints}}` | Project-specific API endpoint benchmarks | user.list < 150ms, order.create < 300ms |
| `{{bundleBudgets}}` | Per-app and per-page bundle size budgets | Web App < 400KB gzip first load |
| `{{criticalPaths}}` | Critical user journey page-level budgets | Login page < 50KB JS, Dashboard < 80KB JS |
| `{{realtimeTargets}}` | Real-time communication latency targets | Message delivery < 100ms |
| `{{projectName}}` | Name of the project | TruLine, Acme CRM |

---
## Core Web Vitals Targets (Universal)

These targets apply to ALL web projects. They are based on Google's Core Web Vitals thresholds (web.dev/vitals) and represent the standard for a "Good" user experience.

| Metric | Good | Needs Improvement | Poor | Our Target |
|--------|------|-------------------|------|------------|
| **LCP** (Largest Contentful Paint) | < 2.5s | 2.5s - 4.0s | > 4.0s | < 2.5s |
| **INP** (Interaction to Next Paint) | < 200ms | 200ms - 500ms | > 500ms | < 200ms |
| **CLS** (Cumulative Layout Shift) | < 0.1 | 0.1 - 0.25 | > 0.25 | < 0.1 |
| **FCP** (First Contentful Paint) | < 1.8s | 1.8s - 3.0s | > 3.0s | < 2.0s |
| **TTFB** (Time to First Byte) | < 800ms | 800ms - 1800ms | > 1800ms | < 800ms |

### How to Measure

| Method | Tool | When to Use |
|--------|------|-------------|
| Lab data | Lighthouse CLI | During QA testing, CI/CD |
| Lab data | Chrome DevTools Performance tab | During development |
| Field data | Chrome UX Report (CrUX) | Post-launch monitoring |
| Field data | web.dev/measure | Quick spot checks |
| Synthetic monitoring | WebPageTest | Detailed waterfall analysis |

---
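
For the lab-data route, one way to gate on these targets is to parse Lighthouse's JSON report (`lighthouse <url> --output=json --output-path=report.json`). The sketch below assumes the `audits.<id>.numericValue` shape that recent Lighthouse versions emit for LCP, FCP, and CLS; INP is a field metric and does not appear in lab reports, so it is not checked here:

```python
import json

# "Our Target" ceilings from the table above: LCP/FCP in milliseconds
# (Lighthouse reports these audits in ms), CLS unitless.
TARGETS = {
    "largest-contentful-paint": 2500,
    "first-contentful-paint": 2000,
    "cumulative-layout-shift": 0.1,
}


def check_vitals(report: dict) -> dict:
    """Map each audited metric to True (within target) or False."""
    audits = report["audits"]
    return {
        audit_id: audits[audit_id]["numericValue"] < ceiling
        for audit_id, ceiling in TARGETS.items()
        if audit_id in audits
    }


def load_report(path: str) -> dict:
    """Load a Lighthouse JSON report from disk."""
    with open(path) as fh:
        return json.load(fh)
```

A QA run would call `check_vitals(load_report("report.json"))` and fail the PERF-CWV test if any value is False.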
## Lighthouse Score Targets (Universal)

These minimum thresholds are universal. Projects may set higher targets but should never go below these floors.

| Category | Floor (Minimum) | Good Target | Stretch Goal |
|----------|-----------------|-------------|--------------|
| Performance | 80 | 90+ | 95+ |
| Accessibility | 85 | 95+ | 100 |
| Best Practices | 80 | 90+ | 95+ |
| SEO | 80 | 90+ | 95+ |

### Per-App Targets

{{appTargets}}

<!-- BOOTSTRAP INSTRUCTIONS:
Replace {{appTargets}} with a table of per-app Lighthouse targets.
Adjust targets based on app type (marketing sites should score higher on SEO,
PWAs higher on performance, admin panels can be slightly lower).

Example:

| App | Performance | Accessibility | Best Practices | SEO |
|-----|-------------|---------------|----------------|-----|
| Web Dashboard (app.example.com) | >= 80 | >= 85 | >= 80 | N/A |
| Client Portal (portal.example.com) | >= 80 | >= 90 | >= 85 | >= 80 |
| Marketing Site (example.com) | >= 85 | >= 90 | >= 85 | >= 90 |
| Mobile PWA (m.example.com) | >= 75 | >= 85 | >= 80 | N/A |

Notes on target selection:
- Marketing/public sites: Higher performance and SEO targets
- Authenticated apps: Can relax SEO, focus on performance and a11y
- Mobile/PWA: May have a slightly lower performance floor due to constraints
- Admin/internal tools: Can relax all targets slightly
-->

---
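
A floor check can read the 0-1 category scores from the same Lighthouse JSON report (`categories.<id>.score`; the category IDs below are the ones current Lighthouse versions use). A minimal sketch, with the universal floors as defaults:

```python
# Universal floors from the table above, on Lighthouse's 0-1 score scale.
FLOORS = {
    "performance": 0.80,
    "accessibility": 0.85,
    "best-practices": 0.80,
    "seo": 0.80,
}


def below_floor(report: dict, floors: dict = FLOORS) -> list:
    """Return the category IDs whose score falls below its floor."""
    categories = report["categories"]
    return [
        cat for cat, floor in floors.items()
        if cat in categories and categories[cat]["score"] < floor
    ]
```

Per-app targets filled in for `{{appTargets}}` would be passed as a stricter `floors` dict; an empty return value means the app clears every floor.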
## API Response Time Budgets (Universal Tiers)

These tiers apply to most backend APIs regardless of framework (REST, GraphQL, tRPC, gRPC).

| Endpoint Category | p50 | p95 | p99 | Timeout |
|-------------------|-----|-----|-----|---------|
| **Simple reads** (get by ID, list with pagination) | < 100ms | < 300ms | < 500ms | 5s |
| **Write operations** (create, update, delete) | < 200ms | < 500ms | < 1s | 10s |
| **Complex queries** (aggregations, joins, analytics) | < 500ms | < 1.5s | < 3s | 15s |
| **AI/ML operations** (generation, classification, embedding) | < 3s | < 8s | < 15s | 30s |
| **File operations** (upload signature, download URL) | < 200ms | < 500ms | < 1s | 10s |
| **Webhook processing** (inbound webhook handlers) | < 1s | < 3s | < 5s | 30s |
| **Authentication** (token validation, session check) | < 50ms | < 150ms | < 300ms | 5s |
| **Search** (full-text, fuzzy, vector similarity) | < 200ms | < 800ms | < 2s | 10s |

### Project-Specific Endpoint Benchmarks

{{apiEndpoints}}

<!-- BOOTSTRAP INSTRUCTIONS:
Replace {{apiEndpoints}} with a table of your project's specific API endpoints
and their budgets. Map each endpoint to the appropriate tier above, then
set specific budgets based on expected query complexity.

Example:

| Endpoint | Operation | Category | Budget (p95) |
|----------|-----------|----------|--------------|
| user.list | Paginated list (20 items) | Simple read | < 150ms |
| user.create | Create user + send welcome email | Write | < 300ms |
| user.get | Single user with relations | Simple read | < 100ms |
| order.list | Orders with line items | Complex query | < 500ms |
| search.query | Full-text product search | Search | < 800ms |
| report.generate | Monthly analytics report | Complex query | < 1.5s |
| ai.summarize | AI-generated summary | AI/ML | < 8s |
| upload.getSignature | Cloud storage signature | File operation | < 100ms |

Tips:
- Start with the tier defaults, then adjust per endpoint
- Endpoints with JOINs across 3+ tables: use the "Complex query" tier
- Endpoints that trigger async work: measure only the sync portion
- AI endpoints: budget is for the synchronous response, not background processing
-->

---
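
To verify a tier budget, collect raw response times and compare percentiles, not the mean. A minimal nearest-rank sketch (names are illustrative; a real harness would feed in timings from its HTTP client):

```python
def percentile(samples: list, p: float) -> float:
    """Nearest-rank percentile of a list of response times (ms)."""
    if not samples:
        raise ValueError("need at least one sample")
    ordered = sorted(samples)
    rank = round(p / 100 * len(ordered)) - 1
    return ordered[max(0, min(len(ordered) - 1, rank))]


def within_budget(samples: list, budget_ms: dict) -> bool:
    """budget_ms maps 'p50'/'p95'/'p99' to a millisecond ceiling."""
    return all(
        percentile(samples, float(name[1:])) < ceiling
        for name, ceiling in budget_ms.items()
    )
```

For a simple-read endpoint, `within_budget(samples, {"p50": 100, "p95": 300, "p99": 500})` mirrors the first tier row.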
## Bundle Size Budgets

### Per-App Budgets

{{bundleBudgets}}

<!-- BOOTSTRAP INSTRUCTIONS:
Replace {{bundleBudgets}} with your project's per-app bundle size budgets.

Example:

| App | Total JS (gzip) | First Load JS (gzip) | Max Single Chunk (gzip) |
|-----|-----------------|----------------------|-------------------------|
| Web Dashboard | < 500KB | < 200KB | < 100KB |
| Client Portal | < 400KB | < 150KB | < 80KB |
| Marketing Site | < 300KB | < 100KB | < 60KB |
| Mobile App (OTA bundle) | < 5MB | N/A | < 500KB |

Guidelines for setting budgets:
- Total JS: Sum of all JS chunks loaded for the heaviest page
- First Load JS: Shared chunks + page-specific chunk for the initial route
- Max Single Chunk: Largest individual chunk (prevents code-split failures)
- Mobile OTA: Over-the-air update size (Expo/React Native)
-->

### Critical Path Page Budgets

{{criticalPaths}}

<!-- BOOTSTRAP INSTRUCTIONS:
Replace {{criticalPaths}} with per-page budgets for your most important user journeys.

Example:

| Page | JS Budget (gzip) | CSS Budget (gzip) | Notes |
|------|------------------|-------------------|-------|
| Login/Auth | < 40KB | < 10KB | Must load fast on slow connections |
| Main Dashboard | < 80KB | < 20KB | First screen after login |
| List View (primary) | < 80KB | < 20KB | Most visited page |
| Detail View | < 60KB | < 15KB | Accessed from list |
| Settings | < 50KB | < 10KB | Low-traffic, can be lazy loaded |
| Marketing Home | < 50KB | < 15KB | Must score high on Lighthouse |

Tips:
- Identify the 5-8 most important pages by traffic/business value
- Auth pages and landing pages need the tightest budgets
- Pages behind auth can be slightly larger (cached after first load)
- Use route-based code splitting to keep individual page bundles small
-->

---
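
Budgets in these tables are gzip sizes, so measure the compressed output rather than the on-disk file size. A standard-library sketch (the directory layout and budget numbers are placeholders, not part of this package):

```python
import gzip
import pathlib


def gzip_kb(path: pathlib.Path) -> float:
    """Gzipped size of a file, in KB."""
    return len(gzip.compress(path.read_bytes())) / 1024


def oversized_chunks(build_dir: str, max_chunk_kb: float) -> list:
    """JS chunks whose gzipped size exceeds the per-chunk budget."""
    return sorted(
        p.name
        for p in pathlib.Path(build_dir).glob("**/*.js")
        if gzip_kb(p) > max_chunk_kb
    )
```

Pointing `oversized_chunks` at a framework's build output (e.g. `.next/static`) and the app's "Max Single Chunk" budget yields the PERF-BUNDLE failure list.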
## Database Performance (Universal)

These query performance budgets apply to relational databases (PostgreSQL, MySQL, etc.).

| Query Pattern | Budget | Alert If Exceeds |
|---------------|--------|------------------|
| Simple SELECT by primary key | < 5ms | > 50ms |
| SELECT with WHERE + LIMIT (indexed) | < 20ms | > 200ms |
| JOIN across 2 tables | < 50ms | > 500ms |
| JOIN across 3+ tables | < 100ms | > 1s |
| Aggregation (COUNT, SUM, AVG) | < 100ms | > 1s |
| Full-text search (tsvector/GIN) | < 200ms | > 2s |
| Vector similarity search (pgvector/HNSW) | < 300ms | > 3s |
| Bulk INSERT (100 rows) | < 200ms | > 2s |
| Bulk UPDATE (100 rows) | < 200ms | > 2s |
| Transaction (2-3 operations) | < 100ms | > 1s |

### Database Health Indicators

| Metric | Healthy | Warning | Critical |
|--------|---------|---------|----------|
| Connection pool utilization | < 50% | 50-80% | > 80% |
| Slow query count (> 1s) per minute | 0 | 1-5 | > 5 |
| Active connections | < 50% of max | 50-80% | > 80% |
| Cache hit ratio | > 99% | 95-99% | < 95% |
| Index usage ratio | > 95% | 80-95% | < 80% |
| Deadlock count per hour | 0 | 1-2 | > 2 |

---
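
The health-indicator thresholds are easy to encode directly. The sketch below mirrors two rows of the table (function names are illustrative); the PostgreSQL query in the comment is one common way to obtain the raw cache hit ratio, and may need adjusting for your schema:

```python
# Thresholds mirror the "Database Health Indicators" table above.
# On PostgreSQL, the cache hit ratio can be read with something like:
#   SELECT sum(heap_blks_hit) * 100.0
#        / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0)
#   FROM pg_statio_user_tables;


def pool_status(utilization_pct: float) -> str:
    """Connection pool utilization: healthy < 50%, critical > 80%."""
    if utilization_pct > 80:
        return "critical"
    if utilization_pct >= 50:
        return "warning"
    return "healthy"


def cache_hit_status(hit_ratio_pct: float) -> str:
    """Cache hit ratio: healthy > 99%, critical < 95%."""
    if hit_ratio_pct < 95:
        return "critical"
    if hit_ratio_pct <= 99:
        return "warning"
    return "healthy"
```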
## Real-Time Performance

{{realtimeTargets}}

<!-- BOOTSTRAP INSTRUCTIONS:
Replace {{realtimeTargets}} with your project's real-time communication targets,
or remove this section entirely if your project has no real-time features.

For WebSocket-based systems (Socket.IO, ws, Pusher, etc.):

| Metric | Target |
|--------|--------|
| Connection establishment | < 500ms |
| Message delivery latency (client-to-client) | < 100ms |
| Typing indicator latency | < 50ms |
| Reconnection time after disconnect | < 2s |
| Max concurrent connections per instance | 1000 |
| Message throughput per second | > 500 |

For Server-Sent Events (SSE):

| Metric | Target |
|--------|--------|
| Connection establishment | < 300ms |
| Event delivery latency | < 200ms |
| Reconnection time | < 3s |

For polling-based systems:

| Metric | Target |
|--------|--------|
| Poll interval | 5-30s depending on data freshness needs |
| Poll response time | < 200ms |
| Data staleness tolerance | < 60s |

If your project has no real-time features, delete this entire section.
-->

---
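
The delivery-latency targets imply a ping/echo measurement: timestamp a message, have the peer echo it back, and take half the round trip as a rough one-way estimate. A transport-agnostic sketch, where `send` and `wait_for_echo` are placeholders for your client library (Socket.IO, ws, etc.):

```python
import time


def one_way_latency_ms(send, wait_for_echo, samples: int = 10) -> float:
    """Median one-way latency estimate from ping/echo round trips."""
    trips = []
    for _ in range(samples):
        start = time.perf_counter()
        send(start)       # e.g. emit a "ping" event carrying the timestamp
        wait_for_echo()   # block until the peer echoes the ping back
        trips.append((time.perf_counter() - start) * 1000 / 2)
    trips.sort()
    return trips[len(trips) // 2]
```

Halving the round trip assumes roughly symmetric paths; for strict client-to-client latency, measure between two real clients instead.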
## Image and Media Performance (Universal)

| Metric | Target | Notes |
|--------|--------|-------|
| Thumbnail load (< 50KB source) | < 500ms | Use CDN with transforms |
| Full-size image load (< 5MB source) | < 2s | Lazy load below fold |
| Image upload (5MB file) | < 5s | Show progress indicator |
| Batch upload (10 files) | < 30s | Parallel uploads with progress |
| Video thumbnail generation | < 2s | Server-side or CDN |
| CDN cache hit ratio | > 90% | Monitor via CDN dashboard |

### Image Optimization Checklist

| Practice | Required |
|----------|----------|
| Serve WebP/AVIF with fallback | Yes |
| Responsive srcset for viewport sizes | Yes |
| Lazy loading for below-fold images | Yes |
| Width/height attributes to prevent CLS | Yes |
| CDN for asset delivery | Yes |
| Compression (quality 75-85 for photos) | Yes |
| Maximum upload file size enforced | Yes |

---
## Test Performance Criteria

For QA testing, map test IDs to measurement methods:

| Test ID Prefix | What to Measure | How to Measure |
|----------------|-----------------|----------------|
| PERF-CWV-* | Core Web Vitals | Lighthouse CLI or web.dev/measure |
| PERF-LH-* | Lighthouse category scores | `lighthouse` CLI with JSON output |
| PERF-API-* | API response times | HTTP client timing (p50/p95/p99) |
| PERF-BUNDLE-* | Bundle sizes | Build output analysis (next build, webpack stats) |
| PERF-DB-* | Database query times | Query EXPLAIN ANALYZE or ORM query logging |
| PERF-RT-* | Real-time latency | Round-trip timestamp comparison |
| PERF-IMG-* | Image/media load times | Resource Timing API or Lighthouse |
| PERF-SEARCH-* | Search response times | API timing with representative queries |

### Measurement Best Practices

| Practice | Why |
|----------|-----|
| Measure p95 and p99, not just the average | Averages hide tail latency |
| Test with realistic data volumes | Empty databases are always fast |
| Test on throttled connections (3G/4G) | Users are not always on broadband |
| Measure cold start separately | First request after deploy is slower |
| Run measurements 3+ times, take the median | Reduces noise from outliers |
| Test during simulated load | Single-user perf != concurrent perf |
| Record a baseline before changes | Need a comparison point for regressions |

---
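
The "run 3+ times, take the median" practice is easy to wrap around any measurement callable. A sketch (names illustrative):

```python
import statistics


def median_of_runs(measure, runs: int = 5) -> float:
    """Run a measurement callable repeatedly and report the median.

    The median dampens one-off outliers (cold caches, GC pauses) that
    would skew a single run or an average.
    """
    if runs < 3:
        raise ValueError("use at least 3 runs")
    return statistics.median(measure() for _ in range(runs))
```

Any of the measurements in the table above (an API timing call, a Lighthouse run returning a score) can be passed in as `measure`.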
## Performance Regression Prevention

### CI/CD Integration Points

| Gate | When | Blocks Deploy? |
|------|------|----------------|
| Bundle size check | Every PR | Yes, if > 10% increase |
| Lighthouse CI | Every PR to main | Yes, if below floor |
| API response time test | Pre-deploy | Yes, if p95 > budget |
| Database migration timing | Schema changes | No, but flag if > 30s |

### Monitoring Recommendations

| What to Monitor | Tool Options | Alert Threshold |
|-----------------|--------------|-----------------|
| Core Web Vitals (field) | CrUX, Vercel Analytics, Sentry | Any metric crosses "Needs Improvement" |
| API latency (p95) | APM (Datadog, New Relic, Sentry) | > 2x budget for any endpoint |
| Error rate | APM, Sentry | > 1% of requests |
| Database slow queries | pg_stat_statements, query logging | Any query > 1s |
| Memory usage | Infrastructure monitoring | > 80% of limit |
| CPU usage | Infrastructure monitoring | Sustained > 70% |
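
The bundle-size gate in the CI table ("blocks if > 10% increase") reduces to a comparison against a recorded baseline. A sketch (names illustrative; a real gate would read the baseline from a stored artifact or stats file):

```python
def bundle_gate_passes(baseline_bytes: int, current_bytes: int,
                       max_increase: float = 0.10) -> bool:
    """True if the PR may proceed; False if growth exceeds the budget."""
    if baseline_bytes <= 0:
        return True  # no baseline recorded yet: record one, don't block
    growth = (current_bytes - baseline_bytes) / baseline_bytes
    return growth <= max_increase
```

Exactly 10% growth still passes, matching the "> 10% increase" wording; the first run with no baseline records one instead of blocking.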