@take-out/docs 0.0.42

package/aggregates.md ADDED

---
name: aggregates
description: Database aggregates guide using PostgreSQL triggers and stats tables. INVOKE WHEN: commentCount, followerCount, likesCount, reactionCount, replyCount, derived columns, computed values, denormalized data, INSERT ON CONFLICT, UPSERT pattern, maintaining counts, syncing counts, aggregate tables, stats tables.
---

# Aggregates in your data

Zero doesn't support aggregates, and computing aggregates on read is slow for
any database anyway. Queries like `SELECT COUNT(*)` look innocent but become
problematic at scale: they scan every matching row and get slower as the data
grows. Denormalizing aggregates (storing pre-computed counts and sums) via
triggers is a well-established pattern used by most high-traffic applications.

We use Postgres triggers and stats tables to handle things like counting
replies or reactions. Note that we don't use materialized views, since Zero
doesn't support them.

While these patterns seem complex, LLMs handle generating and maintaining them
well. In a future iteration we want to standardize how to define them and
automate migrations.

This guide explains how to implement efficient aggregate statistics using
PostgreSQL triggers, based on the `privateChatsStats` pattern used in this
codebase.

## Overview

Non-materialized aggregate triggers maintain summary statistics in real time by:

1. Creating a dedicated stats table to store aggregated data
2. Using triggers to incrementally update stats when source data changes
3. Avoiding expensive full-table recalculations on every query

Use this approach when:

- You need real-time statistics that are always up to date
- The source data changes frequently but not massively
- You want to avoid the complexity and storage overhead of materialized views
- You need granular control over the aggregation logic

## Step 1: Design Your Stats Table

First, identify what statistics you need to track. Consider:

- Primary keys that uniquely identify each aggregate row
- Foreign keys to maintain referential integrity
- Aggregate columns (counts, sums, averages, etc.)
- Time-based aggregates (weekly, monthly, yearly counts)
- Metadata (lastUpdatedAt, score calculations, etc.)

### Example: privateChatsStats

```sql
CREATE TABLE "privateChatsStats" (
  "serverId" varchar NOT NULL REFERENCES "server"(id) ON DELETE CASCADE,
  "userServerId" varchar NOT NULL REFERENCES "server"(id) ON DELETE CASCADE,
  "userAId" varchar NOT NULL REFERENCES "userPublic"(id) ON DELETE CASCADE,
  "userBId" varchar NOT NULL REFERENCES "userPublic"(id) ON DELETE CASCADE,
  "score" integer NOT NULL DEFAULT 0,
  "lastMessageAt" timestamp NOT NULL,
  "messageCountWeek" integer NOT NULL DEFAULT 0,
  "messageCountMonth" integer NOT NULL DEFAULT 0,
  "messageCountYear" integer NOT NULL DEFAULT 0,
  "lastRefreshedAt" timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY ("serverId", "userServerId")
);

CREATE INDEX "idx_privateChatsStats_server"
  ON "privateChatsStats" ("serverId", "score");
CREATE INDEX "idx_privateChatsStats_userA"
  ON "privateChatsStats" ("serverId", "userAId", "score");
CREATE INDEX "idx_privateChatsStats_userB"
  ON "privateChatsStats" ("serverId", "userBId", "score");
```
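
The composite indexes exist to serve score-ordered reads. A typical lookup
might look like the following (the exact query shape is an assumption; adjust
it to your access patterns):

```sql
-- Top private chats involving a given user within a server, which the
-- "idx_privateChatsStats_userA" index is shaped to serve.
SELECT "userServerId", "userBId", score, "lastMessageAt"
FROM "privateChatsStats"
WHERE "serverId" = $1
  AND "userAId" = $2
ORDER BY score DESC
LIMIT 20;
```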

## Step 2: Initial Population

Populate the stats table with existing data. This is a one-time operation that
calculates all historical statistics.

Key patterns:

- Use CTEs (WITH clauses) to structure complex queries
- Apply business logic for scoring/ranking
- Handle edge cases (NULL values, missing relationships)

```sql
INSERT INTO "privateChatsStats"
WITH private_chat_stats AS (
  SELECT DISTINCT
    sm1."serverId" as parent_server_id,
    pcs.id as user_server_id,
    LEAST(pcs."userId", pcs."friendId") as user_a_id,
    GREATEST(pcs."userId", pcs."friendId") as user_b_id,
    COUNT(DISTINCT m.id) as total_messages,
    MAX(m."createdAt") as last_message_at,
    COUNT(DISTINCT CASE
      WHEN m."createdAt" >= CURRENT_TIMESTAMP - INTERVAL '7 days'
      THEN m.id
    END) as message_count_week,
    COUNT(DISTINCT CASE
      WHEN m."createdAt" >= CURRENT_TIMESTAMP - INTERVAL '30 days'
      THEN m.id
    END) as message_count_month,
    COUNT(DISTINCT CASE
      WHEN m."createdAt" >= CURRENT_TIMESTAMP - INTERVAL '365 days'
      THEN m.id
    END) as message_count_year
  FROM server pcs
  INNER JOIN "serverMember" sm1 ON sm1."userId" = pcs."userId"
  INNER JOIN "serverMember" sm2 ON sm2."userId" = pcs."friendId"
    AND sm2."serverId" = sm1."serverId"
  LEFT JOIN channel c ON c."serverId" = pcs.id
  LEFT JOIN message m ON m."channelId" = c.id
    AND m.deleted = false
    AND m.type != 'draft'
  WHERE pcs."isPrivateChat" = true
    AND pcs."userId" IS NOT NULL
    AND pcs."friendId" IS NOT NULL
    AND sm1."serverId" != pcs.id
  GROUP BY sm1."serverId", pcs.id, pcs."userId", pcs."friendId"
)
SELECT
  parent_server_id as "serverId",
  user_server_id as "userServerId",
  user_a_id as "userAId",
  user_b_id as "userBId",
  CASE
    WHEN message_count_week > 0
    THEN message_count_week * 10 + message_count_month
    ELSE message_count_month * 2 + message_count_year
  END::integer as score,
  COALESCE(last_message_at, CURRENT_TIMESTAMP) as "lastMessageAt",
  message_count_week::integer as "messageCountWeek",
  message_count_month::integer as "messageCountMonth",
  message_count_year::integer as "messageCountYear",
  CURRENT_TIMESTAMP as "lastRefreshedAt"
FROM private_chat_stats
WHERE parent_server_id IS NOT NULL;
```

## Step 3: Create the Trigger Function

The trigger function contains the logic for incrementally updating statistics.

Critical considerations:

1. Performance: Minimize database operations
2. Correctness: Handle all edge cases
3. Idempotency: Updates should be consistent regardless of order
4. Atomicity: Use proper transaction handling

```sql
CREATE OR REPLACE FUNCTION update_private_chats_stats_on_message()
RETURNS TRIGGER AS $$
DECLARE
  private_server RECORD;
  parent_server_id varchar;
  new_week_count integer;
  new_month_count integer;
  new_year_count integer;
BEGIN
  -- drafts and already-deleted messages never count
  IF NEW.type = 'draft' OR NEW.deleted = true THEN
    RETURN NEW;
  END IF;

  -- find the private-chat server that owns this message's channel
  SELECT s.* INTO private_server
  FROM server s
  INNER JOIN channel c ON c."serverId" = s.id
  WHERE c.id = NEW."channelId"
    AND s."isPrivateChat" = true
    AND s."userId" IS NOT NULL
    AND s."friendId" IS NOT NULL;

  IF NOT FOUND THEN
    RETURN NEW;
  END IF;

  -- update stats in every server that both participants share
  FOR parent_server_id IN
    SELECT DISTINCT sm1."serverId"
    FROM "serverMember" sm1
    INNER JOIN "serverMember" sm2 ON sm2."serverId" = sm1."serverId"
    WHERE sm1."userId" = private_server."userId"
      AND sm2."userId" = private_server."friendId"
      AND sm1."serverId" != private_server.id
  LOOP
    SELECT
      COALESCE("messageCountWeek", 0) + CASE
        WHEN NEW."createdAt" >= CURRENT_TIMESTAMP - INTERVAL '7 days' THEN 1
        ELSE 0
      END,
      COALESCE("messageCountMonth", 0) + CASE
        WHEN NEW."createdAt" >= CURRENT_TIMESTAMP - INTERVAL '30 days' THEN 1
        ELSE 0
      END,
      COALESCE("messageCountYear", 0) + CASE
        WHEN NEW."createdAt" >= CURRENT_TIMESTAMP - INTERVAL '365 days' THEN 1
        ELSE 0
      END
    INTO new_week_count, new_month_count, new_year_count
    FROM "privateChatsStats"
    WHERE "serverId" = parent_server_id
      AND "userServerId" = private_server.id;

    -- no existing stats row: start the counters from this message
    IF new_week_count IS NULL THEN
      new_week_count := CASE
        WHEN NEW."createdAt" >= CURRENT_TIMESTAMP - INTERVAL '7 days'
        THEN 1 ELSE 0 END;
      new_month_count := CASE
        WHEN NEW."createdAt" >= CURRENT_TIMESTAMP - INTERVAL '30 days'
        THEN 1 ELSE 0 END;
      new_year_count := CASE
        WHEN NEW."createdAt" >= CURRENT_TIMESTAMP - INTERVAL '365 days'
        THEN 1 ELSE 0 END;
    END IF;

    INSERT INTO "privateChatsStats" (
      "serverId", "userServerId", "userAId", "userBId",
      score, "lastMessageAt", "messageCountWeek",
      "messageCountMonth", "messageCountYear", "lastRefreshedAt"
    )
    VALUES (
      parent_server_id,
      private_server.id,
      LEAST(private_server."userId", private_server."friendId"),
      GREATEST(private_server."userId", private_server."friendId"),
      CASE
        WHEN new_week_count > 0
        THEN new_week_count * 10 + new_month_count
        ELSE new_month_count * 2 + new_year_count
      END,
      NEW."createdAt",
      new_week_count,
      new_month_count,
      new_year_count,
      CURRENT_TIMESTAMP
    )
    ON CONFLICT ("serverId", "userServerId") DO UPDATE SET
      "lastMessageAt" = GREATEST("privateChatsStats"."lastMessageAt", NEW."createdAt"),
      "messageCountWeek" = new_week_count,
      "messageCountMonth" = new_month_count,
      "messageCountYear" = new_year_count,
      score = CASE
        WHEN new_week_count > 0
        THEN new_week_count * 10 + new_month_count
        ELSE new_month_count * 2 + new_year_count
      END,
      "lastRefreshedAt" = CURRENT_TIMESTAMP;
  END LOOP;

  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```

## Step 4: Create the Trigger

```sql
CREATE TRIGGER update_private_chats_stats_on_message_trigger
AFTER INSERT ON message
FOR EACH ROW
EXECUTE FUNCTION update_private_chats_stats_on_message();
```
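
To sanity-check the wiring, insert a message into a private-chat channel and
confirm the stats row moves. A rough sketch: the literal ids and the 'text'
type are placeholders, and your real `message` table probably has more
required columns than the ones the trigger reads:

```sql
-- 'channel-123' must belong to a private-chat server whose two participants
-- share at least one other server, or the trigger exits early
INSERT INTO message (id, "channelId", type, deleted, "createdAt")
VALUES ('probe-message-1', 'channel-123', 'text', false, CURRENT_TIMESTAMP);

-- the most recently refreshed stats rows should reflect the new message
SELECT "serverId", "userServerId", score, "messageCountWeek", "lastRefreshedAt"
FROM "privateChatsStats"
ORDER BY "lastRefreshedAt" DESC
LIMIT 5;
```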

## Step 5: Maintenance Functions

Create utility functions for maintenance tasks:

- Full refresh when data gets out of sync
- Cleanup of stale statistics
- Periodic recalculation of time-based aggregates

### Full Refresh Function

```sql
CREATE OR REPLACE FUNCTION "refreshPrivateChatsStats"()
RETURNS void AS $$
BEGIN
  TRUNCATE TABLE "privateChatsStats";
  INSERT INTO "privateChatsStats"
  WITH private_chat_stats AS (
    -- same CTE as initial population
  )
  SELECT
    -- same selection as initial population
  FROM private_chat_stats
  WHERE parent_server_id IS NOT NULL;
END;
$$ LANGUAGE plpgsql;
```

### Periodic Update Function

```sql
CREATE OR REPLACE FUNCTION "updateTimeBasedAggregates"()
RETURNS void AS $$
BEGIN
  UPDATE "privateChatsStats" pcs
  SET
    "messageCountWeek" = (
      SELECT COUNT(*)
      FROM message m
      INNER JOIN channel c ON c.id = m."channelId"
      WHERE c."serverId" = pcs."userServerId"
        AND m.type != 'draft'
        AND m.deleted = false
        AND m."createdAt" >= CURRENT_TIMESTAMP - INTERVAL '7 days'
    ),
    "messageCountMonth" = (
      SELECT COUNT(*)
      FROM message m
      INNER JOIN channel c ON c.id = m."channelId"
      WHERE c."serverId" = pcs."userServerId"
        AND m.type != 'draft'
        AND m.deleted = false
        AND m."createdAt" >= CURRENT_TIMESTAMP - INTERVAL '30 days'
    ),
    "messageCountYear" = (
      SELECT COUNT(*)
      FROM message m
      INNER JOIN channel c ON c.id = m."channelId"
      WHERE c."serverId" = pcs."userServerId"
        AND m.type != 'draft'
        AND m.deleted = false
        AND m."createdAt" >= CURRENT_TIMESTAMP - INTERVAL '365 days'
    ),
    "lastRefreshedAt" = CURRENT_TIMESTAMP
  WHERE "lastRefreshedAt" < CURRENT_TIMESTAMP - INTERVAL '1 day';

  -- Column references in SET expressions read the pre-update values, so the
  -- score must be recomputed from the fresh counts in a second statement.
  -- CURRENT_TIMESTAMP is fixed for the whole transaction, so this targets
  -- exactly the rows refreshed above.
  UPDATE "privateChatsStats"
  SET score = CASE
    WHEN "messageCountWeek" > 0
    THEN "messageCountWeek" * 10 + "messageCountMonth"
    ELSE "messageCountMonth" * 2 + "messageCountYear"
  END
  WHERE "lastRefreshedAt" = CURRENT_TIMESTAMP;
END;
$$ LANGUAGE plpgsql;
```
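
Scheduling the periodic function is left to your infrastructure. If the
pg_cron extension happens to be available, a nightly job might look like this
sketch:

```sql
-- requires the pg_cron extension; the job name and schedule are arbitrary
SELECT cron.schedule(
  'update-time-based-aggregates',
  '0 3 * * *', -- every day at 03:00
  $$SELECT "updateTimeBasedAggregates"()$$
);
```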

## LLM Prompting Guide

When asking an LLM to implement aggregate triggers, use this template:

```markdown
I need to implement a non-materialized aggregate trigger system in PostgreSQL.

Context:

- Source table: [describe your source table and its purpose]
- Aggregation needs: [what statistics do you need to track]
- Update frequency: [how often does source data change]
- Query patterns: [how will the aggregated data be queried]

Requirements:

1. Create a stats table that stores aggregated data efficiently
2. Implement triggers that incrementally update statistics when source data changes
3. Avoid full table scans on every update - use incremental calculations
4. Handle edge cases: NULL values, deletions, updates to existing records
5. Include proper indexes for query performance
6. Provide a full refresh function for maintenance

Specific statistics needed:

- [List each aggregate: counts, sums, averages, time-based aggregates]
- [Scoring/ranking algorithms if applicable]
- [Relationships between entities]

Performance constraints:

- Trigger execution should be < [X]ms
- Stats queries should return in < [Y]ms
- Support [Z] concurrent updates

Please provide:

1. CREATE TABLE statement for the stats table with appropriate indexes
2. Initial population query using CTEs
3. Trigger function with incremental update logic
4. CREATE TRIGGER statement
5. Maintenance functions (full refresh, cleanup)
6. Migration script structure (up/down functions)

Follow these patterns:

- Use LEAST/GREATEST for consistent ordering of paired values
- Use ON CONFLICT ... DO UPDATE for upserts
- Calculate time-based aggregates using CASE statements
- Return NEW from trigger functions for AFTER triggers
- Include proper error handling and NULL checks
```

## Best Practices

### Performance

- Use early exits in trigger functions to skip irrelevant records
- Batch updates when possible using `INSERT ... ON CONFLICT`
- Create indexes on foreign keys and commonly queried columns
- Consider partitioning stats tables for very large datasets
- Use `EXPLAIN ANALYZE` to optimize trigger performance (see the sketch below)
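
`EXPLAIN ANALYZE` on a write reports per-trigger timing, which makes it a
convenient probe. A sketch: the ids and 'text' type are placeholders, and the
column list assumes the columns used elsewhere in this guide:

```sql
-- EXPLAIN ANALYZE executes the statement, so wrap it in a transaction and
-- roll back to avoid keeping the probe row
BEGIN;
EXPLAIN ANALYZE
INSERT INTO message (id, "channelId", type, deleted, "createdAt")
VALUES ('probe-message-2', 'channel-123', 'text', false, CURRENT_TIMESTAMP);
-- the output ends with per-trigger timing, e.g.:
--   Trigger update_private_chats_stats_on_message_trigger: time=0.850 calls=1
ROLLBACK;
```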

### Correctness

- Always normalize data order (e.g., `LEAST/GREATEST` for user pairs)
- Handle NULL values explicitly in all calculations
- Use transactions for complex multi-step updates
- Test edge cases: first record, deletions, updates
- Implement idempotent operations where possible

### Maintenance

- Create a full refresh function for recovering from corruption
- Implement periodic cleanup for time-based aggregates
- Monitor trigger execution time and table bloat
- Document the scoring/ranking algorithms clearly
- Version your trigger functions with migration scripts

### Testing

- Write integration tests that verify trigger behavior
- Test concurrent updates to ensure data consistency
- Verify that stats remain accurate after bulk operations
- Test the full refresh function against production-like data
- Benchmark trigger performance under load

## Troubleshooting

### Stats are out of sync

- Diagnosis: Run a query comparing aggregated stats with actual counts (see the sketch below)
- Solution: Execute the full refresh function, investigate trigger failures
- Prevention: Add monitoring queries, implement periodic validation
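
One way to write that comparison, recounting the weekly window the same way
`updateTimeBasedAggregates` does (a sketch that checks only the weekly count):

```sql
-- rows whose stored weekly count has drifted from a fresh recount
SELECT pcs."serverId", pcs."userServerId",
       pcs."messageCountWeek" AS stored_count,
       actual.cnt AS recounted
FROM "privateChatsStats" pcs
CROSS JOIN LATERAL (
  SELECT COUNT(*) AS cnt
  FROM message m
  INNER JOIN channel c ON c.id = m."channelId"
  WHERE c."serverId" = pcs."userServerId"
    AND m.type != 'draft'
    AND m.deleted = false
    AND m."createdAt" >= CURRENT_TIMESTAMP - INTERVAL '7 days'
) actual
WHERE pcs."messageCountWeek" != actual.cnt;
```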

### Trigger is too slow

- Diagnosis: Use `EXPLAIN ANALYZE` on the trigger function queries
- Solution: Add missing indexes, simplify calculations, consider async processing
- Prevention: Profile trigger performance before production deployment

### Time-based aggregates are incorrect

- Diagnosis: Check timezone handling, verify INTERVAL calculations
- Solution: Run the `updateTimeBasedAggregates` function, fix timezone issues
- Prevention: Use consistent timezone handling (preferably UTC)

### Deadlocks during concurrent updates

- Diagnosis: Check `pg_stat_activity` and `deadlock_timeout` settings
- Solution: Reorder operations to acquire locks consistently
- Prevention: Use advisory locks or queue-based processing for high contention

### Stats table is growing too large

- Diagnosis: Check for orphaned records, analyze table bloat
- Solution: Implement cleanup for deleted entities, `VACUUM` regularly
- Prevention: Add `ON DELETE CASCADE`, implement retention policies

## Migration Structure

```typescript
import type { PoolClient } from 'pg'

export async function up(client: PoolClient) {
  await client.query(`...`) // 1. create the stats table
  await client.query(`...`) // 2. create indexes
  await client.query(`...`) // 3. populate initial data
  await client.query(`...`) // 4. create trigger function
  await client.query(`...`) // 5. create trigger
  await client.query(`...`) // 6. create maintenance functions
}
```
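
The prompting guide above asks for up/down pairs, so a matching `down`
function should drop everything in reverse dependency order. The SQL it runs
would look roughly like:

```sql
-- teardown in reverse dependency order, mirroring the up migration
DROP TRIGGER IF EXISTS update_private_chats_stats_on_message_trigger ON message;
DROP FUNCTION IF EXISTS update_private_chats_stats_on_message();
DROP FUNCTION IF EXISTS "updateTimeBasedAggregates"();
DROP FUNCTION IF EXISTS "refreshPrivateChatsStats"();
DROP TABLE IF EXISTS "privateChatsStats";
```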

## Advanced Patterns

### Multi-Table Aggregates

```sql
-- aggregate data from multiple related tables
-- use JOINs carefully to avoid cartesian products
-- consider using LATERAL joins for correlated subqueries
```
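
For instance, a per-user rollup over two hypothetical tables (`post` and
`comment`; the names and columns are made up for illustration), where LATERAL
keeps the two counts from multiplying into a cartesian product:

```sql
SELECT u.id,
       posts.cnt    AS post_count,
       comments.cnt AS comment_count
FROM "userPublic" u
CROSS JOIN LATERAL (
  SELECT COUNT(*) AS cnt FROM post p WHERE p."authorId" = u.id
) posts
CROSS JOIN LATERAL (
  SELECT COUNT(*) AS cnt FROM comment c WHERE c."authorId" = u.id
) comments;
```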

### Conditional Aggregates

```sql
-- use FILTER clauses in PostgreSQL 9.4+
COUNT(*) FILTER (WHERE condition) as conditional_count

-- or use CASE statements for older versions
COUNT(CASE WHEN condition THEN 1 END) as conditional_count
```

### Window Functions

```sql
-- use window functions for ranking within groups
ROW_NUMBER() OVER (PARTITION BY group_col ORDER BY score DESC) as rank

-- calculate running totals
SUM(amount) OVER (ORDER BY date ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
```

### Recursive Aggregates

```sql
-- use recursive CTEs for hierarchical data
WITH RECURSIVE hierarchy AS (
  SELECT id, parent_id, value FROM nodes WHERE parent_id IS NULL
  UNION ALL
  SELECT n.id, n.parent_id, n.value + h.value
  FROM nodes n
  INNER JOIN hierarchy h ON n.parent_id = h.id
)
SELECT * FROM hierarchy;
```

## Complete Example: Channel User Statistics

Here's a simplified example showing all pieces working together:

### Stats Table

```sql
CREATE TABLE channel_user_stats (
  channel_id UUID REFERENCES channels(id) ON DELETE CASCADE,
  user_id UUID REFERENCES users(id) ON DELETE CASCADE,
  message_count INTEGER NOT NULL DEFAULT 0,
  last_message_at TIMESTAMP,
  first_message_at TIMESTAMP,
  PRIMARY KEY (channel_id, user_id)
);

CREATE INDEX idx_channel_user_stats_user
  ON channel_user_stats(user_id, message_count DESC);
```

### Trigger Function

```sql
CREATE OR REPLACE FUNCTION update_channel_user_stats()
RETURNS TRIGGER AS $$
BEGIN
  IF TG_OP = 'INSERT' THEN
    INSERT INTO channel_user_stats (
      channel_id, user_id, message_count,
      last_message_at, first_message_at
    )
    VALUES (
      NEW.channel_id, NEW.user_id, 1,
      NEW.created_at, NEW.created_at
    )
    ON CONFLICT (channel_id, user_id) DO UPDATE SET
      message_count = channel_user_stats.message_count + 1,
      last_message_at = NEW.created_at,
      first_message_at = COALESCE(
        channel_user_stats.first_message_at,
        NEW.created_at
      );
  ELSIF TG_OP = 'DELETE' THEN
    UPDATE channel_user_stats
    SET message_count = GREATEST(0, message_count - 1)
    WHERE channel_id = OLD.channel_id AND user_id = OLD.user_id;
  END IF;
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;
```

### Trigger

```sql
CREATE TRIGGER trigger_update_channel_user_stats
AFTER INSERT OR DELETE ON messages
FOR EACH ROW
EXECUTE FUNCTION update_channel_user_stats();
```
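
Reads then become cheap index-assisted lookups, for example:

```sql
-- most active users in a channel; the WHERE clause is served by the
-- (channel_id, user_id) primary key index
SELECT user_id, message_count, last_message_at
FROM channel_user_stats
WHERE channel_id = $1
ORDER BY message_count DESC
LIMIT 10;
```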

## Conclusion

Non-materialized aggregate triggers provide a powerful pattern for maintaining
real-time statistics in PostgreSQL. By following this guide and the
`privateChatsStats` implementation pattern, you can build efficient,
maintainable aggregate systems that scale with your application's needs.

package/cloudflare-dev-tunnel.md ADDED

---
name: cloudflare-dev-tunnel
description: Cloudflare dev tunnel guide for exposing local development servers publicly. INVOKE WHEN: dev tunnel, cloudflare tunnel, cfargotunnel, dev:tunnel, local tunnel, testing webhooks, webhook testing, share local server, expose localhost, ngrok alternative.
---

# Development Tunnel

The dev tunnel feature provides a stable public URL for your local development
server, perfect for testing webhooks and sharing your dev environment with team
members.

## Features

Each developer gets their own persistent tunnel URL using Cloudflare Tunnel
(cloudflared), installed via npm. The tunnel starts with a single command, and
webhooks automatically use the tunnel URL when available.

## Setup

Run `bun install`, then run `bun dev:tunnel` once to set up Cloudflare. After
that you can just run your dev server as normal with `bun dev`.

## How it Works

The tunnel creates a stable subdomain on `cfargotunnel.com`. Your tunnel URL is
saved and reused across sessions. Webhooks automatically detect and use the
tunnel URL. The tunnel persists until you stop it with Ctrl+C.

## Usage with Webhooks

When the tunnel is running, webhook URLs will automatically use your public
tunnel URL instead of localhost.

Without tunnel: `http://localhost:8081/api/webhook/...`
With tunnel: `https://your-tunnel-id.cfargotunnel.com/api/webhook/...`

## Troubleshooting

If you get permission errors, the script will try to install cloudflared
automatically. Your tunnel configuration is stored in `~/.onechat-tunnel/`. To
reset your tunnel, delete `~/.onechat-tunnel/tunnel-id.txt`.