@cubis/foundry 0.3.10 → 0.3.11

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (56)
  1. package/Ai Agent Workflow/powers/database-skills/POWER.md +15 -2
  2. package/Ai Agent Workflow/powers/database-skills/SKILL.md +26 -2
  3. package/Ai Agent Workflow/powers/database-skills/engines/mongodb/POWER.md +10 -0
  4. package/Ai Agent Workflow/powers/database-skills/engines/mysql/POWER.md +10 -0
  5. package/Ai Agent Workflow/powers/database-skills/engines/neki/POWER.md +10 -0
  6. package/Ai Agent Workflow/powers/database-skills/engines/postgres/POWER.md +10 -0
  7. package/Ai Agent Workflow/powers/database-skills/engines/redis/POWER.md +10 -0
  8. package/Ai Agent Workflow/powers/database-skills/engines/sqlite/POWER.md +10 -0
  9. package/Ai Agent Workflow/powers/database-skills/engines/supabase/POWER.md +10 -0
  10. package/Ai Agent Workflow/powers/database-skills/engines/vitess/POWER.md +10 -0
  11. package/Ai Agent Workflow/powers/database-skills/steering/readme.md +18 -6
  12. package/Ai Agent Workflow/skills/database-skills/LATEST_VERSIONS.md +36 -0
  13. package/Ai Agent Workflow/skills/database-skills/README.md +11 -2
  14. package/Ai Agent Workflow/skills/database-skills/SKILL.md +85 -20
  15. package/Ai Agent Workflow/skills/database-skills/skills/mongodb/SKILL.md +29 -7
  16. package/Ai Agent Workflow/skills/database-skills/skills/mongodb/references/aggregation.md +153 -0
  17. package/Ai Agent Workflow/skills/database-skills/skills/mongodb/references/modeling.md +95 -4
  18. package/Ai Agent Workflow/skills/database-skills/skills/mongodb/references/mongoose-nestjs.md +133 -4
  19. package/Ai Agent Workflow/skills/database-skills/skills/mysql/SKILL.md +33 -7
  20. package/Ai Agent Workflow/skills/database-skills/skills/mysql/references/locking-ddl.md +103 -4
  21. package/Ai Agent Workflow/skills/database-skills/skills/mysql/references/query-indexing.md +103 -4
  22. package/Ai Agent Workflow/skills/database-skills/skills/mysql/references/replication.md +142 -0
  23. package/Ai Agent Workflow/skills/database-skills/skills/neki/SKILL.md +18 -7
  24. package/Ai Agent Workflow/skills/database-skills/skills/neki/references/architecture.md +135 -4
  25. package/Ai Agent Workflow/skills/database-skills/skills/neki/references/operations.md +76 -4
  26. package/Ai Agent Workflow/skills/database-skills/skills/postgres/SKILL.md +31 -7
  27. package/Ai Agent Workflow/skills/database-skills/skills/postgres/references/connection-pooling.md +142 -0
  28. package/Ai Agent Workflow/skills/database-skills/skills/postgres/references/migrations.md +126 -0
  29. package/Ai Agent Workflow/skills/database-skills/skills/postgres/references/performance-ops.md +116 -4
  30. package/Ai Agent Workflow/skills/database-skills/skills/postgres/references/schema-indexing.md +78 -4
  31. package/Ai Agent Workflow/skills/database-skills/skills/redis/SKILL.md +28 -7
  32. package/Ai Agent Workflow/skills/database-skills/skills/redis/references/cache-patterns.md +153 -4
  33. package/Ai Agent Workflow/skills/database-skills/skills/redis/references/data-modeling.md +152 -0
  34. package/Ai Agent Workflow/skills/database-skills/skills/redis/references/operations.md +143 -4
  35. package/Ai Agent Workflow/skills/database-skills/skills/sqlite/SKILL.md +28 -7
  36. package/Ai Agent Workflow/skills/database-skills/skills/sqlite/references/local-first.md +94 -4
  37. package/Ai Agent Workflow/skills/database-skills/skills/sqlite/references/performance.md +104 -4
  38. package/Ai Agent Workflow/skills/database-skills/skills/supabase/SKILL.md +27 -7
  39. package/Ai Agent Workflow/skills/database-skills/skills/supabase/references/performance-operations.md +94 -4
  40. package/Ai Agent Workflow/skills/database-skills/skills/supabase/references/rls-auth.md +105 -4
  41. package/Ai Agent Workflow/skills/database-skills/skills/vitess/SKILL.md +27 -7
  42. package/Ai Agent Workflow/skills/database-skills/skills/vitess/references/operational-safety.md +104 -4
  43. package/Ai Agent Workflow/skills/database-skills/skills/vitess/references/sharding-routing.md +124 -4
  44. package/Ai Agent Workflow/workflows/agent-environment-setup/platforms/antigravity/agents/backend-specialist.md +1 -1
  45. package/Ai Agent Workflow/workflows/agent-environment-setup/platforms/antigravity/agents/database-architect.md +8 -1
  46. package/Ai Agent Workflow/workflows/agent-environment-setup/platforms/antigravity/agents/performance-optimizer.md +2 -0
  47. package/Ai Agent Workflow/workflows/agent-environment-setup/platforms/antigravity/workflows/database.md +11 -6
  48. package/Ai Agent Workflow/workflows/agent-environment-setup/platforms/codex/agents/backend-specialist.md +1 -1
  49. package/Ai Agent Workflow/workflows/agent-environment-setup/platforms/codex/agents/database-architect.md +8 -1
  50. package/Ai Agent Workflow/workflows/agent-environment-setup/platforms/codex/agents/performance-optimizer.md +2 -0
  51. package/Ai Agent Workflow/workflows/agent-environment-setup/platforms/codex/workflows/database.md +11 -6
  52. package/Ai Agent Workflow/workflows/agent-environment-setup/platforms/copilot/agents/backend-specialist.md +1 -1
  53. package/Ai Agent Workflow/workflows/agent-environment-setup/platforms/copilot/agents/database-architect.md +8 -1
  54. package/Ai Agent Workflow/workflows/agent-environment-setup/platforms/copilot/agents/performance-optimizer.md +2 -0
  55. package/Ai Agent Workflow/workflows/agent-environment-setup/platforms/copilot/workflows/database.md +11 -6
  56. package/package.json +1 -1
@@ -1,5 +1,79 @@
1
- # Postgres Schema and Indexing
1
+ # Postgres Schema Design and Indexing
2
2
 
3
- - Use explicit constraints for invariants.
4
- - Prefer composite indexes for equality + range access patterns.
5
- - Validate index usage via plans and statistics.
3
+ ## Schema design
4
+
5
+ - Declare `NOT NULL` on every column that should never be null; reduces planner uncertainty and storage overhead.
6
+ - Use `CHECK` constraints for domain rules (`amount > 0`, `status IN (...)`) — they are enforced transactionally and inform the planner.
7
+ - Use `FOREIGN KEY` constraints for referential integrity; add indexes on FK columns to prevent sequential scans during cascades and joins.
8
+ - Prefer `BIGINT` generated identity columns or UUIDs (v7 for sortability) as primary keys. Avoid random UUIDv4 keys on write-heavy tables: random inserts fragment the primary-key B-tree index.
9
+ - Use `TEXT` over `VARCHAR(n)` unless you need the length constraint enforced at the DB layer.
10
+ - Prefer `TIMESTAMPTZ` over `TIMESTAMP` for all time columns.
11
+
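
Taken together, the rules above might look like this for a hypothetical `orders` table (a sketch; table, column, and check names are illustrative):

```sql
CREATE TABLE orders (
    id          BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    user_id     BIGINT NOT NULL REFERENCES users (id),
    status      TEXT NOT NULL CHECK (status IN ('open', 'paid', 'cancelled')),
    amount      NUMERIC(12, 2) NOT NULL CHECK (amount > 0),
    created_at  TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- Index the FK column so joins and cascades avoid sequential scans
CREATE INDEX idx_orders_user_id ON orders (user_id);
```
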
12
+ ## Index types and when to use them
13
+
14
+ | Type | When to use |
15
+ | --- | --- |
16
+ | **B-tree** (default) | Equality, range, `ORDER BY`, `LIKE 'prefix%'` |
17
+ | **GIN** | JSONB containment (`@>`), full-text search, array overlap (`&&`) |
18
+ | **GiST** | Geometric types, range type overlap, nearest-neighbor search |
19
+ | **BRIN** | Append-mostly tables (time-series, logs) with high physical correlation |
20
+ | **Hash** | Pure equality lookups only (`=`), smaller than B-tree |
21
+
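
For instance, a GIN index for JSONB containment on a hypothetical `docs` table (illustrative names; `jsonb_path_ops` builds a smaller index but supports only the `@>` operator):

```sql
-- Supports queries like: WHERE payload @> '{"type": "invoice"}'
CREATE INDEX idx_docs_payload ON docs USING GIN (payload jsonb_path_ops);
```
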
22
+ ## Multicolumn (composite) indexes
23
+
24
+ - Column order matters: place equality predicates first, then range/sort columns.
25
+ - The planner can use a composite index for any leading prefix of its columns.
26
+ - Example: `(status, created_at)` supports `WHERE status = 'open' ORDER BY created_at`, but `WHERE created_at > ...` alone cannot use the index efficiently because it skips the leading column.
27
+
28
+ ```sql
29
+ CREATE INDEX idx_orders_status_created ON orders (status, created_at DESC);
30
+ ```
31
+
32
+ ## Partial indexes
33
+
34
+ - Index only the rows that queries actually need — dramatically smaller, faster to update.
35
+ - Use when a hot query always includes a constant predicate.
36
+
37
+ ```sql
38
+ -- Only index unpaid invoices, avoiding index bloat from millions of paid rows
39
+ CREATE INDEX idx_invoices_unpaid ON invoices (due_date) WHERE paid = false;
40
+ ```
41
+
42
+ ## Covering indexes (INCLUDE)
43
+
44
+ - Add non-predicate columns to the index to allow index-only scans (no heap fetch).
45
+ - Put predicate columns in the key; put projection-only columns in `INCLUDE`.
46
+
47
+ ```sql
48
+ CREATE INDEX idx_orders_user_covering ON orders (user_id, status)
49
+ INCLUDE (total_amount, created_at);
50
+ ```
51
+
52
+ ## BRIN for append-mostly tables
53
+
54
+ - Works by storing min/max per block range. Tiny index size, fast sequential scans.
55
+ - Best when table rows are physically written in correlation with the indexed column (e.g., `created_at` on an append-only event table).
56
+ - Useless for randomly ordered data.
57
+
58
+ ```sql
59
+ CREATE INDEX idx_events_created_brin ON events USING BRIN (created_at);
60
+ ```
61
+
62
+ ## Index maintenance rules
63
+
64
+ - Run `ANALYZE` after large data loads to refresh planner statistics.
65
+ - Detect unused indexes:
66
+ ```sql
67
+ SELECT schemaname, relname, indexrelname, idx_scan
68
+ FROM pg_stat_user_indexes
69
+ WHERE idx_scan = 0;
70
+ ```
71
+ - Drop unused indexes — they slow writes and increase vacuum cost.
72
+ - Use `CREATE INDEX CONCURRENTLY` in production to avoid table locks.
73
+
74
+ ## Sources
75
+ - PostgreSQL docs: Indexes — https://www.postgresql.org/docs/current/indexes.html
76
+ - Multicolumn: https://www.postgresql.org/docs/current/indexes-multicolumn.html
77
+ - Partial: https://www.postgresql.org/docs/current/indexes-partial.html
78
+ - Covering (INCLUDE): https://www.postgresql.org/docs/current/sql-createindex.html
79
+ - BRIN: https://www.postgresql.org/docs/current/brin.html
@@ -1,15 +1,36 @@
1
1
  ---
2
2
  name: redis
3
- description: Redis data modeling, caching strategy, latency tuning, and operational safety.
3
+ description: Redis data modeling, caching strategy, throughput/latency optimization, and operational safety.
4
4
  ---
5
5
 
6
6
  # Redis
7
7
 
8
- Load references as needed:
8
+ ## Optimization workflow
9
+
10
+ 1. Define key schema and TTL policy first.
11
+ 2. Reduce round-trips with pipelining and batching.
12
+ 3. Tune memory footprint by data structure choice.
13
+ 4. Diagnose latency with server + system context.
14
+ 5. Validate hot-key and cluster-slot distribution for scale.
15
+
16
+ ## Indexing-style patterns in Redis
17
+
18
+ - Redis is key-based; design key schema as your primary access index.
19
+ - Use sorted sets and secondary lookup structures for query-like access.
20
+ - Use `SCAN`-family commands for incremental traversal; avoid `KEYS` in production.
21
+
22
+ ## Pagination techniques
23
+
24
+ - For ordered feeds/leaderboards, paginate with sorted set score/member boundaries.
25
+ - For keyspace traversal, cursor-based `SCAN` pagination only.
26
+
27
+ ## Performance guardrails
28
+
29
+ - Keep value payloads bounded; avoid giant hot keys.
30
+ - Monitor expiry storms and eviction behavior.
31
+ - Use realistic load tests for pipeline and memory tuning.
32
+
33
+ ## References
34
+
9
35
  - `references/cache-patterns.md`
10
36
  - `references/operations.md`
11
-
12
- Key rules:
13
- - Treat Redis as a data structure server, not generic storage.
14
- - Define TTL, invalidation, and consistency strategy upfront.
15
- - Monitor memory, eviction policy, and command latency.
@@ -1,5 +1,154 @@
1
- # Redis Cache Patterns
1
+ # Redis Cache and Throughput Patterns
2
2
 
3
- - Prefer explicit key design and namespacing.
4
- - Use write-through/write-behind intentionally.
5
- - Prevent cache stampede with locking or jittered TTL.
3
+ ## Choosing the right data structure
4
+
5
+ This is the most important Redis decision. Using the wrong type causes unnecessary memory use and complexity.
6
+
7
+ | Use case | Data structure | Key pattern |
8
+ | --- | --- | --- |
9
+ | Single value / counter | `String` | `user:{id}:session` |
10
+ | Object with fields | `Hash` | `user:{id}` |
11
+ | Ordered leaderboard / timeline | `Sorted Set` (ZSET) | `leaderboard:global` |
12
+ | Unique membership / deduplication | `Set` | `online_users` |
13
+ | Message queue / feed | `List` (LPUSH/RPOP) or `Stream` | `queue:emails` |
14
+ | Feature flags per user | `Hash` or `Bitmap` | `flags:{user_id}` |
15
+ | Rate limiting | `String` + `INCR` + `EXPIRE` | `rate:{user_id}:{window}` |
16
+ | Time-series / event log | `Stream` | `events:{service}` |
17
+ | Pub/Sub fan-out | `Pub/Sub` channel | `channel:notifications` |
18
+
19
+ **Never** store serialized JSON in a `String` if you only need individual fields — use `Hash` and `HGET`/`HSET` to avoid deserializing the whole blob.
20
+
21
+ ## Key naming conventions
22
+
23
+ Consistent naming prevents key collisions and makes `SCAN`-based debugging possible.
24
+
25
+ ```
26
+ <namespace>:<entity>:<id>[:<field>]
27
+
28
+ user:42:session → session token for user 42
29
+ product:99:views → view counter for product 99
30
+ rate:192.168.1.1:1706 → rate limit bucket for IP + minute window
31
+ leaderboard:global → global ZSET leaderboard
32
+ queue:emails → email send queue (List)
33
+ ```
34
+
35
+ Rules:
36
+ - Use `:` as separator, not `.` or `/`.
37
+ - Keep keys short — key names count toward memory.
38
+ - Avoid dynamic segments that produce unbounded key space without TTL.
39
+
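
A tiny helper can enforce the convention mechanically. This is an illustrative sketch (the `redisKey` name and validation rules are assumptions, not part of any Redis client API):

```typescript
// Build keys following <namespace>:<entity>:<id>[:<field>].
// Illustrative helper, not part of any Redis client library.
function redisKey(...segments: Array<string | number>): string {
  if (segments.length < 2) {
    throw new Error('need at least a namespace and an id');
  }
  for (const segment of segments) {
    const s = String(segment);
    // Reject empty segments and the separator itself; keeps keys predictable for SCAN.
    if (s.length === 0 || s.includes(':') || /\s/.test(s)) {
      throw new Error(`invalid key segment: "${s}"`);
    }
  }
  return segments.join(':');
}

redisKey('user', 42, 'session'); // → 'user:42:session'
```
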
40
+ ## TTL strategy
41
+
42
+ Always set TTLs on cache keys. Omitting TTL = memory leak.
43
+
44
+ ```redis
45
+ SET user:42:session "token" EX 3600 # expires in 1 hour
46
+ SETEX user:42:profile 300 "{...}" # 5 minutes
47
+ EXPIRE user:42:temp 60 # set TTL on existing key
48
+
49
+ # Atomic set + expire (preferred over SETEX)
50
+ SET key value EX 300 NX # set only if not exists + TTL
51
+ ```
52
+
53
+ Check TTL on a key:
54
+ ```redis
55
+ TTL user:42:session # -1 = no TTL (danger!), -2 = key gone, N = seconds left
56
+ ```
57
+
58
+ ## Pipelining — batch commands to reduce round-trips
59
+
60
+ Each Redis command is a network round-trip. Pipeline batches commands into a single trip.
61
+
62
+ ```ts
63
+ // Node.js (ioredis) example
64
+ const pipeline = redis.pipeline();
65
+ pipeline.get('user:1:session');
66
+ pipeline.incr('user:1:views');
67
+ pipeline.expire('user:1:views', 3600);
68
+ const results = await pipeline.exec();
69
+ ```
70
+
71
+ Use pipelining for any hot path that issues 3+ Redis commands in sequence.
72
+
73
+ ## SCAN instead of KEYS in production
74
+
75
+ `KEYS pattern` blocks the Redis event loop for its entire duration — it will freeze Redis under load.
76
+
77
+ ```redis
78
+ # NEVER in production
79
+ KEYS user:*
80
+
81
+ # CORRECT: iterative cursor scan
82
+ SCAN 0 MATCH user:* COUNT 100
83
+ # Use returned cursor until it returns 0
84
+ ```
85
+
86
+ In code:
87
+ ```ts
88
+ let cursor = '0';
89
+ do {
90
+ const [newCursor, keys] = await redis.scan(cursor, 'MATCH', 'user:*', 'COUNT', 100);
91
+ cursor = newCursor;
92
+ // process keys
93
+ } while (cursor !== '0');
94
+ ```
95
+
96
+ ## Rate limiting pattern
97
+
98
+ ```ts
99
+ // Fixed-window rate limit: max 100 requests per minute per user
100
+ async function isRateLimited(userId: string): Promise<boolean> {
101
+ const key = `rate:${userId}:${Math.floor(Date.now() / 60000)}`;
102
+ const count = await redis.incr(key);
103
+ if (count === 1) await redis.expire(key, 120); // 2 min window safety margin
104
+ return count > 100;
105
+ }
106
+ ```
107
+
108
+ ## Sorted sets for leaderboards and pagination
109
+
110
+ ```redis
111
+ # Add score
112
+ ZADD leaderboard:global 1500 "user:42"
113
+
114
+ # Top 10
115
+ ZREVRANGEBYSCORE leaderboard:global +inf -inf WITHSCORES LIMIT 0 10
116
+
117
+ # Rank of a user (0-indexed)
118
+ ZREVRANK leaderboard:global "user:42"
119
+ ```
120
+
121
+ Cursor-based pagination with sorted sets:
122
+ ```redis
123
+ # Page 1: get top 20
124
+ ZREVRANGE leaderboard:global 0 19 WITHSCORES
125
+
126
+ # Page 2: skip 20
127
+ ZREVRANGE leaderboard:global 20 39 WITHSCORES
128
+ ```
129
+
130
+ ## Cache invalidation patterns
131
+
132
+ | Pattern | When to use |
133
+ | --- | --- |
134
+ | **TTL expiry** | Acceptable staleness (most cases) |
135
+ | **Write-through** | Update cache on every write — consistent but coupled |
136
+ | **Write-behind** | Write to cache; async flush to DB — fast writes, risk of loss |
137
+ | **Cache-aside** | App reads cache → miss → read DB → populate cache |
138
+ | **Tag-based invalidation** | Delete multiple related keys via a shared tag key |
139
+
140
+ Tag-based with a Set:
141
+ ```ts
142
+ // On write: register the key under its tag
143
+ await redis.sadd('tag:user:42', 'user:42:profile', 'user:42:orders');
144
+
145
+ // On invalidation: delete all tagged keys
146
+ const keys = await redis.smembers('tag:user:42');
147
+ await redis.del(...keys, 'tag:user:42');
148
+ ```
149
+
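
The cache-aside row above can be sketched as follows. `Cache` here is a minimal stand-in interface, an assumption for illustration (with ioredis you would call `get` and `set(key, value, 'EX', ttl)` directly):

```typescript
// Cache-aside: read cache first; on miss, load from the source of truth
// and populate the cache with a TTL.
interface Cache {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
}

async function cacheAside<T>(
  cache: Cache,
  key: string,
  ttlSeconds: number,
  load: () => Promise<T>,
): Promise<T> {
  const cached = await cache.get(key);
  if (cached !== null) return JSON.parse(cached) as T; // hit: skip the DB
  const value = await load();                          // miss: query the DB
  await cache.set(key, JSON.stringify(value), ttlSeconds);
  return value;
}
```

Pair this with a TTL matching acceptable staleness; for hot keys, add TTL jitter or a short lock around `load` to prevent stampedes.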
150
+ ## Sources
151
+ - Pipelining: https://redis.io/docs/latest/develop/using-commands/pipelining/
152
+ - SCAN command: https://redis.io/docs/latest/commands/scan/
153
+ - Sorted sets: https://redis.io/docs/latest/develop/data-types/sorted-sets/
154
+ - Data types overview: https://redis.io/docs/latest/develop/data-types/
@@ -0,0 +1,152 @@
1
+ # Redis — Data Modeling and Key Design
2
+
3
+ ## The golden rule
4
+
5
+ In Redis, the **data structure determines your access pattern** — pick the structure based on what operations you need, not what the data looks like.
6
+
7
+ ## Data structure decision guide
8
+
9
+ ### String — simplest value
10
+
11
+ Use for: counters, single scalar values, serialized blobs, feature flags.
12
+
13
+ ```redis
14
+ SET user:42:points 1500
15
+ INCR user:42:points # atomic increment
16
+ INCRBY user:42:points 50
17
+ GET user:42:points
18
+
19
+ # Serialized JSON (avoid if you only need individual fields — use Hash instead)
20
+ SET product:99 '{"name":"Widget","price":9.99}'
21
+ ```
22
+
23
+ ### Hash — object with fields
24
+
25
+ Use for: user profiles, session data, config objects with named fields. Avoids deserializing a full JSON blob to update one field.
26
+
27
+ ```redis
28
+ HSET user:42 name "Alice" email "alice@example.com" plan "pro"
29
+ HGET user:42 email
30
+ HMGET user:42 name plan # fetch specific fields
31
+ HINCRBY user:42 login_count 1 # atomic field increment
32
+ HDEL user:42 temp_field # remove one field
33
+ HGETALL user:42 # get all fields (watch size)
34
+ ```
35
+
36
+ Memory note: small hashes (by default up to 128 fields with short values) use a compact listpack encoding (ziplist before Redis 7.0), much cheaper than separate String keys.
37
+
38
+ ### List — ordered sequence / queue
39
+
40
+ Use for: message queues, activity feeds (append + trim), task lists.
41
+
42
+ ```redis
43
+ LPUSH queue:emails '{"to":"a@b.com"}' # push to head
44
+ RPUSH queue:emails '{"to":"c@d.com"}' # push to tail
45
+ LPOP queue:emails # dequeue from head (FIFO)
46
+ RPOP queue:emails # dequeue from tail (LIFO)
47
+ LLEN queue:emails # current length
48
+ LTRIM feed:user:42 0 99 # keep only last 100 items
49
+ ```
50
+
51
+ For producer/consumer queues in production prefer **Streams** (more features, consumer groups, acknowledgment).
52
+
53
+ ### Set — unordered unique values
54
+
55
+ Use for: membership checks, unique visitors, tagging, deduplication.
56
+
57
+ ```redis
58
+ SADD online_users user:42 user:99
59
+ SISMEMBER online_users user:42 # O(1) membership check
60
+ SMEMBERS online_users # all members (avoid on large sets)
61
+ SCARD online_users # count
62
+ SREM online_users user:42 # remove
63
+
64
+ # Set operations (great for permission/feature checks)
65
+ SUNION editors:org:1 editors:org:2 # union
66
+ SINTER subscribers premium_users # intersection
67
+ ```
68
+
69
+ ### Sorted Set (ZSET) — ordered by score
70
+
71
+ Use for: leaderboards, rate limiting, scheduled jobs, range queries, time-series indexes.
72
+
73
+ ```redis
74
+ ZADD leaderboard:global 1500 "user:42" # score = 1500
75
+ ZINCRBY leaderboard:global 100 "user:42" # atomic score increment
76
+ ZREVRANK leaderboard:global "user:42" # rank (0-indexed, desc)
77
+ ZREVRANGE leaderboard:global 0 9 WITHSCORES # top 10 with scores
78
+
79
+ # Range by score (e.g., all jobs due in next 60s)
80
+ ZRANGEBYSCORE scheduled_jobs 0 1706000060 WITHSCORES LIMIT 0 10
81
+ ZREM scheduled_jobs "job:123" # remove after processing
82
+ ```
83
+
84
+ ### Stream — append-only log with consumer groups
85
+
86
+ Use for: event sourcing, reliable message delivery, audit logs, notifications.
87
+
88
+ ```redis
89
+ XADD events:orders * type order_created orderId 99 userId 42 # auto-ID
90
+ XLEN events:orders
91
+ XRANGE events:orders - + # all events
92
+ XRANGE events:orders 1706000000000-0 + # from timestamp
93
+
94
+ # Consumer group for reliable delivery
95
+ XGROUP CREATE events:orders workers $ MKSTREAM
96
+ XREADGROUP GROUP workers consumer1 COUNT 10 STREAMS events:orders > # read new
97
+ XACK events:orders workers <message-id> # acknowledge after processing
98
+ ```
99
+
100
+ ## Key naming
101
+
102
+ ```
103
+ <entity>:<id>:<field>
104
+
105
+ user:42:points String — points counter
106
+ user:42 Hash — user profile object
107
+ leaderboard:global ZSET — global score board
108
+ queue:emails List — email send queue
109
+ online_users Set — currently online user IDs
110
+ events:orders Stream — order event log
111
+ rate:user:42:202501 String — rate limit bucket
112
+ session:abc123 String/Hash — session data
113
+ ```
114
+
115
+ Rules:
116
+ - Use `:` as separator.
117
+ - Keep namespaces consistent across the codebase.
118
+ - Include an entity type so `SCAN type:*` works for debugging.
119
+ - Include a time bucket in rate limit keys so they auto-expire when the window passes.
120
+
121
+ ## TTL strategy
122
+
123
+ | Key type | TTL strategy |
124
+ | --- | --- |
125
+ | Sessions | Explicit TTL on set, slide it on read (`EXPIRE key 3600`) |
126
+ | Cache keys | Fixed TTL matching acceptable staleness |
127
+ | Rate limit buckets | 2× the window duration |
128
+ | Leaderboards | No TTL (explicit reset with `DEL`) |
129
+ | Streams | Use `MAXLEN` to cap length, or `EXPIRE` on the whole stream |
130
+
131
+ ```redis
132
+ # Sliding session TTL
133
+ SET session:abc123 "{...}" EX 3600
134
+ # On each read, reset the TTL:
135
+ EXPIRE session:abc123 3600
136
+ ```
137
+
138
+ ## Avoid these patterns
139
+
140
+ | Antipattern | Problem | Fix |
141
+ | --- | --- | --- |
142
+ | Storing JSON in String when you need individual fields | Full deser on every access | Use Hash |
143
+ | Unbounded List/Set with no trim | Memory growth without bound | `LTRIM`, `SREM`, or `MAXLEN` |
144
+ | `KEYS *` in production | Blocks event loop | Use `SCAN` |
145
+ | No TTL on cache keys | Memory leak | Always set TTL |
146
+ | Huge Hash (`HGETALL`) on hot path | Transfers more data than needed | Project with `HMGET` |
147
+ | Sequences as String with `INCR` shared globally | Bottleneck at high QPS | Shard or batch |
148
+
149
+ ## Sources
150
+ - Redis data types: https://redis.io/docs/latest/develop/data-types/
151
+ - Streams: https://redis.io/docs/latest/develop/data-types/streams/
152
+ - Sorted sets: https://redis.io/docs/latest/develop/data-types/sorted-sets/
@@ -1,5 +1,144 @@
1
- # Redis Operations
1
+ # Redis Operations, Memory, and Latency
2
2
 
3
- - Set maxmemory and eviction policy intentionally.
4
- - Monitor keyspace size and slowlog.
5
- - Validate persistence/replication settings for data risk profile.
3
+ ## Memory management
4
+
5
+ Redis stores all data in memory. Running out of memory is the #1 operational failure mode.
6
+
7
+ ### Monitor memory
8
+
9
+ ```redis
10
+ INFO memory
11
+ # Key fields:
12
+ # used_memory_human — actual data memory
13
+ # used_memory_rss_human — RSS from OS (includes fragmentation)
14
+ # mem_fragmentation_ratio — rss / used. >1.5 = high fragmentation
15
+ # maxmemory — configured limit (0 = unlimited — dangerous)
16
+ # maxmemory_human — human readable limit
17
+ ```
18
+
19
+ Always set `maxmemory`:
20
+ ```redis
21
+ CONFIG SET maxmemory 2gb
22
+ ```
23
+
24
+ ### Eviction policies
25
+
26
+ When `maxmemory` is reached, Redis uses the configured eviction policy:
27
+
28
+ | Policy | Behavior |
29
+ | --- | --- |
30
+ | `noeviction` | Returns error on writes — safe for primary data stores |
31
+ | `allkeys-lru` | Evicts least recently used keys — good for caches |
32
+ | `volatile-lru` | Evicts LRU keys that have a TTL set |
33
+ | `allkeys-lfu` | Evicts least frequently used — better than LRU for skewed access |
34
+ | `volatile-ttl` | Evicts keys with shortest remaining TTL first |
35
+ | `allkeys-random` | Random eviction — rarely useful |
36
+
37
+ For pure caches: `allkeys-lru` or `allkeys-lfu`.
38
+ For mixed primary + cache data: `volatile-lru` (only evicts keys with TTL).
39
+
40
+ ```redis
41
+ CONFIG SET maxmemory-policy allkeys-lru
42
+ ```
43
+
44
+ ### Handle fragmentation
45
+
46
+ When `mem_fragmentation_ratio > 1.5`, trigger active defragmentation (Redis 4.0+):
47
+ ```redis
48
+ CONFIG SET activedefrag yes
49
+ CONFIG SET active-defrag-ignore-bytes 100mb
50
+ CONFIG SET active-defrag-threshold-lower 10
51
+ ```
52
+
53
+ ## Latency diagnostics
54
+
55
+ Redis should respond in microseconds. If you're seeing millisecond latency, investigate:
56
+
57
+ ```redis
58
+ # Check for slow commands (logged automatically)
59
+ SLOWLOG GET 10
60
+ SLOWLOG LEN
61
+ SLOWLOG RESET
62
+
63
+ # Default slow threshold is 10ms — lower for tighter monitoring:
64
+ CONFIG SET slowlog-log-slower-than 1000 # microseconds = 1ms
65
+
66
+ # Latency history (built-in latency monitoring)
67
+ LATENCY LATEST
68
+ LATENCY HISTORY event_name
69
+ LATENCY RESET
70
+ ```
71
+
72
+ ### Common latency causes
73
+
74
+ | Cause | Symptom | Fix |
75
+ | --- | --- | --- |
76
+ | `KEYS *` in production | Periodic spikes | Replace with `SCAN` |
77
+ | Large key values (MB-range) | Slow reads/writes | Chunk data or use a different store |
78
+ | AOF fsync on every write | Consistent high latency | Switch to `fsync everysec` |
79
+ | Memory pressure / eviction | Increasing latency trend | Add memory or tune eviction |
80
+ | Blocking commands (`BLPOP`, `BRPOP`) | Client connection stalls; server keeps serving others | Use finite timeouts |
81
+ | Fork for RDB/AOF rewrite | Latency spike every N minutes | Schedule rewrites in off-peak windows |
82
+
83
+ ## Persistence configuration
84
+
85
+ Choose based on durability requirements:
86
+
87
+ | Config | Durability | Performance |
88
+ | --- | --- | --- |
89
+ | No persistence | None (cache only) | Fastest |
90
+ | RDB snapshots only | Up to snapshot interval | Fast |
91
+ | AOF `everysec` | Up to ~1s of data loss | Good |
92
+ | AOF `always` | No data loss | Slowest — 1 fsync per write |
93
+ | RDB + AOF | Best of both | Moderate |
94
+
95
+ For caches where losing data on restart is acceptable, disable persistence:
96
+ ```redis
97
+ CONFIG SET save "" # disable RDB
98
+ CONFIG SET appendonly no # disable AOF
99
+ ```
100
+
101
+ ## Monitor key space
102
+
103
+ ```redis
104
+ # Count keys per database
105
+ INFO keyspace
106
+
107
+ # Sample key patterns (does not block)
108
+ redis-cli --scan --pattern 'user:*' | head -20
109
+
110
+ # Key size distribution — pick a sample
111
+ redis-cli --bigkeys # finds largest keys by type
112
+ ```
113
+
114
+ ## Connection limits
115
+
116
+ ```redis
117
+ INFO clients
118
+ # connected_clients — current connections
119
+ # maxclients — configured limit (default 10000)
120
+ ```
121
+
122
+ Redis is single-threaded for command execution. High connection counts add overhead via:
123
+ - Select/poll overhead on the socket set.
124
+ - Memory per connection (~20KB).
125
+
126
+ Use a connection pool in your application. Most Redis clients (ioredis, redis-py) pool by default. Check pool size matches your concurrency model.
127
+
128
+ ## Key expiry and eviction monitoring
129
+
130
+ ```redis
131
+ INFO stats
132
+ # expired_keys — keys removed by TTL expiry since start
133
+ # evicted_keys — keys removed by maxmemory policy
134
+ # keyspace_hits — cache hit count
135
+ # keyspace_misses — cache miss count
136
+
137
+ # Hit rate = hits / (hits + misses). Below 80% suggests key design issues.
138
+ ```
139
+
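
As a sketch, the hit-rate arithmetic can be pulled straight out of `INFO stats` output. The parsing helper below is illustrative (the field names are the real `INFO` fields; the sample string is made up):

```typescript
// Compute cache hit rate from the text returned by `INFO stats`.
function hitRate(infoStats: string): number {
  const read = (field: string): number => {
    const match = infoStats.match(new RegExp(`^${field}:(\\d+)`, 'm'));
    if (!match) throw new Error(`field not found: ${field}`);
    return Number(match[1]);
  };
  const hits = read('keyspace_hits');
  const misses = read('keyspace_misses');
  const total = hits + misses;
  return total === 0 ? 1 : hits / total; // no traffic counts as a perfect rate
}

hitRate('keyspace_hits:900\r\nkeyspace_misses:100\r\n'); // → 0.9
```
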
140
+ ## Sources
141
+ - Memory optimization: https://redis.io/docs/latest/operate/oss_and_stack/management/optimization/memory-optimization/
142
+ - Latency optimization: https://redis.io/docs/latest/operate/oss_and_stack/management/optimization/latency/
143
+ - Persistence options: https://redis.io/docs/latest/operate/oss_and_stack/management/persistence/
144
+ - SLOWLOG: https://redis.io/docs/latest/commands/slowlog/
@@ -1,15 +1,36 @@
1
1
  ---
2
2
  name: sqlite
3
- description: SQLite local/edge data strategy, schema design, indexing, and WAL tuning.
3
+ description: SQLite local/edge data strategy, schema/index design, query planning, and WAL tuning.
4
4
  ---
5
5
 
6
6
  # SQLite
7
7
 
8
- Load references as needed:
8
+ ## Optimization workflow
9
+
10
+ 1. Inspect plan with `EXPLAIN QUERY PLAN`.
11
+ 2. Add multicolumn or covering indexes for hot read paths.
12
+ 3. Use keyset-style pagination for large datasets.
13
+ 4. Tune WAL/checkpoint behavior for write-heavy workloads.
14
+ 5. Re-check plans after schema/data-distribution changes.
15
+
16
+ ## Indexing techniques
17
+
18
+ - Composite indexes for combined filter+sort paths.
19
+ - Covering indexes when read latency is critical.
20
+ - Keep indexes minimal on write-heavy local stores.
21
+
22
+ ## Pagination techniques
23
+
24
+ - Offset is acceptable for small tables and shallow lists.
25
+ - For large traversals, use deterministic keyset pagination.
26
+
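
A keyset page over a hypothetical `events` table might look like this (a sketch; SQLite supports the row-value comparison since 3.15):

```sql
-- First page
SELECT id, created_at, payload FROM events
ORDER BY created_at DESC, id DESC
LIMIT 20;

-- Next page: pass the last row's (created_at, id) back as the cursor
SELECT id, created_at, payload FROM events
WHERE (created_at, id) < (:last_created_at, :last_id)
ORDER BY created_at DESC, id DESC
LIMIT 20;
```
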
27
+ ## Operational guardrails
28
+
29
+ - Batch writes in explicit transactions.
30
+ - Use WAL mode for mixed read/write concurrency.
31
+ - Validate checkpoint strategy for mobile/edge IO limits.
32
+
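
A minimal sketch of the guardrails above (table name is illustrative; `synchronous = NORMAL` is a common pairing with WAL, trading fsync-per-commit for fsync-at-checkpoint):

```sql
PRAGMA journal_mode = WAL;    -- readers no longer block the writer
PRAGMA synchronous = NORMAL;

BEGIN;                        -- batch many writes into one commit
INSERT INTO events (payload) VALUES ('a');
INSERT INTO events (payload) VALUES ('b');
COMMIT;
```
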
33
+ ## References
34
+
9
35
  - `references/local-first.md`
10
36
  - `references/performance.md`
11
-
12
- Key rules:
13
- - Use WAL mode for mixed read/write workloads.
14
- - Keep write transactions short.
15
- - Plan upgrade path if concurrency demands exceed SQLite limits.