@abtnode/db-cache 1.16.45-beta-20250609-025419-7fd1f86c

package/LICENSE ADDED
@@ -0,0 +1,13 @@
+ Copyright 2018-2025 ArcBlock
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
package/README.md ADDED
@@ -0,0 +1,170 @@
+ # DB Cache
+
+ An LRU cache backed by Redis or SQLite3, safe for multi-threaded use.
+
+ Convention: the DB Cache must never be the single source of truth. Business code should assume that cached data can be evicted at any time.
+ There is no limit on the number of cache entries: for Redis, we configure a maximum memory and automatic eviction of old keys once that limit is reached; SQLite lives on disk, so there is no need to cap it either.
+
+ Recommended Redis startup configuration:
+
+ - make sure Redis is reachable only from the internal network
+ - no persistence
+ - use at most 512 MB of memory
+ - when memory is full, evict across all keys with LRU (least recently used)
+
+ ```
+ docker run -d \
+   --name db-cache-redis \
+   -p 127.0.0.1:40409:6379 \
+   redis:8.0.2 \
+   redis-server \
+   --save "" \
+   --appendonly no \
+   --maxmemory 512mb \
+   --maxmemory-policy allkeys-lru
+ ```
+
+ ## Usage
+
+ ```shell
+ yarn add @abtnode/db-cache
+ ```
+
+ Then:
+
+ ```javascript
+ const { DBCache } = require('@abtnode/db-cache');
+
+ // The config factory is only called on first use
+ const dbCache = new DBCache(() => ({
+   prefix: 'the prefix key',
+   ttl: 60 * 1000,
+   sqlitePath: 'test.db',
+   redisURL: process.env.REDIS_URL,
+ }));
+
+ await dbCache.set('key', { foo: 'bar' });
+ await dbCache.set('key', { foo: 'bar' }, { ttl: 1000 * 60 * 10 });
+ await dbCache.get('key');
+ await dbCache.delete('key');
+ await dbCache.has('key');
+ ```
+
+ ## Using hash set/get
+
+ ```javascript
+ const cache = new DBCache(() => ({
+   prefix: 'the prefix key',
+   ttl: 5 * 1000,
+   sqlitePath: 'test.db',
+   redisURL: process.env.REDIS_URL,
+ }));
+
+ await cache.groupSet('group-key', 'sub-key', { a: 'b' });
+ const data = await cache.groupGet('group-key', 'sub-key');
+ await cache.groupDel('group-key', 'sub-key');
+ await cache.del('group-key'); // delete every key in the group
+ ```
+
+ ## Using locks
+
+ ```javascript
+ const lock = new DBCache(() => ({
+   prefix: 'the prefix key',
+   ttl: 5 * 1000,
+   sqlitePath: 'test.db',
+   redisURL: process.env.REDIS_URL,
+ }));
+
+ await lock.acquire('key name');
+ // do something
+ await lock.releaseLock('key name');
+
+ // or simply wait: the lock is released automatically once the ttl expires
+ ```
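+
+ To make sure the lock is released even when the critical section throws, it can help to wrap the work in `try/finally`. This is only a usage sketch built on the `acquire`/`releaseLock` calls shown above, not an additional API:
+
+ ```javascript
+ await lock.acquire('key name');
+ try {
+   // do something that may throw
+ } finally {
+   // always give the lock back instead of waiting for the ttl to expire
+   await lock.releaseLock('key name');
+ }
+ ```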
+
+ ## SQLite performance
+
+ With the default settings, benchmark throughput is very low.
+
+ The default journal mode is safe under multi-threaded tests, but concurrent access is very slow, and dropping all safety guarantees to regain speed is not an acceptable option.
+
+ Using WAL (the approach we settled on):
+
+ ```
+ PRAGMA journal_mode = WAL;          -- write-ahead log: readers and writers run in parallel
+ PRAGMA synchronous = OFF;           -- skip the per-write fsync in favor of speed
+ PRAGMA busy_timeout = 10000;        -- wait up to 10 seconds on lock contention
+ PRAGMA wal_autocheckpoint = 2000;   -- checkpoint automatically once the WAL reaches 2000 pages
+ ```
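+
+ For reference, the same PRAGMAs can be applied from Node. The snippet below is only a sketch and assumes the `better-sqlite3` driver; the README does not say which SQLite binding the package actually uses:
+
+ ```javascript
+ const Database = require('better-sqlite3'); // assumed driver, not confirmed by this package
+
+ const db = new Database('cache.db');
+ db.pragma('journal_mode = WAL');        // write-ahead log: concurrent readers with one writer
+ db.pragma('synchronous = OFF');         // no fsync per write, trading durability for speed
+ db.pragma('busy_timeout = 10000');      // wait up to 10s when the database is locked
+ db.pragma('wal_autocheckpoint = 2000'); // checkpoint once the WAL grows to 2000 pages
+ ```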
+
+ Benchmark results:
+
+ ```
+ === SQLite Backend Benchmark ===
+ SET 50000 ops in 5.36s -> 9325 ops/sec
+ GET 50000 ops in 5.83s -> 8582 ops/sec
+
+ === Redis Backend Benchmark ===
+ SET 50000 ops in 0.14s -> 370370 ops/sec
+ GET 50000 ops in 0.14s -> 349650 ops/sec
+ ```
+
+ Memory / disk overhead:
+
+ ```
+ 100,000 records, sample record:
+ {
+   idx: i,
+   v: `value-${i}`,
+   other: `other-${i}`,
+   other2: `other2-${i}`,
+   other3: `other3-${i}`,
+   other4: `other4-${i}`,
+   other5: `other5-${i}`,
+   other6: `other6-${i}`,
+   other7: `other7-${i}`,
+   other8: `other8-${i}`,
+   other9: `other9-${i}`,
+   other10: `other10-${i}`,
+ }
+
+ === Redis Memory Info ===
+ # Memory
+ used_memory_rss_human:57.41M
+
+ SQLite file size: 31.63 MB
+ ```
+
+ The concurrency tests are safe; in practice this depends on the busy_timeout duration and the number of concurrent clients.
+
+ Conclusion: Redis is roughly 20-40x faster than SQLite (for example, 370370 vs 9325 SET ops/sec above), but SQLite will not become the QPS bottleneck for the business. When the RPS requirement is not that high, SQLite trades disk for memory, which is a pretty good deal.
+
+ #### Differences in the group implementation
+
+ In Redis, group operations use the hash data structure, so the performance difference from plain set/get is small.
+
+ In SQLite, a group lookup effectively queries on two keys, and performance is still quite good (see the sketch below the numbers).
+
+ Group set/get performance:
+
+ ```
+ === SQLite Backend Benchmark ===
+ SET 50000 ops in 5.75s -> 8703 ops/sec
+ GET 50000 ops in 3.11s -> 16077 ops/sec
+
+ === Redis Backend Benchmark ===
+ SET 50000 ops in 0.22s -> 230415 ops/sec
+ GET 50000 ops in 0.08s -> 649351 ops/sec
+ ```
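+
+ To illustrate the two-key idea, here is a minimal sketch of how such a lookup could be modeled in SQLite. The table name, column names, and the `better-sqlite3` driver are all assumptions for illustration, not the package's actual schema:
+
+ ```javascript
+ const Database = require('better-sqlite3'); // assumed driver
+
+ const db = new Database('group-sketch.db');
+ db.exec(`
+   CREATE TABLE IF NOT EXISTS cache_group (
+     group_key TEXT NOT NULL,
+     sub_key   TEXT NOT NULL,
+     value     TEXT NOT NULL,
+     PRIMARY KEY (group_key, sub_key)
+   )
+ `);
+
+ // groupSet: upsert one (group_key, sub_key) row
+ db.prepare(
+   `INSERT INTO cache_group (group_key, sub_key, value) VALUES (?, ?, ?)
+    ON CONFLICT (group_key, sub_key) DO UPDATE SET value = excluded.value`
+ ).run('group-key', 'sub-key', JSON.stringify({ a: 'b' }));
+
+ // groupGet: look up by both keys
+ const row = db.prepare(
+   'SELECT value FROM cache_group WHERE group_key = ? AND sub_key = ?'
+ ).get('group-key', 'sub-key');
+ console.log(row && JSON.parse(row.value));
+
+ // deleting the whole group only needs the first key
+ db.prepare('DELETE FROM cache_group WHERE group_key = ?').run('group-key');
+ ```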
+
+ #### Testing against Redis in CI
+
+ If the TEST_REDIS_URL environment variable is set, the tests run in Redis mode; otherwise they run in SQLite mode.
+
+ Example:
+
+ ```
+ TEST_REDIS_URL="redis://:the_password@127.0.0.1:6379"
+ ```
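+
+ A sketch of how a test setup could pick the backend from that variable. It reuses the constructor shape shown earlier and assumes the cache falls back to `sqlitePath` when `redisURL` is empty, which is how the examples above read but is not stated explicitly:
+
+ ```javascript
+ const { DBCache } = require('@abtnode/db-cache');
+
+ // Redis mode when TEST_REDIS_URL is set, SQLite mode otherwise (assumption).
+ const cache = new DBCache(() => ({
+   prefix: 'test',
+   ttl: 5 * 1000,
+   sqlitePath: 'test.db',
+   redisURL: process.env.TEST_REDIS_URL,
+ }));
+ ```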