@isdk/proxy 0.1.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.cn.md +138 -0
- package/README.md +129 -0
- package/dist/index.d.mts +213 -0
- package/dist/index.d.ts +213 -0
- package/dist/index.js +1 -0
- package/dist/index.mjs +1 -0
- package/docs/README.md +133 -0
- package/docs/classes/SmartCache.md +148 -0
- package/docs/functions/createCachedFetch.md +52 -0
- package/docs/functions/createFetchWithCache.md +52 -0
- package/docs/functions/extractData.md +39 -0
- package/docs/functions/fetchWithCache.md +39 -0
- package/docs/functions/generateCacheKey.md +27 -0
- package/docs/functions/isAllowed.md +38 -0
- package/docs/globals.md +29 -0
- package/docs/interfaces/CacheEntry.md +123 -0
- package/docs/interfaces/CacheMetadata.md +88 -0
- package/docs/interfaces/FetchWithCacheContext.md +161 -0
- package/docs/interfaces/FetchWithCacheOptions.md +103 -0
- package/docs/interfaces/KeyFilterConfig.md +33 -0
- package/docs/interfaces/ProxyConfig.md +41 -0
- package/docs/interfaces/SiteCacheConfig.md +61 -0
- package/docs/interfaces/SmartCacheOptions.md +41 -0
- package/package.json +101 -0
package/README.cn.md
ADDED
@@ -0,0 +1,138 @@
# @isdk/proxy

A high-performance, developer-friendly caching engine for Node.js, designed to tame the complexity of managing HTTP response caches in data-intensive applications.

## Why @isdk/proxy?

In scenarios such as **high-concurrency API proxies**, **web scrapers**, or **microservices**, cache management usually forces a compromise between speed and capacity. `@isdk/proxy` resolves this pain point with its distinctive **hybrid multi-tier architecture**:

- **Solves the memory vs. capacity trade-off**: small, hot responses live in memory (L1) for nanosecond access, while large response bodies are offloaded to persistent disk (L2). More importantly, it implements **metadata residency**: no matter how large a body is, the decision-making data (headers, status, policy) always stays in memory, so cache-validity evaluation completes instantly.
- **Prevents cache stampede**: when a hot entry expires, the built-in request-coalescing mechanism guarantees that only one network request is in flight at a time, protecting the upstream server from a sudden surge of concurrent misses.
- **Fully decoupled and environment-neutral**: built on the Web-standard `Request`/`Response` objects, your caching logic is no longer bound to a specific HTTP library (such as MSW, Axios, Fetch, or Crawlee). One set of logic, runs everywhere.

## Key Features

- **🚀 Hybrid Multi-tier Cache**: L1 (LRU memory) for extreme speed, L2 (content-addressable disk via `cacache`) for persistent storage.
- **🌊 Streaming-Native Delivery**: the internal pipeline is fully stream-based, naturally preventing OOM when proxying large files.
- **🧠 Intelligent Metadata Residency**: metadata (headers, status, policy) stays in memory regardless of file size, ensuring nanosecond cache-policy decisions.
- **🔄 Stale-While-Revalidate (SWR)**: returns expired data immediately while silently refreshing the cache in the background, for "zero-wait" responses.
- **🛡️ Request Coalescing (Anti-Stampede)**: when many requests hit the same resource concurrently, a global Map queues and merges them so that only one origin request is issued, eliminating cache stampedes.
- **🚑 Strong Offline Resilience**: when the backend is down, stale cache is served automatically (`staleIfError`); you can even ignore the `no-store` directive and force-cache everything (`forceCache`).
- **🕵️ Transparent Cache Status**: an `x-proxy-cache` response header (`HIT`, `STALE`, `MISS`, `STALE_IF_ERROR`) is injected automatically, which greatly simplifies debugging and monitoring.
- **🌐 Environment-Neutral**: works in any environment that supports the Web-standard `Request`/`Response` APIs.

## Installation

```bash
pnpm add @isdk/proxy
```

## Quick Start: The Core Orchestrator

The recommended entry point is the `createCachedFetch` factory (built on the core `fetchWithCache` orchestrator), which can wrap any HTTP request logic.

```typescript
import { SmartCache, createCachedFetch } from '@isdk/proxy';

// 1. Initialize the hybrid cache instance
const cache = new SmartCache({
  storagePath: './.cache',
  maxMemorySize: 1024 * 1024 // 1MB memory threshold
});

// 2. Create a pre-configured cached fetcher (stampede protection is built in)
const myFetch = createCachedFetch({
  cache,
  config: {
    staleIfError: true,
    forceCache: false // set to true to ignore no-store and force-cache everything, for offline apps
  },
  backgroundUpdate: true // enable SWR (silent background refresh after expiry)
});

// 3. Use it anywhere in your app!
const request = new Request('https://api.example.com/data');
const response = await myFetch(request, (req) => fetch(req)); // pass any function returning Promise<Response>

console.log(response.headers.get('x-proxy-cache')); // "MISS", "HIT", "STALE", or "STALE_IF_ERROR"
const data = await response.json();
```

## Adapters

`@isdk/proxy` aims to be a pure, environment-agnostic core. While the core library stays pure, you can easily integrate or find adapters for specific environments:

- **MSW Adapter**: see `@isdk/proxy-msw` (a separate package) to use this caching engine as an MSW interceptor.
- **Axios Adapter**: easily implemented by converting an Axios config into a Web-standard `Request`.
- **Crawlee Adapter**: can be integrated into a crawler's lifecycle to avoid redundant fetches.

## Architecture

### 1. Hybrid Storage Strategy

- **L1 (Memory)**: powered by `@cacheable/memory`. For small files (below `maxMemorySize`), stores both the metadata and the body.
- **L2 (Disk)**: powered by `cacache` (content-addressable storage). Handles persistence and large payloads.
- **Performance optimization**: for large files the body lives only on disk, but its **metadata** still resides in memory, so the system can instantly decide whether an entry is expired no matter how large the file on disk is.

### 2. Request Collapsing

When multiple concurrent requests hit a missing or expired entry at the same time, `@isdk/proxy` tracks the in-flight promises in an in-flight state Map so that only **one** real network request is issued. Depending on configuration, the other requests either wait for the fresh data or immediately receive the existing stale data.

### 3. SWR and Background Updates

When an entry is expired but still inside the SWR window:

1. `fetchWithCache` immediately constructs and returns the stale `Response`.
2. An asynchronous network request is started.
3. Once it completes, both the L1 and L2 caches are updated automatically.
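The three-step SWR flow above can be sketched as a tiny self-contained helper (the `swrGet` name, the `Entry` shape, and the fixed TTL/SWR windows are illustrative assumptions, not the package's internals):

```typescript
// Hypothetical sketch of the SWR decision flow; not the package's real code.
interface Entry<T> {
  value: T;
  expiresAt: number; // fresh until this time
  swrUntil: number;  // stale-but-servable until this time
}

async function swrGet<T>(
  entry: Entry<T> | undefined,
  now: number,
  refresh: () => Promise<T>,
  store: (e: Entry<T>) => void,
): Promise<{ value: T; status: 'HIT' | 'STALE' | 'MISS' }> {
  if (entry && now < entry.expiresAt) {
    return { value: entry.value, status: 'HIT' };
  }
  if (entry && now < entry.swrUntil) {
    // Step 1: return the stale value immediately.
    // Step 2: start the refresh in the background (deliberately not awaited).
    // Step 3: the store callback updates the cache once the refresh lands.
    void refresh().then((v) =>
      store({ value: v, expiresAt: now + 60_000, swrUntil: now + 120_000 }),
    );
    return { value: entry.value, status: 'STALE' };
  }
  // Outside the SWR window: a blocking fetch, i.e. a MISS.
  const v = await refresh();
  store({ value: v, expiresAt: now + 60_000, swrUntil: now + 120_000 });
  return { value: v, status: 'MISS' };
}
```

The real implementation derives freshness from `http-cache-semantics` policies rather than fixed windows, and updates both the L1 and L2 tiers.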
## API Reference

### `createCachedFetch(options)` (Highly Recommended)

A high-level factory function for end users. It maintains the concurrency tracker in an internal closure and produces a ready-to-use fetch wrapper that can never suffer a cache stampede.

- **`options.cache`**: a `SmartCache` instance.
- **`options.config`**: the global cache configuration object (`SiteCacheConfig`):
  - `staleIfError` (boolean): when a network request fails, serve the locally stored stale entry to preserve availability.
  - `forceCache` (boolean): ignore the origin's `Cache-Control: no-store` directive and force everything into the cache. Suited to extremely weak networks or offline-first applications.
- **`options.backgroundUpdate`**: set to `true` to enable SWR (Stale-While-Revalidate) behavior.
- **`options.activeCacheWrites`**: optional. A `Map<string, Promise<void>>` shared across multiple `createCachedFetch` instances for application-level stampede protection.
- **Returns**: a reusable `(request: Request, fetcher: (req: Request) => Promise<Response>) => Promise<Response>` wrapper.

### `createFetchWithCache(activeCacheWrites?)`

A single-responsibility higher-order function dedicated to encapsulating and isolating the `activeCacheWrites` concurrency tracker.
It returns a `fetchWithCache` variant bound to the Map inside its closure. If you are building middleware but don't want the top-level `createCachedFetch` factory, use it to avoid maintaining the tracker by hand.

- **`activeCacheWrites`**: optional. An external `Map<string, Promise<void>>` used as the concurrency tracker. If omitted, a new internal Map is created. Sharing one Map across instances enables application-wide request coalescing.
- **Returns**: `(request: Request, fetcher: (req: Request) => Promise<Response>, options: Omit<FetchWithCacheOptions, 'activeCacheWrites'>) => Promise<Response>`

### `fetchWithCache(request, fetcher, options)`

The low-level core cache orchestrator. Call it directly if you are building higher-level plugins or need special lifecycle control.

- **`request`**: a Web-standard `Request` object.
- **`fetcher`**: the callback that performs the real network request, `(req: Request) => Promise<Response>`.
- **`options.activeCacheWrites`**: a `Map<string, Promise<void>>` that must be supplied **from outside**, shared across concurrent `fetchWithCache` calls to coalesce requests. If you'd rather not maintain it yourself, use `createCachedFetch` or `createFetchWithCache`.

### `SmartCache`

The core engine that manages the multi-tier hybrid storage.

- `new SmartCache(options)`
  - **`options.maxMemorySize`**: the size threshold (in bytes) for a body to enter memory (L1); larger files are streamed straight to disk (default `1048576`, i.e. 1MB).
  - **`options.storagePath`**: physical storage path of the L2 disk cache (`cacache`); defaults to the operating system's temp directory.

### Cache Status Headers

Every `Response` handled by `@isdk/proxy` has an `x-proxy-cache` header injected for lifecycle observability. Possible values:

- `HIT`: a clean hit; the data comes entirely from the L1 memory or L2 disk cache.
- `MISS`: cache miss (or a deliberate bypass); the data comes from a real origin request.
- `STALE`: a stale entry was served (a silent background refresh was started via SWR).
- `STALE_IF_ERROR`: the origin request failed (network down or errored); the stale entry was served as a fallback.

## License

MIT
package/README.md
ADDED
@@ -0,0 +1,129 @@
# @isdk/proxy

A high-performance, developer-friendly caching engine for Node.js, specifically designed to solve the complexity of managing HTTP response caches in data-intensive applications.

## Why @isdk/proxy?

In high-concurrency environments, like **API Proxies**, **Web Scrapers**, or **Microservices**, managing caches is often a trade-off between speed and memory.

`@isdk/proxy` provides a **Hybrid Multi-tier Architecture** that gives you the best of both worlds:

- **It solves the Memory vs. Capacity problem**: Keeps small, hot responses in memory (L1) for nanosecond access, while offloading large bodies to persistent disk (L2) without losing the ability to instantly evaluate cache policies.
- **It prevents Cache Stampede**: When a hot entry expires, it ensures only ONE network request is made, preventing your upstream from being crushed by concurrent misses.
- **It is Framework-Agnostic**: Built on Web-standard `Request`/`Response` objects, it decouples your caching logic from your HTTP client (MSW, Axios, Fetch, Crawlee, etc.).

## Key Features

- **🚀 Hybrid Multi-tier Cache**: Extreme speed with L1 (LRU Memory) and persistence with L2 (Content Addressable Disk via `cacache`).
- **🌊 Streaming Native**: A fully stream-based internal pipeline natively prevents Out-Of-Memory (OOM) issues when proxying large files.
- **🧠 Intelligent Meta-Residency**: Metadata (Headers, Status, Policy) stays in memory regardless of body size, ensuring nanosecond cache policy evaluations.
- **🔄 Stale-While-Revalidate (SWR)**: Serve stale content instantly while updating the cache silently in the background.
- **🛡️ Request Coalescing (Anti-Stampede)**: Coalesces identical concurrent requests using a shared tracker, ensuring only one network request is made.
- **🚑 Offline Resilience**: Automatically serve stale content if the upstream is down (`staleIfError`), or forcefully cache everything ignoring `Cache-Control: no-store` (`forceCache`).
- **🕵️ Transparent Cache Status**: Injects an `x-proxy-cache` header (`HIT`, `STALE`, `MISS`, `STALE_IF_ERROR`) into responses for easy observability.
- **🌐 Framework Agnostic**: Works everywhere by using the standard Web `Request`/`Response` APIs.

## Installation

```bash
pnpm add @isdk/proxy
```

## Quick Start: The Core Orchestrator

The recommended entry point is the `createCachedFetch` factory, which wraps the core `fetchWithCache` orchestrator and can cache any HTTP request logic.

```typescript
import { SmartCache, createCachedFetch } from '@isdk/proxy';

// 1. Initialize the hybrid cache
const cache = new SmartCache({
  storagePath: './.cache',
  maxMemorySize: 1024 * 1024 // 1MB threshold
});

// 2. Create a pre-configured cached fetcher (automatically tracks concurrent requests)
const myFetch = createCachedFetch({
  cache,
  config: {
    staleIfError: true,
    forceCache: false // Set to true to cache everything (ignore no-store) for offline-first apps
  },
  backgroundUpdate: true // Enable SWR
});

// 3. Use it anywhere in your app!
const request = new Request('https://api.example.com/data');
const response = await myFetch(request, (req) => fetch(req));

console.log(response.headers.get('x-proxy-cache')); // "MISS", "HIT", "STALE", or "STALE_IF_ERROR"
const data = await response.json();
```

## Adapters

`@isdk/proxy` is designed to be framework-agnostic. While the core library is pure, you can find (or build) adapters for specific environments:

- **MSW Adapter**: See `@isdk/proxy-msw` (a separate package) to use this caching engine as an MSW interceptor.
- **Axios Adapter**: Easily implemented by converting an Axios config to a Web-standard `Request`.
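As a sketch of that Axios conversion (the `AxiosLikeConfig` shape below is a simplified stand-in for Axios's real request config, not an adapter shipped with this package):

```typescript
// Simplified stand-in for AxiosRequestConfig; illustrative only.
interface AxiosLikeConfig {
  url: string;
  method?: string;
  headers?: Record<string, string>;
  params?: Record<string, string>;
  data?: unknown;
}

// Convert an Axios-style config into a Web-standard Request,
// which can then be passed straight into the cached fetcher.
function toWebRequest(config: AxiosLikeConfig): Request {
  const url = new URL(config.url);
  for (const [k, v] of Object.entries(config.params ?? {})) {
    url.searchParams.set(k, v);
  }
  const method = (config.method ?? 'GET').toUpperCase();
  return new Request(url, {
    method,
    headers: config.headers,
    // GET/HEAD requests must not carry a body.
    body: method === 'GET' || method === 'HEAD' ? undefined : JSON.stringify(config.data),
  });
}
```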
## Architecture

### Hybrid Storage Strategy

- **L1 (Memory)**: Powered by `@cacheable/memory`. Stores both Meta and Body for small files (< `maxMemorySize`).
- **L2 (Disk)**: Powered by `cacache`. Stores all data for persistence.
- **Optimization**: For large files, only the Metadata is kept in memory. The body is streamed or read from disk only when requested, saving significant memory.

### Request Collapsing

When multiple concurrent requests hit a missing or expired cache entry, `@isdk/proxy` ensures that only **one** request goes to the network. Subsequent requests will wait for the same promise or serve the background-updated data.
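The mechanism can be illustrated with a minimal, self-contained coalescing helper (a deliberate simplification of the `activeCacheWrites` tracker; the real orchestrator also handles streaming and cache writes):

```typescript
// One shared Map of in-flight promises, keyed by cache key.
const inFlight = new Map<string, Promise<string>>();

async function coalesced(key: string, fetcher: () => Promise<string>): Promise<string> {
  const pending = inFlight.get(key);
  if (pending) return pending; // join the request that is already in flight
  const p = fetcher().finally(() => inFlight.delete(key));
  inFlight.set(key, p);
  return p;
}
```

Five concurrent `coalesced('k', …)` calls trigger a single `fetcher` invocation; all five resolve with the same result.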
## API Reference

### `createCachedFetch(options)` (Recommended)

A higher-order factory function designed for end users. It creates a pre-configured `fetch` equivalent that automatically tracks concurrent requests internally to prevent cache stampedes.

- **`options.cache`**: An instance of `SmartCache`.
- **`options.config`**: A `SiteCacheConfig` object containing:
  - `staleIfError` (boolean): Serve stale cache if the network fails.
  - `forceCache` (boolean): Force-cache everything, ignoring `Cache-Control: no-store`. Ideal for offline-first resilience.
- **`options.backgroundUpdate`**: Set to `true` to enable SWR behavior.
- **`options.activeCacheWrites`**: Optional. A `Map<string, Promise<void>>` that can be shared across multiple `createCachedFetch` instances to enable application-level cache stampede prevention.
- **Returns**: A reusable `(request: Request, fetcher: (req: Request) => Promise<Response>) => Promise<Response>` function.

### `createFetchWithCache(activeCacheWrites?)`

A single-responsibility higher-order function that encapsulates the `activeCacheWrites` concurrency tracker. It returns a variant of `fetchWithCache` that shares an internal Map to coalesce identical concurrent requests. Use this if you are building an intermediate wrapper but don't want to rely on the top-level `createCachedFetch` factory.

- **`activeCacheWrites`**: Optional. An external `Map<string, Promise<void>>` to be used as the concurrency tracker. If not provided, a new internal Map will be created. Sharing the same Map across multiple instances enables application-wide request coalescing.
- **Returns**: `(request: Request, fetcher: (req: Request) => Promise<Response>, options: Omit<FetchWithCacheOptions, 'activeCacheWrites'>) => Promise<Response>`

### `fetchWithCache(request, fetcher, options)`

The core caching orchestrator. Use this directly if you need low-level control or are building a library on top of it.

- **`request`**: A Web-standard `Request`.
- **`fetcher`**: The raw fetching callback `(req: Request) => Promise<Response>`.
- **`options.activeCacheWrites`**: A `Map<string, Promise<void>>` that YOU must provide and maintain to coalesce concurrent requests. (If you don't want to manage this, use `createCachedFetch` or `createFetchWithCache` instead.)

### `SmartCache`

The hybrid multi-tier storage engine.

- `new SmartCache(options)`
  - **`options.maxMemorySize`**: Threshold (in bytes) for offloading bodies to disk (default `1048576`, i.e., 1MB).
  - **`options.storagePath`**: Disk storage path for the `cacache` engine (defaults to a system temp folder).
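A rough sketch of the metadata-residency idea behind these options (hypothetical data structures; `SmartCache`'s real internals use `@cacheable/memory` and `cacache`, and derive freshness from `http-cache-semantics` policies rather than a raw TTL):

```typescript
// Metadata always stays in memory; only the body's home depends on its size.
interface Meta {
  status: number;
  headers: Record<string, string>;
  timestamp: number;
  size: number;
}

const MAX_MEMORY_SIZE = 1_048_576;              // the documented 1MB default
const metaInMemory = new Map<string, Meta>();   // always resident (L1)
const bodyInMemory = new Map<string, Buffer>(); // small bodies only (L1)

// Freshness can be decided without touching the disk at all.
function isFresh(key: string, ttlMs: number, now: number): boolean {
  const meta = metaInMemory.get(key);
  return meta !== undefined && now - meta.timestamp < ttlMs;
}

// Placement rule: small bodies are kept in memory, large ones go to disk (L2).
function storeBody(key: string, body: Buffer): 'memory' | 'disk' {
  if (body.byteLength <= MAX_MEMORY_SIZE) {
    bodyInMemory.set(key, body);
    return 'memory';
  }
  return 'disk';
}
```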
### Cache Status Headers

Every response processed by `@isdk/proxy` includes an `x-proxy-cache` header indicating its lifecycle:

- `HIT`: Served entirely from L1 or L2 cache.
- `MISS`: Cache miss (or deliberate bypass); fetched from the origin server.
- `STALE`: Served from stale cache while a background update was initiated (SWR).
- `STALE_IF_ERROR`: Origin fetch failed; served from stale cache as a fallback.

## License

MIT
package/dist/index.d.mts
ADDED
@@ -0,0 +1,213 @@
import { KeyvCacheableMemoryOptions } from '@cacheable/memory';

/**
 * Cache key filter configuration.
 *
 * Defines which fields are included in or excluded from the cache fingerprint.
 */
interface KeyFilterConfig {
  /** Include-only (whitelist): if set, only these fields participate in key computation */
  include?: string[];
  /** Exclude (blacklist): filters out dynamic fields such as `timestamp` or `nonce` that would defeat cache hits */
  exclude?: string[];
}
/**
 * Site-level cache configuration.
 */
interface SiteCacheConfig {
  /** Query parameter filter configuration */
  query?: KeyFilterConfig;
  /** Request header filter configuration */
  headers?: KeyFilterConfig;
  /** Cookie filter configuration */
  cookies?: KeyFilterConfig;
  /** Whether to force-serve a stale entry when the upstream request fails (fault tolerance) */
  staleIfError?: boolean;
  /** Whether to force-cache every response (ignoring `no-store` and similar directives), for extreme offline-resilience scenarios */
  forceCache?: boolean;
}
/**
 * Cache metadata.
 *
 * The non-body information stored in both L1 memory and on L2 disk.
 * Even when the body is too large for memory, this metadata stays resident
 * in memory for fast policy decisions.
 */
interface CacheMetadata {
  /** HTTP status code */
  status: number;
  /** Response headers object */
  headers: Record<string, string>;
  /** http-cache-semantics policy object, containing the TTL and cache directives */
  policy: any;
  /** Original request URL */
  url: string;
  /** Original request method */
  method: string;
  /** Timestamp of the cache write */
  timestamp: number;
  /** Body length in bytes, precisely distinguishing an "empty response" from a "large response not held in memory" */
  size: number;
}
/**
 * A complete cache entry.
 */
interface CacheEntry extends CacheMetadata {
  /** Response body: a Buffer for small files, a readable stream for large ones */
  body: Buffer | any;
}
/**
 * Global configuration for the proxy interceptor.
 */
interface ProxyConfig {
  /** Default cache configuration, used when the request's domain matches no entry in `sites` */
  default: SiteCacheConfig;
  /** Fine-grained cache configuration per domain */
  sites: Record<string, SiteCacheConfig>;
  /** Physical storage path of the disk cache (cacache); optional, defaults to the system temp directory */
  storagePath?: string;
}

/**
 * SmartCache options.
 */
interface SmartCacheOptions {
  /** Physical path of the disk cache. Defaults to the system temp directory if omitted. */
  storagePath?: string;
  /** Memory cache threshold in bytes. When a body exceeds it, the body goes to disk only while the metadata stays in memory. Defaults to 1MB. */
  maxMemorySize?: number;
  /** Advanced options passed through to L1 (memory) */
  memoryOptions?: Partial<KeyvCacheableMemoryOptions>;
}
/**
 * The smart hybrid cache (L1 memory + L2 disk).
 */
declare class SmartCache {
  private memory;
  private storagePath;
  private maxMemorySize;
  constructor(options?: SmartCacheOptions);
  /**
   * Fetch a cache entry.
   * Small files come back as an Entry holding a Buffer; large files as an Entry holding a ReadStream.
   */
  get(key: string): Promise<CacheEntry | null>;
  /**
   * Write to the cache.
   */
  set(key: string, body: Buffer, metadata: Omit<CacheMetadata, 'size'>): Promise<void>;
  /**
   * Internal: handles memory backfill.
   */
  private saveToMemory;
  getStream(key: string): NodeJS.ReadableStream;
  setStream(key: string, metadata: Omit<CacheMetadata, 'size'>): NodeJS.WritableStream;
  delete(key: string): Promise<void>;
  clear(): Promise<void>;
}

/**
 * Generate a unique cache key from the Request and configuration.
 */
declare const generateCacheKey: (req: Request, config: SiteCacheConfig) => string;

/**
 * fetchWithCache options.
 */
interface FetchWithCacheOptions {
  /** The hybrid cache instance */
  cache: SmartCache;
  /** Site-level cache configuration */
  config: SiteCacheConfig;
  /** Whether to enable asynchronous background updates (SWR) */
  backgroundUpdate?: boolean;
  /** Callback invoked when a background-update Promise is started */
  onBackgroundUpdate?: (promise: Promise<Response>) => void;
  /** Custom cache-key generation function */
  generateKey?: typeof generateCacheKey;
  /**
   * Concurrent write tracker.
   * An externally maintained Map used to prevent duplicate concurrent downloads
   * of the same resource across requests and instances.
   * Its keys are cache keys; its values are Promises resolving when the write completes.
   */
  activeCacheWrites?: Map<string, Promise<void>>;
}
/** Internal pipeline context, merging the input options with derived key state */
interface FetchWithCacheContext extends FetchWithCacheOptions {
  request: Request;
  fetcher: (req: Request) => Promise<Response>;
  cacheKey: string;
  activeCacheWrites: Map<string, Promise<void>>;
}
/**
 * The core orchestrator (Fetcher Orchestrator).
 *
 * Implements the stream-based hybrid caching proxy logic. Its main mechanisms:
 * - **Large-file streaming**: fully stream-based underneath; large files are written to disk without risking OOM.
 * - **SWR (Stale-While-Revalidate)**: silent background refresh.
 * - **Request coalescing (anti-stampede)**: merges concurrent requests via `activeCacheWrites`.
 * - **Forced offline resilience**: supports `staleIfError` and `forceCache` (caching in spite of Cache-Control).
 *
 * It also injects an `x-proxy-cache` response header indicating the cache status
 * (`HIT`, `STALE`, `MISS`, `STALE_IF_ERROR`).
 */
declare function fetchWithCache(request: Request, fetcher: (req: Request) => Promise<Response>, options: FetchWithCacheOptions): Promise<Response>;

/**
 * Single-responsibility higher-order function dedicated to encapsulating and
 * isolating the `activeCacheWrites` concurrency tracker.
 *
 * Each call creates a fully independent closure Map (or reuses the one passed in)
 * and returns a `fetchWithCache` variant bound to that Map, so callers never need
 * to maintain `activeCacheWrites` themselves and cannot trigger a stampede by
 * passing the wrong tracker, or none at all.
 *
 * @param activeCacheWrites - Optional concurrent-write tracker shared across instances.
 *        A new Map is created automatically if omitted.
 *        Passing the same Map to several `createFetchWithCache` instances lets them
 *        share coalescing state, preventing cache stampedes application-wide.
 * @returns A `fetchWithCache` variant bound to the concurrency tracker.
 */
declare function createFetchWithCache(activeCacheWrites?: Map<string, Promise<void>>): (request: Request, fetcher: (req: Request) => Promise<Response>, options: Omit<FetchWithCacheOptions, "activeCacheWrites">) => Promise<Response>;

/**
 * Cached-fetch factory (the top-level higher-order API for end users).
 *
 * Gives users a `cachedFetch` they configure once (cache instance, default config)
 * and can then call anywhere for the lifetime of the application.
 *
 * Built on `createFetchWithCache` to keep responsibilities isolated; the
 * concurrency tracker is maintained automatically inside.
 *
 * @param defaultOptions - Default cache options.
 *        May include an `activeCacheWrites` field to share coalescing state across
 *        multiple `createCachedFetch` instances for application-level stampede protection.
 * @returns A pre-configured `cachedFetch` function, ready to issue cached requests.
 */
declare function createCachedFetch(defaultOptions: FetchWithCacheOptions): (request: Request, fetcher: (req: Request) => Promise<Response>, overrideOptions?: Partial<FetchWithCacheOptions>) => Promise<Response>;

/**
 * Extract and normalize data from a source object according to the filter config.
 *
 * Used mainly to build the cache fingerprint. It:
 * 1. Filters keys according to `config` (include/exclude).
 * 2. Sorts keys so the fingerprint is deterministic.
 * 3. Lowercases every key.
 * 4. Wraps every value in a sorted array, eliminating ordering differences between array items.
 *
 * @param source The raw data object (e.g. query params, headers, cookies)
 * @param config The filter configuration (whitelist or blacklist)
 * @returns The normalized data map: lowercase keys, string-array values
 */
declare const extractData: (source: Record<string, any>, config?: KeyFilterConfig) => Record<string, string[]>;

/**
 * Decide whether a given key may participate in the cache fingerprint.
 *
 * Precedence:
 * 1. If `include` (whitelist) is set, only keys present in it are allowed.
 * 2. Otherwise, if `exclude` (blacklist) is set, keys present in it are rejected.
 * 3. If neither is set, every key is allowed.
 *
 * @param key The key to check
 * @param config The filter configuration
 * @returns Whether the key is allowed
 */
declare function isAllowed(key: string, config?: KeyFilterConfig): boolean;

export { type CacheEntry, type CacheMetadata, type FetchWithCacheContext, type FetchWithCacheOptions, type KeyFilterConfig, type ProxyConfig, type SiteCacheConfig, SmartCache, type SmartCacheOptions, createCachedFetch, createFetchWithCache, extractData, fetchWithCache, generateCacheKey, isAllowed };