@clickhouse/client 1.18.4-head.f30da83.1 → 1.18.5-head.bccbdcf.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/dist/version.d.ts
CHANGED
@@ -1,2 +1,2 @@
-declare const _default: "1.18.4-head.f30da83.1";
+declare const _default: "1.18.5-head.bccbdcf.1";
 export default _default;
package/dist/version.js
CHANGED
package/package.json
CHANGED
@@ -2,7 +2,7 @@
   "name": "@clickhouse/client",
   "description": "Official JS client for ClickHouse DB - Node.js implementation",
   "homepage": "https://clickhouse.com",
-  "version": "1.18.4-head.f30da83.1",
+  "version": "1.18.5-head.bccbdcf.1",
   "license": "Apache-2.0",
   "keywords": [
     "clickhouse",
@@ -40,7 +40,7 @@
     "build": "rm -rf dist; tsc"
   },
   "dependencies": {
-    "@clickhouse/client-common": "1.18.4-head.f30da83.1"
+    "@clickhouse/client-common": "1.18.5-head.bccbdcf.1"
   },
   "devDependencies": {
     "simdjson": "^0.9.2"
@@ -109,9 +109,74 @@ const client = createClient({
 })
 ```
 
-
+### ⚠️ Critical: 16 KB Node.js Header Size Limit
 
-**
+**Node.js defaults to a total received HTTP header limit of approximately 16 KB (this can be increased via the `--max-http-header-size` CLI flag[^max-header-size]).** ClickHouse sends a new progress header with each interval (~200 bytes each), and after ~75 progress headers accumulate, Node.js will throw an exception and terminate the request unless that limit is raised.
+
+[^max-header-size]: Node.js also exposes a `maxHeaderSize` option on `http(s).request`, but the ClickHouse JS client currently does not forward it through `createClient`. For now, the practical workaround in clickhouse-js is to either use the `--max-http-header-size` CLI flag / `NODE_OPTIONS` (process-wide) or supply a custom `http.Agent` configured with `maxHeaderSize`. A dedicated client option is coming soon.
+
+**Maximum safe query duration formula:**
+
+```
+Max duration (seconds) ≈ http_headers_progress_interval_ms × 75 ÷ 1000
+```
+
+**Examples:**
+
+- `http_headers_progress_interval_ms: '10000'` (10s) → **~12.5 minutes** max safe duration
+- `http_headers_progress_interval_ms: '60000'` (60s) → **~75 minutes** max safe duration
+- `http_headers_progress_interval_ms: '120000'` (120s) → **~2.5 hours** max safe duration
+
+> **Note:** `http_headers_progress_interval_ms` is a `UInt64` ClickHouse setting, so it must be passed as a **string** (e.g., `'10000'`).
+
+**Raising the Node.js header limit (e.g., to 64 KB):**
+
+If you need a longer max safe duration without lengthening the progress interval, raise Node's HTTP header limit. For example, increasing it from the default 16 KB to **64 KB** quadruples the max safe duration (≈300 progress headers instead of ≈75).
+
+```bash
+# Option 1 — CLI flag when launching your app
+node --max-http-header-size=65536 app.js
+
+# Option 2 — environment variable (works with any Node entry point, including npm/ts-node)
+NODE_OPTIONS="--max-http-header-size=65536" node app.js
+```
+
+With `maxHeaderSize = 65536` (64 KB), the formula becomes:
+
+```
+Max duration (seconds) ≈ http_headers_progress_interval_ms × 300 ÷ 1000
+```
+
+Examples at 64 KB:
+
+- `http_headers_progress_interval_ms: '10000'` (10s) → **~50 minutes** max safe duration
+- `http_headers_progress_interval_ms: '60000'` (60s) → **~5 hours** max safe duration
+- `http_headers_progress_interval_ms: '120000'` (120s) → **~10 hours** max safe duration
+
+**Guidelines for choosing the interval** (subject to your load balancer's idle timeout — see trade-offs below):
+
+1. **For queries under 12 minutes:** Use `'10000'` ms (10s) intervals, if your LB idle timeout allows
+2. **For queries 12 min – 1 hour:** Use `'60000'` ms (60s) intervals, if your LB idle timeout allows
+3. **For queries 1–2 hours:** Use `'120000'` ms (120s) intervals, if your LB idle timeout allows
+4. **For mutations over 2 hours:** Use the fire-and-forget pattern (see below)
+5. **For SELECT queries over 2 hours:** Increase `http_headers_progress_interval_ms` to extend the safe duration, while keeping it below your LB idle timeout and within Node.js header-limit constraints
+
+Use this command to experiment and debug:
+
+```bash
+curl -v "http://localhost:8123/?function_sleep_max_microseconds_per_block=10000000&wait_end_of_query=1&send_progress_in_http_headers=1&max_block_size=1&query=select+sum(sleepEachRow(1))+from+numbers(10)+FORMAT+JSONEachRow"
+```
+
+You may need to experiment with your exact load balancer stack.
+
+**Important trade-offs:**
+
+- **Shorter intervals** = better load balancer keep-alive (prevents idle timeout) but **lower max duration**
+- **Longer intervals** = higher max duration but **higher risk of LB idle timeout**
+
+As a rule of thumb, set the interval a few seconds **below** your load balancer's idle timeout (often 5–20 seconds below, depending on your load balancer, proxies, and network behavior), while staying under the header limit for your expected query duration.
+
+**Alternatively — fire-and-forget (mutations only):** Mutations (`INSERT ... SELECT`, `OPTIMIZE`, `ALTER`) are not cancelled on the server when the client connection is lost. You can send the mutation and immediately close the connection, then poll `system.query_log` or `system.mutations` for status. This bypasses both the load balancer idle timeout and the Node.js header limit. See the [client repo examples](https://github.com/ClickHouse/clickhouse-js/tree/main/examples) for a concrete implementation.
 
 ## Step 5 — Disable Keep-Alive entirely (last resort)
 
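As a sanity check on the figures in the added README section, the max-safe-duration formula can be expressed as a small helper. This is a sketch, not part of the package: the function name is made up here, and the ~200 bytes per progress header and ~75-headers-per-16-KB budget are the README's own approximations, not measured values.

```typescript
// Approximate max safe query duration (seconds) before Node.js rejects the
// response for exceeding its HTTP header limit, per the README's formula.
// Assumption (from the README, not measured): ~75 progress headers fit per
// 16 KB of header limit (~200 bytes each).
function maxSafeDurationSeconds(
  progressIntervalMs: number,
  maxHeaderSizeBytes: number = 16 * 1024, // Node.js default
): number {
  const headersPerSixteenKb = 75
  const headerBudget = (maxHeaderSizeBytes / (16 * 1024)) * headersPerSixteenKb
  return (progressIntervalMs * headerBudget) / 1000
}

// Reproduces the README's examples:
console.log(maxSafeDurationSeconds(10_000) / 60)               // 12.5 (minutes, 16 KB)
console.log(maxSafeDurationSeconds(60_000) / 60)               // 75 (minutes, 16 KB)
console.log(maxSafeDurationSeconds(10_000, 64 * 1024) / 60)    // 50 (minutes, 64 KB)
console.log(maxSafeDurationSeconds(120_000, 64 * 1024) / 3600) // 10 (hours, 64 KB)
```

Keeping the interval below the LB idle timeout while this value stays above your expected query duration is exactly the balancing act the trade-offs section describes.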