@clickhouse/client 1.18.4-head.2f6fd6b.1 → 1.18.4-head.3029a3d.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -105,6 +105,25 @@ See more examples in the [examples directory](./examples).
 
  See the [ClickHouse website](https://clickhouse.com/docs/integrations/javascript) for the full documentation.
 
+ ## AI Agent Skills
+
+ This repository contains agent skills for working with the client:
+
+ - `clickhouse-js-node-troubleshooting` — troubleshooting playbook for the Node.js client.
+
+ Install via CLI:
+
+ ```sh
+ # per project
+ npx skills add ClickHouse/clickhouse-js
+ # globally
+ npx skills add ClickHouse/clickhouse-js -g
+ ```
+
+ Or ask your agent to install it for you:
+
+ > install agent skills from ClickHouse/clickhouse-js
+
  ## Usage examples
 
  We have a wide range of [examples](./examples), aiming to cover various scenarios of client usage. The overview is available in the [examples README](https://github.com/ClickHouse/clickhouse-js/blob/main/examples/README.md#overview).
package/dist/version.d.ts CHANGED
@@ -1,2 +1,2 @@
- declare const _default: "1.18.4-head.2f6fd6b.1";
+ declare const _default: "1.18.4-head.3029a3d.1";
  export default _default;
package/dist/version.js CHANGED
@@ -1,4 +1,4 @@
  "use strict";
  Object.defineProperty(exports, "__esModule", { value: true });
- exports.default = '1.18.4-head.2f6fd6b.1';
+ exports.default = '1.18.4-head.3029a3d.1';
  //# sourceMappingURL=version.js.map
package/package.json CHANGED
@@ -2,7 +2,7 @@
  "name": "@clickhouse/client",
  "description": "Official JS client for ClickHouse DB - Node.js implementation",
  "homepage": "https://clickhouse.com",
- "version": "1.18.4-head.2f6fd6b.1",
+ "version": "1.18.4-head.3029a3d.1",
  "license": "Apache-2.0",
  "keywords": [
  "clickhouse",
@@ -20,17 +20,27 @@
  "main": "dist/index.js",
  "types": "dist/index.d.ts",
  "files": [
- "dist"
+ "dist",
+ "skills"
  ],
+ "agents": {
+ "skills": [
+ {
+ "name": "clickhouse-js-node-troubleshooting",
+ "path": "./skills/clickhouse-js-node-troubleshooting"
+ }
+ ]
+ },
  "scripts": {
- "prepack": "cp ../../README.md ../../LICENSE .",
+ "pack": "npm pack",
+ "prepack": "rm -rf skills && cp ../../README.md ../../LICENSE . && cp -r ../../skills .",
  "typecheck": "tsc --noEmit",
  "lint": "eslint --max-warnings=0 .",
  "lint:fix": "eslint . --fix",
  "build": "rm -rf dist; tsc"
  },
  "dependencies": {
- "@clickhouse/client-common": "1.18.4-head.2f6fd6b.1"
+ "@clickhouse/client-common": "1.18.4-head.3029a3d.1"
  },
  "devDependencies": {
  "simdjson": "^0.9.2"
@@ -0,0 +1,52 @@
+ ---
+ name: clickhouse-js-node-troubleshooting
+ description: >
+ Troubleshoot and resolve common issues with the ClickHouse Node.js client
+ (@clickhouse/client). Use this skill whenever a user reports errors, unexpected
+ behavior, or configuration questions involving the Node.js client specifically —
+ including socket hang-up errors, Keep-Alive problems, stream handling issues, data
+ type mismatches, read-only user restrictions, proxy/TLS setup problems, or long-running
+ query timeouts. Trigger even when the user hasn't precisely named the issue; vague
+ symptoms like "my inserts keep failing" or "connection drops randomly" in a Node.js
+ context are strong signals to use this skill. Do NOT use for browser/Web client issues.
+ ---
+
+ # ClickHouse Node.js Client Troubleshooting
+
+ Reference: https://clickhouse.com/docs/integrations/javascript
+
+ > **⚠️ Node.js runtime only.** This skill covers the `@clickhouse/client` package running in a **Node.js runtime** exclusively — including **Next.js Node runtime** API routes, React Server Components, Server Actions, and standard Node.js processes. Do **not** apply this skill to browser client components, Web Workers, **Next.js Edge runtime**, Cloudflare Workers, or any usage of `@clickhouse/client-web`. For browser/edge environments, the correct package is `@clickhouse/client-web`.
+
+ ---
+
+ ## How to Use This Skill
+
+ 1. **Identify the issue** — match symptoms to the Issue Index below and read the corresponding reference file.
+ 2. **Lead with the diagnosis** — explain what's likely causing the issue before giving the fix.
+ 3. **Note version constraints** — flag if a fix requires a minimum client version and check it against what the user provided.
+ 4. **Ask only what's missing** — if the fix is version-dependent and you don't know their version, ask; otherwise help immediately.
+
+ ---
+
+ ## Issue Index
+
+ Identify the user's issue from the list below and read the corresponding reference file for detailed troubleshooting steps.
+
+ | Issue | Symptoms | Reference file |
+ | ------------------------------------- | ---------------------------------------------------------------------------------------------- | ----------------------------- |
+ | **Socket Hang-Up / ECONNRESET** | `socket hang up`, `ECONNRESET`, intermittent connection drops, long-running queries timing out | `reference/socket-hangup.md` |
+ | **Data Type Mismatches** | Large integers returned as strings, decimal precision loss, Date/DateTime insertion failures | `reference/data-types.md` |
+ | **Read-Only User Errors** | Errors when using response compression with `readonly=1` users | `reference/readonly-users.md` |
+ | **Proxy / Pathname URL Confusion** | Wrong database selected, requests failing behind a proxy with a path prefix | `reference/proxy-pathname.md` |
+ | **TLS / Certificate Errors** | TLS handshake failures, certificate verification issues, mutual TLS setup | `reference/tls.md` |
+ | **Compression Not Working** | GZIP compression not activating for requests or responses | `reference/compression.md` |
+ | **Logging Not Showing Anything** | No log output, need custom logger integration | `reference/logging.md` |
+ | **Query Parameters Not Interpolated** | Parameterized queries not working, SQL injection concerns | `reference/query-params.md` |
+
+ ---
+
+ ## Still Stuck?
+
+ - [JS client source + full examples](https://github.com/ClickHouse/clickhouse-js/tree/main/examples)
+ - [ClickHouse JS client docs](https://clickhouse.com/docs/integrations/javascript)
+ - [ClickHouse supported formats](https://clickhouse.com/docs/interfaces/formats)
@@ -0,0 +1,126 @@
+ {
+ "skill_name": "clickhouse-js-node-troubleshooting",
+ "evals": [
+ {
+ "id": 0,
+ "prompt": "I'm using @clickhouse/client in a Node.js API server and I get `socket hang up` errors, but only after the server has been idle for a while — if I hammer it with requests it's fine. Any idea what's going on? I'm on version 0.3.2.",
+ "expected_output": "Explanation that this is a Keep-Alive idle socket timeout mismatch. The server's keep-alive timeout is shorter than the client's idle_socket_ttl. Should recommend checking the server's keep-alive timeout with curl and setting idle_socket_ttl to ~500ms below it.",
+ "files": [],
+ "expectations": [
+ "Identifies the likely cause as a Keep-Alive idle timeout mismatch rather than a generic network problem.",
+ "Recommends checking the server or proxy Keep-Alive timeout, including the curl-based header check or equivalent.",
+ "Explains that idle_socket_ttl should be set slightly below the server timeout, around 500ms lower."
+ ]
+ },
+ {
+ "id": 1,
+ "prompt": "I keep getting ECONNRESET on literally every second request in my Node.js app. Here's my code:\n\n```js\nconst resultSet = await client.query({ query: 'SELECT count() FROM events' })\nconst stream = resultSet.stream()\n// then I do some stuff and run another query\nconst result2 = await client.query({ query: 'SELECT 1' })\n```\n\nThe first query always works, second always fails. What am I doing wrong?",
+ "expected_output": "Diagnosis of dangling stream — the stream from the first query is never fully iterated or closed, corrupting the Keep-Alive socket. Fix: either fully consume via for-await or call resultSet.close().",
+ "files": [],
+ "expectations": [
+ "Diagnoses the problem as an unconsumed or dangling ResultSet stream causing the next request to fail.",
+ "Explains that the first query response must be fully consumed or explicitly closed before reusing the client connection.",
+ "Provides at least one concrete fix using full stream consumption, resultSet.json/text, or resultSet.close()."
+ ]
+ },
+ {
+ "id": 2,
+ "prompt": "My UInt64 column values are coming back as strings in JavaScript — like `\"9007199254740993\"` instead of a number. I'm using JSONEachRow format. Is there a way to get them as actual numbers?",
+ "expected_output": "Explanation that ClickHouse serializes 64-bit integers as strings in JSON formats to prevent overflow. Option 1: use output_format_json_quote_64bit_integers: 0 (with precision-loss warning). Option 2: use BigInt or a BigInt-safe JSON parser. Should mention the precision risk.",
+ "files": [],
+ "expectations": [
+ "Explains that 64-bit integers are returned as strings in JSON formats to avoid JavaScript precision issues.",
+ "Mentions output_format_json_quote_64bit_integers: 0 as a way to receive numeric JSON output.",
+ "Warns that converting these values to Number can lose precision and suggests a safer BigInt-oriented alternative."
+ ]
+ },
+ {
+ "id": 3,
+ "prompt": "We have ClickHouse sitting behind an nginx reverse proxy. The proxy URL is http://myproxy.internal:8123/clickhouse. I'm on @clickhouse/client 1.3.0 and creating the client like this:\n\n```js\nconst client = createClient({ url: 'http://myproxy.internal:8123/clickhouse' })\n```\n\nBut it seems to be selecting the wrong database — it's trying to use 'clickhouse' as the database name instead of going through the proxy path. What am I missing?",
+ "expected_output": "Explanation of the proxy/pathname confusion: the path in the URL is being interpreted as the database name. Fix: use the `pathname` option separately — createClient({ url: 'http://myproxy.internal:8123', pathname: '/clickhouse' }). Should note this requires >= 1.0.0.",
+ "files": [],
+ "expectations": [
+ "Explains that putting the path segment in url makes the client interpret it as the database name or otherwise mishandle the proxy path.",
+ "Shows the fix using a base url plus a separate pathname option.",
+ "Acknowledges the version dependency by either noting pathname requires >= 1.0.0 or asking for the client version before assuming that fix is available."
+ ]
+ },
+ {
+ "id": 4,
+ "prompt": "I'm getting this error when connecting to our self-hosted ClickHouse over HTTPS:\n\n```\nError: unable to verify the first certificate\n    at TLSSocket.onConnectEnd (_tls_wrap.js:1495:19)\n```\n\nWe use an internal certificate authority. I'm using @clickhouse/client 1.3.0 with Node.js 18. How do I fix this?",
+ "expected_output": "Diagnosis: private/internal CA not trusted by Node.js. Fix: pass the CA certificate via the tls.ca_cert option using fs.readFileSync. Should show the createClient({ url: 'https://...', tls: { ca_cert: fs.readFileSync('certs/CA.pem') } }) example.",
+ "files": [],
+ "expectations": [
+ "Diagnoses the error as Node.js not trusting the internal or private certificate authority.",
+ "Shows how to pass the CA certificate via tls.ca_cert with fs.readFileSync or an equivalent code example.",
+ "Avoids recommending insecure production advice such as disabling certificate verification without clearly marking it as development-only."
+ ]
+ },
+ {
+ "id": 5,
+ "prompt": "My parameterized queries aren't working. I'm doing:\n\n```js\nawait client.query({\n  query: 'SELECT * FROM users WHERE id = $1 AND status = $2',\n  query_params: { 1: 42, 2: 'active' }\n})\n```\n\nThe values just don't get substituted. Coming from PostgreSQL and this was how params work there.",
+ "expected_output": "Explanation that ClickHouse JS client uses ClickHouse's native {name: type} syntax, not $1/$2 placeholders. Show the correct syntax: { query: 'SELECT * FROM users WHERE id = {id: UInt32} AND status = {status: String}', query_params: { id: 42, status: 'active' } }. Warn against template literal interpolation (SQL injection risk).",
+ "files": [],
+ "expectations": [
+ "Explains that the ClickHouse JS client does not use PostgreSQL-style $1 or $2 placeholders.",
+ "Provides a corrected example using ClickHouse's native {name: type} parameter syntax with query_params keys matching the names.",
+ "Warns against interpolating user values directly into the SQL string because of SQL injection risk."
+ ]
+ },
+ {
+ "id": 6,
+ "prompt": "I enabled response compression in @clickhouse/client for my readonly user, but I'm getting an error from ClickHouse that says something like 'Cannot modify setting enable_http_compression for user with readonly=1'. My client setup:\n\n```js\nconst client = createClient({\n  username: 'readonly_user',\n  password: 'secret',\n  compression: { response: true }\n})\n```",
+ "expected_output": "Explanation that readonly=1 users cannot change the enable_http_compression setting, which response compression requires. Fix: remove compression.response: true (or set to false). Note that request compression is unaffected. Mention that in >= 1.0.0, response compression is disabled by default.",
+ "files": [],
+ "expectations": [
+ "Explains that response compression toggles enable_http_compression, which a readonly=1 user cannot modify.",
+ "Recommends removing or disabling compression.response for this user.",
+ "Notes that request compression is a separate setting and is not blocked by the readonly restriction."
+ ]
+ },
+ {
+ "id": 7,
+ "prompt": "I'm on @clickhouse/client 1.3.0 and trying to set up structured logging to pipe into our observability stack (we use pino). I want to forward all client log messages at INFO level and above to pino. How do I wire that up?",
+ "expected_output": "Should show how to implement the Logger interface with a class (MyLogger implements Logger) that forwards to pino, then pass it via createClient({ log: { LoggerClass: MyLogger, level: ClickHouseLogLevel.INFO } }). Should show the debug/info/warn/error/trace method signatures.",
+ "files": [],
+ "expectations": [
+ "Shows a custom Logger implementation or equivalent logger wiring that forwards client logs to pino.",
+ "Configures createClient with log.LoggerClass and ClickHouseLogLevel.INFO or an equivalent INFO-level setup.",
+ "Acknowledges the version dependency by either noting this logging API requires >= 0.2.0 or asking for the client version before assuming availability."
+ ]
+ },
+ {
+ "id": 8,
+ "prompt": "I'm using `@clickhouse/client-web` inside a Next.js Edge route and trying to debug random request failures and TLS weirdness. Can you walk me through the Node.js client socket and certificate options I should tune?",
+ "expected_output": "Should explicitly reject applying the Node.js troubleshooting flow because this is an Edge/browser-style runtime using `@clickhouse/client-web`, not `@clickhouse/client`. Must redirect the user to the web client / runtime-appropriate guidance instead of suggesting Node-only socket, keep-alive, or tls options.",
+ "files": [],
+ "expectations": [
+ "Explicitly states that this skill's Node.js guidance does not apply to @clickhouse/client-web in a Next.js Edge runtime.",
+ "Avoids recommending Node-only configuration such as keep_alive, socket TTL tuning, custom HTTP agents, or tls.ca_cert for this case.",
+ "Redirects the user toward runtime-appropriate web or edge guidance instead of continuing with Node client troubleshooting."
+ ]
+ },
+ {
+ "id": 9,
+ "prompt": "I'm on @clickhouse/client 1.6.0 talking to a self-hosted ClickHouse cluster over HTTP. I turned on `compression: { response: true }` but the responses still don't look compressed. This is not a readonly user, and there is no settings error from ClickHouse. What should I check?",
+ "expected_output": "Should explain that in >= 1.0.0 response compression is disabled by default unless enabled, but since it is already enabled here the next checks are whether the server has HTTP compression enabled and whether the user is confusing request compression with response compression. Should mention that only GZIP is supported and that request compression does not affect response bodies.",
+ "files": [],
+ "expectations": [
+ "Recognizes that this is not the readonly-user failure mode because there is no settings error and the user already enabled response compression.",
+ "Recommends checking whether the ClickHouse server has HTTP compression enabled.",
+ "Clarifies that request compression and response compression are separate, and that only GZIP is supported."
+ ]
+ },
+ {
+ "id": 10,
+ "prompt": "We run a long `INSERT INTO dst SELECT * FROM src` through @clickhouse/client in a Node.js worker. It can sit there for a couple minutes with no rows coming back, and then our AWS load balancer drops the connection around the 120 second mark. Smaller queries are fine. We're on client 1.4.0. How should we handle this?",
+ "expected_output": "Should diagnose this as a long-running query idle-timeout problem rather than a dangling stream issue. Must recommend increasing request_timeout and enabling periodic progress headers with send_progress_in_http_headers and http_headers_progress_interval_ms set below the load balancer idle timeout. Should also mention the Node.js response-header limit tradeoff for very long queries and optionally suggest the fire-and-forget mutation pattern.",
+ "files": [],
+ "expectations": [
+ "Diagnoses the issue as a long-running idle timeout at the load balancer rather than a dangling stream or ordinary per-request ECONNRESET problem.",
+ "Recommends increasing request_timeout and enabling send_progress_in_http_headers with http_headers_progress_interval_ms below the load balancer timeout.",
+ "Mentions the Node.js received-header limit tradeoff for very long-running progress-header use or offers the fire-and-forget mutation pattern as an alternative."
+ ]
+ }
+ ]
+ }
@@ -0,0 +1,27 @@
+ # Compression Not Working
+
+ > **Applies to:** all versions. Response compression was enabled by default in `< 1.0.0` and **disabled by default since `>= 1.0.0`** — you must explicitly enable it. Request compression has always been opt-in.
+
+ Both request and response compression are supported. Only **GZIP** is supported (via zlib).
+
+ ```js
+ import { createClient } from '@clickhouse/client'
+ const client = createClient({
+   compression: {
+     response: true,
+     request: true,
+   },
+ })
+ ```
+
+ ## Compression enabled but getting an error?
+
+ If you enable `compression.response: true` and get a ClickHouse settings error, you are likely connecting as a `readonly=1` user. Response compression requires the `enable_http_compression` setting, which read-only users cannot change.
+
+ See [`reference/readonly-users.md`](./readonly-users.md) for the fix.
+
+ ## Compression enabled but response doesn't seem compressed?
+
+ - Verify your version-specific defaults — response compression was enabled by default in `< 1.0.0` and is **disabled by default** in `>= 1.0.0`, so on newer versions you must enable `compression.response: true` explicitly.
+ - Check that the ClickHouse server has HTTP compression enabled (`enable_http_compression = 1` in server config). By default this is enabled on ClickHouse Cloud and most self-hosted setups.
+ - Request compression (`compression.request: true`) compresses the request body sent to ClickHouse. It has no effect on the response.
@@ -0,0 +1,73 @@
+ # Data Type Mismatches
+
+ ## Large integers returned as strings
+
+ > **Applies to:** all versions. The `output_format_json_quote_64bit_integers` ClickHouse setting is server-side and can be passed via `clickhouse_settings` in any client version.
+
+ `UInt64`, `Int64`, `UInt128`, `Int128`, `UInt256`, `Int256` are serialized as **strings** in `JSON*` formats to prevent overflow (they exceed `Number.MAX_SAFE_INTEGER`).
+
+ To receive them as numbers (use with caution — precision loss possible):
+
+ ```js
+ const resultSet = await client.query({
+   query: 'SELECT toUInt64(9007199254740993)',
+   format: 'JSONEachRow',
+   clickhouse_settings: { output_format_json_quote_64bit_integers: 0 },
+ })
+ ```
+
+ > **Tip (`>= 1.15.0`):** BigInt values are now supported in query parameters, so you can safely pass large integers as bind params without string workarounds.
+
+ ## Decimals losing precision on read
+
+ > **Applies to:** all versions (this is a ClickHouse JSON serialization behavior). For custom JSON parse/stringify (e.g., using a BigInt-safe parser), see `>= 1.14.0` which added configurable `json.parse` and `json.stringify` functions.
+
+ ClickHouse returns Decimals as numbers by default in `JSON*` formats. Cast to string in the query:
+
+ ```js
+ const resultSet = await client.query({
+   query: `
+     SELECT toString(my_decimal) AS my_decimal
+     FROM my_table
+   `,
+   format: 'JSONEachRow',
+ })
+ ```
+
+ When inserting, always use the string representation to avoid precision loss:
+
+ ```js
+ await client.insert({
+   table: 'my_table',
+   values: [{ dec64: '123456789123456.789' }],
+   format: 'JSONEachRow',
+ })
+ ```
+
+ ## Format Selection Quick Reference
+
+ | Use case | Recommended format | Min version |
+ | --------------------------- | ----------------------------------- | ------------------------------------- |
+ | Insert/select JS objects | `JSONEachRow` | all |
+ | Bulk insert arrays | `JSONEachRow` | all |
+ | Stream large result sets | `JSONEachRow`, `JSONCompactEachRow` | all |
+ | CSV file streaming | `CSV`, `CSVWithNames` | all |
+ | Parquet file streaming | `Parquet` | `>= 0.2.6` |
+ | Single JSON object response | `JSON`, `JSONCompact` | `JSON` all; `JSONCompact` `>= 0.0.14` |
+ | Stream with progress | `JSONEachRowWithProgress` | `>= 1.7.0` |
+
+ > ⚠️ `JSON` and `JSONCompact` return a single object and **cannot be streamed**.
+
+ ## Date/DateTime insertion fails or produces wrong values
+
+ > **Applies to:** all versions. Note that `>= 0.2.1` changed Date object serialization to use time-zone-agnostic Unix timestamps instead of timezone-naive datetime strings, which fixed timezone mismatch issues between client and server.
+
+ - `Date` / `Date32` columns accept **strings only** (e.g., `'2024-01-15'`).
+ - `DateTime` / `DateTime64` columns accept strings **or** JS `Date` objects. To use `Date` objects, set:
+
+ ```js
+ import { createClient } from '@clickhouse/client'
+ const client = createClient({
+   clickhouse_settings: { date_time_input_format: 'best_effort' },
+ })
+ ```
@@ -0,0 +1,44 @@
+ # Logging Not Showing Anything
+
+ > **Requires:** `>= 0.2.0` (explicit `log.level` config option introduced in 0.2.0, replacing the `CLICKHOUSE_LOG_LEVEL` env var from 0.0.11). Custom `LoggerClass` also available since `>= 0.2.0`. In `>= 1.18.1`, the default changed from `OFF` to `WARN` and logging became lazy (messages only constructed if the log level matches). In `>= 1.18.1`, structured context fields (`connection_id`, `query_id`, `request_id`, `socket_id`) are available in logger `args`.
+
+ The default log level is **OFF** (for `< 1.18.1`) or **WARN** (for `>= 1.18.1`). Enable it explicitly:
+
+ ```js
+ import { ClickHouseLogLevel, createClient } from '@clickhouse/client'
+
+ const client = createClient({
+   log: {
+     level: ClickHouseLogLevel.DEBUG, // TRACE | DEBUG | INFO | WARN | ERROR
+   },
+ })
+ ```
+
+ To use a custom logger (e.g., to pipe to your observability stack), implement the `Logger` interface:
+
+ ```ts
+ import { ClickHouseLogLevel, createClient } from '@clickhouse/client'
+ import type { Logger } from '@clickhouse/client'
+
+ class MyLogger implements Logger {
+   debug({ module, message, args }) {
+     /* ... */
+   }
+   info({ module, message, args }) {
+     /* ... */
+   }
+   warn({ module, message, args, err }) {
+     /* ... */
+   }
+   error({ module, message, args, err }) {
+     /* ... */
+   }
+   trace({ module, message, args }) {
+     /* ... */
+   }
+ }
+
+ const client = createClient({
+   log: { LoggerClass: MyLogger, level: ClickHouseLogLevel.INFO },
+ })
+ ```
@@ -0,0 +1,32 @@
+ # Proxy / Pathname URL Confusion
+
+ > **Requires:** `>= 1.0.0` (the `pathname` config option and URL-based configuration were introduced in 1.0.0). For `< 1.0.0`, a partial fix for pathname handling in the `host` parameter was shipped in `0.2.5`.
+
+ **Symptom:** Wrong database is selected, or requests fail when ClickHouse is behind a proxy with a path prefix (e.g., `http://proxy:8123/clickhouse_server`).
+
+ **Cause:** Passing the pathname in `url` makes the client treat it as the database name.
+
+ **Fix:** Use the `pathname` option separately:
+
+ ```js
+ import { createClient } from '@clickhouse/client'
+
+ const client = createClient({
+   url: 'http://proxy:8123',
+   pathname: '/clickhouse_server', // leading slash optional; multiple segments supported
+ })
+ ```
+
+ For proxies that require custom auth headers:
+
+ > **Requires:** `>= 1.0.0` (`http_headers` config option; replaces the deprecated `additional_headers` from `>= 0.2.9`). Per-request `http_headers` overrides are available since `>= 1.11.0`.
+
+ ```js
+ import { createClient } from '@clickhouse/client'
+
+ const client = createClient({
+   http_headers: {
+     'My-Auth-Header': 'secret',
+   },
+ })
+ ```
@@ -0,0 +1,103 @@
+ # Query Parameters Not Interpolated
+
+ > **Applies to:** all versions. NULL parameter binding was fixed in `0.0.16`. Tuple support via `TupleParam` wrapper and JS `Map` as a query parameter were added in `>= 1.9.0`. BigInt values in query parameters are supported since `>= 1.15.0`. Boolean formatting in `Array`/`Tuple`/`Map` params was fixed in `>= 1.13.0`.
+
+ Use the `{name: type}` syntax in the query string and pass values via `query_params`:
+
+ ```js
+ await client.query({
+   query: 'SELECT plus({val1: Int32}, {val2: Int32})',
+   format: 'CSV',
+   query_params: { val1: 10, val2: 20 },
+ })
+ ```
+
+ ## Never use template literals for user values
+
+ When `$1`/`?` don't work, a common instinct is to interpolate values directly with a template literal. Don't — this bypasses ClickHouse's server-side escaping and opens the door to SQL injection:
+
+ ```js
+ // ❌ Dangerous — never do this with user-controlled values
+ const userId = req.params.id
+ await client.query({ query: `SELECT * FROM users WHERE id = ${userId}` })
+
+ // ✓ Safe — parameterized
+ await client.query({
+   query: 'SELECT * FROM users WHERE id = {id: UInt32}',
+   query_params: { id: userId },
+ })
+ ```
+
+ Always bring this up when answering query-params questions, especially when the user is coming from another database (PostgreSQL, MySQL, etc.) — they're the most likely to reach for template literals as a fallback.
+
+ ## Common mistake: wrong parameter syntax
+
+ The ClickHouse JS client uses ClickHouse's native `{name: type}` syntax — not `$1`/`?`/`:name` placeholders from other databases:
+
+ ```js
+ // ❌ Wrong — none of these placeholder styles work:
+ //   'SELECT * FROM t WHERE id = $1'
+ //   'SELECT * FROM t WHERE id = ?'
+ //   'SELECT * FROM t WHERE id = :id'
+
+ // ✓ Correct
+ await client.query({
+   query: 'SELECT * FROM t WHERE id = {id: UInt32}',
+   query_params: { id: 42 },
+ })
+ ```
+
+ ## Array parameters
+
+ ```js
+ await client.query({
+   query: 'SELECT * FROM t WHERE id IN {ids: Array(UInt32)}',
+   format: 'JSONEachRow',
+   query_params: { ids: [1, 2, 3] },
+ })
+ ```
+
+ ## Tuple parameters (`>= 1.9.0`)
+
+ Use the `TupleParam` wrapper to pass a tuple:
+
+ ```js
+ import { TupleParam, createClient } from '@clickhouse/client'
+
+ const client = createClient({
+   url: 'http://localhost:8123',
+ })
+
+ await client.query({
+   query: 'SELECT {t: Tuple(UInt32, String)}',
+   format: 'JSONEachRow',
+   query_params: { t: new TupleParam([42, 'hello']) },
+ })
+ ```
+
+ ## Map parameters (`>= 1.9.0`)
+
+ Pass a JS `Map` directly:
+
+ ```js
+ await client.query({
+   query: 'SELECT {m: Map(String, UInt32)}',
+   format: 'JSONEachRow',
+   query_params: { m: new Map([['key', 1]]) },
+ })
+ ```
+
+ ## NULL parameters
+
+ Pass `null` directly — binding fixed in `0.0.16`:
+
+ ```js
+ await client.query({
+   query: 'SELECT {val: Nullable(String)}',
+   format: 'JSONEachRow',
+   query_params: { val: null },
+ })
+ ```
@@ -0,0 +1,25 @@
+ # Read-Only User Errors
+
+ > **Applies to:** all versions. In `>= 1.0.0`, `compression.response` was changed to **disabled by default** specifically to avoid this confusing error for read-only users. If you are on `< 1.0.0`, response compression was enabled by default and you must explicitly disable it.
+
+ **Symptom:** Error when using `compression: { response: true }` with a `readonly=1` user.
+
+ **Cause:** Response compression requires the `enable_http_compression` setting, which `readonly=1` users cannot change. Note: **request compression** (`compression: { request: true }`) is unaffected by this restriction — only response compression triggers the error.
+
+ **Fix:** Remove response compression for read-only users:
+
+ ```js
+ import { createClient } from '@clickhouse/client'
+
+ // Don't do this with a readonly=1 user:
+ // compression: { response: true }
+
+ const client = createClient({
+   username: 'my_readonly_user',
+   password: '...',
+   // compression omitted, or explicitly set to false
+   compression: {
+     response: false,
+   },
+ })
+ ```
@@ -0,0 +1,191 @@
+ # Socket Hang-Up / ECONNRESET
+
+ **Symptom:** `socket hang up` or `ECONNRESET` errors, often intermittent.
+
+ **Root cause:** The server or load balancer closes the Keep-Alive connection before the client detects it and stops reusing the socket.
+
+ **Quick triage:**
+
+ - Errors on every request → likely dangling stream (Step 1–2)
+ - Errors only after idle periods → Keep-Alive timeout mismatch (Step 3)
+ - Errors on long-running queries (INSERT FROM SELECT, etc.) → load balancer idle timeout (Step 4)
+ - Can't diagnose → disable Keep-Alive as a last resort (Step 5)
+
+ ## Step 1 — Enable WARN-level logging to find dangling streams
+
+ > **Requires:** `>= 0.2.0` (logging support with `log.level` config option). In `>= 1.18.1`, the default log level changed from `OFF` to `WARN`, so this step may already be active. In `>= 1.18.2`, the client auto-emits a WARN log with Keep-Alive troubleshooting hints when an `ECONNRESET` is detected. In `>= 1.12.0`, a warning is logged when a socket is closed without fully consuming the stream.
+
+ ```js
+ import { createClient, ClickHouseLogLevel } from '@clickhouse/client'
+
+ const client = createClient({
+   log: { level: ClickHouseLogLevel.WARN },
+ })
+ ```
+
+ Look for log lines about unconsumed or dangling streams — these are a common hidden cause. A **dangling stream** is a query response stream that was never fully consumed or explicitly closed with `ResultSet.close()`. Because the Node.js client reuses sockets (Keep-Alive), leaving a stream open corrupts the socket and causes the _next_ request to fail with `ECONNRESET`. Errors on **every request** strongly suggest dangling streams rather than a Keep-Alive timeout mismatch.
+
+ **Common dangling stream patterns:**
+
+ ```js
+ // ❌ Wrong — result stream never consumed; socket is left open
+ const resultSet = await client.query({ query: 'SELECT ...' })
+ // resultSet is abandoned without calling .json(), .text(), .stream(), or .close()
+
+ // ❌ Wrong — stream created but not fully piped/iterated
+ const resultSet = await client.query({
+   query: 'SELECT ...',
+   format: 'JSONEachRow',
+ })
+ const stream = resultSet.stream()
+ // stream is never iterated and resultSet is never closed
+
+ // ✓ Correct — consume via .json()
+ const resultSet = await client.query({ query: 'SELECT ...' })
+ const data = await resultSet.json()
+
+ // ✓ Correct — consume via async iteration
+ const resultSet = await client.query({
+   query: 'SELECT ...',
+   format: 'JSONEachRow',
+ })
+ for await (const rows of resultSet.stream()) {
+   // process rows
+ }
+
+ // ✓ Correct — explicitly close; this destroys the underlying socket immediately
+ const resultSet = await client.query({ query: 'SELECT ...' })
+ resultSet.close()
+ ```
+
+ ## Step 2 — Check your ESLint setup
+
+ Add the [`no-floating-promises`](https://typescript-eslint.io/rules/no-floating-promises/) ESLint rule. Unhandled promises leave streams dangling, which can cause the server to close the socket.
64
+
65
+ Even with `await`, if the returned `ResultSet` is not consumed (no `.json()`, `.text()`, `.close()`, or full stream iteration), the socket is left open. The ESLint rule catches the promise case; code review is needed for the "awaited but unconsumed result" case.
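As a sketch, a minimal `typescript-eslint` setup that enables the rule might look like the following (legacy `.eslintrc` format shown; adapt it for flat config if you use ESLint ≥ 9). The rule is type-aware, so `parserOptions.project` must point at a tsconfig:

```js
// .eslintrc.cjs — minimal sketch; assumes @typescript-eslint/parser and
// @typescript-eslint/eslint-plugin are installed in the project
module.exports = {
  parser: '@typescript-eslint/parser',
  parserOptions: {
    // no-floating-promises needs type information, hence the tsconfig reference
    project: './tsconfig.json',
  },
  plugins: ['@typescript-eslint'],
  rules: {
    '@typescript-eslint/no-floating-promises': 'error',
  },
}
```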
66
+
67
+ ## Step 3 — Find the server's Keep-Alive timeout
68
+
69
+ ```bash
70
+ curl -v --data-binary "SELECT 1" <your_clickhouse_url>
71
+ ```
72
+
73
+ Check the response headers:
74
+
75
+ ```
76
+ < Connection: Keep-Alive
77
+ < Keep-Alive: timeout=10
78
+ ```
79
+
80
+ > **Requires:** `>= 0.3.0` (`keep_alive.idle_socket_ttl` was introduced in 0.3.0 with a default of 2500 ms, replacing the older `keep_alive.socket_ttl` from 0.1.1 which was removed in 0.3.0).
81
+
82
+ The default `idle_socket_ttl` in the client is **2500 ms**, which is safe for servers with a 3 s timeout (common in ClickHouse < 23.11). If your server has a higher timeout (e.g., 10 s), you can safely increase:
83
+
84
+ ```js
85
+ const client = createClient({
86
+ keep_alive: {
87
+ idle_socket_ttl: 9000, // stay ~500ms below the server's timeout
88
+ },
89
+ })
90
+ ```
91
+
92
+ > ⚠️ If you still get errors after increasing the value, **lower** it instead of raising it further.
93
+
94
+ > **Tip (`>= 1.18.3`):** Enable `keep_alive.eagerly_destroy_stale_sockets: true` to proactively destroy sockets that have been idle longer than `idle_socket_ttl` before each request. This helps when event loop delays prevent the idle timeout callback from firing on time.
95
+
96
+ ## Step 4 — Long-running queries with no data in/out (INSERT FROM SELECT, etc.)
97
+
98
+ > **Requires:** `>= 1.0.0` (`request_timeout` default was fixed to 30 000 ms in 0.3.0; `url`-based configuration including `request_timeout` via URL params available since 1.0.0).
99
+
100
+ Load balancers may close idle connections mid-query. Force periodic progress headers:
101
+
102
+ ```js
103
+ const client = createClient({
104
+ request_timeout: 400_000, // e.g. 400s for long queries
105
+ clickhouse_settings: {
106
+ send_progress_in_http_headers: 1,
107
+ http_headers_progress_interval_ms: '110000', // string — UInt64 type; set ~10s below LB idle timeout
108
+ },
109
+ })
110
+ ```
111
+
112
+ ### ⚠️ Critical: 16 KB Node.js Header Size Limit
113
+
114
+ **Node.js defaults to a total received HTTP header limit of approximately 16 KB (this can be increased via the `--max-http-header-size` CLI flag[^max-header-size]).** ClickHouse sends a new progress header (~200 bytes) at each interval, so after roughly 75 progress headers accumulate, Node.js throws an exception and terminates the request unless that limit is raised.
115
+
116
+ [^max-header-size]: Node.js also exposes a `maxHeaderSize` option on `http(s).request`, but the ClickHouse JS client currently does not forward it through `createClient`. For now, the practical workaround in clickhouse-js is to either use the `--max-http-header-size` CLI flag / `NODE_OPTIONS` (process-wide) or supply a custom `http.Agent` configured with `maxHeaderSize`. A dedicated client option is coming soon.
117
+
118
+ **Maximum safe query duration formula:**
119
+
120
+ ```
121
+ Max duration (seconds) ≈ http_headers_progress_interval_ms × 75 ÷ 1000
122
+ ```
123
+
124
+ **Examples:**
125
+
126
+ - `http_headers_progress_interval_ms: '10000'` (10s) → **~12.5 minutes** max safe duration
127
+ - `http_headers_progress_interval_ms: '60000'` (60s) → **~75 minutes** max safe duration
128
+ - `http_headers_progress_interval_ms: '120000'` (120s) → **~2.5 hours** max safe duration
129
+
130
+ > **Note:** `http_headers_progress_interval_ms` is a `UInt64` ClickHouse setting, so it must be passed as a **string** (e.g., `'10000'`).
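The formula can be sketched as a small helper (hypothetical, not part of the client API; `75` reflects the ~16 KB default header limit):

```js
// Hypothetical helper illustrating the formula above; not part of the client API.
// maxProgressHeaders ≈ 75 under Node's default ~16 KB received-header limit.
function maxSafeDurationSeconds(progressIntervalMs, maxProgressHeaders = 75) {
  return (progressIntervalMs * maxProgressHeaders) / 1000
}

maxSafeDurationSeconds(10_000) // 750 s ≈ 12.5 minutes
maxSafeDurationSeconds(60_000) // 4500 s ≈ 75 minutes
```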
131
+
132
+ **Raising the Node.js header limit (e.g., to 64 KB):**
133
+
134
+ If you need a longer max safe duration without lengthening the progress interval, raise Node's HTTP header limit. For example, increasing it from the default 16 KB to **64 KB** quadruples the max safe duration (≈300 progress headers instead of ≈75).
135
+
136
+ ```bash
137
+ # Option 1 — CLI flag when launching your app
138
+ node --max-http-header-size=65536 app.js
139
+
140
+ # Option 2 — environment variable (works with any Node entry point, including npm/ts-node)
141
+ NODE_OPTIONS="--max-http-header-size=65536" node app.js
142
+ ```
143
+
144
+ With `maxHeaderSize = 65536` (64 KB), the formula becomes:
145
+
146
+ ```
147
+ Max duration (seconds) ≈ http_headers_progress_interval_ms × 300 ÷ 1000
148
+ ```
149
+
150
+ Examples at 64 KB:
151
+
152
+ - `http_headers_progress_interval_ms: '10000'` (10s) → **~50 minutes** max safe duration
153
+ - `http_headers_progress_interval_ms: '60000'` (60s) → **~5 hours** max safe duration
154
+ - `http_headers_progress_interval_ms: '120000'` (120s) → **~10 hours** max safe duration
155
+
156
+ **Guidelines for choosing the interval** (subject to your load balancer's idle timeout — see trade-offs below):
157
+
158
+ 1. **For queries under 12 minutes:** Use `'10000'` ms (10s) intervals, if your LB idle timeout allows
159
+ 2. **For queries 12 min – 1 hour:** Use `'60000'` ms (60s) intervals, if your LB idle timeout allows
160
+ 3. **For queries 1–2 hours:** Use `'120000'` ms (120s) intervals, if your LB idle timeout allows
161
+ 4. **For mutations over 2 hours:** Use the fire-and-forget pattern (see below)
162
+ 5. **For SELECT queries over 2 hours:** Increase `http_headers_progress_interval_ms` to extend the safe duration, while keeping it below your LB idle timeout and within Node.js header-limit constraints
163
+
164
+ Use this command to experiment and debug:
165
+
166
+ ```bash
167
+ curl -v "http://localhost:8123/?function_sleep_max_microseconds_per_block=10000000&wait_end_of_query=1&send_progress_in_http_headers=1&max_block_size=1&query=select+sum(sleepEachRow(1))+from+numbers(10)+FORMAT+JSONEachRow"
168
+ ```
169
+
170
+ You may need to experiment with your exact load balancer stack to find reliable values.
171
+
172
+ **Important trade-offs:**
173
+
174
+ - **Shorter intervals** = better load balancer keep-alive (prevents idle timeout) but **lower max duration**
175
+ - **Longer intervals** = higher max duration but **higher risk of LB idle timeout**
176
+
177
+ As a rule of thumb, set the interval slightly **below** your load balancer's idle timeout (typically 5–20 seconds below, depending on your load balancer, proxies, and network behavior), while staying under the header limit for your expected query duration.
178
+
179
+ **Alternatively — fire-and-forget (mutations only):** Mutations (`INSERT ... SELECT`, `OPTIMIZE`, `ALTER`) are not cancelled on the server when the client connection is lost. You can send the mutation and immediately close the connection, then poll `system.query_log` or `system.mutations` for status. This bypasses both the load balancer idle timeout and the Node.js header limit. See the [client repo examples](https://github.com/ClickHouse/clickhouse-js/tree/main/examples) for a concrete implementation.
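A minimal sketch of the pattern, assuming a reachable server and hypothetical `target`/`huge_source` tables; the `query_id` is generated up front so the mutation can be found later in `system.query_log`:

```js
import { randomUUID } from 'node:crypto'
import { createClient } from '@clickhouse/client'

const client = createClient({ url: 'http://localhost:8123' })
const queryId = randomUUID()

// Send the mutation with a known query_id and deliberately swallow a dropped
// connection — the server keeps executing the mutation regardless.
client
  .command({
    query: 'INSERT INTO target SELECT * FROM huge_source',
    query_id: queryId,
  })
  .catch(() => {
    // connection loss is expected here; do not treat it as a failure
  })

// Later (possibly from a different process): poll system.query_log for status.
const rs = await client.query({
  query: `
    SELECT type, exception
    FROM system.query_log
    WHERE query_id = {queryId: String}
    ORDER BY event_time DESC
    LIMIT 1
  `,
  query_params: { queryId },
  format: 'JSONEachRow',
})
console.log(await rs.json())
```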
180
+
181
+ ## Step 5 — Disable Keep-Alive entirely (last resort)
182
+
183
+ > **Requires:** `>= 0.1.1` (Keep-Alive disable option introduced in 0.1.1).
184
+
185
+ Adds overhead (new TCP connection per request) but eliminates all Keep-Alive issues:
186
+
187
+ ```js
188
+ const client = createClient({
189
+ keep_alive: { enabled: false },
190
+ })
191
+ ```
@@ -0,0 +1,91 @@
1
+ # TLS / Certificate Errors
2
+
3
+ > **Requires:** `>= 0.0.8` (basic and mutual TLS support added in 0.0.8). For custom HTTP agent with TLS, see `>= 1.2.0` (`http_agent` option); note that when using a custom agent, the `tls` config option is ignored.
4
+
5
+ ## Basic TLS (CA certificate only)
6
+
7
+ ```js
8
+ import fs from 'fs'
9
+ import { createClient } from '@clickhouse/client'
10
+
11
+ const client = createClient({
12
+ url: 'https://<hostname>:<port>',
13
+ username: '<user>',
14
+ password: '<pass>',
15
+ tls: {
16
+ ca_cert: fs.readFileSync('certs/CA.pem'),
17
+ },
18
+ })
19
+ ```
20
+
21
+ ## Mutual TLS (client certificate + key)
22
+
23
+ ```js
24
+ import fs from 'fs'
25
+ import { createClient } from '@clickhouse/client'
26
+
27
+ const client = createClient({
28
+ url: 'https://<hostname>:<port>',
29
+ username: '<user>',
30
+ tls: {
31
+ ca_cert: fs.readFileSync('certs/CA.pem'),
32
+ cert: fs.readFileSync('certs/client.crt'),
33
+ key: fs.readFileSync('certs/client.key'),
34
+ },
35
+ })
36
+ ```
37
+
38
+ > **Tip (`>= 1.2.0`):** If you need a custom HTTP(S) agent, use the `http_agent` option. Only set `set_basic_auth_header: false` if you must avoid sending the basic-auth `Authorization` header (for example, due to a header conflict); in that case, provide alternative auth headers such as `X-ClickHouse-User` / `X-ClickHouse-Key` via `http_headers`.
39
+
40
+ ## Common TLS errors
41
+
42
+ ### `UNABLE_TO_VERIFY_LEAF_SIGNATURE` / `UNABLE_TO_GET_ISSUER_CERT_LOCALLY`
43
+
44
+ **Scenario A — Private/internal CA (most common for self-hosted):** The server's certificate was issued by a private CA that Node.js doesn't trust. Pass the CA certificate explicitly:
45
+
46
+ ```js
47
+ tls: {
48
+ ca_cert: fs.readFileSync('certs/CA.pem'),
49
+ }
50
+ ```
51
+
52
+ **Scenario B — ClickHouse Cloud:** The CA is a well-known public CA; this error typically means the system CA bundle is outdated or the URL/hostname is wrong. Updating Node.js or the system certificates usually resolves it.
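If updating Node.js or the system bundle is not immediately possible, Node can be pointed at an additional CA bundle via the standard `NODE_EXTRA_CA_CERTS` environment variable (the path shown is an assumption; use your distribution's bundle location):

```bash
# Trust an additional CA bundle process-wide, without code changes
NODE_EXTRA_CA_CERTS=/etc/ssl/certs/ca-certificates.crt node app.js
```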
53
+
54
+ ### `self signed certificate` / `self signed certificate in certificate chain`
55
+
56
+ The server uses a self-signed cert (the certificate is its own CA). Options in order of preference:
57
+
58
+ 1. Pass the self-signed cert as the CA:
59
+
60
+ ```js
61
+ tls: {
62
+ ca_cert: fs.readFileSync('certs/server.crt')
63
+ }
64
+ ```
65
+
66
+ 2. For development only — disable verification via a custom agent (`>= 1.2.0`):
67
+
68
+ ```js
69
+ import https from 'https'
70
+ import { createClient } from '@clickhouse/client'
71
+
72
+ const client = createClient({
73
+ url: 'https://<hostname>:<port>',
74
+ username: '<user>',
75
+ password: '<pass>',
76
+ http_agent: new https.Agent({ rejectUnauthorized: false }),
77
+ // Optional: only disable the basic-auth Authorization header if you need to
78
+ // provide alternative auth headers instead.
79
+ set_basic_auth_header: false,
80
+ http_headers: {
81
+ 'X-ClickHouse-User': '<user>',
82
+ 'X-ClickHouse-Key': '<pass>',
83
+ },
84
+ })
85
+ ```
86
+
87
+ > ⚠️ Never use `rejectUnauthorized: false` in production — it disables all certificate verification.
88
+
89
+ ### `ERR_SSL_WRONG_VERSION_NUMBER` / `ECONNREFUSED` on HTTPS URL
90
+
91
+ The client is connecting with HTTPS but the server is listening on plain HTTP. Change the URL scheme to `http://` or enable TLS on the ClickHouse server.
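A quick way to confirm which protocol the server is actually speaking (assumes the default HTTP port `8123`; adjust for your deployment):

```bash
# If this prints "Ok.", the port speaks plain HTTP — use http:// in the client URL
curl -sS http://<hostname>:8123/ping
```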