bun-types 1.2.9-canary.20250403T140620 → 1.2.9-canary.20250404T140622
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/bun.d.ts +26 -8
- package/docs/api/fetch.md +1 -1
- package/docs/api/spawn.md +15 -1
- package/docs/cli/publish.md +1 -1
- package/docs/guides/ecosystem/nuxt.md +1 -1
- package/docs/guides/install/add-peer.md +2 -2
- package/docs/guides/install/from-npm-install-to-bun-install.md +1 -1
- package/docs/guides/test/run-tests.md +3 -3
- package/docs/guides/test/snapshot.md +3 -3
- package/docs/guides/test/update-snapshots.md +1 -1
- package/docs/guides/util/version.md +1 -1
- package/docs/installation.md +4 -4
- package/docs/runtime/debugger.md +3 -3
- package/docs/test/configuration.md +87 -0
- package/docs/test/coverage.md +22 -1
- package/docs/test/discovery.md +85 -0
- package/docs/test/dom.md +1 -1
- package/docs/test/mocks.md +81 -4
- package/docs/test/reporters.md +108 -0
- package/docs/test/runtime-behavior.md +93 -0
- package/docs/test/snapshots.md +53 -0
- package/docs/test/time.md +21 -1
- package/docs/test/writing.md +170 -7
- package/globals.d.ts +5 -1
- package/overrides.d.ts +88 -1
- package/package.json +1 -1
- package/sqlite.d.ts +33 -31
package/docs/test/reporters.md
ADDED

@@ -0,0 +1,108 @@
+bun test supports different output formats through reporters. This document covers both built-in reporters and how to implement your own custom reporters.
+
+## Built-in Reporters
+
+### Default Console Reporter
+
+By default, bun test outputs results to the console in a human-readable format:
+
+```sh
+test/package-json-lint.test.ts:
+✓ test/package.json [0.88ms]
+✓ test/js/third_party/grpc-js/package.json [0.18ms]
+✓ test/js/third_party/svelte/package.json [0.21ms]
+✓ test/js/third_party/express/package.json [1.05ms]
+
+ 4 pass
+ 0 fail
+ 4 expect() calls
+Ran 4 tests in 1.44ms
+```
+
+When a terminal doesn't support colors, the output avoids non-ASCII characters:
+
+```sh
+test/package-json-lint.test.ts:
+(pass) test/package.json [0.48ms]
+(pass) test/js/third_party/grpc-js/package.json [0.10ms]
+(pass) test/js/third_party/svelte/package.json [0.04ms]
+(pass) test/js/third_party/express/package.json [0.04ms]
+
+ 4 pass
+ 0 fail
+ 4 expect() calls
+Ran 4 tests across 1 files. [0.66ms]
+```
+
+### JUnit XML Reporter
+
+For CI/CD environments, Bun supports generating JUnit XML reports. JUnit XML is a widely adopted format for test results that can be parsed by many CI/CD systems, including GitLab, Jenkins, and others.
+
+#### Using the JUnit Reporter
+
+To generate a JUnit XML report, use the `--reporter=junit` flag along with `--reporter-outfile` to specify the output file:
+
+```sh
+$ bun test --reporter=junit --reporter-outfile=./junit.xml
+```
+
+This continues to output to the console as usual while also writing the JUnit XML report to the specified path at the end of the test run.
+
+#### Configuring via bunfig.toml
+
+You can also configure the JUnit reporter in your `bunfig.toml` file:
+
+```toml
+[test.reporter]
+junit = "path/to/junit.xml" # Output path for JUnit XML report
+```
+
+#### Environment Variables in JUnit Reports
+
+The JUnit reporter automatically includes environment information as `<properties>` in the XML output. This can be helpful for tracking test runs in CI environments.
+
+Specifically, it includes the following environment variables when available:
+
+| Environment Variable | Property Name | Description |
+| ----------------------------------------------------------------------- | ------------- | ---------------------- |
+| `GITHUB_RUN_ID`, `GITHUB_SERVER_URL`, `GITHUB_REPOSITORY`, `CI_JOB_URL` | `ci` | CI build information |
+| `GITHUB_SHA`, `CI_COMMIT_SHA`, `GIT_SHA` | `commit` | Git commit identifiers |
+| System hostname | `hostname` | Machine hostname |
+
+This makes it easier to track which environment and commit a particular test run was for.
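Standard JUnit XML encodes such metadata as `<property>` entries on a test suite. An illustrative fragment follows; the values are placeholders and the exact layout Bun emits may differ:

```xml
<testsuite name="test/package-json-lint.test.ts" tests="4" failures="0">
  <properties>
    <property name="ci" value="https://github.com/owner/repo/actions/runs/123" />
    <property name="commit" value="abc1234" />
    <property name="hostname" value="ci-runner-01" />
  </properties>
</testsuite>
```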
73
|
+
|
|
74
|
+
#### Current Limitations
|
|
75
|
+
|
|
76
|
+
The JUnit reporter currently has a few limitations that will be addressed in future updates:
|
|
77
|
+
|
|
78
|
+
- `stdout` and `stderr` output from individual tests are not included in the report
|
|
79
|
+
- Precise timestamp fields per test case are not included
|
|
80
|
+
|
|
81
|
+
### GitHub Actions reporter
|
|
82
|
+
|
|
83
|
+
Bun test automatically detects when it's running inside GitHub Actions and emits GitHub Actions annotations to the console directly. No special configuration is needed beyond installing Bun and running `bun test`.
|
|
84
|
+
|
|
85
|
+
For a GitHub Actions workflow configuration example, see the [CI/CD integration](../cli/test.md#cicd-integration) section of the CLI documentation.
|
|
86
|
+
|
|
87
|
+
## Custom Reporters
|
|
88
|
+
|
|
89
|
+
Bun allows developers to implement custom test reporters by extending the WebKit Inspector Protocol with additional testing-specific domains.
|
|
90
|
+
|
|
91
|
+
### Inspector Protocol for Testing
|
|
92
|
+
|
|
93
|
+
To support test reporting, Bun extends the standard WebKit Inspector Protocol with two custom domains:
|
|
94
|
+
|
|
95
|
+
1. **TestReporter**: Reports test discovery, execution start, and completion events
|
|
96
|
+
2. **LifecycleReporter**: Reports errors and exceptions during test execution
|
|
97
|
+
|
|
98
|
+
These extensions allow you to build custom reporting tools that can receive detailed information about test execution in real-time.
|
|
99
|
+
|
|
100
|
+
### Key Events
|
|
101
|
+
|
|
102
|
+
Custom reporters can listen for these key events:
|
|
103
|
+
|
|
104
|
+
- `TestReporter.found`: Emitted when a test is discovered
|
|
105
|
+
- `TestReporter.start`: Emitted when a test starts running
|
|
106
|
+
- `TestReporter.end`: Emitted when a test completes
|
|
107
|
+
- `Console.messageAdded`: Emitted when console output occurs during a test
|
|
108
|
+
- `LifecycleReporter.error`: Emitted when an error or exception occurs
|
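A reporter built on these events is essentially an event consumer. The sketch below is hypothetical: the event names come from the list above, but the payload fields (`id`, `name`, `status`, `elapsed`, `message`) are assumptions for illustration, not Bun's actual protocol schema.

```typescript
// Hypothetical payload shapes for the testing-related inspector events.
type ReporterEvent =
  | { method: "TestReporter.found"; id: number; name: string }
  | { method: "TestReporter.start"; id: number }
  | { method: "TestReporter.end"; id: number; status: "pass" | "fail" | "skip"; elapsed: number }
  | { method: "LifecycleReporter.error"; message: string };

// A tiny reporter that tallies results as events arrive.
class SummaryReporter {
  pass = 0;
  fail = 0;
  errors: string[] = [];

  handle(event: ReporterEvent): void {
    switch (event.method) {
      case "TestReporter.end":
        if (event.status === "pass") this.pass++;
        else if (event.status === "fail") this.fail++;
        break;
      case "LifecycleReporter.error":
        this.errors.push(event.message);
        break;
    }
  }
}

// Feeding it a mock event stream:
const reporter = new SummaryReporter();
reporter.handle({ method: "TestReporter.found", id: 1, name: "adds" });
reporter.handle({ method: "TestReporter.start", id: 1 });
reporter.handle({ method: "TestReporter.end", id: 1, status: "pass", elapsed: 2 });
console.log(reporter.pass, reporter.fail); // 1 0
```

In a real reporter, `handle` would be wired to the inspector transport rather than called directly.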
package/docs/test/runtime-behavior.md
ADDED

@@ -0,0 +1,93 @@
+`bun test` is deeply integrated with Bun's runtime. This is part of what makes `bun test` fast and simple to use.
+
+#### `$NODE_ENV` environment variable
+
+`bun test` automatically sets `$NODE_ENV` to `"test"` unless it's already set in the environment or via .env files. This is standard behavior for most test runners and helps ensure consistent test behavior.
+
+```ts
+import { test, expect } from "bun:test";
+
+test("NODE_ENV is set to test", () => {
+  expect(process.env.NODE_ENV).toBe("test");
+});
+```
+
+#### `$TZ` environment variable
+
+By default, all `bun test` runs use UTC (`Etc/UTC`) as the time zone unless overridden by the `TZ` environment variable. This ensures consistent date and time behavior across different development environments.
+
+#### Test Timeouts
+
+Each test has a default timeout of 5000ms (5 seconds) if not explicitly overridden. Tests that exceed this timeout will fail. This can be changed globally with the `--timeout` flag or per-test as the third parameter to the test function.
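The global default can be raised on the command line with the `--timeout` flag; the value is in milliseconds:

```sh
$ bun test --timeout 10000
```

A per-test value passed as the third argument to `test()` takes precedence over this global setting.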
+
+## Error Handling
+
+### Unhandled Errors
+
+`bun test` tracks unhandled promise rejections and errors that occur between tests. If such errors occur, the final exit code will be non-zero (specifically, the count of such errors), even if all tests pass.
+
+This helps catch errors in asynchronous code that might otherwise go unnoticed:
+
+```ts
+import { test } from "bun:test";
+
+test("test 1", () => {
+  // This test passes
+});
+
+// This error happens outside any test
+setTimeout(() => {
+  throw new Error("Unhandled error");
+}, 0);
+
+test("test 2", () => {
+  // This test also passes
+});
+
+// The test run will still fail with a non-zero exit code
+// because of the unhandled error
+```
+
+Internally, this tracking takes precedence over `process.on("unhandledRejection")` and `process.on("uncaughtException")` handlers, which makes it simpler to integrate with existing code.
+
+## Using General CLI Flags with Tests
+
+Several Bun CLI flags can be used with `bun test` to modify its behavior:
+
+### Memory Usage
+
+- `--smol`: Reduces memory usage for the test runner VM
+
+### Debugging
+
+- `--inspect`, `--inspect-brk`: Attaches the debugger to the test runner process
+
+### Module Loading
+
+- `--preload`: Runs scripts before test files (useful for global setup/mocks)
+- `--define`: Sets compile-time constants
+- `--loader`: Configures custom loaders
+- `--tsconfig-override`: Uses a different tsconfig
+- `--conditions`: Sets package.json conditions for module resolution
+- `--env-file`: Loads environment variables for tests
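For example, a shared setup script can be preloaded before every test file (the `./setup.ts` path here is just an illustration):

```sh
$ bun test --preload ./setup.ts
```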
+
+### Installation-related Flags
+
+- `--prefer-offline`, `--frozen-lockfile`, etc.: Affect any network requests or auto-installs during test execution
+
+## Watch and Hot Reloading
+
+When running `bun test` with the `--watch` flag, the test runner will watch for file changes and re-run affected tests.
+
+The `--hot` flag provides similar functionality but is more aggressive about trying to preserve state between runs. For most test scenarios, `--watch` is the recommended option.
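A typical watch-mode invocation:

```sh
$ bun test --watch
```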
+
+## Global Variables
+
+The following globals are automatically available in test files without importing (though they can be imported from `bun:test` if preferred):
+
+- `test`, `it`: Define tests
+- `describe`: Group tests
+- `expect`: Make assertions
+- `beforeAll`, `beforeEach`, `afterAll`, `afterEach`: Lifecycle hooks
+- `jest`: Jest global object
+- `vi`: Vitest compatibility alias for common jest methods
package/docs/test/snapshots.md
CHANGED

@@ -1,3 +1,7 @@
+Snapshot testing saves the output of a value and compares it against future test runs. This is particularly useful for UI components, complex objects, or any output that needs to remain consistent.
+
+## Basic snapshots
+
 Snapshot tests are written using the `.toMatchSnapshot()` matcher:
 
 ```ts

@@ -13,3 +17,52 @@ The first time this test is run, the argument to `expect` will be serialized and
 ```bash
 $ bun test --update-snapshots
 ```
+
+## Inline snapshots
+
+For smaller values, you can use inline snapshots with `.toMatchInlineSnapshot()`. These snapshots are stored directly in your test file:
+
+```ts
+import { test, expect } from "bun:test";
+
+test("inline snapshot", () => {
+  // First run: snapshot will be inserted automatically
+  expect({ hello: "world" }).toMatchInlineSnapshot();
+
+  // After first run, the test file will be updated to:
+  // expect({ hello: "world" }).toMatchInlineSnapshot(`
+  //   {
+  //     "hello": "world",
+  //   }
+  // `);
+});
+```
+
+When you run the test, Bun automatically updates the test file itself with the generated snapshot string. This makes the tests more portable and easier to understand, since the expected output is right next to the test.
+
+### Using inline snapshots
+
+1. Write your test with `.toMatchInlineSnapshot()`
+2. Run the test once
+3. Bun automatically updates your test file with the snapshot
+4. On subsequent runs, the value will be compared against the inline snapshot
+
+Inline snapshots are particularly useful for small, simple values where it's helpful to see the expected output right in the test file.
+
+## Error snapshots
+
+You can also snapshot error messages using `.toThrowErrorMatchingSnapshot()` and `.toThrowErrorMatchingInlineSnapshot()`:
+
+```ts
+import { test, expect } from "bun:test";
+
+test("error snapshot", () => {
+  expect(() => {
+    throw new Error("Something went wrong");
+  }).toThrowErrorMatchingSnapshot();
+
+  expect(() => {
+    throw new Error("Another error");
+  }).toThrowErrorMatchingInlineSnapshot();
+});
+```
package/docs/test/time.md
CHANGED

@@ -74,9 +74,29 @@ test("it was 2020, for a moment.", () => {
 });
 ```
 
+## Get mocked time with `jest.now()`
+
+When you're using mocked time (with `setSystemTime` or `useFakeTimers`), you can use `jest.now()` to get the current mocked timestamp:
+
+```ts
+import { test, expect, jest } from "bun:test";
+
+test("get the current mocked time", () => {
+  jest.useFakeTimers();
+  jest.setSystemTime(new Date("2020-01-01T00:00:00.000Z"));
+
+  expect(Date.now()).toBe(1577836800000); // Jan 1, 2020 timestamp
+  expect(jest.now()).toBe(1577836800000); // Same value
+
+  jest.useRealTimers();
+});
+```
+
+This is useful when you need to access the mocked time directly without creating a new Date object.
+
 ## Set the time zone
 
-To change the time zone, either pass the `$TZ` environment variable to `bun test`.
+By default, the time zone for all `bun test` runs is set to UTC (`Etc/UTC`) unless overridden. To change the time zone, either pass the `$TZ` environment variable to `bun test`.
 
 ```sh
 TZ=America/Los_Angeles bun test
package/docs/test/writing.md
CHANGED

@@ -78,9 +78,11 @@ test("wat", async () => {
 
 In `bun:test`, test timeouts throw an uncatchable exception to force the test to stop running and fail. We also kill any child processes that were spawned in the test to avoid leaving behind zombie processes lurking in the background.
 
+The default timeout for each test is 5000ms (5 seconds) if not overridden by this timeout option or `jest.setDefaultTimeout()`.
+
 ### 🧟 Zombie process killer
 
-When a test times out and processes spawned in the test via `Bun.spawn`, `Bun.spawnSync`, or `node:child_process` are not killed, they will be automatically killed and a message will be logged to the console.
+When a test times out and processes spawned in the test via `Bun.spawn`, `Bun.spawnSync`, or `node:child_process` are not killed, they will be automatically killed and a message will be logged to the console. This prevents zombie processes from lingering in the background after timed-out tests.
 
 ## `test.skip`
 

@@ -125,7 +127,7 @@ fix the test.
 
 ## `test.only`
 
-To run a particular test or suite of tests use `test.only()` or `describe.only()`.
+To run a particular test or suite of tests use `test.only()` or `describe.only()`.
 
 ```ts
 import { test, describe } from "bun:test";

@@ -197,22 +199,121 @@ test.todoIf(macOS)("runs on posix", () => {
 });
 ```
 
-## `test.
+## `test.failing`
+
+Use `test.failing()` when you know a test is currently failing but you want to track it and be notified when it starts passing. This inverts the test result:
 
-
+- A failing test marked with `.failing()` will pass
+- A passing test marked with `.failing()` will fail (with a message indicating it's now passing and should be fixed)
+
+```ts
+// This will pass because the test is failing as expected
+test.failing("math is broken", () => {
+  expect(0.1 + 0.2).toBe(0.3); // fails due to floating point precision
+});
+
+// This will fail with a message that the test is now passing
+test.failing("fixed bug", () => {
+  expect(1 + 1).toBe(2); // passes, but we expected it to fail
+});
+```
+
+This is useful for tracking known bugs that you plan to fix later, or for implementing test-driven development.
+
+## Conditional Tests for Describe Blocks
+
+The conditional modifiers `.if()`, `.skipIf()`, and `.todoIf()` can also be applied to `describe` blocks, affecting all tests within the suite:
+
+```ts
+const isMacOS = process.platform === "darwin";
+
+// Only runs the entire suite on macOS
+describe.if(isMacOS)("macOS-specific features", () => {
+  test("feature A", () => {
+    // only runs on macOS
+  });
+
+  test("feature B", () => {
+    // only runs on macOS
+  });
+});
+
+// Skips the entire suite on Windows
+describe.skipIf(process.platform === "win32")("Unix features", () => {
+  test("feature C", () => {
+    // skipped on Windows
+  });
+});
+
+// Marks the entire suite as TODO on Linux
+describe.todoIf(process.platform === "linux")("Upcoming Linux support", () => {
+  test("feature D", () => {
+    // marked as TODO on Linux
+  });
+});
+```
+
+## `test.each` and `describe.each`
+
+To run the same test with multiple sets of data, use `test.each`. This creates a parametrized test that runs once for each test case provided.
 
 ```ts
 const cases = [
   [1, 2, 3],
-  [3, 4,
+  [3, 4, 7],
 ];
 
 test.each(cases)("%p + %p should be %p", (a, b, expected) => {
-
+  expect(a + b).toBe(expected);
+});
+```
+
+You can also use `describe.each` to create a parametrized suite that runs once for each test case:
+
+```ts
+describe.each([
+  [1, 2, 3],
+  [3, 4, 7],
+])("add(%i, %i)", (a, b, expected) => {
+  test(`returns ${expected}`, () => {
+    expect(a + b).toBe(expected);
+  });
+
+  test(`sum is greater than each value`, () => {
+    expect(a + b).toBeGreaterThan(a);
+    expect(a + b).toBeGreaterThan(b);
+  });
+});
+```
+
+### Argument Passing
+
+How arguments are passed to your test function depends on the structure of your test cases:
+
+- If a table row is an array (like `[1, 2, 3]`), each element is passed as an individual argument
+- If a row is not an array (like an object), it's passed as a single argument
+
+```ts
+// Array items passed as individual arguments
+test.each([
+  [1, 2, 3],
+  [4, 5, 9],
+])("add(%i, %i) = %i", (a, b, expected) => {
+  expect(a + b).toBe(expected);
+});
+
+// Object items passed as a single argument
+test.each([
+  { a: 1, b: 2, expected: 3 },
+  { a: 4, b: 5, expected: 9 },
+])("add($a, $b) = $expected", data => {
+  expect(data.a + data.b).toBe(data.expected);
 });
 ```
 
-
+### Format Specifiers
+
+There are a number of options available for formatting the test title:
 
 {% table %}
 

@@ -263,6 +364,68 @@ There are a number of options available for formatting the case label depending
 
 {% /table %}
 
+#### Examples
+
+```ts
+// Basic specifiers
+test.each([
+  ["hello", 123],
+  ["world", 456],
+])("string: %s, number: %i", (str, num) => {
+  // "string: hello, number: 123"
+  // "string: world, number: 456"
+});
+
+// %p for pretty-format output
+test.each([
+  [{ name: "Alice" }, { a: 1, b: 2 }],
+  [{ name: "Bob" }, { x: 5, y: 10 }],
+])("user %p with data %p", (user, data) => {
+  // "user { name: 'Alice' } with data { a: 1, b: 2 }"
+  // "user { name: 'Bob' } with data { x: 5, y: 10 }"
+});
+
+// %# for index
+test.each(["apple", "banana"])("fruit #%# is %s", fruit => {
+  // "fruit #0 is apple"
+  // "fruit #1 is banana"
+});
+```
+
+## Assertion Counting
+
+Bun supports verifying that a specific number of assertions were called during a test:
+
+### expect.hasAssertions()
+
+Use `expect.hasAssertions()` to verify that at least one assertion is called during a test:
+
+```ts
+test("async work calls assertions", async () => {
+  expect.hasAssertions(); // Will fail if no assertions are called
+
+  const data = await fetchData();
+  expect(data).toBeDefined();
+});
+```
+
+This is especially useful for async tests to ensure your assertions actually run.
+
+### expect.assertions(count)
+
+Use `expect.assertions(count)` to verify that a specific number of assertions are called during a test:
+
+```ts
+test("exactly two assertions", () => {
+  expect.assertions(2); // Will fail if not exactly 2 assertions are called
+
+  expect(1 + 1).toBe(2);
+  expect("hello").toContain("ell");
+});
+```
+
+This helps ensure all your assertions run, especially in complex async code with multiple code paths.
+
 ## Matchers
 
 Bun implements the following matchers. Full Jest compatibility is on the roadmap; track progress [here](https://github.com/oven-sh/bun/issues/1825).
package/globals.d.ts
CHANGED

@@ -1098,6 +1098,10 @@ interface Console {
 
 declare var console: Console;
 
+interface ImportMetaEnv {
+  [key: string]: string | undefined;
+}
+
 interface ImportMeta {
   /**
    * `file://` url string for the current module.

@@ -1130,7 +1134,7 @@ interface ImportMeta {
   * import.meta.env === process.env
   * ```
   */
-  readonly env: Bun.Env;
+  readonly env: Bun.Env & NodeJS.ProcessEnv & ImportMetaEnv;
 
   /**
   * @deprecated Use `require.resolve` or `Bun.resolveSync(moduleId, path.dirname(parent))` instead
package/overrides.d.ts
CHANGED

@@ -2,7 +2,7 @@ export {};
 
 declare global {
   namespace NodeJS {
-    interface ProcessEnv extends Bun.Env {}
+    interface ProcessEnv extends Bun.Env, ImportMetaEnv {}
 
     interface Process {
       readonly version: string;

@@ -57,6 +57,93 @@ declare global {
         TRACE_EVENT_PHASE_LINK_IDS: number;
       };
     };
+    binding(m: "uv"): {
+      errname(code: number): string;
+      UV_E2BIG: number;
+      UV_EACCES: number;
+      UV_EADDRINUSE: number;
+      UV_EADDRNOTAVAIL: number;
+      UV_EAFNOSUPPORT: number;
+      UV_EAGAIN: number;
+      UV_EAI_ADDRFAMILY: number;
+      UV_EAI_AGAIN: number;
+      UV_EAI_BADFLAGS: number;
+      UV_EAI_BADHINTS: number;
+      UV_EAI_CANCELED: number;
+      UV_EAI_FAIL: number;
+      UV_EAI_FAMILY: number;
+      UV_EAI_MEMORY: number;
+      UV_EAI_NODATA: number;
+      UV_EAI_NONAME: number;
+      UV_EAI_OVERFLOW: number;
+      UV_EAI_PROTOCOL: number;
+      UV_EAI_SERVICE: number;
+      UV_EAI_SOCKTYPE: number;
+      UV_EALREADY: number;
+      UV_EBADF: number;
+      UV_EBUSY: number;
+      UV_ECANCELED: number;
+      UV_ECHARSET: number;
+      UV_ECONNABORTED: number;
+      UV_ECONNREFUSED: number;
+      UV_ECONNRESET: number;
+      UV_EDESTADDRREQ: number;
+      UV_EEXIST: number;
+      UV_EFAULT: number;
+      UV_EFBIG: number;
+      UV_EHOSTUNREACH: number;
+      UV_EINTR: number;
+      UV_EINVAL: number;
+      UV_EIO: number;
+      UV_EISCONN: number;
+      UV_EISDIR: number;
+      UV_ELOOP: number;
+      UV_EMFILE: number;
+      UV_EMSGSIZE: number;
+      UV_ENAMETOOLONG: number;
+      UV_ENETDOWN: number;
+      UV_ENETUNREACH: number;
+      UV_ENFILE: number;
+      UV_ENOBUFS: number;
+      UV_ENODEV: number;
+      UV_ENOENT: number;
+      UV_ENOMEM: number;
+      UV_ENONET: number;
+      UV_ENOPROTOOPT: number;
+      UV_ENOSPC: number;
+      UV_ENOSYS: number;
+      UV_ENOTCONN: number;
+      UV_ENOTDIR: number;
+      UV_ENOTEMPTY: number;
+      UV_ENOTSOCK: number;
+      UV_ENOTSUP: number;
+      UV_EOVERFLOW: number;
+      UV_EPERM: number;
+      UV_EPIPE: number;
+      UV_EPROTO: number;
+      UV_EPROTONOSUPPORT: number;
+      UV_EPROTOTYPE: number;
+      UV_ERANGE: number;
+      UV_EROFS: number;
+      UV_ESHUTDOWN: number;
+      UV_ESPIPE: number;
+      UV_ESRCH: number;
+      UV_ETIMEDOUT: number;
+      UV_ETXTBSY: number;
+      UV_EXDEV: number;
+      UV_UNKNOWN: number;
+      UV_EOF: number;
+      UV_ENXIO: number;
+      UV_EMLINK: number;
+      UV_EHOSTDOWN: number;
+      UV_EREMOTEIO: number;
+      UV_ENOTTY: number;
+      UV_EFTYPE: number;
+      UV_EILSEQ: number;
+      UV_ESOCKTNOSUPPORT: number;
+      UV_ENODATA: number;
+      UV_EUNATCH: number;
+    };
     binding(m: string): object;
   }
 
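The `ImportMetaEnv` index signature merged into `NodeJS.ProcessEnv` means any string key on `process.env` type-checks as `string | undefined`. A minimal sketch of what that allows (`MY_HYPOTHETICAL_FLAG` is a made-up key, not a real Bun variable):

```typescript
// Reading an arbitrary, possibly-unset variable: the type is string | undefined.
const flag: string | undefined = process.env.MY_HYPOTHETICAL_FLAG;

// Narrow before use, as with any possibly-undefined env var.
const enabled: boolean = flag === "1";
console.log(enabled);
```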
package/package.json
CHANGED