@simulatte/webgpu 0.2.3 → 0.2.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -7,19 +7,41 @@ retrofitted from package version history and package-surface commits so the npm
  package has a conventional release history alongside the broader Fawn status
  and process documents.
 
+ ## [0.2.4] - 2026-03-11
+
+ ### Changed
+
+ - `doe.runCompute()` now infers binding access from Doe helper-created buffer
+ usage and fails fast when a bare binding lacks Doe usage metadata or uses a
+ non-bindable/ambiguous usage shape.
+ - Simplified the compute-surface README example to use inferred binding access
+ (`bindings: [input, output]`) and the device-bound `doe.bind(await
+ requestDevice())` flow directly.
+ - Clarified the install contract for non-prebuilt platforms: the `node-gyp`
+ fallback only builds the native addon and does not bundle `libwebgpu_doe`
+ plus the required Dawn sidecar.
+ - Aligned the published package docs and API contract with the current
+ `@simulatte/webgpu`, `@simulatte/webgpu/compute`, and `@simulatte/webgpu/full`
+ export surface.
+
  ## [0.2.3] - 2026-03-10
 
  ### Added
 
  - macOS arm64 (Metal) prebuilds shipped alongside existing Linux x64 (Vulkan).
- - Monte Carlo pi estimation example in the README, replacing the trivial
- buffer-readback snippet with a real GPU compute demonstration.
  - "Verify your install" section with `npm run smoke` and `npm test` guidance.
+ - Added explicit package export surfaces for `@simulatte/webgpu` (default
+ full) and `@simulatte/webgpu/compute`, plus the first `doe` ergonomic
+ namespace for buffer/readback/compute helpers.
+ - Added `doe.bind(device)` so the ergonomic helper surface supports device-bound
+ workflows in addition to static helper calls.
 
  ### Changed
 
- - Restructured package README for consumers: examples, quickstart, and
- verification first; building from source and Fawn developer context at the end.
+ - Restructured the package README around the default full surface,
+ `@simulatte/webgpu/compute`, and the `doe` helper surface.
+ - `doe.runCompute()` now infers binding access from Doe helper-created buffer
+ usage and fails fast for bare bindings that do not carry Doe usage metadata.
  - Fixed broken README image links to use bundled asset paths instead of dead
  raw GitHub URLs.
  - Root Fawn README now directs package users to the package README.
package/README.md CHANGED
@@ -1,16 +1,38 @@
  # @simulatte/webgpu
 
- Headless WebGPU for Node.js and Bun, powered by Doe, Fawn's Zig WebGPU
- runtime.
+ Headless WebGPU for Node.js and Bun, powered by Doe.
 
  <p align="center">
  <img src="assets/fawn-icon-main-256.png" alt="Fawn logo" width="196" />
  </p>
 
- Use this package for headless compute, CI, benchmarking, and offscreen GPU
- execution. It is built for explicit runtime behavior, deterministic
- traceability, and artifact-backed performance work. It is not a DOM/canvas
- package and it should not be read as a promise of full browser-surface parity.
+ Use this package for compute, CI, benchmarking, and offscreen GPU execution.
+ It is not a DOM/canvas package and it does not target browser-surface parity.
+
+ ## Install
+
+ ```bash
+ npm install @simulatte/webgpu
+ ```
+
+ The install ships platform-specific prebuilds for macOS arm64 (Metal) and
+ Linux x64 (Vulkan). If no prebuild matches your platform, the installer falls
+ back to building the native addon with `node-gyp` only; it does not build or
+ bundle `libwebgpu_doe` and the required Dawn sidecar for you. On unsupported
+ platforms, use a local Fawn workspace build for those runtime libraries.
+
+ ## Choose a surface
+
+ | Import | Surface | Includes |
+ | --- | --- | --- |
+ | `@simulatte/webgpu` | Default full surface | Buffers, compute, textures, samplers, render, Doe helpers |
+ | `@simulatte/webgpu/compute` | Compute-first surface | Buffers, compute, copy/upload/readback, Doe helpers |
+ | `@simulatte/webgpu/full` | Explicit full surface | Same contract as the default package surface |
+
+ Use `@simulatte/webgpu/compute` when you want the constrained package contract
+ for AI workloads and other buffer/dispatch-heavy headless execution. The
+ compute surface intentionally omits render and sampler methods from the JS
+ facade.
 
  ## Quick examples
 
@@ -22,7 +44,7 @@ import { providerInfo } from "@simulatte/webgpu";
  console.log(providerInfo());
  ```
 
- ### Request a device
+ ### Request a full device
 
  ```js
  import { requestDevice } from "@simulatte/webgpu";
@@ -31,289 +53,113 @@ const device = await requestDevice();
  console.log(device.limits.maxBufferSize);
  ```
 
- ### Estimate pi on the GPU
-
- 65,536 threads each test 1,024 points inside the unit square. Each thread
- hashes its index to produce sample coordinates, counts how many land inside
- the unit circle, and writes its count to a results array. The CPU sums the
- counts and computes pi ≈ 4 × hits / total.
+ ### Request a compute-only device
 
  ```js
- import { globals, requestDevice } from "@simulatte/webgpu";
+ import { requestDevice } from "@simulatte/webgpu/compute";
 
- const { GPUBufferUsage, GPUMapMode, GPUShaderStage } = globals;
  const device = await requestDevice();
+ console.log(typeof device.createComputePipeline); // "function"
+ console.log(typeof device.createRenderPipeline); // "undefined"
+ ```
 
- const THREADS = 65536;
- const WORKGROUP_SIZE = 256;
- const SAMPLES_PER_THREAD = 1024;
-
- if (THREADS % WORKGROUP_SIZE !== 0) {
- throw new Error("THREADS must be a multiple of WORKGROUP_SIZE");
- }
+ ### Run a small compute job with `doe`
 
- const shader = device.createShaderModule({
- code: `
- @group(0) @binding(0) var<storage, read_write> counts: array<u32>;
-
- fn hash(n: u32) -> u32 {
- var x = n;
- x ^= x >> 16u;
- x *= 0x45d9f3bu;
- x ^= x >> 16u;
- x *= 0x45d9f3bu;
- x ^= x >> 16u;
- return x;
- }
+ ```js
+ import { doe, requestDevice } from "@simulatte/webgpu/compute";
 
- @compute @workgroup_size(${WORKGROUP_SIZE})
- fn main(@builtin(global_invocation_id) gid: vec3u) {
- var count = 0u;
- for (var i = 0u; i < ${SAMPLES_PER_THREAD}u; i += 1u) {
- let idx = gid.x * ${SAMPLES_PER_THREAD}u + i;
- let x = f32(hash(idx * 2u)) / 4294967295.0;
- let y = f32(hash(idx * 2u + 1u)) / 4294967295.0;
- if x * x + y * y <= 1.0 {
- count += 1u;
- }
- }
- counts[gid.x] = count;
- }
- `,
- });
+ const gpu = doe.bind(await requestDevice());
 
- const bindGroupLayout = device.createBindGroupLayout({
- entries: [{
- binding: 0,
- visibility: GPUShaderStage.COMPUTE,
- buffer: { type: "storage" },
- }],
- });
+ const input = gpu.createBufferFromData(new Float32Array([1, 2, 3, 4]));
 
- const pipeline = device.createComputePipeline({
- layout: device.createPipelineLayout({ bindGroupLayouts: [bindGroupLayout] }),
- compute: { module: shader, entryPoint: "main" },
+ const output = gpu.createBuffer({
+ size: input.size,
+ usage: "storage-readwrite",
  });
 
- const countsBuffer = device.createBuffer({
- size: THREADS * 4,
- usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
- });
- const readback = device.createBuffer({
- size: THREADS * 4,
- usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
- });
+ await gpu.runCompute({
+ code: `
+ @group(0) @binding(0) var<storage, read> src: array<f32>;
+ @group(0) @binding(1) var<storage, read_write> dst: array<f32>;
 
- const bindGroup = device.createBindGroup({
- layout: bindGroupLayout,
- entries: [{ binding: 0, resource: { buffer: countsBuffer } }],
+ @compute @workgroup_size(4)
+ fn main(@builtin(global_invocation_id) gid: vec3u) {
+ let i = gid.x;
+ dst[i] = src[i] * 2.0;
+ }
+ `,
+ bindings: [input, output],
+ workgroups: 1,
  });
 
- const encoder = device.createCommandEncoder();
- const pass = encoder.beginComputePass();
- pass.setPipeline(pipeline);
- pass.setBindGroup(0, bindGroup);
- pass.dispatchWorkgroups(THREADS / WORKGROUP_SIZE);
- pass.end();
- encoder.copyBufferToBuffer(countsBuffer, 0, readback, 0, THREADS * 4);
- device.queue.submit([encoder.finish()]);
-
- await readback.mapAsync(GPUMapMode.READ);
- const counts = new Uint32Array(readback.getMappedRange());
- const hits = counts.reduce((a, b) => a + b, 0);
- readback.unmap();
-
- const total = THREADS * SAMPLES_PER_THREAD;
- const pi = 4 * hits / total;
- console.log(`${total.toLocaleString()} samples → pi ≈ ${pi.toFixed(6)}`);
+ const result = await gpu.readBuffer(output, Float32Array);
+ console.log(Array.from(result)); // [2, 4, 6, 8]
  ```
 
- Expected output will vary slightly, but it should look like:
-
- ```
- 67,108,864 samples pi ≈ 3.14...
- ```
-
- Increase `SAMPLES_PER_THREAD` for more precision.
+ `doe` is available from both `@simulatte/webgpu` and
+ `@simulatte/webgpu/compute`. It provides a small ergonomic layer for common
+ headless tasks: `doe.bind(device)` for device-bound workflows, plus static
+ buffer creation, readback, one-shot compute dispatch, and
+ reusable compiled compute kernels.
+ Binding access is inferred from Doe helper-created buffer usage when possible.
+ For raw WebGPU buffers or non-bindable/ambiguous usage, pass
+ `{ buffer, access }` explicitly.
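The binding-access inference described in the diff above can be pictured as a small lookup-plus-fail-fast rule. The sketch below is illustrative only, not the package implementation: the `doeUsage` field name and `resolveBindingAccess` function are assumptions standing in for whatever metadata Doe helper-created buffers actually carry.

```javascript
// Illustrative sketch of the documented inference contract (assumed names):
// only usage shapes that map to exactly one bindable access mode are inferred.
const USAGE_TO_ACCESS = {
  uniform: "uniform",
  "storage-read": "storage-read",
  "storage-readwrite": "storage-readwrite",
};

function resolveBindingAccess(binding) {
  // Explicit form always wins: { buffer, access }.
  if (binding && binding.buffer && binding.access) {
    return binding.access;
  }
  // Bare binding: infer only from Doe helper usage metadata (assumed field).
  const usage = binding && binding.doeUsage;
  const access = USAGE_TO_ACCESS[usage];
  if (!access) {
    // Fail fast: no metadata, or a non-bindable/ambiguous usage shape.
    throw new Error(
      `cannot infer binding access from usage ${JSON.stringify(usage)}; ` +
        "pass { buffer, access } explicitly"
    );
  }
  return access;
}
```

This mirrors why the README example can pass `bindings: [input, output]` bare: both buffers were created through the helpers, so each carries a usage that maps to a single access mode.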
 
  ## What this package is
 
  `@simulatte/webgpu` is the canonical package surface for Doe. Node uses an
  N-API addon and Bun currently routes through the same addon-backed runtime
- entry to load `libwebgpu_doe`. Current package builds still ship a Dawn sidecar
- where proc resolution requires it. The experimental raw Bun FFI path remains in
- `src/bun-ffi.js`, but it is not the default package entry.
-
- Doe is a Zig-first WebGPU runtime with explicit allocator control, startup-time
- profile and quirk binding, a native WGSL pipeline (`lexer -> parser ->
- semantic analysis -> IR -> backend emitters`), and explicit
- Vulkan/Metal/D3D12 execution paths in one system. Optional
- `-Dlean-verified=true` builds use Lean 4 where proved invariants can be
- hoisted out of runtime branches instead of being re-checked on every command;
- package consumers should not assume that path by default.
-
- Doe also keeps adapter and driver quirks explicit. Profile selection happens at
- startup, quirk data is schema-backed, and the runtime binds the selected
- profile instead of relying on hidden per-command fallback logic.
+ entry to load `libwebgpu_doe`. Current builds still ship a Dawn sidecar where
+ proc resolution requires it.
+
+ Doe is a Zig-first WebGPU runtime with explicit profile and quirk binding, a
+ native WGSL pipeline (`lexer -> parser -> semantic analysis -> IR -> backend
+ emitters`), and explicit Vulkan/Metal/D3D12 execution paths in one system.
+ Optional `-Dlean-verified=true` builds use Lean 4 where proved invariants can
+ be hoisted out of runtime branches instead of being re-checked on every
+ command; package consumers should not assume that path by default.
 
  ## Current scope
 
- - Node is the primary supported package surface (N-API bridge).
- - Bun has API parity with Node through the package's addon-backed runtime entry
- (61/61 contract tests passing). Bun benchmark cube maturity remains
- prototype until the comparable macOS cells stabilize across repeated
- governed runs.
- - Package-surface comparisons should be read through the benchmark cube outputs
- under `bench/out/cube/`, not as a replacement for strict backend reports.
+ - `@simulatte/webgpu` is the default full headless package surface.
+ - `@simulatte/webgpu/compute` is the compute-first subset for AI workloads.
+ - Node is the primary supported package surface.
+ - Bun currently shares the addon-backed runtime entry with Node.
+ - Package-surface comparisons should be read through the published repository
+ benchmark artifacts, not as a replacement for strict backend reports.
 
  <p align="center">
  <img src="assets/package-surface-cube-snapshot.svg" alt="Static package-surface benchmark cube snapshot" width="920" />
  </p>
 
- Package-surface benchmark evidence lives under `bench/out/cube/latest/`. Read
- those rows as package-surface positioning data, not as substitutes for strict
- backend-native claim lanes.
-
- ## Quick start
-
- ```bash
- npm install @simulatte/webgpu
- ```
-
- ```js
- import { providerInfo, requestDevice } from "@simulatte/webgpu";
-
- console.log(providerInfo());
-
- const device = await requestDevice();
- console.log(device.limits.maxBufferSize);
- ```
-
- The install ships platform-specific prebuilds for macOS arm64 (Metal) and
- Linux x64 (Vulkan). The commands are the same on both platforms; the correct
- backend is selected automatically. The only external prerequisite is GPU
- drivers on the host. If no prebuild matches your platform, install falls back
- to building from source via node-gyp.
-
  ## Verify your install
 
- After installing, run the smoke test to confirm native library loading and a
- GPU round-trip:
-
  ```bash
  npm run smoke
- ```
-
- To run the full contract test suite (adapter, device, buffers, compute
- dispatch with readback, textures, samplers):
-
- ```bash
- npm test # Node
- npm run test:bun # Bun
- ```
-
- If `npm run smoke` fails, check that GPU drivers are installed and that your
- platform is supported (macOS arm64 or Linux x64).
-
- ## Building from source
-
- Use this when working from the Fawn repo checkout or rebuilding the addon
- against a local Doe runtime build.
-
- ```bash
- # From the Fawn workspace root:
- cd zig && zig build dropin # build libwebgpu_doe + Dawn sidecar
-
- cd nursery/webgpu
- npm run build:addon # compile doe_napi.node from source
- npm run smoke # verify native loading + GPU round-trip
- npm test # Node contract tests
- npm run test:bun # Bun contract tests
- ```
-
- Current macOS arm64 validation for `0.2.3` was rerun on March 10, 2026 with:
-
- ```bash
- cd zig && zig build dropin
-
- cd nursery/webgpu
- npm run build:addon
- npm run smoke
  npm test
  npm run test:bun
- npm run prebuild -- --skip-addon-build
- npm pack --dry-run
- ```
-
- That path is green on the Apple Metal host. `npm run test:bun` also passed on
- this host (`61 passed, 0 failed`) once Bun was added to `PATH`.
-
- For Fawn development setup, build toolchain requirements, and benchmark
- harness usage, see the [Fawn project README](../../README.md).
-
- ## Packaging prebuilds (CI / release)
-
- ```bash
- npm run prebuild # assembles prebuilds/<platform>-<arch>/
  ```
 
- Supported prebuild targets: macOS arm64 (Metal), Linux x64 (Vulkan),
- Windows x64 (D3D12). Host GPU drivers are the only external prerequisite.
- Install uses prebuilds when available, falls back to node-gyp from source.
- Tracked `prebuilds/<platform>-<arch>/` directories are the source of truth for
- reproducible package publishes. If a prebuild exists only on one local machine
- and is not committed, `npm pack` output will differ by environment.
- Generated `.tgz` package archives are release outputs and should not be
- committed to the repo.
- Prebuild `metadata.json` now records `doeBuild.leanVerifiedBuild` and
- `proofArtifactSha256`, and `providerInfo()` surfaces the same values when
- metadata is present.
-
- Package publication still depends on the governed Linux Vulkan release lane in
- [`process.md`](../../process.md). A green macOS package rerun is necessary, but
- not sufficient, for a release publish.
-
- ## Current caveats
-
- - This package is for headless benchmarking and CI workflows, not full browser
- parity.
- - Node provider comparisons are host-local package/runtime evidence measured
- with package-level timers. They are useful surface-positioning data, not
- backend claim substantiation or a broad "the package is faster" claim.
- - `@simulatte/webgpu` does not yet have a single broad cross-surface speed
- claim. Current performance evidence is split across Node package-surface
- runs, prototype Bun package-surface runs, and workload-specific strict
- backend reports.
- - Linux Node Doe-native path is now wired end-to-end (Linux guard removed).
- No `DOE_WEBGPU_LIB` env var needed when prebuilds or workspace artifacts
- are present.
- - Fresh macOS package evidence from March 10, 2026 is reflected in
- `bench/out/cube/latest/` (generated `2026-03-10T20:31:02.431911Z`):
- Bun `uploads`, `compute_e2e`, and `full_comparable` are `claimable`;
- Node `uploads`, `compute_e2e`, and `full_comparable` are also `claimable`.
- - Separate Apple Metal extended-comparable backend evidence from March 10, 2026
- (`bench/out/apple-metal/extended-comparable/20260310T121546Z/`) is
- `31/31` comparable and `31/31` claimable. Read that lane as stricter
- backend evidence, not as a replacement for the package-surface cube rows.
- - Bun has API parity with Node (61/61 contract tests). The package-default Bun
- entry currently routes through the addon-backed runtime, while
- `src/bun-ffi.js` remains experimental. Bun benchmark lane is at
- `bench/bun/compare.js`; benchmark interpretations should note which runtime
- entry was exercised. Latest fresh macOS run
- (`bench/out/bun-doe-vs-webgpu/doe-vs-bun-webgpu-2026-03-10T195022524Z.json`)
- executes all `12` current workloads and has `9` comparable rows, all `9`
- claimable. `compute_e2e_{256,4096,65536}` and
- `copy_buffer_to_buffer_4kb` are claimable in the full macOS package lane.
- The remaining three rows are intentional directional-only workloads
- (`submit_empty`, `pipeline_create`, `compute_dispatch_simple`).
- - Latest fresh macOS Node package run
- (`bench/out/node-doe-vs-dawn-claim-full/doe-vs-dawn-node-2026-03-10T202406545Z.json`)
- has `12` total rows, `9` comparable rows, and all `9` comparable rows are
- claimable. `compute_e2e_{256,4096,65536}`, `copy_buffer_to_buffer_4kb`,
- and the current upload set are claimable in the full package lane. The
- remaining three rows are intentional directional-only workloads
- (`submit_empty`, `pipeline_create`, `compute_dispatch_simple`).
- - Self-contained install ships prebuilt `doe_napi.node` + `libwebgpu_doe` +
- Dawn sidecar per platform. See **Verify your install** above.
- - API details live in `API_CONTRACT.md`.
- - Compatibility scope is documented in `COMPAT_SCOPE.md`.
+ `npm run smoke` checks native library loading and a GPU round-trip. `npm test`
+ covers the Node package contract and a packed-tarball export/import check.
+
+ ## Caveats
+
+ - This is a headless package, not a browser DOM/canvas package.
+ - `@simulatte/webgpu/compute` is intentionally narrower than the default full
+ surface.
+ - Bun currently shares the addon-backed runtime entry with Node. Package-surface
+ contract tests are green, and current comparable macOS package cells are
+ claimable. Any FFI-specific claims remain scoped to the experimental Bun FFI
+ path until separately revalidated.
+ - Package-surface benchmark rows are positioning data; backend-native claim
+ lanes remain the source of truth for strict Doe-vs-Dawn claims.
+
+ ## Further reading
+
+ - [API contract](./api-contract.md)
+ - [Support contracts](./support-contracts.md)
+ - [Compatibility scope](./compat-scope.md)
+ - [Layering plan](./layering-plan.md)
+ - [Headless WebGPU comparison](./headless-webgpu-comparison.md)
+ - [Zig source inventory](./zig-source-inventory.md)
@@ -2,21 +2,45 @@
 
  Contract version: `v1`
 
- Scope: current single-surface headless WebGPU package contract for Node.js and
- Bun, plus Doe runtime helpers used by benchmarking, CI, and artifact-backed
- comparison workflows.
+ Scope: current headless WebGPU package contract for Node.js and Bun, with a
+ default `full` surface, an explicit `compute` subpath, and Doe runtime helpers
+ used by benchmarking, CI, and artifact-backed comparison workflows.
 
- This is the current single-surface package contract.
- For the proposed future layered `core` vs `full` support split, see
- `SUPPORT_CONTRACTS.md`.
+ For the current `compute` vs `full` support split, see
+ [`./support-contracts.md`](./support-contracts.md).
 
  This contract covers package-surface GPU access, provider metadata, and helper
  entrypoints. It does not promise DOM/canvas ownership or browser-process
  parity.
 
- ## Node runtime API
+ ## Export surfaces
 
- Module: `@simulatte/webgpu` (Node default export target)
+ ### `@simulatte/webgpu`
+
+ Default package surface.
+
+ Contract:
+
+ - headless `full` surface
+ - includes compute plus render/sampler/surface APIs already exposed by the package runtime
+ - also exports the `doe` ergonomic namespace
+
+ ### `@simulatte/webgpu/compute`
+
+ Compute-first package surface.
+
+ Contract:
+
+ - sized for AI workloads and other buffer/dispatch-heavy headless execution
+ - excludes render/sampler/surface methods from the public JS facade
+ - also exports the same `doe` ergonomic namespace
+
+ ## Shared runtime API
+
+ Modules:
+
+ - `@simulatte/webgpu`
+ - `@simulatte/webgpu/compute`
 
  ### `create(createArgs?)`
 
@@ -73,6 +97,11 @@ Output:
 
  - `Promise<GPUDevice>`
 
+ On `@simulatte/webgpu/compute`, the returned device is a compute-only facade:
+
+ - buffer / bind group / compute pipeline / command encoder / queue methods are available
+ - render / sampler / surface methods are intentionally absent from the facade
+
  ### `providerInfo()`
 
  Output object:
@@ -95,6 +124,27 @@ Behavior:
  metadata is available
  - does not guess: if metadata is unavailable, `leanVerifiedBuild` is `null`
 
+ ### `doe`
+
+ Output object:
+
+ - `bind(device)`
+ - `createBuffer(device, options)`
+ - `createBufferFromData(device, data, options?)`
+ - `readBuffer(device, buffer, TypedArray, options?)`
+ - `runCompute(device, options)`
+ - `compileCompute(device, options)`
+
+ Behavior:
+
+ - provides an ergonomic JS surface for common headless compute tasks
+ - supports both static helper calls and `doe.bind(device)` for device-bound workflows
+ - infers `runCompute(...).bindings` access from Doe helper-created buffer usage when that
+ usage maps to one bindable access mode (`uniform`, `storage-read`, `storage-readwrite`)
+ - fails fast for bare bindings that do not carry Doe helper usage metadata or whose
+ usage is non-bindable/ambiguous; callers must pass `{ buffer, access }` explicitly
+ - additive only; it does not replace the raw WebGPU-facing package API
+
  ### `createDoeRuntime(options?)`
 
  Input:
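The static-vs-bound calling convention in the `doe` contract above amounts to partial application of the device argument. The sketch below is a stand-in, not the package implementation: the helper bodies are fakes (the real `readBuffer` is async and talks to the GPU), and only the shape of the surface mirrors the contract.

```javascript
// Minimal sketch of the static-vs-bound convention (assumed internals).
const doe = {
  createBuffer(device, options) {
    // Fake helper body: records the device and echoes the options.
    return { device, size: options.size, usage: options.usage };
  },
  readBuffer(device, buffer, TypedArray) {
    // Synchronous stand-in; the real helper is async and reads GPU memory.
    return new TypedArray(buffer.size / TypedArray.BYTES_PER_ELEMENT);
  },
  bind(device) {
    // Each bound method forwards to the static helper with `device` pre-applied.
    return {
      createBuffer: (options) => doe.createBuffer(device, options),
      readBuffer: (buffer, TypedArray) => doe.readBuffer(device, buffer, TypedArray),
    };
  },
};

const device = { id: "fake-device" };
const gpu = doe.bind(device);
const buf = gpu.createBuffer({ size: 16, usage: "storage-readwrite" });
// Static and bound forms hit the same helper: buf.device === device.
```

This is why the contract can call `doe` "additive only": `bind(device)` adds convenience over the static helpers without introducing a second code path.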
Binary file
@@ -60,22 +60,22 @@
 
  <rect x="640" y="176" width="488" height="318" rx="24" class="panel toneRight"/>
  <text x="668" y="216" class="cardTitle">Bun package lane</text>
- <text x="668" y="244" class="cardMeta">Prototype support | linux_x64</text>
- <text x="668" y="266" class="cardMeta">latest populated cell 2026-03-06T21:55:26.482Z</text>
+ <text x="668" y="244" class="cardMeta">Validated support | mac_apple_silicon</text>
+ <text x="668" y="266" class="cardMeta">latest populated cell 2026-03-10T19:50:22.523Z</text>
 
  <rect x="658" y="300" width="452" height="82" rx="16" class="metric toneRight"/>
  <text x="682" y="331" class="metricTitle">Compute E2E</text>
  <rect x="954" y="315" width="132" height="28" rx="14" fill="#16a34a" stroke="#86efac" stroke-width="1.5"/>
  <text x="1020" y="334" text-anchor="middle" class="pillText">CLAIMABLE</text>
  <text x="682" y="357" class="metricBody">3 rows | claimable</text>
- <text x="682" y="377" class="metricBody">median p50 delta +77.2%</text>
+ <text x="682" y="377" class="metricBody">median p50 delta +53.1%</text>
 
  <rect x="658" y="396" width="452" height="82" rx="16" class="metric toneRight"/>
  <text x="682" y="427" class="metricTitle">Uploads</text>
- <rect x="954" y="411" width="132" height="28" rx="14" fill="#d97706" stroke="#fbbf24" stroke-width="1.5"/>
- <text x="1020" y="430" text-anchor="middle" class="pillText">COMPARABLE</text>
- <text x="682" y="453" class="metricBody">5 rows | comparable</text>
- <text x="682" y="473" class="metricBody">median p50 delta +8.6%</text>
+ <rect x="954" y="411" width="132" height="28" rx="14" fill="#16a34a" stroke="#86efac" stroke-width="1.5"/>
+ <text x="1020" y="430" text-anchor="middle" class="pillText">CLAIMABLE</text>
+ <text x="682" y="453" class="metricBody">5 rows | claimable</text>
+ <text x="682" y="473" class="metricBody">median p50 delta +67.8%</text>
  <text x="72" y="590" class="foot">Generated by nursery/webgpu/scripts/generate-readme-assets.js.</text>
  <text x="72" y="612" class="foot">Static claim and comparability card from the package-surface cube. It is not a substitute for strict backend reports.</text>
  </svg>
@@ -43,4 +43,4 @@ Layering note:
 
  - this file describes the current package surface and its present non-goals
  - proposed future `core` vs `full` support contracts are defined separately in
- `SUPPORT_CONTRACTS.md`
+ [`./support-contracts.md`](./support-contracts.md)
@@ -20,10 +20,10 @@ It answers four questions:
 
  Use this together with:
 
- - `SUPPORT_CONTRACTS.md` for product/support scope
- - `API_CONTRACT.md` for the current single-surface package contract
- - `COMPAT_SCOPE.md` for current package non-goals
- - `ZIG_SOURCE_INVENTORY.md` for the current `zig/src` file map
+ - `support-contracts.md` for product/support scope
+ - `api-contract.md` for the current package contract (`full` default, `compute` subpath)
+ - `compat-scope.md` for current package non-goals
+ - `zig-source-inventory.md` for the current `zig/src` file map
 
  ## Current state
 
@@ -37,7 +37,7 @@ Current reality:
  4. Canonical texture command handling now lives in `zig/src/core/resource/wgpu_texture_commands.zig`; canonical sampler and surface command handling now lives in `zig/src/full/render/wgpu_sampler_commands.zig` and `zig/src/full/surface/wgpu_surface_commands.zig`.
  5. `zig/src/wgpu_commands.zig`, `zig/src/wgpu_resources.zig`, and `zig/src/wgpu_extended_commands.zig` are now compatibility façades over the canonical subtrees, while `zig/src/webgpu_ffi.zig` remains the public façade and owner of `WebGPUBackend`.
  6. Dedicated Zig test lanes now exist as `zig build test-core` and `zig build test-full`, but split coverage remains thin and capability tracking is still represented by one shared coverage ledger.
- 7. The JS package still exposes a single surface today.
+ 7. The JS package now exposes a default `full` surface plus an explicit `compute` subpath, while the underlying JS implementation is still shared.
 
  That means this plan is now materially physicalized in the tree, and the remaining semantic split is concentrated in the public façade files and backend roots.
 
@@ -195,12 +195,14 @@ lean/Fawn/Core/
  lean/Fawn/Full/
  ```
 
- Matching package layout can be one of:
+ Matching package layout is currently:
 
  1. one package with scoped exports
- 2. separate packages with separate contracts
+ - `@simulatte/webgpu` => `full`
+ - `@simulatte/webgpu/compute` => compute-first subset
 
- Packaging choice is secondary. The source boundary must come first.
+ Separate packages remain optional later, but they are not the current shape.
+ The source boundary still comes first.
 
  ## Refactor order
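The "one package with scoped exports" shape in the layering-plan hunk above conventionally maps onto a package.json `exports` field. A hypothetical sketch follows; the entry file paths are assumptions for illustration, not the package's actual layout:

```json
{
  "name": "@simulatte/webgpu",
  "exports": {
    ".": "./src/full.js",
    "./full": "./src/full.js",
    "./compute": "./src/compute.js"
  }
}
```

With an exports map like this, `@simulatte/webgpu` and `@simulatte/webgpu/full` resolve to the same full-surface entry, while `@simulatte/webgpu/compute` resolves to the narrower facade, matching the surface table in the README diff.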