shaderpad 1.0.0-beta.39 → 1.0.0-beta.40

package/README.md CHANGED
@@ -51,7 +51,7 @@ shader.play(time => {
  });
  ```

- See the [`examples/` directory](./examples/) for more.
+ See the [`examples/` directory](./examples/src) for more.

  ## Usage

@@ -352,372 +352,141 @@ const shader = new ShaderPad(fragmentShaderSrc, { debug: true });

  ### plugins

- ShaderPad supports plugins to add additional functionality. Plugins are imported from separate paths to keep bundle sizes small.
+ Plugins add additional functionality. They are imported from separate paths to keep bundle sizes small.

  #### helpers

- The `helpers` plugin provides convenience functions and constants. See [helpers.glsl](./src/plugins/helpers.glsl) for the implementation.
+ Convenience functions and constants. See [helpers.glsl](./src/plugins/helpers.glsl).

  ```typescript
- import ShaderPad from 'shaderpad';
  import helpers from 'shaderpad/plugins/helpers';
-
- const shader = new ShaderPad(fragmentShaderSrc, {
-   plugins: [helpers()],
- });
+ const shader = new ShaderPad(fragmentShaderSrc, { plugins: [helpers()] });
  ```

- **Note:** The `helpers` plugin automatically injects the `u_resolution` uniform into your shader. Do not declare it yourself.
+ **Note:** Automatically injects `u_resolution`. Don't declare it yourself.
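
For orientation, a minimal fragment-shader sketch using the injected uniform. Two assumptions, flagged here and in the comments: `u_resolution` is the canvas size in pixels as a `vec2`, and `v_uv` is the screen-space varying that the README's other examples use.

```glsl
// Sketch only: u_resolution's vec2 type and the v_uv varying are assumed
// from the README's other examples. u_resolution is injected by the
// helpers plugin, so it is not declared here.
float aspect = u_resolution.x / u_resolution.y;   // canvas aspect ratio
vec2 centered = (v_uv - 0.5) * vec2(aspect, 1.0); // aspect-corrected coords
float d = length(centered);                       // distance from canvas center
vec3 color = vec3(step(d, 0.25));                 // white disc on black
```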

  #### save

- The `save` plugin adds a `.save()` method to the shader that saves the current frame to a PNG file. It works on desktop and mobile.
+ Adds a `.save()` method that saves the current frame as a PNG.

  ```typescript
- import ShaderPad from 'shaderpad';
  import save, { WithSave } from 'shaderpad/plugins/save';
-
  const shader = new ShaderPad(fragmentShaderSrc, { plugins: [save()] }) as WithSave<ShaderPad>;
  shader.save('filename', 'Optional mobile share text');
  ```

  #### face

- The `face` plugin uses [MediaPipe](https://ai.google.dev/edge/mediapipe/solutions/vision/face_landmarker) to detect faces in video or image textures.
+ Uses [MediaPipe Face Landmarker](https://ai.google.dev/edge/mediapipe/solutions/vision/face_landmarker) to detect faces.

  ```typescript
- import ShaderPad from 'shaderpad';
  import face from 'shaderpad/plugins/face';
-
  const shader = new ShaderPad(fragmentShaderSrc, {
-   plugins: [
-     face({
-       textureName: 'u_webcam',
-       options: { maxFaces: 3 },
-     }),
-   ],
+   plugins: [face({ textureName: 'u_webcam', options: { maxFaces: 3 } })],
  });
  ```

- **Options:**
-
- - `onReady?: () => void` - Callback invoked when initialization is complete and the detection model is loaded
- - `onResults?: (results: FaceLandmarkerResult) => void` - Callback invoked with detection results each frame
-
- **Uniforms:**
-
- | Uniform | Type | Description |
- | -------------------- | --------- | --------------------------------------------------------------------------- |
- | `u_maxFaces` | int | Maximum number of faces to detect |
- | `u_nFaces` | int | Current number of detected faces |
- | `u_faceLandmarksTex` | sampler2D | Raw landmark data texture (use `faceLandmark()` to access) |
- | `u_faceMask` | sampler2D | Face mask texture (R: region type, G: confidence, B: normalized face index) |
-
- **Helper functions:**
-
- All region functions return `vec2(confidence, faceIndex)`. faceIndex is 0-indexed (-1 = no face).
-
- - `faceLandmark(int faceIndex, int landmarkIndex) -> vec4` - Returns landmark data as `vec4(x, y, z, visibility)`. Use `vec2(faceLandmark(...))` to get just the screen position.
- - `leftEyebrowAt(vec2 pos) -> vec2` - Returns `vec2(1.0, faceIndex)` if position is in left eyebrow, `vec2(0.0, -1.0)` otherwise.
- - `rightEyebrowAt(vec2 pos) -> vec2` - Returns `vec2(1.0, faceIndex)` if position is in right eyebrow, `vec2(0.0, -1.0)` otherwise.
- - `leftEyeAt(vec2 pos) -> vec2` - Returns `vec2(1.0, faceIndex)` if position is in left eye, `vec2(0.0, -1.0)` otherwise.
- - `rightEyeAt(vec2 pos) -> vec2` - Returns `vec2(1.0, faceIndex)` if position is in right eye, `vec2(0.0, -1.0)` otherwise.
- - `lipsAt(vec2 pos) -> vec2` - Returns `vec2(1.0, faceIndex)` if position is in lips, `vec2(0.0, -1.0)` otherwise.
- - `outerMouthAt(vec2 pos) -> vec2` - Returns `vec2(1.0, faceIndex)` if position is in outer mouth (lips + inner mouth), `vec2(0.0, -1.0)` otherwise.
- - `innerMouthAt(vec2 pos) -> vec2` - Returns `vec2(1.0, faceIndex)` if position is in inner mouth region, `vec2(0.0, -1.0)` otherwise.
- - `faceOvalAt(vec2 pos) -> vec2` - Returns `vec2(1.0, faceIndex)` if position is in face oval contour, `vec2(0.0, -1.0)` otherwise.
- - `faceAt(vec2 pos) -> vec2` - Returns `vec2(1.0, faceIndex)` if position is in face mesh or oval contour, `vec2(0.0, -1.0)` otherwise.
- - `eyeAt(vec2 pos) -> vec2` - Returns `vec2(1.0, faceIndex)` if position is in either eye, `vec2(0.0, -1.0)` otherwise.
- - `eyebrowAt(vec2 pos) -> vec2` - Returns `vec2(1.0, faceIndex)` if position is in either eyebrow, `vec2(0.0, -1.0)` otherwise.
-
- **Convenience functions** (return `1.0` if true, `0.0` if false):
-
- - `inFace(vec2 pos) -> float` - Returns `1.0` if position is in face mesh, `0.0` otherwise.
- - `inEye(vec2 pos) -> float` - Returns `1.0` if position is in either eye, `0.0` otherwise.
- - `inEyebrow(vec2 pos) -> float` - Returns `1.0` if position is in either eyebrow, `0.0` otherwise.
- - `inOuterMouth(vec2 pos) -> float` - Returns `1.0` if position is in outer mouth (lips + inner mouth), `0.0` otherwise.
- - `inInnerMouth(vec2 pos) -> float` - Returns `1.0` if position is in inner mouth, `0.0` otherwise.
- - `inLips(vec2 pos) -> float` - Returns `1.0` if position is in lips, `0.0` otherwise.
+ **Options:** `onReady?: () => void`, `onResults?: (results: FaceLandmarkerResult) => void`

- **Landmark Constants:**
+ **Uniforms:** `u_maxFaces` (int), `u_nFaces` (int), `u_faceLandmarksTex` (sampler2D), `u_faceMask` (sampler2D)

- - `FACE_LANDMARK_L_EYE_CENTER` - Left eye center landmark index
- - `FACE_LANDMARK_R_EYE_CENTER` - Right eye center landmark index
- - `FACE_LANDMARK_NOSE_TIP` - Nose tip landmark index
- - `FACE_LANDMARK_FACE_CENTER` - Face center landmark index (custom, calculated from all landmarks)
- - `FACE_LANDMARK_MOUTH_CENTER` - Mouth center landmark index (custom, calculated from inner mouth landmarks)
+ **Helper functions:** All region functions return `vec2(confidence, faceIndex)`. faceIndex is 0-indexed (-1 = no face).

- **Example usage:**
+ - `faceLandmark(int faceIndex, int landmarkIndex) -> vec4` - Returns `vec4(x, y, z, visibility)`
+ - `leftEyebrowAt(vec2 pos) -> vec2`, `rightEyebrowAt(vec2 pos) -> vec2`
+ - `leftEyeAt(vec2 pos) -> vec2`, `rightEyeAt(vec2 pos) -> vec2`
+ - `lipsAt(vec2 pos) -> vec2`, `outerMouthAt(vec2 pos) -> vec2`, `innerMouthAt(vec2 pos) -> vec2`
+ - `faceOvalAt(vec2 pos) -> vec2`, `faceAt(vec2 pos) -> vec2`
+ - `eyeAt(vec2 pos) -> vec2`, `eyebrowAt(vec2 pos) -> vec2`

- ```glsl
- // Get a specific landmark position.
- vec2 nosePos = vec2(faceLandmark(0, FACE_LANDMARK_NOSE_TIP));
-
- // Use in* convenience functions for simple boolean checks.
- float eyeMask = inEye(v_uv);
-
- // Use faceLandmark or *At functions when you need to check a specific face index.
- vec2 leftEye = leftEyeAt(v_uv);
- for (int i = 0; i < u_nFaces; ++i) {
-   vec4 leftEyeCenter = faceLandmark(i, FACE_LANDMARK_L_EYE_CENTER);
-   vec4 rightEyeCenter = faceLandmark(i, FACE_LANDMARK_R_EYE_CENTER);
-   if (leftEye.x > 0.0 && int(leftEye.y) == i) {
-     // Position is inside the left eye of face i.
-   }
-   // ...
- }
- ```
+ **Convenience functions:** `inFace(vec2 pos) -> float`, `inEye(vec2 pos) -> float`, `inEyebrow(vec2 pos) -> float`, `inOuterMouth(vec2 pos) -> float`, `inInnerMouth(vec2 pos) -> float`, `inLips(vec2 pos) -> float`

- [Landmark indices are documented here.](https://ai.google.dev/edge/mediapipe/solutions/vision/face_landmarker#face_landmarker_model) This library adds two custom landmarks: `FACE_CENTER` and `MOUTH_CENTER`. This brings the total landmark count to 480.
+ **Constants:** `FACE_LANDMARK_L_EYE_CENTER`, `FACE_LANDMARK_R_EYE_CENTER`, `FACE_LANDMARK_NOSE_TIP`, `FACE_LANDMARK_FACE_CENTER`, `FACE_LANDMARK_MOUTH_CENTER`

- **Note:** The face plugin requires `@mediapipe/tasks-vision` as a peer dependency.
+ **Note:** Requires `@mediapipe/tasks-vision` as a peer dependency. Adds two custom landmarks (`FACE_CENTER`, `MOUTH_CENTER`), bringing total to 480.
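
For reference, a minimal GLSL sketch combining these helpers, matching the worked example removed above (`v_uv` is the library's screen-space varying):

```glsl
// Get a specific landmark position for face 0.
vec2 nosePos = vec2(faceLandmark(0, FACE_LANDMARK_NOSE_TIP));

// in* convenience functions give simple boolean masks.
float eyeMask = inEye(v_uv);

// *At helpers return vec2(confidence, faceIndex) for per-face checks.
vec2 leftEye = leftEyeAt(v_uv);
for (int i = 0; i < u_nFaces; ++i) {
  vec4 leftEyeCenter = faceLandmark(i, FACE_LANDMARK_L_EYE_CENTER);
  if (leftEye.x > 0.0 && int(leftEye.y) == i) {
    // v_uv is inside the left eye of face i.
  }
  // …
}
```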

  #### pose

- The `pose` plugin uses [MediaPipe Pose Landmarker](https://ai.google.dev/edge/mediapipe/solutions/vision/pose_landmarker) to expose a flat array of 2D landmarks. Each pose contributes 39 landmarks (33 standard + 6 custom), enumerated below.
+ Uses [MediaPipe Pose Landmarker](https://ai.google.dev/edge/mediapipe/solutions/vision/pose_landmarker). Each pose contributes 39 landmarks (33 standard + 6 custom).

  ```typescript
- import ShaderPad from 'shaderpad';
  import pose from 'shaderpad/plugins/pose';
-
  const shader = new ShaderPad(fragmentShaderSrc, {
    plugins: [pose({ textureName: 'u_video', options: { maxPoses: 3 } })],
  });
  ```

- **Options:**
-
- - `onReady?: () => void` - Callback invoked when initialization is complete and the detection model is loaded
- - `onResults?: (results: PoseLandmarkerResult) => void` - Callback invoked with detection results each frame
+ **Options:** `onReady?: () => void`, `onResults?: (results: PoseLandmarkerResult) => void`

- **Uniforms:**
-
- | Uniform | Type | Description |
- | -------------------- | --------- | ----------------------------------------------------------------------------- |
- | `u_maxPoses` | int | Maximum number of poses to track |
- | `u_nPoses` | int | Current number of detected poses |
- | `u_poseLandmarksTex` | sampler2D | Raw landmark data texture (RGBA: x, y, z, visibility) |
- | `u_poseMask` | sampler2D | Pose mask texture (R: body detected, G: confidence, B: normalized pose index) |
+ **Uniforms:** `u_maxPoses` (int), `u_nPoses` (int), `u_poseLandmarksTex` (sampler2D), `u_poseMask` (sampler2D)

  **Helper functions:**

- - `poseLandmark(int poseIndex, int landmarkIndex) -> vec4` - Returns landmark data as `vec4(x, y, z, visibility)`. Use `vec2(poseLandmark(...))` to get just the screen position.
- - `poseAt(vec2 pos) -> vec2` - Returns `vec2(confidence, poseIndex)`. poseIndex is 0-indexed (-1 = no pose), confidence is the segmentation confidence.
- - `inPose(vec2 pos) -> float` - Returns `1.0` if position is in any pose, `0.0` otherwise.
-
- **Constants:**
-
- - `POSE_LANDMARK_LEFT_EYE` - Left eye landmark index (2)
- - `POSE_LANDMARK_RIGHT_EYE` - Right eye landmark index (5)
- - `POSE_LANDMARK_LEFT_SHOULDER` - Left shoulder landmark index (11)
- - `POSE_LANDMARK_RIGHT_SHOULDER` - Right shoulder landmark index (12)
- - `POSE_LANDMARK_LEFT_ELBOW` - Left elbow landmark index (13)
- - `POSE_LANDMARK_RIGHT_ELBOW` - Right elbow landmark index (14)
- - `POSE_LANDMARK_LEFT_HIP` - Left hip landmark index (23)
- - `POSE_LANDMARK_RIGHT_HIP` - Right hip landmark index (24)
- - `POSE_LANDMARK_LEFT_KNEE` - Left knee landmark index (25)
- - `POSE_LANDMARK_RIGHT_KNEE` - Right knee landmark index (26)
- - `POSE_LANDMARK_BODY_CENTER` - Body center landmark index (33, custom, calculated from all landmarks)
- - `POSE_LANDMARK_LEFT_HAND_CENTER` - Left hand center landmark index (34, custom, calculated from pinky, thumb, wrist, index)
- - `POSE_LANDMARK_RIGHT_HAND_CENTER` - Right hand center landmark index (35, custom, calculated from pinky, thumb, wrist, index)
- - `POSE_LANDMARK_LEFT_FOOT_CENTER` - Left foot center landmark index (36, custom, calculated from ankle, heel, foot index)
- - `POSE_LANDMARK_RIGHT_FOOT_CENTER` - Right foot center landmark index (37, custom, calculated from ankle, heel, foot index)
- - `POSE_LANDMARK_TORSO_CENTER` - Torso center landmark index (38, custom, calculated from shoulders and hips)
-
- **Note:** For connecting pose landmarks (e.g., drawing skeleton lines), `PoseLandmarker.POSE_CONNECTIONS` from `@mediapipe/tasks-vision` provides an array of `{ start, end }` pairs that define which landmarks should be connected.
-
- Use `poseLandmark(int poseIndex, int landmarkIndex)` in GLSL to retrieve a specific point. Landmark indices are:
-
- | Index | Landmark | Index | Landmark |
- | ----- | ----------------- | ----- | -------------------------- |
- | 0 | nose | 20 | right index |
- | 1 | left eye (inner) | 21 | left thumb |
- | 2 | left eye | 22 | right thumb |
- | 3 | left eye (outer) | 23 | left hip |
- | 4 | right eye (inner) | 24 | right hip |
- | 5 | right eye | 25 | left knee |
- | 6 | right eye (outer) | 26 | right knee |
- | 7 | left ear | 27 | left ankle |
- | 8 | right ear | 28 | right ankle |
- | 9 | mouth (left) | 29 | left heel |
- | 10 | mouth (right) | 30 | right heel |
- | 11 | left shoulder | 31 | left foot index |
- | 12 | right shoulder | 32 | right foot index |
- | 13 | left elbow | 33 | body center (custom) |
- | 14 | right elbow | 34 | left hand center (custom) |
- | 15 | left wrist | 35 | right hand center (custom) |
- | 16 | right wrist | 36 | left foot center (custom) |
- | 17 | left pinky | 37 | right foot center (custom) |
- | 18 | right pinky | 38 | torso center (custom) |
- | 19 | left index | | |
-
- [Source](https://ai.google.dev/edge/mediapipe/solutions/vision/pose_landmarker#pose_landmarker_model)
-
- [Landmark indices are documented here.](https://ai.google.dev/edge/mediapipe/solutions/vision/pose_landmarker#pose_landmarker_model) This library adds six custom landmarks: `BODY_CENTER`, `LEFT_HAND_CENTER`, `RIGHT_HAND_CENTER`, `LEFT_FOOT_CENTER`, `RIGHT_FOOT_CENTER`, and `TORSO_CENTER`. This brings the total landmark count to 39.
-
- A minimal fragment shader loop looks like:
+ - `poseLandmark(int poseIndex, int landmarkIndex) -> vec4` - Returns `vec4(x, y, z, visibility)`
+ - `poseAt(vec2 pos) -> vec2` - Returns `vec2(confidence, poseIndex)`
+ - `inPose(vec2 pos) -> float`

- ```glsl
- for (int i = 0; i < u_maxPoses; ++i) {
-   if (i >= u_nPoses) break;
-   vec2 leftHip = vec2(poseLandmark(i, POSE_LANDMARK_LEFT_HIP));
-   vec2 rightHip = vec2(poseLandmark(i, POSE_LANDMARK_RIGHT_HIP));
-   // …
- }
- ```
+ **Constants:** `POSE_LANDMARK_LEFT_EYE`, `POSE_LANDMARK_RIGHT_EYE`, `POSE_LANDMARK_LEFT_SHOULDER`, `POSE_LANDMARK_RIGHT_SHOULDER`, `POSE_LANDMARK_LEFT_ELBOW`, `POSE_LANDMARK_RIGHT_ELBOW`, `POSE_LANDMARK_LEFT_HIP`, `POSE_LANDMARK_RIGHT_HIP`, `POSE_LANDMARK_LEFT_KNEE`, `POSE_LANDMARK_RIGHT_KNEE`, `POSE_LANDMARK_BODY_CENTER`, `POSE_LANDMARK_LEFT_HAND_CENTER`, `POSE_LANDMARK_RIGHT_HAND_CENTER`, `POSE_LANDMARK_LEFT_FOOT_CENTER`, `POSE_LANDMARK_RIGHT_FOOT_CENTER`, `POSE_LANDMARK_TORSO_CENTER`
+
+ **Note:** Requires `@mediapipe/tasks-vision`. Use `PoseLandmarker.POSE_CONNECTIONS` for skeleton connections. [Landmark indices](https://ai.google.dev/edge/mediapipe/solutions/vision/pose_landmarker#pose_landmarker_model)
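
For reference, the minimal fragment-shader loop from the example removed above:

```glsl
// Loop to the u_maxPoses bound, breaking once the live count is reached.
for (int i = 0; i < u_maxPoses; ++i) {
  if (i >= u_nPoses) break;
  vec2 leftHip = vec2(poseLandmark(i, POSE_LANDMARK_LEFT_HIP));
  vec2 rightHip = vec2(poseLandmark(i, POSE_LANDMARK_RIGHT_HIP));
  // …
}
```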

  #### hands

- The `hands` plugin uses [MediaPipe Hand Landmarker](https://ai.google.dev/edge/mediapipe/solutions/vision/hand_landmarker) to expose a flat array of 2D landmarks. Each hand contributes 22 landmarks, enumerated below.
+ Uses [MediaPipe Hand Landmarker](https://ai.google.dev/edge/mediapipe/solutions/vision/hand_landmarker). Each hand contributes 22 landmarks.

  ```typescript
- import ShaderPad from 'shaderpad';
  import hands from 'shaderpad/plugins/hands';
-
  const shader = new ShaderPad(fragmentShaderSrc, {
    plugins: [hands({ textureName: 'u_video', options: { maxHands: 2 } })],
  });
  ```

- **Options:**
-
- - `onReady?: () => void` - Callback invoked when initialization is complete and the detection model is loaded
- - `onResults?: (results: HandLandmarkerResult) => void` - Callback invoked with detection results each frame
+ **Options:** `onReady?: () => void`, `onResults?: (results: HandLandmarkerResult) => void`

- **Uniforms:**
-
- | Uniform | Type | Description |
- | -------------------- | --------- | ----------------------------------------------------- |
- | `u_maxHands` | int | Maximum number of hands to track |
- | `u_nHands` | int | Current number of detected hands |
- | `u_handLandmarksTex` | sampler2D | Raw landmark data texture (RGBA: x, y, z, handedness) |
+ **Uniforms:** `u_maxHands` (int), `u_nHands` (int), `u_handLandmarksTex` (sampler2D)

  **Helper functions:**

- - `handLandmark(int handIndex, int landmarkIndex) -> vec4` - Returns landmark data as `vec4(x, y, z, handedness)`. Use `vec2(handLandmark(...))` to get just the screen position. Handedness: 0.0 = left hand, 1.0 = right hand.
- - `isRightHand(int handIndex) -> float` - Returns 1.0 if the hand is a right hand, 0.0 if left.
- - `isLeftHand(int handIndex) -> float` - Returns 1.0 if the hand is a left hand, 0.0 if right.
-
- **Landmark Indices:**
-
- | Index | Landmark | Index | Landmark |
- | ----- | ----------------- | ----- | ----------------- |
- | 0 | WRIST | 11 | MIDDLE_FINGER_DIP |
- | 1 | THUMB_CMC | 12 | MIDDLE_FINGER_TIP |
- | 2 | THUMB_MCP | 13 | RING_FINGER_MCP |
- | 3 | THUMB_IP | 14 | RING_FINGER_PIP |
- | 4 | THUMB_TIP | 15 | RING_FINGER_DIP |
- | 5 | INDEX_FINGER_MCP | 16 | RING_FINGER_TIP |
- | 6 | INDEX_FINGER_PIP | 17 | PINKY_MCP |
- | 7 | INDEX_FINGER_DIP | 18 | PINKY_PIP |
- | 8 | INDEX_FINGER_TIP | 19 | PINKY_DIP |
- | 9 | MIDDLE_FINGER_MCP | 20 | PINKY_TIP |
- | 10 | MIDDLE_FINGER_PIP | 21 | HAND_CENTER |
-
- [Source](https://ai.google.dev/edge/mediapipe/solutions/vision/hand_landmarker#models)
-
- A minimal fragment shader loop looks like:
-
- ```glsl
- #define WRIST 0
- #define THUMB_TIP 4
- #define INDEX_TIP 8
- #define HAND_CENTER 21
-
- for (int i = 0; i < u_maxHands; ++i) {
-   if (i >= u_nHands) break;
-   vec2 wrist = vec2(handLandmark(i, WRIST));
-   vec2 thumbTip = vec2(handLandmark(i, THUMB_TIP));
-   vec2 indexTip = vec2(handLandmark(i, INDEX_TIP));
-   vec2 handCenter = vec2(handLandmark(i, HAND_CENTER));
-
-   // Use handedness for coloring (0.0 = left/black, 1.0 = right/white).
-   vec3 handColor = vec3(isRightHand(i));
-   // …
- }
- ```
+ - `handLandmark(int handIndex, int landmarkIndex) -> vec4` - Returns `vec4(x, y, z, handedness)`. Handedness: 0.0 = left, 1.0 = right.
+ - `isRightHand(int handIndex) -> float`, `isLeftHand(int handIndex) -> float`

- **Note:** The hands plugin requires `@mediapipe/tasks-vision` as a peer dependency.
+ **Note:** Requires `@mediapipe/tasks-vision`. [Landmark indices](https://ai.google.dev/edge/mediapipe/solutions/vision/hand_landmarker#models)
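
For reference, the minimal fragment-shader loop from the example removed above (the `#define` indices follow the MediaPipe hand model; `HAND_CENTER`, 21, is the library's custom landmark):

```glsl
#define WRIST 0
#define THUMB_TIP 4
#define INDEX_TIP 8
#define HAND_CENTER 21

for (int i = 0; i < u_maxHands; ++i) {
  if (i >= u_nHands) break;
  vec2 wrist = vec2(handLandmark(i, WRIST));
  vec2 thumbTip = vec2(handLandmark(i, THUMB_TIP));
  vec2 indexTip = vec2(handLandmark(i, INDEX_TIP));
  vec2 handCenter = vec2(handLandmark(i, HAND_CENTER));

  // Handedness doubles as a color: 0.0 = left/black, 1.0 = right/white.
  vec3 handColor = vec3(isRightHand(i));
  // …
}
```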

  #### segmenter

- The `segmenter` plugin uses [MediaPipe Image Segmenter](https://ai.google.dev/edge/mediapipe/solutions/vision/image_segmenter) to segment objects in video or image textures. It supports models with multiple categories (e.g., background, hair, chair, dog…). By default, it uses the [hair segmentation model](https://ai.google.dev/edge/mediapipe/solutions/vision/image_segmenter#hair-model).
+ Uses [MediaPipe Image Segmenter](https://ai.google.dev/edge/mediapipe/solutions/vision/image_segmenter). Supports multi-category models. Defaults to the [hair segmentation model](https://ai.google.dev/edge/mediapipe/solutions/vision/image_segmenter#hair-model).

  ```typescript
- import ShaderPad from 'shaderpad';
  import segmenter from 'shaderpad/plugins/segmenter';
-
  const shader = new ShaderPad(fragmentShaderSrc, {
-   plugins: [
-     segmenter({
-       textureName: 'u_webcam',
-       options: {
-         modelPath:
-           'https://storage.googleapis.com/mediapipe-models/image_segmenter/selfie_multiclass_256x256/float32/latest/selfie_multiclass_256x256.tflite',
-         outputCategoryMask: true,
-         onReady: () => {
-           console.log('Selfie multiclass model: loading complete');
-         },
-       },
-     }),
-   ],
+   plugins: [segmenter({ textureName: 'u_webcam', options: { outputCategoryMask: true } })],
  });
  ```

- **Options:**
+ **Options:** `onReady?: () => void`, `onResults?: (results: ImageSegmenterResult) => void`

- - `onReady?: () => void` - Callback invoked when initialization is complete and the detection model is loaded
- - `onResults?: (results: ImageSegmenterResult) => void` - Callback invoked with segmentation results each frame
-
- **Uniforms:**
-
- | Uniform | Type | Description |
- | ----------------- | --------- | ------------------------------------------------------------------------ |
- | `u_segmentMask` | sampler2D | Segment mask texture (R: normalized category, G: confidence, B: unused) |
- | `u_numCategories` | int | Number of segmentation categories (including background) |
+ **Uniforms:** `u_segmentMask` (sampler2D), `u_numCategories` (int)

  **Helper functions:**

- - `segmentAt(vec2 pos) -> vec2` - Returns `vec2(confidence, categoryIndex)`. categoryIndex is 0-indexed (-1 = background). confidence is the segmentation confidence (0-1).
-
- **Example usage:**
+ - `segmentAt(vec2 pos) -> vec2` - Returns `vec2(confidence, categoryIndex)`. categoryIndex is 0-indexed (-1 = background).

- ```glsl
- vec2 segment = segmentAt(v_uv);
- float confidence = segment.x; // Segmentation confidence
- float category = segment.y; // Category index (0-indexed, -1 = background)
-
- if (category >= 0.0) {
-   // Apply effect based on confidence.
-   color = mix(color, vec3(1.0, 0.0, 1.0), confidence);
- }
- ```
-
- **Note:** The segmenter plugin requires `@mediapipe/tasks-vision` as a peer dependency.
+ **Note:** Requires `@mediapipe/tasks-vision` as a peer dependency.
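
For reference, the usage sketch from the example removed above (`v_uv` is the library's screen-space varying; `color` is the shader's working color):

```glsl
// segmentAt packs (confidence, categoryIndex); -1.0 marks background.
vec2 segment = segmentAt(v_uv);
float confidence = segment.x; // segmentation confidence (0-1)
float category = segment.y;   // category index (0-indexed, -1 = background)

if (category >= 0.0) {
  // Blend toward magenta, weighted by segmentation confidence.
  color = mix(color, vec3(1.0, 0.0, 1.0), confidence);
}
```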

  ## Contributing

  ### Running an example

  ```bash
- # Clone the repository.
  git clone https://github.com/rileyjshaw/shaderpad.git
- cd shaderpad
-
- # Install dependencies and start the development server.
- cd examples
+ cd shaderpad/examples
  npm install
  npm run dev
  ```

- This will launch a local server. Open the provided URL (usually `http://localhost:5173`) in your browser to view and interact with the examples. Use the select box to view different examples.
-
  ### Adding an example

- - Add a new `.ts` file in `examples/src/`.
- - Follow the structure of an existing example as a template. The example must export an `init` function and a `destroy` function.
- - Add the example to the `demos` array in `examples/src/main.ts`.
- - If your example needs images or other assets, place them in `examples/public/` and reference them with a relative path.
+ Add a `.ts` file in `examples/src/` that exports `init` and `destroy` functions. Add it to the `demos` array in `examples/src/main.ts`.

  ## License

@@ -1,2 +1,2 @@
- "use strict";var d=Object.defineProperty;var l=Object.getOwnPropertyDescriptor;var h=Object.getOwnPropertyNames;var p=Object.prototype.hasOwnProperty;var v=(a,e)=>{for(var n in e)d(a,n,{get:e[n],enumerable:!0})},f=(a,e,n,o)=>{if(e&&typeof e=="object"||typeof e=="function")for(let t of h(e))!p.call(a,t)&&t!==n&&d(a,t,{get:()=>e[t],enumerable:!(o=l(e,t))||o.enumerable});return a};var u=a=>f(d({},"__esModule",{value:!0}),a);var P={};v(P,{default:()=>b});module.exports=u(P);function w(){return function(a,e){let{gl:n,canvas:o}=e,t=document.createElement("a");a.save=async function(r,g){n.clear(n.COLOR_BUFFER_BIT),n.drawArrays(n.TRIANGLES,0,6),r&&!`${r}`.toLowerCase().endsWith(".png")&&(r=`${r}.png`),r=r||"export.png";let s=await(o instanceof HTMLCanvasElement?new Promise(i=>o.toBlob(i,"image/png")):o.convertToBlob({type:"image/png"}));if("ongesturechange"in window)try{let c={files:[new File([s],r,{type:s.type})]};if(g&&(c.text=g),navigator.canShare?.(c)){await navigator.share(c);return}}catch(i){console.warn("Web Share API failed:",i)}t.download=r,t.href=URL.createObjectURL(s),t.click(),URL.revokeObjectURL(t.href)}}}var b=w;
+ "use strict";var c=Object.defineProperty;var g=Object.getOwnPropertyDescriptor;var h=Object.getOwnPropertyNames;var p=Object.prototype.hasOwnProperty;var v=(a,e)=>{for(var r in e)c(a,r,{get:e[r],enumerable:!0})},f=(a,e,r,o)=>{if(e&&typeof e=="object"||typeof e=="function")for(let t of h(e))!p.call(a,t)&&t!==r&&c(a,t,{get:()=>e[t],enumerable:!(o=g(e,t))||o.enumerable});return a};var u=a=>f(c({},"__esModule",{value:!0}),a);var x={};v(x,{default:()=>b});module.exports=u(x);function w(){return function(a,e){let{gl:r,canvas:o}=e,t=document.createElement("a");a.save=async function(n,d){r.clear(r.COLOR_BUFFER_BIT),r.drawArrays(r.TRIANGLES,0,6),n&&!`${n}`.toLowerCase().endsWith(".png")&&(n=`${n}.png`),n=n||"export.png";let s=await(o instanceof HTMLCanvasElement?new Promise(i=>o.toBlob(i,"image/png")):o.convertToBlob({type:"image/png"}));if(navigator.share)try{let l={files:[new File([s],n,{type:s.type})]};d&&(l.text=d),await navigator.share(l);return}catch{}t.download=n,t.href=URL.createObjectURL(s),t.click(),URL.revokeObjectURL(t.href)}}}var b=w;
  //# sourceMappingURL=save.js.map
@@ -1 +1 @@
- {"version":3,"sources":["../../src/plugins/save.ts"],"sourcesContent":["import ShaderPad, { PluginContext } from '../index';\n\ndeclare module '../index' {\n\tinterface ShaderPad {\n\t\tsave(filename: string, text?: string): Promise<void>;\n\t}\n}\n\nfunction save() {\n\treturn function (shaderPad: ShaderPad, context: PluginContext) {\n\t\tconst { gl, canvas } = context;\n\t\tconst downloadLink = document.createElement('a');\n\n\t\t(shaderPad as any).save = async function (filename: string, text?: string) {\n\t\t\tgl.clear(gl.COLOR_BUFFER_BIT);\n\t\t\tgl.drawArrays(gl.TRIANGLES, 0, 6);\n\n\t\t\tif (filename && !`${filename}`.toLowerCase().endsWith('.png')) {\n\t\t\t\tfilename = `${filename}.png`;\n\t\t\t}\n\t\t\tfilename = filename || 'export.png';\n\n\t\t\tconst blob: Blob = await (canvas instanceof HTMLCanvasElement\n\t\t\t\t? new Promise(resolve => canvas.toBlob(resolve as BlobCallback, 'image/png'))\n\t\t\t\t: canvas.convertToBlob({ type: 'image/png' }));\n\n\t\t\tif ('ongesturechange' in window) {\n\t\t\t\t// Mobile.\n\t\t\t\ttry {\n\t\t\t\t\tconst file = new File([blob], filename, { type: blob.type });\n\t\t\t\t\tconst shareData: ShareData = { files: [file] };\n\t\t\t\t\tif (text) shareData.text = text;\n\n\t\t\t\t\tif (navigator.canShare?.(shareData)) {\n\t\t\t\t\t\tawait navigator.share(shareData);\n\t\t\t\t\t\treturn;\n\t\t\t\t\t}\n\t\t\t\t} catch (error) {\n\t\t\t\t\tconsole.warn('Web Share API failed:', error);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// Desktop / mobile fallback.\n\t\t\tdownloadLink.download = filename;\n\t\t\tdownloadLink.href = URL.createObjectURL(blob);\n\t\t\tdownloadLink.click();\n\t\t\tURL.revokeObjectURL(downloadLink.href);\n\t\t};\n\t};\n}\n\n// Type helper.\nexport type WithSave<T extends ShaderPad> = T & {\n\tsave(filename: string, text?: string): Promise<void>;\n};\n\nexport default save;\n"],"mappings":"yaAAA,IAAAA,EAAA,GAAAC,EAAAD,EAAA,aAAAE,IAAA,eAAAC,EAAAH,GAQA,SAASI,GAAO,CACf,OAAO,SAAUC,EAAsBC,EAAwB,CAC9D,GAAM,CAAE,GAAAC,EAAI,OAAAC,CAAO,EAAIF,EACjBG,EAAe,SAAS,cAAc,GAAG,EAE9CJ,EAAkB,KAAO,eAAgBK,EAAkBC,EAAe,CAC1EJ,EAAG,MAAMA,EAAG,gBAAgB,EAC5BA,EAAG,WAAWA,EAAG,UAAW,EAAG,CAAC,EAE5BG,GAAY,CAAC,GAAGA,CAAQ,GAAG,YAAY,EAAE,SAAS,MAAM,IAC3DA,EAAW,GAAGA,CAAQ,QAEvBA,EAAWA,GAAY,aAEvB,IAAME,EAAa,MAAOJ,aAAkB,kBACzC,IAAI,QAAQK,GAAWL,EAAO,OAAOK,EAAyB,WAAW,CAAC,EAC1EL,EAAO,cAAc,CAAE,KAAM,WAAY,CAAC,GAE7C,GAAI,oBAAqB,OAExB,GAAI,CAEH,IAAMM,EAAuB,CAAE,MAAO,CADzB,IAAI,KAAK,CAACF,CAAI,EAAGF,EAAU,CAAE,KAAME,EAAK,IAAK,CAAC,CAChB,CAAE,EAG7C,GAFID,IAAMG,EAAU,KAAOH,GAEvB,UAAU,WAAWG,CAAS,EAAG,CACpC,MAAM,UAAU,MAAMA,CAAS,EAC/B,MACD,CACD,OAASC,EAAO,CACf,QAAQ,KAAK,wBAAyBA,CAAK,CAC5C,CAIDN,EAAa,SAAWC,EACxBD,EAAa,KAAO,IAAI,gBAAgBG,CAAI,EAC5CH,EAAa,MAAM,EACnB,IAAI,gBAAgBA,EAAa,IAAI,CACtC,CACD,CACD,CAOA,IAAOP,EAAQE","names":["save_exports","__export","save_default","__toCommonJS","save","shaderPad","context","gl","canvas","downloadLink","filename","text","blob","resolve","shareData","error"]}
+ {"version":3,"sources":["../../src/plugins/save.ts"],"sourcesContent":["import ShaderPad, { PluginContext } from '../index';\n\ndeclare module '../index' {\n\tinterface ShaderPad {\n\t\tsave(filename: string, text?: string): Promise<void>;\n\t}\n}\n\nfunction save() {\n\treturn function (shaderPad: ShaderPad, context: PluginContext) {\n\t\tconst { gl, canvas } = context;\n\t\tconst downloadLink = document.createElement('a');\n\n\t\t(shaderPad as any).save = async function (filename: string, text?: string) {\n\t\t\tgl.clear(gl.COLOR_BUFFER_BIT);\n\t\t\tgl.drawArrays(gl.TRIANGLES, 0, 6);\n\n\t\t\tif (filename && !`${filename}`.toLowerCase().endsWith('.png')) {\n\t\t\t\tfilename = `${filename}.png`;\n\t\t\t}\n\t\t\tfilename = filename || 'export.png';\n\n\t\t\tconst blob: Blob = await (canvas instanceof HTMLCanvasElement\n\t\t\t\t? new Promise(resolve => canvas.toBlob(resolve as BlobCallback, 'image/png'))\n\t\t\t\t: canvas.convertToBlob({ type: 'image/png' }));\n\n\t\t\tif (navigator.share) {\n\t\t\t\ttry {\n\t\t\t\t\tconst file = new File([blob], filename, { type: blob.type });\n\t\t\t\t\tconst shareData: ShareData = { files: [file] };\n\t\t\t\t\tif (text) shareData.text = text;\n\t\t\t\t\tawait navigator.share(shareData);\n\t\t\t\t\treturn;\n\t\t\t\t} catch (_swallowedError) {}\n\t\t\t}\n\n\t\t\tdownloadLink.download = filename;\n\t\t\tdownloadLink.href = URL.createObjectURL(blob);\n\t\t\tdownloadLink.click();\n\t\t\tURL.revokeObjectURL(downloadLink.href);\n\t\t};\n\t};\n}\n\nexport type WithSave<T extends ShaderPad> = T & {\n\tsave(filename: string, text?: string): Promise<void>;\n};\n\nexport default save;\n"],"mappings":"yaAAA,IAAAA,EAAA,GAAAC,EAAAD,EAAA,aAAAE,IAAA,eAAAC,EAAAH,GAQA,SAASI,GAAO,CACf,OAAO,SAAUC,EAAsBC,EAAwB,CAC9D,GAAM,CAAE,GAAAC,EAAI,OAAAC,CAAO,EAAIF,EACjBG,EAAe,SAAS,cAAc,GAAG,EAE9CJ,EAAkB,KAAO,eAAgBK,EAAkBC,EAAe,CAC1EJ,EAAG,MAAMA,EAAG,gBAAgB,EAC5BA,EAAG,WAAWA,EAAG,UAAW,EAAG,CAAC,EAE5BG,GAAY,CAAC,GAAGA,CAAQ,GAAG,YAAY,EAAE,SAAS,MAAM,IAC3DA,EAAW,GAAGA,CAAQ,QAEvBA,EAAWA,GAAY,aAEvB,IAAME,EAAa,MAAOJ,aAAkB,kBACzC,IAAI,QAAQK,GAAWL,EAAO,OAAOK,EAAyB,WAAW,CAAC,EAC1EL,EAAO,cAAc,CAAE,KAAM,WAAY,CAAC,GAE7C,GAAI,UAAU,MACb,GAAI,CAEH,IAAMM,EAAuB,CAAE,MAAO,CADzB,IAAI,KAAK,CAACF,CAAI,EAAGF,EAAU,CAAE,KAAME,EAAK,IAAK,CAAC,CAChB,CAAE,EACzCD,IAAMG,EAAU,KAAOH,GAC3B,MAAM,UAAU,MAAMG,CAAS,EAC/B,MACD,MAA0B,CAAC,CAG5BL,EAAa,SAAWC,EACxBD,EAAa,KAAO,IAAI,gBAAgBG,CAAI,EAC5CH,EAAa,MAAM,EACnB,IAAI,gBAAgBA,EAAa,IAAI,CACtC,CACD,CACD,CAMA,IAAOP,EAAQE","names":["save_exports","__export","save_default","__toCommonJS","save","shaderPad","context","gl","canvas","downloadLink","filename","text","blob","resolve","shareData"]}
@@ -1,2 +1,2 @@
- function g(){return function(c,d){let{gl:t,canvas:r}=d,a=document.createElement("a");c.save=async function(e,s){t.clear(t.COLOR_BUFFER_BIT),t.drawArrays(t.TRIANGLES,0,6),e&&!`${e}`.toLowerCase().endsWith(".png")&&(e=`${e}.png`),e=e||"export.png";let o=await(r instanceof HTMLCanvasElement?new Promise(n=>r.toBlob(n,"image/png")):r.convertToBlob({type:"image/png"}));if("ongesturechange"in window)try{let i={files:[new File([o],e,{type:o.type})]};if(s&&(i.text=s),navigator.canShare?.(i)){await navigator.share(i);return}}catch(n){console.warn("Web Share API failed:",n)}a.download=e,a.href=URL.createObjectURL(o),a.click(),URL.revokeObjectURL(a.href)}}}var l=g;export{l as default};
+ function l(){return function(c,d){let{gl:t,canvas:r}=d,a=document.createElement("a");c.save=async function(e,s){t.clear(t.COLOR_BUFFER_BIT),t.drawArrays(t.TRIANGLES,0,6),e&&!`${e}`.toLowerCase().endsWith(".png")&&(e=`${e}.png`),e=e||"export.png";let n=await(r instanceof HTMLCanvasElement?new Promise(o=>r.toBlob(o,"image/png")):r.convertToBlob({type:"image/png"}));if(navigator.share)try{let i={files:[new File([n],e,{type:n.type})]};s&&(i.text=s),await navigator.share(i);return}catch{}a.download=e,a.href=URL.createObjectURL(n),a.click(),URL.revokeObjectURL(a.href)}}}var g=l;export{g as default};
  //# sourceMappingURL=save.mjs.map
@@ -1 +1 @@
- {"version":3,"sources":["../../src/plugins/save.ts"],"sourcesContent":["import ShaderPad, { PluginContext } from '../index';\n\ndeclare module '../index' {\n\tinterface ShaderPad {\n\t\tsave(filename: string, text?: string): Promise<void>;\n\t}\n}\n\nfunction save() {\n\treturn function (shaderPad: ShaderPad, context: PluginContext) {\n\t\tconst { gl, canvas } = context;\n\t\tconst downloadLink = document.createElement('a');\n\n\t\t(shaderPad as any).save = async function (filename: string, text?: string) {\n\t\t\tgl.clear(gl.COLOR_BUFFER_BIT);\n\t\t\tgl.drawArrays(gl.TRIANGLES, 0, 6);\n\n\t\t\tif (filename && !`${filename}`.toLowerCase().endsWith('.png')) {\n\t\t\t\tfilename = `${filename}.png`;\n\t\t\t}\n\t\t\tfilename = filename || 'export.png';\n\n\t\t\tconst blob: Blob = await (canvas instanceof HTMLCanvasElement\n\t\t\t\t? new Promise(resolve => canvas.toBlob(resolve as BlobCallback, 'image/png'))\n\t\t\t\t: canvas.convertToBlob({ type: 'image/png' }));\n\n\t\t\tif ('ongesturechange' in window) {\n\t\t\t\t// Mobile.\n\t\t\t\ttry {\n\t\t\t\t\tconst file = new File([blob], filename, { type: blob.type });\n\t\t\t\t\tconst shareData: ShareData = { files: [file] };\n\t\t\t\t\tif (text) shareData.text = text;\n\n\t\t\t\t\tif (navigator.canShare?.(shareData)) {\n\t\t\t\t\t\tawait navigator.share(shareData);\n\t\t\t\t\t\treturn;\n\t\t\t\t\t}\n\t\t\t\t} catch (error) {\n\t\t\t\t\tconsole.warn('Web Share API failed:', error);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// Desktop / mobile fallback.\n\t\t\tdownloadLink.download = filename;\n\t\t\tdownloadLink.href = URL.createObjectURL(blob);\n\t\t\tdownloadLink.click();\n\t\t\tURL.revokeObjectURL(downloadLink.href);\n\t\t};\n\t};\n}\n\n// Type helper.\nexport type WithSave<T extends ShaderPad> = T & {\n\tsave(filename: string, text?: string): Promise<void>;\n};\n\nexport default save;\n"],"mappings":"AAQA,SAASA,GAAO,CACf,OAAO,SAAUC,EAAsBC,EAAwB,CAC9D,GAAM,CAAE,GAAAC,EAAI,OAAAC,CAAO,EAAIF,EACjBG,EAAe,SAAS,cAAc,GAAG,EAE9CJ,EAAkB,KAAO,eAAgBK,EAAkBC,EAAe,CAC1EJ,EAAG,MAAMA,EAAG,gBAAgB,EAC5BA,EAAG,WAAWA,EAAG,UAAW,EAAG,CAAC,EAE5BG,GAAY,CAAC,GAAGA,CAAQ,GAAG,YAAY,EAAE,SAAS,MAAM,IAC3DA,EAAW,GAAGA,CAAQ,QAEvBA,EAAWA,GAAY,aAEvB,IAAME,EAAa,MAAOJ,aAAkB,kBACzC,IAAI,QAAQK,GAAWL,EAAO,OAAOK,EAAyB,WAAW,CAAC,EAC1EL,EAAO,cAAc,CAAE,KAAM,WAAY,CAAC,GAE7C,GAAI,oBAAqB,OAExB,GAAI,CAEH,IAAMM,EAAuB,CAAE,MAAO,CADzB,IAAI,KAAK,CAACF,CAAI,EAAGF,EAAU,CAAE,KAAME,EAAK,IAAK,CAAC,CAChB,CAAE,EAG7C,GAFID,IAAMG,EAAU,KAAOH,GAEvB,UAAU,WAAWG,CAAS,EAAG,CACpC,MAAM,UAAU,MAAMA,CAAS,EAC/B,MACD,CACD,OAASC,EAAO,CACf,QAAQ,KAAK,wBAAyBA,CAAK,CAC5C,CAIDN,EAAa,SAAWC,EACxBD,EAAa,KAAO,IAAI,gBAAgBG,CAAI,EAC5CH,EAAa,MAAM,EACnB,IAAI,gBAAgBA,EAAa,IAAI,CACtC,CACD,CACD,CAOA,IAAOO,EAAQZ","names":["save","shaderPad","context","gl","canvas","downloadLink","filename","text","blob","resolve","shareData","error","save_default"]}
+ {"version":3,"sources":["../../src/plugins/save.ts"],"sourcesContent":["import ShaderPad, { PluginContext } from '../index';\n\ndeclare module '../index' {\n\tinterface ShaderPad {\n\t\tsave(filename: string, text?: string): Promise<void>;\n\t}\n}\n\nfunction save() {\n\treturn function (shaderPad: ShaderPad, context: PluginContext) {\n\t\tconst { gl, canvas } = context;\n\t\tconst downloadLink = document.createElement('a');\n\n\t\t(shaderPad as any).save = async function (filename: string, text?: string) {\n\t\t\tgl.clear(gl.COLOR_BUFFER_BIT);\n\t\t\tgl.drawArrays(gl.TRIANGLES, 0, 6);\n\n\t\t\tif (filename && !`${filename}`.toLowerCase().endsWith('.png')) {\n\t\t\t\tfilename = `${filename}.png`;\n\t\t\t}\n\t\t\tfilename = filename || 'export.png';\n\n\t\t\tconst blob: Blob = await (canvas instanceof HTMLCanvasElement\n\t\t\t\t? new Promise(resolve => canvas.toBlob(resolve as BlobCallback, 'image/png'))\n\t\t\t\t: canvas.convertToBlob({ type: 'image/png' }));\n\n\t\t\tif (navigator.share) {\n\t\t\t\ttry {\n\t\t\t\t\tconst file = new File([blob], filename, { type: blob.type });\n\t\t\t\t\tconst shareData: ShareData = { files: [file] };\n\t\t\t\t\tif (text) shareData.text = text;\n\t\t\t\t\tawait navigator.share(shareData);\n\t\t\t\t\treturn;\n\t\t\t\t} catch (_swallowedError) {}\n\t\t\t}\n\n\t\t\tdownloadLink.download = filename;\n\t\t\tdownloadLink.href = URL.createObjectURL(blob);\n\t\t\tdownloadLink.click();\n\t\t\tURL.revokeObjectURL(downloadLink.href);\n\t\t};\n\t};\n}\n\nexport type WithSave<T extends ShaderPad> = T & {\n\tsave(filename: string, text?: string): Promise<void>;\n};\n\nexport default save;\n"],"mappings":"AAQA,SAASA,GAAO,CACf,OAAO,SAAUC,EAAsBC,EAAwB,CAC9D,GAAM,CAAE,GAAAC,EAAI,OAAAC,CAAO,EAAIF,EACjBG,EAAe,SAAS,cAAc,GAAG,EAE9CJ,EAAkB,KAAO,eAAgBK,EAAkBC,EAAe,CAC1EJ,EAAG,MAAMA,EAAG,gBAAgB,EAC5BA,EAAG,WAAWA,EAAG,UAAW,EAAG,CAAC,EAE5BG,GAAY,CAAC,GAAGA,CAAQ,GAAG,YAAY,EAAE,SAAS,MAAM,IAC3DA,EAAW,GAAGA,CAAQ,QAEvBA,EAAWA,GAAY,aAEvB,IAAME,EAAa,MAAOJ,aAAkB,kBACzC,IAAI,QAAQK,GAAWL,EAAO,OAAOK,EAAyB,WAAW,CAAC,EAC1EL,EAAO,cAAc,CAAE,KAAM,WAAY,CAAC,GAE7C,GAAI,UAAU,MACb,GAAI,CAEH,IAAMM,EAAuB,CAAE,MAAO,CADzB,IAAI,KAAK,CAACF,CAAI,EAAGF,EAAU,CAAE,KAAME,EAAK,IAAK,CAAC,CAChB,CAAE,EACzCD,IAAMG,EAAU,KAAOH,GAC3B,MAAM,UAAU,MAAMG,CAAS,EAC/B,MACD,MAA0B,CAAC,CAG5BL,EAAa,SAAWC,EACxBD,EAAa,KAAO,IAAI,gBAAgBG,CAAI,EAC5CH,EAAa,MAAM,EACnB,IAAI,gBAAgBA,EAAa,IAAI,CACtC,CACD,CACD,CAMA,IAAOM,EAAQX","names":["save","shaderPad","context","gl","canvas","downloadLink","filename","text","blob","resolve","shareData","save_default"]}
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "shaderpad",
- "version": "1.0.0-beta.39",
+ "version": "1.0.0-beta.40",
  "description": "A lightweight, dependency-free library to reduce boilerplate when writing fragment shaders.",
  "keywords": [
  "shaders",