@blank-utils/llm 0.3.2 → 0.3.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +8 -12
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -21,8 +21,10 @@
  - ⚛️ **React hooks** — `useChat`, `useStream`, `useCompletion` with eager background loading
  - 🔤 **Type-safe model selection** — full autocomplete for 30+ supported models across both backends
  - 📝 **Streaming support** — real-time token output with abort control
+ - 📄 **PDF & Image Processing** — Extract text from PDFs natively and easily pass multimodal image attachments
  - 🔄 **Message queueing** — users can type while models download; messages are processed once ready
  - 🧩 **Vanilla JS friendly** — works outside React with DOM helpers and a simple `createLLM()` API
+ - ⚡ **Instant Builds** — Bundled via `tsup` for lightning-fast compilation
  - 📦 **Zero config** — auto-detects WebGPU/WASM and picks the best backend
 
  ## Installation
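The message-queueing feature listed above (users can type while the model is still downloading; buffered messages are processed once it is ready) can be illustrated generically. This is a minimal sketch of the pattern only, not the library's internal implementation; all names here are hypothetical:

```typescript
// Generic sketch of the message-queueing pattern: buffer input while the
// model loads, then flush the buffer in order once loading completes.
type Handler = (message: string) => void;

class MessageQueue {
  private pending: string[] = [];
  private ready = false;

  constructor(private handler: Handler) {}

  // Called by the UI whenever the user sends a message.
  send(message: string): void {
    if (this.ready) {
      this.handler(message); // model loaded: process immediately
    } else {
      this.pending.push(message); // model still downloading: buffer it
    }
  }

  // Called once the model has finished loading.
  markReady(): void {
    this.ready = true;
    for (const message of this.pending) this.handler(message);
    this.pending = [];
  }
}

const processed: string[] = [];
const queue = new MessageQueue((m) => processed.push(m));
queue.send("hello");          // buffered: model not ready yet
queue.send("second message"); // buffered too
queue.markReady();            // flushes the buffer in order
queue.send("third");          // processed immediately from now on
```

The key design point is that `send()` never blocks or drops input; readiness only changes whether a message is handled now or later.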
@@ -408,6 +410,8 @@ import { ChatInput } from "@blank-utils/llm/react";
  - ⌨️ Enter to send, Shift+Enter for newline
  - ⏹️ Stop button while generating
  - 🎨 Dark/light theme support
+ - 📄 **Drag-and-drop or click to upload PDF files for automatic local text extraction (OCR)**
+ - 🖼️ **Paste/upload images directly into multimodal models**
 
  ---
 
@@ -508,10 +512,6 @@ You can use either the **alias** (short name) or the **full model ID** when spec
  | `qwen-2.5-7b` | Qwen 2.5 7B | Large |
  | `qwen-2.5-coder-0.5b` | Qwen 2.5 Coder 0.5B | Code-focused |
  | `qwen-2.5-coder-1.5b` | Qwen 2.5 Coder 1.5B | Code-focused |
- | `qwen-3-0.6b` | Qwen 3 0.6B | Latest gen |
- | `qwen-3-1.7b` | Qwen 3 1.7B | Latest gen |
- | `qwen-3-4b` | Qwen 3 4B | Latest gen |
- | `qwen-3-8b` | Qwen 3 8B | Latest gen |
  | `gemma-2-2b` | Gemma 2 2B | Google, efficient |
  | `gemma-2-2b-1k` | Gemma 2 2B (1K ctx) | Lower memory |
  | `gemma-2-9b` | Gemma 2 9B | Large |
@@ -532,7 +532,6 @@ You can use either the **alias** (short name) or the **full model ID** when spec
  | `qwen-2.5-1.5b` | `onnx-community/Qwen2.5-1.5B-Instruct` | Good quality |
  | `qwen-2.5-coder-0.5b` | `onnx-community/Qwen2.5-Coder-0.5B-Instruct` | Code |
  | `qwen-2.5-coder-1.5b` | `onnx-community/Qwen2.5-Coder-1.5B-Instruct` | Code |
- | `qwen-3-0.6b` | `onnx-community/Qwen3-0.6B-ONNX` | Latest gen |
  | `smollm2-135m` | `HuggingFaceTB/SmolLM2-135M-Instruct` | Ultra fast |
  | `smollm2-360m` | `HuggingFaceTB/SmolLM2-360M-Instruct` | Fast |
  | `smollm2-1.7b` | `HuggingFaceTB/SmolLM2-1.7B-Instruct` | Good |
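The "alias or full model ID" behavior described by the table can be sketched as a plain lookup with a passthrough fallback. This is an illustration using the mappings shown in the diff (note the `qwen-3-0.6b` alias is removed in this release), not the library's actual resolver; `MODEL_ALIASES` and `resolveModel` are hypothetical names:

```typescript
// Hypothetical alias -> full model ID lookup, mirroring the table above
// after this release. Unknown inputs are treated as full model IDs.
const MODEL_ALIASES: Record<string, string> = {
  "qwen-2.5-1.5b": "onnx-community/Qwen2.5-1.5B-Instruct",
  "qwen-2.5-coder-0.5b": "onnx-community/Qwen2.5-Coder-0.5B-Instruct",
  "qwen-2.5-coder-1.5b": "onnx-community/Qwen2.5-Coder-1.5B-Instruct",
  "smollm2-135m": "HuggingFaceTB/SmolLM2-135M-Instruct",
  "smollm2-360m": "HuggingFaceTB/SmolLM2-360M-Instruct",
  "smollm2-1.7b": "HuggingFaceTB/SmolLM2-1.7B-Instruct",
};

// Accept either an alias (short name) or a full model ID.
function resolveModel(nameOrId: string): string {
  return MODEL_ALIASES[nameOrId] ?? nameOrId;
}
```

Passing a full ID straight through means new upstream models remain usable even before an alias is added for them.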
@@ -586,13 +585,10 @@ bun test
 
  ### Build Pipeline
 
- | Script | What it does |
- | ------------- | ------------------------------------------------------------------------------------------------------------------ |
- | `clean` | Removes `dist/` |
- | `build:js` | Bundles `src/index.ts` → `dist/index.js` and `src/react/index.tsx` → `dist/react/index.js` (ESM, externals: react) |
- | `postbuild` | Copies WASM + ONNX runtime assets into `dist/` and `dist/react/` |
- | `build:types` | Generates `.d.ts` declaration files via `tsc` |
- | `build` | Runs all of the above in sequence |
+ | Script | What it does |
+ | ------- | ---------------------------------------------------------- |
+ | `clean` | Removes `dist/` |
+ | `build` | Super-fast bundling via `tsup`, compiling ESM code & types |
 
  ### Package Exports
 
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@blank-utils/llm",
- "version": "0.3.2",
+ "version": "0.3.3",
  "description": "Run LLMs directly in your browser with WebGPU acceleration. Supports React hooks and eager background loading.",
  "type": "module",
  "main": "./dist/index.js",