@blockrun/llm 1.3.1 → 1.4.1

package/README.md CHANGED
@@ -5,12 +5,16 @@
  [![npm](https://img.shields.io/npm/v/@blockrun/llm.svg)](https://www.npmjs.com/package/@blockrun/llm)
  [![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](LICENSE)

- **Networks:**
- - **Base Mainnet:** Chain ID 8453 - Production with real USDC
- - **Base Sepolia (Testnet):** Chain ID 84532 - Developer testing with testnet USDC
- - **Solana Mainnet** - Production with real USDC
+ ## Supported Chains
+
+ | Chain | Network | Payment | Status |
+ |-------|---------|---------|--------|
+ | **Base** | Base Mainnet (Chain ID: 8453) | USDC | Primary |
+ | **Base Testnet** | Base Sepolia (Chain ID: 84532) | Testnet USDC | Development |
+ | **Solana** | Solana Mainnet | USDC (SPL) | New |
+
+ > **XRPL (RLUSD):** Use [@blockrun/llm-xrpl](https://www.npmjs.com/package/@blockrun/llm-xrpl) for XRPL payments

- **Payment:** USDC
  **Protocol:** x402 v2 (CDP Facilitator)

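For tooling that needs the chain identifiers from the new table programmatically, they could be captured as a plain constant. This is an illustrative sketch only — `SUPPORTED_CHAINS` is not an export of `@blockrun/llm`; only the chain IDs themselves come from the README:

```typescript
// Chain identifiers from the Supported Chains table.
// The constant name and shape are hypothetical, not part of the SDK.
const SUPPORTED_CHAINS = {
  base: { network: 'Base Mainnet', chainId: 8453, payment: 'USDC' },
  baseSepolia: { network: 'Base Sepolia', chainId: 84532, payment: 'Testnet USDC' },
  // Solana is not an EVM chain, so it has no numeric chain ID.
  solana: { network: 'Solana Mainnet', chainId: null, payment: 'USDC (SPL)' },
};

console.log(SUPPORTED_CHAINS.base.chainId); // 8453
```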
  ## Installation
@@ -87,6 +91,64 @@ const tweet = await client.chat('xai/grok-3-mini', 'What is trending on X?', { s
  **Your private key never leaves your machine** - it's only used for local signing.

+ ## Smart Routing (ClawRouter)
+
+ Let the SDK automatically pick the cheapest capable model for each request:
+
+ ```typescript
+ import { LLMClient } from '@blockrun/llm';
+
+ const client = new LLMClient();
+
+ // Auto-routes to cheapest capable model
+ const result = await client.smartChat('What is 2+2?');
+ console.log(result.response); // '4'
+ console.log(result.model); // 'nvidia/kimi-k2.5' (cheap, fast)
+ console.log(`Saved ${(result.routing.savings * 100).toFixed(0)}%`); // 'Saved 78%'
+
+ // Complex reasoning task -> routes to reasoning model
+ const complex = await client.smartChat('Prove the Riemann hypothesis step by step');
+ console.log(complex.model); // 'xai/grok-4-1-fast-reasoning'
+ ```
+
+ ### Routing Profiles
+
+ | Profile | Description | Best For |
+ |---------|-------------|----------|
+ | `free` | nvidia/gpt-oss-120b only (FREE) | Testing, development |
+ | `eco` | Cheapest models per tier (DeepSeek, xAI) | Cost-sensitive production |
+ | `auto` | Best balance of cost/quality (default) | General use |
+ | `premium` | Top-tier models (OpenAI, Anthropic) | Quality-critical tasks |
+
+ ```typescript
+ // Use premium models for complex tasks
+ const result = await client.smartChat(
+ 'Write production-grade async TypeScript code',
+ { routingProfile: 'premium' }
+ );
+ console.log(result.model); // 'anthropic/claude-opus-4.5'
+ ```
+
+ ### How ClawRouter Works
+
+ ClawRouter uses a 14-dimension rule-based classifier to analyze each request:
+
+ - **Token count** - Short vs long prompts
+ - **Code presence** - Programming keywords
+ - **Reasoning markers** - "prove", "step by step", etc.
+ - **Technical terms** - Architecture, optimization, etc.
+ - **Creative markers** - Story, poem, brainstorm, etc.
+ - **Agentic patterns** - Multi-step, tool use indicators
+
+ The classifier runs in <1ms, 100% locally, and routes to one of four tiers:
+
+ | Tier | Example Tasks | Auto Profile Model |
+ |------|---------------|-------------------|
+ | SIMPLE | "What is 2+2?", definitions | nvidia/kimi-k2.5 |
+ | MEDIUM | Code snippets, explanations | xai/grok-code-fast-1 |
+ | COMPLEX | Architecture, long documents | google/gemini-3.1-pro |
+ | REASONING | Proofs, multi-step reasoning | xai/grok-4-1-fast-reasoning |
+
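The tiering idea behind the table above can be approximated in a few lines. This is an illustrative sketch only — the real ClawRouter classifier uses 14 dimensions and its own thresholds; `classifyTier` and its keyword lists are invented here:

```typescript
type Tier = 'SIMPLE' | 'MEDIUM' | 'COMPLEX' | 'REASONING';

// Hypothetical approximation of rule-based tiering; NOT the SDK's classifier.
function classifyTier(prompt: string): Tier {
  const p = prompt.toLowerCase();
  const reasoningMarkers = ['prove', 'step by step', 'derive', 'theorem'];
  const codeMarkers = ['function', 'typescript', 'async', 'class ', 'bug'];
  const complexMarkers = ['architecture', 'design a system', 'optimize'];

  // Check strongest signals first, fall through to SIMPLE.
  if (reasoningMarkers.some((m) => p.includes(m))) return 'REASONING';
  if (complexMarkers.some((m) => p.includes(m)) || prompt.length > 2000) return 'COMPLEX';
  if (codeMarkers.some((m) => p.includes(m))) return 'MEDIUM';
  return 'SIMPLE';
}

console.log(classifyTier('What is 2+2?')); // 'SIMPLE'
console.log(classifyTier('Prove the Riemann hypothesis step by step')); // 'REASONING'
```

Because a check like this is pure string matching, it runs locally in well under a millisecond, which is why routing adds no network round-trip before the actual model call.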
  ## Available Models

  ### OpenAI GPT-5 Family
@@ -203,6 +265,78 @@ All models below have been tested end-to-end via the TypeScript SDK (Feb 2026):
  *Testnet models use flat pricing (no token counting) for simplicity.*

+ ## X/Twitter Data (Powered by AttentionVC)
+
+ Access X/Twitter user profiles, followers, and followings via [AttentionVC](https://attentionvc.ai) partner API. No API keys needed — pay-per-request via x402.
+
+ ```typescript
+ import { LLMClient } from '@blockrun/llm';
+
+ const client = new LLMClient();
+
+ // Look up user profiles ($0.002/user, min $0.02)
+ const users = await client.xUserLookup(['elonmusk', 'blockaborr']);
+ for (const user of users.users) {
+ console.log(`@${user.userName}: ${user.followers} followers`);
+ }
+
+ // Get followers ($0.05/page, ~200 accounts)
+ let result = await client.xFollowers('blockaborr');
+ for (const f of result.followers) {
+ console.log(` @${f.screen_name}`);
+ }
+
+ // Paginate through all followers
+ while (result.has_next_page) {
+ result = await client.xFollowers('blockaborr', result.next_cursor);
+ }
+
+ // Get followings ($0.05/page)
+ const followings = await client.xFollowings('blockaborr');
+ ```
+
+ Works on both `LLMClient` (Base) and `SolanaLLMClient`.
+
+ ## Standalone Search
+
+ Search web, X/Twitter, and news without using a chat model:
+
+ ```typescript
+ import { LLMClient } from '@blockrun/llm';
+
+ const client = new LLMClient();
+
+ const result = await client.search('latest AI agent frameworks 2026');
+ console.log(result.summary);
+ for (const cite of result.citations ?? []) {
+ console.log(` - ${cite}`);
+ }
+
+ // Filter by source type and date range
+ const filtered = await client.search('BlockRun x402', {
+ sources: ['web', 'x'],
+ fromDate: '2026-01-01',
+ maxResults: 5,
+ });
+ ```
+
+ ## Image Editing (img2img)
+
+ Edit existing images with text prompts:
+
+ ```typescript
+ import { LLMClient } from '@blockrun/llm';
+
+ const client = new LLMClient();
+
+ const result = await client.imageEdit(
+ 'Make the sky purple and add northern lights',
+ 'data:image/png;base64,...', // base64 or URL
+ { model: 'openai/gpt-image-1' }
+ );
+ console.log(result.data[0].url);
+ ```
+
  ## Testnet Usage

  For development and testing without real USDC, use the testnet:
@@ -636,6 +770,32 @@ console.log(`Calls: ${summary.calls}`);
  console.log(`By model:`, summary.byModel);
  ```

+ ## Anthropic SDK Compatibility
+
+ Use the official Anthropic SDK interface with BlockRun's pay-per-request backend:
+
+ ```typescript
+ import { AnthropicClient } from '@blockrun/llm';
+
+ const client = new AnthropicClient(); // Auto-detects wallet, auto-pays
+
+ const response = await client.messages.create({
+ model: 'claude-sonnet-4-6',
+ max_tokens: 1024,
+ messages: [{ role: 'user', content: 'Hello!' }],
+ });
+ console.log(response.content[0].text);
+
+ // Any model works in Anthropic format
+ const gptResponse = await client.messages.create({
+ model: 'openai/gpt-5.4',
+ max_tokens: 1024,
+ messages: [{ role: 'user', content: 'Hello from GPT!' }],
+ });
+ ```
+
+ The `AnthropicClient` wraps the official `@anthropic-ai/sdk` with a custom fetch that handles x402 payment automatically. Your private key never leaves your machine.
+
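The custom-fetch pattern described above can be sketched in isolation. This is a simplified illustration of a retry-on-402 flow, not BlockRun's actual implementation — the real x402 handshake signs a structured payment payload with your wallet key, and `signPayment` and the `payment-required` header below are stand-ins:

```typescript
// Hypothetical sketch: a fetch wrapper that answers an HTTP 402 challenge
// by signing locally and retrying once with an X-PAYMENT header.
type Fetch = (url: string, init?: RequestInit) => Promise<Response>;

function withX402(
  baseFetch: Fetch,
  signPayment: (challenge: string) => string, // stand-in for wallet signing
): Fetch {
  return async (url, init = {}) => {
    const first = await baseFetch(url, init);
    if (first.status !== 402) return first; // no payment required

    // Server asked for payment: sign the challenge locally, retry once.
    const challenge = first.headers.get('payment-required') ?? '';
    const headers = {
      ...(init.headers as Record<string, string>),
      'X-PAYMENT': signPayment(challenge),
    };
    return baseFetch(url, { ...init, headers });
  };
}
```

Passing a wrapper like this as the SDK's `fetch` option is how payment can be made transparent to callers: the Anthropic-style `messages.create` call never sees the 402 round-trip.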
  ## Links

  - [Website](https://blockrun.ai)
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@blockrun/llm",
- "version": "1.3.1",
+ "version": "1.4.1",
  "type": "module",
  "description": "BlockRun LLM Gateway SDK - Pay-per-request AI via x402 on Base and Solana",
  "main": "dist/index.cjs",
@@ -18,8 +18,8 @@
  "README.md"
  ],
  "scripts": {
- "build": "tsup src/index.ts --format cjs,esm --dts --clean",
- "dev": "tsup src/index.ts --format cjs,esm --dts --watch",
+ "build": "tsup src/index.ts --format cjs,esm --dts --clean --external @anthropic-ai/sdk",
+ "dev": "tsup src/index.ts --format cjs,esm --dts --watch --external @anthropic-ai/sdk",
  "test": "vitest",
  "lint": "eslint src/",
  "typecheck": "tsc --noEmit"
@@ -53,6 +53,7 @@
  "viem": "^2.21.0"
  },
  "optionalDependencies": {
+ "@anthropic-ai/sdk": "^0.39.0",
  "@solana/spl-token": "^0.4.0",
  "@solana/web3.js": "^1.98.0"
  },