@tuteliq/sdk 2.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (105)
  1. package/LICENSE +21 -0
  2. package/README.md +1034 -0
  3. package/dist/cjs/client.js +1283 -0
  4. package/dist/cjs/client.js.map +1 -0
  5. package/dist/cjs/constants.js +179 -0
  6. package/dist/cjs/constants.js.map +1 -0
  7. package/dist/cjs/errors.js +129 -0
  8. package/dist/cjs/errors.js.map +1 -0
  9. package/dist/cjs/index.js +37 -0
  10. package/dist/cjs/index.js.map +1 -0
  11. package/dist/cjs/types/account.js +6 -0
  12. package/dist/cjs/types/account.js.map +1 -0
  13. package/dist/cjs/types/analysis.js +6 -0
  14. package/dist/cjs/types/analysis.js.map +1 -0
  15. package/dist/cjs/types/guidance.js +3 -0
  16. package/dist/cjs/types/guidance.js.map +1 -0
  17. package/dist/cjs/types/index.js +28 -0
  18. package/dist/cjs/types/index.js.map +1 -0
  19. package/dist/cjs/types/media.js +6 -0
  20. package/dist/cjs/types/media.js.map +1 -0
  21. package/dist/cjs/types/policy.js +6 -0
  22. package/dist/cjs/types/policy.js.map +1 -0
  23. package/dist/cjs/types/pricing.js +6 -0
  24. package/dist/cjs/types/pricing.js.map +1 -0
  25. package/dist/cjs/types/reports.js +3 -0
  26. package/dist/cjs/types/reports.js.map +1 -0
  27. package/dist/cjs/types/safety.js +7 -0
  28. package/dist/cjs/types/safety.js.map +1 -0
  29. package/dist/cjs/types/voice-stream.js +6 -0
  30. package/dist/cjs/types/voice-stream.js.map +1 -0
  31. package/dist/cjs/types/webhooks.js +6 -0
  32. package/dist/cjs/types/webhooks.js.map +1 -0
  33. package/dist/cjs/utils/retry.js +64 -0
  34. package/dist/cjs/utils/retry.js.map +1 -0
  35. package/dist/cjs/voice-stream.js +184 -0
  36. package/dist/cjs/voice-stream.js.map +1 -0
  37. package/dist/client.d.ts +643 -0
  38. package/dist/client.d.ts.map +1 -0
  39. package/dist/client.js +1280 -0
  40. package/dist/client.js.map +1 -0
  41. package/dist/constants.d.ts +141 -0
  42. package/dist/constants.d.ts.map +1 -0
  43. package/dist/constants.js +176 -0
  44. package/dist/constants.js.map +1 -0
  45. package/dist/errors.d.ts +81 -0
  46. package/dist/errors.d.ts.map +1 -0
  47. package/dist/errors.js +116 -0
  48. package/dist/errors.js.map +1 -0
  49. package/dist/index.d.ts +6 -0
  50. package/dist/index.d.ts.map +1 -0
  51. package/dist/index.js +9 -0
  52. package/dist/index.js.map +1 -0
  53. package/dist/types/account.d.ts +161 -0
  54. package/dist/types/account.d.ts.map +1 -0
  55. package/dist/types/account.js +5 -0
  56. package/dist/types/account.js.map +1 -0
  57. package/dist/types/analysis.d.ts +41 -0
  58. package/dist/types/analysis.d.ts.map +1 -0
  59. package/dist/types/analysis.js +4 -0
  60. package/dist/types/analysis.js.map +1 -0
  61. package/dist/types/guidance.d.ts +35 -0
  62. package/dist/types/guidance.d.ts.map +1 -0
  63. package/dist/types/guidance.js +2 -0
  64. package/dist/types/guidance.js.map +1 -0
  65. package/dist/types/index.d.ts +231 -0
  66. package/dist/types/index.d.ts.map +1 -0
  67. package/dist/types/index.js +12 -0
  68. package/dist/types/index.js.map +1 -0
  69. package/dist/types/media.d.ts +121 -0
  70. package/dist/types/media.d.ts.map +1 -0
  71. package/dist/types/media.js +4 -0
  72. package/dist/types/media.js.map +1 -0
  73. package/dist/types/policy.d.ts +54 -0
  74. package/dist/types/policy.d.ts.map +1 -0
  75. package/dist/types/policy.js +5 -0
  76. package/dist/types/policy.js.map +1 -0
  77. package/dist/types/pricing.d.ts +53 -0
  78. package/dist/types/pricing.d.ts.map +1 -0
  79. package/dist/types/pricing.js +5 -0
  80. package/dist/types/pricing.js.map +1 -0
  81. package/dist/types/reports.d.ts +44 -0
  82. package/dist/types/reports.d.ts.map +1 -0
  83. package/dist/types/reports.js +2 -0
  84. package/dist/types/reports.js.map +1 -0
  85. package/dist/types/safety.d.ts +151 -0
  86. package/dist/types/safety.d.ts.map +1 -0
  87. package/dist/types/safety.js +4 -0
  88. package/dist/types/safety.js.map +1 -0
  89. package/dist/types/voice-stream.d.ts +97 -0
  90. package/dist/types/voice-stream.d.ts.map +1 -0
  91. package/dist/types/voice-stream.js +5 -0
  92. package/dist/types/voice-stream.js.map +1 -0
  93. package/dist/types/webhooks.d.ts +110 -0
  94. package/dist/types/webhooks.d.ts.map +1 -0
  95. package/dist/types/webhooks.js +4 -0
  96. package/dist/types/webhooks.js.map +1 -0
  97. package/dist/utils/retry.d.ts +17 -0
  98. package/dist/utils/retry.d.ts.map +1 -0
  99. package/dist/utils/retry.js +61 -0
  100. package/dist/utils/retry.js.map +1 -0
  101. package/dist/voice-stream.d.ts +16 -0
  102. package/dist/voice-stream.d.ts.map +1 -0
  103. package/dist/voice-stream.js +148 -0
  104. package/dist/voice-stream.js.map +1 -0
  105. package/package.json +81 -0
package/README.md ADDED
@@ -0,0 +1,1034 @@
<p align="center">
  <img src="./assets/logo.png" alt="Tuteliq" width="200" />
</p>

<h1 align="center">@tuteliq/sdk</h1>

<p align="center">
  <strong>Official TypeScript/JavaScript SDK for the Tuteliq API</strong><br>
  AI-powered child safety analysis for modern applications
</p>

<p align="center">
  <a href="https://www.npmjs.com/package/@tuteliq/sdk"><img src="https://img.shields.io/npm/v/@tuteliq/sdk.svg" alt="npm version"></a>
  <a href="https://www.npmjs.com/package/@tuteliq/sdk"><img src="https://img.shields.io/npm/dm/@tuteliq/sdk.svg" alt="npm downloads"></a>
  <a href="https://github.com/Tuteliq/node/blob/main/LICENSE"><img src="https://img.shields.io/npm/l/@tuteliq/sdk.svg" alt="license"></a>
  <a href="https://github.com/Tuteliq/node/actions"><img src="https://img.shields.io/github/actions/workflow/status/Tuteliq/node/ci.yml" alt="build status"></a>
  <a href="https://bundlephobia.com/package/@tuteliq/sdk"><img src="https://img.shields.io/bundlephobia/minzip/@tuteliq/sdk" alt="bundle size"></a>
</p>

<p align="center">
  <a href="https://ai.tuteliq.ai/docs">Documentation</a> •
  <a href="https://tuteliq.ai/dashboard">Dashboard</a> •
  <a href="https://discord.gg/7kbTeRYRXD">Discord</a> •
  <a href="https://twitter.com/tuteliqdev">Twitter</a>
</p>

---

## Overview

Tuteliq provides AI-powered content analysis to help protect children in digital environments. This SDK makes it easy to integrate Tuteliq's capabilities into your Node.js, browser, or edge runtime applications.

### Key Features

- **Bullying Detection** — Identify verbal abuse, exclusion, and harassment patterns
- **Grooming Risk Analysis** — Detect predatory behavior across conversation threads
- **Unsafe Content Detection** — Flag self-harm, violence, hate speech, and age-inappropriate content
- **Voice Analysis** — Transcribe audio and run safety analysis on the transcript with timestamped segments
- **Image Analysis** — Visual safety classification with OCR text extraction and text safety analysis
- **Emotional State Analysis** — Understand emotional signals and concerning trends
- **Action Guidance** — Generate age-appropriate response recommendations
- **Incident Reports** — Create professional summaries for review

### Why Tuteliq?

| Feature | Description |
|---------|-------------|
| **Privacy-First** | Stateless analysis, no mandatory data storage |
| **Human-in-the-Loop** | Designed to assist, not replace, human judgment |
| **Clear Rationale** | Every decision includes explainable reasoning |
| **Safe Defaults** | Conservative escalation, no automated responses to children |

---

## Installation

```bash
# npm
npm install @tuteliq/sdk

# yarn
yarn add @tuteliq/sdk

# pnpm
pnpm add @tuteliq/sdk

# bun
bun add @tuteliq/sdk
```

### Requirements

- Node.js 18+ (or any runtime with `fetch` support)
- TypeScript 4.7+ (optional, for type definitions)

---

## Quick Start

```typescript
import { Tuteliq } from '@tuteliq/sdk'

const tuteliq = new Tuteliq(process.env.TUTELIQ_API_KEY!)

// Quick safety analysis
const result = await tuteliq.analyze("User message to analyze")

if (result.risk_level !== 'safe') {
  console.log('Risk detected:', result.risk_level)
  console.log('Summary:', result.summary)
  console.log('Action:', result.recommended_action)
}
```

---

## API Reference

### Initialization

```typescript
import { Tuteliq } from '@tuteliq/sdk'

// Simple
const tuteliq = new Tuteliq('your-api-key')

// With options
const tuteliq = new Tuteliq('your-api-key', {
  timeout: 30000,   // Request timeout in ms (default: 30 seconds)
  retries: 3,       // Retry attempts for transient failures (default: 3)
  retryDelay: 1000, // Initial retry delay in ms (default: 1000)
})
```

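The `retries` and `retryDelay` options control the SDK's built-in retry of transient failures. As a mental model, a minimal sketch of equivalent behavior is below; the exponential doubling is an assumption inferred from the option names, not confirmed by this README.

```typescript
// Sketch only: assumes exponential backoff doubling from retryDelay.
async function withRetries<T>(
  fn: () => Promise<T>,
  retries = 3,      // mirrors the `retries` option
  retryDelay = 1000 // mirrors the `retryDelay` option (ms)
): Promise<T> {
  let lastError: unknown
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn()
    } catch (err) {
      lastError = err
      if (attempt === retries) break
      // Double the wait between attempts: 1s, 2s, 4s, ...
      await new Promise(resolve => setTimeout(resolve, retryDelay * 2 ** attempt))
    }
  }
  throw lastError
}
```
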
---

### Tracking Fields

All detection methods accept optional tracking fields for correlation, multi-tenant routing, and custom metadata:

```typescript
const result = await tuteliq.detectBullying({
  content: "Nobody likes you, just leave",
  context: 'chat',

  // Optional tracking fields — echoed back in the response and included in webhooks
  external_id: 'msg_abc123',       // Your unique identifier for correlation
  customer_id: 'cust_xyz789',      // Your end-customer ID (B2B2C / multi-tenant)
  metadata: { channel: 'discord' } // Arbitrary key-value pairs
})

// Echoed back in response
console.log(result.external_id) // 'msg_abc123'
console.log(result.customer_id) // 'cust_xyz789'
console.log(result.metadata)    // { channel: 'discord' }
```

| Field | Type | Max Length | Description |
|-------|------|------------|-------------|
| `external_id` | `string?` | 255 | Your internal identifier (message ID, content ID, etc.) |
| `customer_id` | `string?` | 255 | Your end-customer identifier for multi-tenant / B2B2C scenarios |
| `metadata` | `object?` | — | Custom key-value pairs stored with the detection result |

These fields are:
- **Echoed** in the API response for easy matching
- **Included** in webhook payloads, enabling you to route alerts to the correct customer from a single webhook endpoint (see the routing sketch below)
- **Stored** with the incident in Firestore for audit trail

---

### Safety Detection

#### `detectBullying(input)`

Detects bullying and harassment in text content.

```typescript
const result = await tuteliq.detectBullying({
  content: "Nobody likes you, just leave",
  context: 'chat' // or { ageGroup: '11-13', relationship: 'classmates' }
})

console.log(result.is_bullying) // true
console.log(result.severity) // 'medium' | 'high' | 'critical'
console.log(result.bullying_type) // ['exclusion', 'verbal_abuse']
console.log(result.confidence) // 0.92
console.log(result.risk_score) // 0.75
console.log(result.rationale) // "Direct exclusion language..."
console.log(result.recommended_action) // 'flag_for_moderator'
```

#### `detectGrooming(input)`

Analyzes conversation threads for grooming patterns.

```typescript
const result = await tuteliq.detectGrooming({
  messages: [
    { role: 'adult', content: "This is our special secret" },
    { role: 'child', content: "Ok I won't tell anyone" }
  ],
  childAge: 12
})

console.log(result.grooming_risk) // 'none' | 'low' | 'medium' | 'high' | 'critical'
console.log(result.flags) // ['secrecy', 'isolation', 'trust_building']
console.log(result.confidence) // 0.89
console.log(result.risk_score) // 0.85
console.log(result.rationale) // "Multiple grooming indicators..."
console.log(result.recommended_action) // 'immediate_intervention'
```

#### `detectUnsafe(input)`

Identifies potentially dangerous or harmful content.

```typescript
const result = await tuteliq.detectUnsafe({
  content: "I don't want to be here anymore"
})

console.log(result.unsafe) // true
console.log(result.categories) // ['self_harm', 'crisis']
console.log(result.severity) // 'critical'
console.log(result.risk_score) // 0.9
console.log(result.rationale) // "Expression of suicidal ideation..."
console.log(result.recommended_action) // 'immediate_intervention'
```

#### `analyze(content)`

Quick combined analysis — runs bullying and unsafe detection in parallel.

> **Note**: This method fires one API call per detection type included (default: 2 calls for bullying + unsafe). Each call counts against your monthly quota. Use `include` to run only the checks you need.

```typescript
// Simple string input (costs 2 API calls: bullying + unsafe)
const result = await tuteliq.analyze("Message to check")

// With options — run only bullying (costs 1 API call)
const result = await tuteliq.analyze({
  content: "Message to check",
  context: 'social_media',
  include: ['bullying'] // Select which checks to run
})

console.log(result.risk_level) // 'safe' | 'low' | 'medium' | 'high' | 'critical'
console.log(result.risk_score) // 0.0 - 1.0
console.log(result.summary) // "Bullying detected (medium). Unsafe content: self_harm"
console.log(result.bullying) // Full bullying result (if included)
console.log(result.unsafe) // Full unsafe result (if included)
console.log(result.recommended_action) // Combined recommendation
```

---

### Media Analysis

#### `analyzeVoice(input)`

Transcribes audio and runs safety analysis on the transcript. Accepts `Buffer` or file data.

```typescript
import { readFileSync } from 'fs'

const result = await tuteliq.analyzeVoice({
  file: readFileSync('./recording.mp3'),
  filename: 'recording.mp3',
  analysisType: 'all', // 'bullying' | 'unsafe' | 'grooming' | 'emotions' | 'all'
  ageGroup: '11-13',
  fileId: 'my-file-ref-123', // Optional: echoed in response
})

console.log(result.transcription.text) // Full transcript
console.log(result.transcription.segments) // Timestamped segments
console.log(result.analysis?.bullying) // Bullying analysis on transcript
console.log(result.overall_risk_score) // 0.0 - 1.0
console.log(result.overall_severity) // 'none' | 'low' | 'medium' | 'high' | 'critical'
```

Supported audio formats: mp3, wav, m4a, ogg, flac, webm, mp4 (max 25MB).

#### `analyzeImage(input)`

Analyzes images for visual safety concerns and extracts text via OCR. If text is found, runs text safety analysis.

```typescript
import { readFileSync } from 'fs'

const result = await tuteliq.analyzeImage({
  file: readFileSync('./screenshot.png'),
  filename: 'screenshot.png',
  analysisType: 'all', // 'bullying' | 'unsafe' | 'emotions' | 'all'
  fileId: 'img-ref-456', // Optional: echoed in response
})

console.log(result.vision.extracted_text) // OCR text
console.log(result.vision.visual_severity) // Visual content severity
console.log(result.vision.visual_categories) // Visual harm categories
console.log(result.text_analysis?.bullying) // Text safety analysis (if OCR found text)
console.log(result.overall_risk_score) // 0.0 - 1.0
console.log(result.overall_severity) // Combined severity
```

Supported image formats: png, jpg, jpeg, gif, webp (max 10MB).

#### `voiceStream(config?, handlers?)`

Real-time voice streaming with live safety analysis over WebSocket. Requires the `ws` package:

```bash
npm install ws
```

```typescript
const session = tuteliq.voiceStream(
  { intervalSeconds: 10, analysisTypes: ['bullying', 'unsafe'] },
  {
    onTranscription: (e) => console.log('Transcript:', e.text),
    onAlert: (e) => console.log('Alert:', e.category, e.severity),
  }
)

// Send audio chunks as they arrive
session.sendAudio(audioBuffer)

// End session and get summary
const summary = await session.end()
console.log('Risk:', summary.overall_risk)
console.log('Score:', summary.overall_risk_score)
console.log('Full transcript:', summary.transcript)
```

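If the audio source is a recording rather than live capture, you can feed it through the session in chunks. Continuing from the `session` above, and assuming `sendAudio` accepts Node `Buffer` chunks (as `analyzeVoice` does for whole files):

```typescript
import { createReadStream } from 'fs'

// Stream a recording through the live-analysis session in 64 KB chunks
const stream = createReadStream('./recording.wav', { highWaterMark: 64 * 1024 })

for await (const chunk of stream) {
  session.sendAudio(chunk)
}

const summary = await session.end()
console.log('Risk:', summary.overall_risk)
```
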
---

### Emotional Analysis

#### `analyzeEmotions(input)`

Summarizes emotional signals in content or conversations.

```typescript
// Single content
const result = await tuteliq.analyzeEmotions({
  content: "I'm so stressed about everything lately"
})

// Or conversation history
const result = await tuteliq.analyzeEmotions({
  messages: [
    { sender: 'child', content: "I failed the test" },
    { sender: 'child', content: "Everyone else did fine" },
    { sender: 'child', content: "I'm so stupid" }
  ]
})

console.log(result.dominant_emotions) // ['anxiety', 'sadness', 'frustration']
console.log(result.emotion_scores) // { anxiety: 0.8, sadness: 0.6, ... }
console.log(result.trend) // 'improving' | 'stable' | 'worsening'
console.log(result.summary) // "Child expressing academic anxiety..."
console.log(result.recommended_followup) // "Check in about school stress..."
```

---

### Guidance & Reports

#### `getActionPlan(input)`

Generates age-appropriate action guidance.

```typescript
const plan = await tuteliq.getActionPlan({
  situation: 'Someone is spreading rumors about me at school',
  childAge: 12,
  audience: 'child', // 'child' | 'parent' | 'educator' | 'platform'
  severity: 'medium'
})

console.log(plan.audience) // 'child'
console.log(plan.steps) // ['Talk to a trusted adult', ...]
console.log(plan.tone) // 'supportive'
console.log(plan.reading_level) // 'grade_5'
```

#### `generateReport(input)`

Creates structured incident summaries for professional review.

```typescript
const report = await tuteliq.generateReport({
  messages: [
    { sender: 'user1', content: 'Threatening message' },
    { sender: 'child', content: 'Please stop' }
  ],
  childAge: 14,
  incident: {
    type: 'harassment',
    occurredAt: new Date()
  }
})

console.log(report.summary) // "Incident summary..."
console.log(report.risk_level) // 'low' | 'medium' | 'high' | 'critical'
console.log(report.categories) // ['bullying', 'threats']
console.log(report.recommended_next_steps) // ['Document incident', ...]
```

---

### Webhooks

#### `listWebhooks()`

List all webhooks for your account.

```typescript
const { webhooks } = await tuteliq.listWebhooks()
webhooks.forEach(w => console.log(w.name, w.is_active, w.events))
```

#### `createWebhook(input)`

Create a new webhook. The returned `secret` is only shown once — store it securely.

```typescript
import { WebhookEventType } from '@tuteliq/sdk'

const result = await tuteliq.createWebhook({
  name: 'Safety Alerts',
  url: 'https://example.com/webhooks/tuteliq',
  events: [
    WebhookEventType.INCIDENT_CRITICAL,
    WebhookEventType.GROOMING_DETECTED,
    WebhookEventType.SELF_HARM_DETECTED
  ]
})

console.log('Webhook ID:', result.id)
console.log('Secret:', result.secret) // Store this securely!
```

#### `updateWebhook(id, input)`

Update an existing webhook.

```typescript
await tuteliq.updateWebhook('webhook-123', {
  name: 'Updated Name',
  isActive: false,
  events: [WebhookEventType.INCIDENT_CRITICAL]
})
```

#### `deleteWebhook(id)`

```typescript
await tuteliq.deleteWebhook('webhook-123')
```

#### `testWebhook(id)`

Send a test payload to verify webhook delivery.

```typescript
const result = await tuteliq.testWebhook('webhook-123')
console.log('Success:', result.success)
console.log('Latency:', result.latency_ms, 'ms')
```

#### `regenerateWebhookSecret(id)`

Regenerate the signing secret. The old secret is immediately invalidated.

```typescript
const { secret } = await tuteliq.regenerateWebhookSecret('webhook-123')
// Update your verification logic with the new secret
```

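The secret exists so you can verify that incoming webhooks really come from Tuteliq. This README does not document the signing scheme, so treat the scheme below (hex-encoded HMAC-SHA256 over the raw request body, delivered in a signature header) as an assumption to confirm against the API docs:

```typescript
import { createHmac, timingSafeEqual } from 'crypto'

// Sketch only: the signature header name and the hex-encoded HMAC-SHA256
// scheme are assumptions, not confirmed by this README.
function verifyWebhookSignature(rawBody: string, signature: string, secret: string): boolean {
  const expected = createHmac('sha256', secret).update(rawBody).digest('hex')
  const a = Buffer.from(expected)
  const b = Buffer.from(signature)
  // Constant-time comparison to avoid leaking the expected signature
  return a.length === b.length && timingSafeEqual(a, b)
}
```
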
---

### Pricing

#### `getPricing()`

Get public pricing plans (no authentication required).

```typescript
const { plans } = await tuteliq.getPricing()
plans.forEach(p => console.log(p.name, p.price, p.features))
```

#### `getPricingDetails()`

Get detailed pricing plans with monthly/yearly prices and rate limits.

```typescript
const { plans } = await tuteliq.getPricingDetails()
plans.forEach(p => {
  console.log(p.name, `$${p.price_monthly}/mo`, `${p.api_calls_per_month} calls/mo`)
})
```

---

### Policy Configuration

#### `getPolicy()` / `setPolicy(config)`

Customize safety thresholds for your application.

```typescript
// Get current policy
const policy = await tuteliq.getPolicy()

// Update policy
await tuteliq.setPolicy({
  bullying: {
    enabled: true,
    minRiskScoreToFlag: 0.5,
    minRiskScoreToBlock: 0.8
  },
  selfHarm: {
    enabled: true,
    alwaysEscalate: true
  }
})
```

---

### Account Management (GDPR)

#### `deleteAccountData()`

Permanently delete all data associated with your account (Right to Erasure, GDPR Article 17).

```typescript
const result = await tuteliq.deleteAccountData()

console.log(result.message) // "All user data has been deleted"
console.log(result.deleted_count) // 42
```

#### `exportAccountData()`

Export all data associated with your account as JSON (Right to Data Portability, GDPR Article 20).

```typescript
const data = await tuteliq.exportAccountData()

console.log(data.userId) // 'user_123'
console.log(data.exportedAt) // '2026-02-11T...'
console.log(Object.keys(data.data)) // ['api_keys', 'incidents', ...]
console.log(data.data.incidents.length) // 5
```

---

## Usage Tracking

The SDK automatically captures usage metadata from API responses:

```typescript
const result = await tuteliq.detectBullying({ content: 'test' })

// Each response includes the number of credits consumed
console.log(result.credits_used) // 1

// Access cumulative usage stats (from response headers)
console.log(tuteliq.usage)
// { limit: 10000, used: 5234, remaining: 4766 }

// Access request metadata
console.log(tuteliq.lastRequestId) // 'req_1a2b3c...'
console.log(tuteliq.lastLatencyMs) // 145
```

### Weighted Credits

Different endpoints consume different amounts of credits based on complexity:

| Method | Credits | Notes |
|--------|---------|-------|
| `detectBullying()` | 1 | Single text analysis |
| `detectUnsafe()` | 1 | Single text analysis |
| `detectGrooming()` | 1 per 10 msgs | `ceil(messages / 10)`, min 1 |
| `analyzeEmotions()` | 1 per 10 msgs | `ceil(messages / 10)`, min 1 |
| `getActionPlan()` | 2 | Longer generation |
| `generateReport()` | 3 | Structured output |
| `analyzeVoice()` | 5 | Transcription + analysis |
| `analyzeImage()` | 3 | Vision + OCR + analysis |

The `credits_used` field is included in every response body. Credit balance is also available via the `X-Credits-Remaining` response header.

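As a worked example of the weighting: a `detectGrooming()` call with 25 messages costs `ceil(25 / 10) = 3` credits. To budget batches up front, a small helper can encode the table above (the table is the source of truth; the helper itself is just a convenience sketch):

```typescript
// Estimate credits for a call, per the weighting table above
function estimateCredits(method: string, messageCount = 1): number {
  switch (method) {
    case 'detectBullying':
    case 'detectUnsafe':
      return 1
    case 'detectGrooming':
    case 'analyzeEmotions':
      return Math.max(1, Math.ceil(messageCount / 10))
    case 'getActionPlan':
      return 2
    case 'generateReport':
    case 'analyzeImage':
      return 3
    case 'analyzeVoice':
      return 5
    default:
      return 1
  }
}

estimateCredits('detectGrooming', 25) // 3
```
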
### Usage API Methods

#### `getUsageSummary()`

Get usage summary for the current billing period.

```typescript
const summary = await tuteliq.getUsageSummary()
console.log('Used:', summary.messages_used)
console.log('Limit:', summary.message_limit)
console.log('Percent:', summary.usage_percentage)
```

#### `getQuota()`

Get current rate limit quota status.

```typescript
const quota = await tuteliq.getQuota()
console.log('Rate limit:', quota.rate_limit, '/min')
console.log('Remaining:', quota.remaining)
```

#### `getUsageHistory(days?)`

Get daily usage history for the past N days (1-30, defaults to 7).

```typescript
const { days } = await tuteliq.getUsageHistory(14)
days.forEach(d => console.log(d.date, d.total_requests, d.success_requests))
```

#### `getUsageByTool(date?)`

Get usage broken down by tool/endpoint.

```typescript
const result = await tuteliq.getUsageByTool()
console.log('Tools:', result.tools) // { detectBullying: 150, detectGrooming: 45, ... }
console.log('Endpoints:', result.endpoints) // { '/api/v1/safety/bullying': 150, ... }
```

#### `getUsageMonthly()`

Get monthly usage, billing info, and upgrade recommendations.

```typescript
const monthly = await tuteliq.getUsageMonthly()
console.log('Tier:', monthly.tier_display_name)
console.log('Used:', monthly.usage.used, '/', monthly.usage.limit)
console.log('Days left:', monthly.billing.days_remaining)

if (monthly.recommendations?.should_upgrade) {
  console.log('Consider upgrading to:', monthly.recommendations.suggested_tier)
}
```

---

## Error Handling

The SDK provides typed error classes for different failure scenarios:

```typescript
import {
  Tuteliq,
  TuteliqError,
  AuthenticationError,
  RateLimitError,
  QuotaExceededError,
  TierAccessError,
  ValidationError,
  NotFoundError,
  ServerError,
  TimeoutError,
  NetworkError,
} from '@tuteliq/sdk'

try {
  const result = await tuteliq.detectBullying({ content: 'test' })
} catch (error) {
  if (error instanceof AuthenticationError) {
    // 401 - Invalid or missing API key
    console.error('Check your API key')
  } else if (error instanceof TierAccessError) {
    // 403 - Endpoint not available on your plan
    console.error('Upgrade your plan:', error.suggestion)
  } else if (error instanceof QuotaExceededError) {
    // 429 - Monthly quota exceeded
    console.error('Quota exceeded, upgrade or buy credits')
  } else if (error instanceof RateLimitError) {
    // 429 - Too many requests per minute
    console.error('Rate limited, retry after:', error.retryAfter)
  } else if (error instanceof ValidationError) {
    // 400 - Invalid request parameters
    console.error('Invalid input:', error.details)
  } else if (error instanceof NotFoundError) {
    // 404 - Resource not found
    console.error('Resource not found')
  } else if (error instanceof ServerError) {
    // 5xx - Server error
    console.error('Server error, try again later')
  } else if (error instanceof TimeoutError) {
    // Request timed out
    console.error('Request timed out')
  } else if (error instanceof NetworkError) {
    // Network connectivity issue
    console.error('Check your connection')
  } else if (error instanceof TuteliqError) {
    // Generic SDK error
    console.error('Error:', error.message)
  }
}
```

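The SDK's built-in retries cover transient failures, but for 429s you may want app-level backoff that honors the server-provided delay, for example around batch jobs. A sketch, assuming `retryAfter` is expressed in seconds (verify the unit against the API docs):

```typescript
import { RateLimitError } from '@tuteliq/sdk'

// Retry a call when rate-limited, waiting out the server-provided delay.
// Assumes retryAfter is in seconds; falls back to 60s when absent.
async function withRateLimitRetry<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn()
    } catch (err) {
      if (!(err instanceof RateLimitError) || attempt >= maxRetries) throw err
      const waitMs = (err.retryAfter ?? 60) * 1000
      await new Promise(resolve => setTimeout(resolve, waitMs))
    }
  }
}

const result = await withRateLimitRetry(() =>
  tuteliq.detectBullying({ content: 'test' })
)
```
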
---

## TypeScript Support

Full TypeScript support with comprehensive type definitions:

```typescript
import { Tuteliq } from '@tuteliq/sdk'
import type {
  // Safety Results
  BullyingResult,
  GroomingResult,
  UnsafeResult,
  EmotionsResult,
  ActionPlanResult,
  ReportResult,
  AnalyzeResult,

  // Media Results
  VoiceAnalysisResult,
  ImageAnalysisResult,
  VisionResult,
  TranscriptionResult,
  TranscriptionSegment,

  // Webhook Types
  Webhook,
  WebhookListResult,
  CreateWebhookInput,
  CreateWebhookResult,
  UpdateWebhookInput,
  TestWebhookResult,
  RegenerateSecretResult,

  // Pricing Types
  PricingResult,
  PricingDetailsResult,

  // Usage Types
  UsageHistoryResult,
  UsageByToolResult,
  UsageMonthlyResult,

  // Inputs
  DetectBullyingInput,
  DetectGroomingInput,
  DetectUnsafeInput,
  AnalyzeEmotionsInput,
  GetActionPlanInput,
  GenerateReportInput,
  AnalyzeVoiceInput,
  AnalyzeImageInput,

  // Account (GDPR)
  AccountDeletionResult,
  AccountExportResult,

  // Utilities
  Usage,
  ContextInput,
  GroomingMessage,
  EmotionMessage,
  ReportMessage,
} from '@tuteliq/sdk'
```

### Using Enums

The SDK exports enums for type-safe comparisons:

```typescript
import {
  Severity,
  GroomingRisk,
  RiskLevel,
  RiskCategory,
  AnalysisType,
  ContentSeverity,
  EmotionTrend,
  WebhookEventType,
  IncidentStatus,
  ErrorCode,
} from '@tuteliq/sdk'

// Type-safe severity checks
if (result.severity === Severity.CRITICAL) {
  // Handle critical severity
}

// Grooming risk comparisons
if (result.grooming_risk === GroomingRisk.HIGH) {
  // Handle high grooming risk
}

// Error code handling
if (error.code === ErrorCode.RATE_LIMIT_EXCEEDED) {
  // Handle rate limiting
}
```

You can also import enums separately:

```typescript
import { Severity, RiskCategory } from '@tuteliq/sdk/constants'
```

---

## Examples

### Next.js Integration (App Router)

Use a server-side API route to keep your API key secure:

```typescript
// app/api/safety/route.ts (server-side — API key stays on the server)
import { Tuteliq } from '@tuteliq/sdk'
import { NextResponse } from 'next/server'

const tuteliq = new Tuteliq(process.env.TUTELIQ_API_KEY!)

export async function POST(req: Request) {
  const { message } = await req.json()
  const result = await tuteliq.analyze(message)
  return NextResponse.json(result)
}
```

```typescript
// app/components/MessageInput.tsx (client-side — no API key exposed)
'use client'
import { useState } from 'react'

function MessageInput() {
  const [message, setMessage] = useState('')
  const [warning, setWarning] = useState<string | null>(null)

  const handleSubmit = async () => {
    const res = await fetch('/api/safety', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ message }),
    })
    const result = await res.json()

    if (result.risk_level !== 'safe') {
      setWarning(result.summary)
      return
    }

    // Submit message...
  }

  return (
    <div>
      <input value={message} onChange={e => setMessage(e.target.value)} />
      {warning && <p className="warning">{warning}</p>}
      <button onClick={handleSubmit}>Send</button>
    </div>
  )
}
```

### Express Middleware

```typescript
import { Tuteliq, RateLimitError } from '@tuteliq/sdk'
import express from 'express'

const app = express()
app.use(express.json())

const tuteliq = new Tuteliq(process.env.TUTELIQ_API_KEY!)

const safetyMiddleware = async (req, res, next) => {
  const { message } = req.body

  try {
    const result = await tuteliq.analyze(message)

    if (result.risk_level === 'critical') {
      return res.status(400).json({
        error: 'Message blocked for safety reasons',
        details: result.summary
      })
    }

    req.safetyResult = result
    next()
  } catch (error) {
    if (error instanceof RateLimitError) {
      return res.status(429).json({ error: 'Too many requests' })
    }
    next(error)
  }
}

app.post('/messages', safetyMiddleware, (req, res) => {
  // Message passed safety check
})
```

### Batch Processing

```typescript
const messages = ['message1', 'message2', 'message3']

const results = await Promise.all(
  messages.map(content => tuteliq.analyze(content))
)

const flagged = results.filter(r => r.risk_level !== 'safe')
console.log(`${flagged.length} messages flagged for review`)
```

---

## Browser Support

The SDK works in browsers that support the Fetch API:

```html
<script type="module">
  import { Tuteliq } from 'https://esm.sh/@tuteliq/sdk'

  const tuteliq = new Tuteliq('your-api-key')
  const result = await tuteliq.analyze('Hello world')
</script>
```

> **Note**: Never expose your API key in client-side code for production applications. Use a backend proxy to protect your credentials.

---

## Contributing

We welcome contributions! Please see our [Contributing Guide](https://github.com/Tuteliq/node/blob/main/CONTRIBUTING.md) for details.

```bash
# Clone the repo
git clone https://github.com/Tuteliq/node.git
cd node

# Install dependencies
npm install

# Run tests
npm test

# Build
npm run build
```

---

## API Documentation

- **Base URL**: `https://api.tuteliq.ai`
- **Swagger UI**: [api.tuteliq.ai/docs](https://api.tuteliq.ai/docs)
- **OpenAPI JSON**: [api.tuteliq.ai/docs/json](https://api.tuteliq.ai/docs/json)

### Rate Limits

Rate limits depend on your subscription tier:

| Plan | Price | Credits/month | Rate Limit | Features |
|------|-------|---------------|------------|----------|
| **Starter** | Free | 1,000 | 60/min | 3 Safety endpoints, 1 API key, Community support |
| **Indie** | $29/mo | 10,000 | 300/min | All endpoints, 2 API keys, Dashboard analytics |
| **Pro** | $99/mo | 50,000 | 1,000/min | 5 API keys, Webhooks, Custom policy, Priority latency |
| **Business** | $349/mo | 200,000 | 5,000/min | 20 API keys, SSO, SLA 99.9%, HIPAA/SOC2 docs |
| **Enterprise** | Custom | Unlimited | Custom | Dedicated infra, 24/7 support, SCIM, On-premise |

**Credit Packs** (available to all tiers): 5K credits/$15 | 25K credits/$59 | 100K credits/$199

> **Note:** Credits are weighted by endpoint complexity. A simple text check costs 1 credit, while voice analysis costs 5. See the [Weighted Credits](#weighted-credits) table above.

---

## Best Practices

### Message Batching

The **bullying** and **unsafe content** methods analyze a single `content` field per request. If your platform receives messages one at a time (e.g., a chat app), concatenate a **sliding window of recent messages** into one string before calling the API. Single words or short fragments lack context for accurate detection and can be exploited to bypass safety filters.

```typescript
// Bad — each message analyzed in isolation, easily evaded
for (const msg of messages) {
  await tuteliq.detectBullying({ content: msg });
}

// Good — recent messages analyzed together
const window = recentMessages.slice(-10).join(' ');
await tuteliq.detectBullying({ content: window });
```

The **grooming** method already accepts a `messages[]` array and analyzes the full conversation in context.

### PII Redaction

PII redaction is **enabled by default** on the Tuteliq API. It automatically strips emails, phone numbers, URLs, social handles, IPs, and other PII from detection summaries and webhook payloads. The original text is still analyzed in full — only stored outputs are scrubbed. Set `PII_REDACTION_ENABLED=false` to disable.

---

## Support

- **Documentation**: [ai.tuteliq.ai/docs](https://ai.tuteliq.ai/docs)
- **Discord**: [discord.gg/7kbTeRYRXD](https://discord.gg/7kbTeRYRXD)
- **Email**: support@tuteliq.ai
- **Issues**: [GitHub Issues](https://github.com/Tuteliq/node/issues)

---

## License

MIT License - see [LICENSE](LICENSE) for details.

---

## The Mission: Why This Matters

Before you decide to contribute or sponsor, read these numbers. They are not projections. They are not estimates from a pitch deck. They are verified statistics from the University of Edinburgh, UNICEF, NCMEC, and Interpol.

- **302 million** children are victims of online sexual exploitation and abuse every year. That is **10 children every second**. *(Childlight / University of Edinburgh, 2024)*
- **1 in 8** children globally have been victims of non-consensual sexual imagery in the past year. *(Childlight, 2024)*
- **370 million** girls and women alive today experienced rape or sexual assault in childhood. An estimated **240–310 million** boys and men experienced the same. *(UNICEF, 2024)*
- **29.2 million** incidents of suspected child sexual exploitation were reported to NCMEC's CyberTipline in 2024 alone — containing **62.9 million files** (images, videos). *(NCMEC, 2025)*
- **546,000** reports of online enticement (adults grooming children) in 2024 — a **192% increase** from the year before. *(NCMEC, 2025)*
- **1,325% increase** in AI-generated child sexual abuse material reports between 2023 and 2024. The technology that should protect children is being weaponized against them. *(NCMEC, 2025)*
- **100 sextortion reports per day** to NCMEC. Since 2021, at least **36 teenage boys** have taken their own lives because they were victimized by sextortion. *(NCMEC, 2025)*
- **84%** of reports resolve outside the United States. This is not an American problem. This is a **global emergency**. *(NCMEC, 2025)*

End-to-end encryption is making platforms blind. In 2024, platforms reported **7 million fewer incidents** than the year before — not because abuse stopped, but because they can no longer see it. The tools that catch known images are failing. The systems that rely on human moderators are overwhelmed. The technology to detect behavior — grooming patterns, escalation, manipulation — in real-time text conversations **exists right now**. It is running at [api.tuteliq.ai](https://api.tuteliq.ai).

The question is not whether this technology is possible. The question is whether we build the company to put it everywhere it needs to be.

**Every second we wait, another child is harmed.**

We have the technology. We need the support.

If this mission matters to you, consider [sponsoring our open-source work](https://github.com/sponsors/Tuteliq) so we can keep building the tools that protect children — and keep them free and accessible for everyone.

---

<p align="center">
  <sub>Built with care for child safety by the <a href="https://tuteliq.ai">Tuteliq</a> team</sub>
</p>