@safenest/sdk 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (47)
  1. package/LICENSE +21 -0
  2. package/README.md +601 -0
  3. package/dist/client.d.ts +228 -0
  4. package/dist/client.d.ts.map +1 -0
  5. package/dist/client.js +536 -0
  6. package/dist/client.js.map +1 -0
  7. package/dist/constants.d.ts +127 -0
  8. package/dist/constants.d.ts.map +1 -0
  9. package/dist/constants.js +160 -0
  10. package/dist/constants.js.map +1 -0
  11. package/dist/errors.d.ts +52 -0
  12. package/dist/errors.d.ts.map +1 -0
  13. package/dist/errors.js +87 -0
  14. package/dist/errors.js.map +1 -0
  15. package/dist/index.d.ts +5 -0
  16. package/dist/index.d.ts.map +1 -0
  17. package/dist/index.js +7 -0
  18. package/dist/index.js.map +1 -0
  19. package/dist/types/analysis.d.ts +39 -0
  20. package/dist/types/analysis.d.ts.map +1 -0
  21. package/dist/types/analysis.js +4 -0
  22. package/dist/types/analysis.js.map +1 -0
  23. package/dist/types/guidance.d.ts +33 -0
  24. package/dist/types/guidance.d.ts.map +1 -0
  25. package/dist/types/guidance.js +2 -0
  26. package/dist/types/guidance.js.map +1 -0
  27. package/dist/types/index.d.ts +55 -0
  28. package/dist/types/index.d.ts.map +1 -0
  29. package/dist/types/index.js +7 -0
  30. package/dist/types/index.js.map +1 -0
  31. package/dist/types/policy.d.ts +54 -0
  32. package/dist/types/policy.d.ts.map +1 -0
  33. package/dist/types/policy.js +5 -0
  34. package/dist/types/policy.js.map +1 -0
  35. package/dist/types/reports.d.ts +42 -0
  36. package/dist/types/reports.d.ts.map +1 -0
  37. package/dist/types/reports.js +2 -0
  38. package/dist/types/reports.js.map +1 -0
  39. package/dist/types/safety.d.ts +135 -0
  40. package/dist/types/safety.d.ts.map +1 -0
  41. package/dist/types/safety.js +4 -0
  42. package/dist/types/safety.js.map +1 -0
  43. package/dist/utils/retry.d.ts +17 -0
  44. package/dist/utils/retry.d.ts.map +1 -0
  45. package/dist/utils/retry.js +61 -0
  46. package/dist/utils/retry.js.map +1 -0
  47. package/package.json +67 -0
package/LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2024-present SafeNest AB
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
package/README.md ADDED
@@ -0,0 +1,601 @@
+ <p align="center">
+   <img src="https://safenest.dev/logo.svg" alt="SafeNest" width="120" />
+ </p>
+
+ <h1 align="center">@safenest/sdk</h1>
+
+ <p align="center">
+   <strong>Official TypeScript/JavaScript SDK for the SafeNest API</strong><br>
+   AI-powered child safety analysis for modern applications
+ </p>
+
+ <p align="center">
+   <a href="https://www.npmjs.com/package/@safenest/sdk"><img src="https://img.shields.io/npm/v/@safenest/sdk.svg" alt="npm version"></a>
+   <a href="https://www.npmjs.com/package/@safenest/sdk"><img src="https://img.shields.io/npm/dm/@safenest/sdk.svg" alt="npm downloads"></a>
+   <a href="https://github.com/safenest/sdk-typescript/blob/main/LICENSE"><img src="https://img.shields.io/npm/l/@safenest/sdk.svg" alt="license"></a>
+   <a href="https://github.com/safenest/sdk-typescript/actions"><img src="https://img.shields.io/github/actions/workflow/status/safenest/sdk-typescript/ci.yml" alt="build status"></a>
+   <a href="https://bundlephobia.com/package/@safenest/sdk"><img src="https://img.shields.io/bundlephobia/minzip/@safenest/sdk" alt="bundle size"></a>
+ </p>
+
+ <p align="center">
+   <a href="https://docs.safenest.dev">Documentation</a> •
+   <a href="https://safenest.dev/dashboard">Dashboard</a> •
+   <a href="https://discord.gg/safenest">Discord</a> •
+   <a href="https://twitter.com/safenestdev">Twitter</a>
+ </p>
+
+ ---
+
+ ## Overview
+
+ SafeNest provides AI-powered content analysis to help protect children in digital environments. This SDK makes it easy to integrate SafeNest's capabilities into your Node.js, browser, or edge runtime applications.
+
+ ### Key Features
+
+ - **Bullying Detection** — Identify verbal abuse, exclusion, and harassment patterns
+ - **Grooming Risk Analysis** — Detect predatory behavior across conversation threads
+ - **Unsafe Content Detection** — Flag self-harm, violence, hate speech, and age-inappropriate content
+ - **Emotional State Analysis** — Understand emotional signals and concerning trends
+ - **Action Guidance** — Generate age-appropriate response recommendations
+ - **Incident Reports** — Create professional summaries for review
+
+ ### Why SafeNest?
+
+ | Feature | Description |
+ |---------|-------------|
+ | **Privacy-First** | Stateless analysis, no mandatory data storage |
+ | **Human-in-the-Loop** | Designed to assist, not replace, human judgment |
+ | **Clear Rationale** | Every decision includes explainable reasoning |
+ | **Safe Defaults** | Conservative escalation, no automated responses to children |
+
+ ---
+
+ ## Installation
+
+ ```bash
+ # npm
+ npm install @safenest/sdk
+
+ # yarn
+ yarn add @safenest/sdk
+
+ # pnpm
+ pnpm add @safenest/sdk
+
+ # bun
+ bun add @safenest/sdk
+ ```
+
+ ### Requirements
+
+ - Node.js 18+ (or any runtime with `fetch` support)
+ - TypeScript 4.7+ (optional, for type definitions)
+
+ ---
+
+ ## Quick Start
+
+ ```typescript
+ import { SafeNest } from '@safenest/sdk'
+
+ const safenest = new SafeNest(process.env.SAFENEST_API_KEY)
+
+ // Quick safety analysis
+ const result = await safenest.analyze("User message to analyze")
+
+ if (result.risk_level !== 'safe') {
+   console.log('Risk detected:', result.risk_level)
+   console.log('Summary:', result.summary)
+   console.log('Action:', result.recommended_action)
+ }
+ ```
+
+ ---
+
+ ## API Reference
+
+ ### Initialization
+
+ ```typescript
+ import { SafeNest } from '@safenest/sdk'
+
+ // Simple
+ const safenest = new SafeNest('your-api-key')
+
+ // With options
+ const safenestWithOptions = new SafeNest('your-api-key', {
+   timeout: 30000,   // Request timeout in ms (default: 30 seconds)
+   retries: 3,       // Retry attempts for transient failures (default: 3)
+   retryDelay: 1000, // Initial retry delay in ms (default: 1000)
+ })
+ ```
+
+ ---
+
+ ### Safety Detection
+
+ #### `detectBullying(input)`
+
+ Detects bullying and harassment in text content.
+
+ ```typescript
+ const result = await safenest.detectBullying({
+   content: "Nobody likes you, just leave",
+   context: 'chat' // or { ageGroup: '11-13', relationship: 'classmates' }
+ })
+
+ console.log(result.is_bullying) // true
+ console.log(result.severity) // 'medium' | 'high' | 'critical'
+ console.log(result.bullying_type) // ['exclusion', 'verbal_abuse']
+ console.log(result.confidence) // 0.92
+ console.log(result.risk_score) // 0.75
+ console.log(result.rationale) // "Direct exclusion language..."
+ console.log(result.recommended_action) // 'flag_for_moderator'
+ ```
+
+ #### `detectGrooming(input)`
+
+ Analyzes conversation threads for grooming patterns.
+
+ ```typescript
+ const result = await safenest.detectGrooming({
+   messages: [
+     { role: 'adult', content: "This is our special secret" },
+     { role: 'child', content: "Ok I won't tell anyone" }
+   ],
+   childAge: 12
+ })
+
+ console.log(result.grooming_risk) // 'none' | 'low' | 'medium' | 'high' | 'critical'
+ console.log(result.flags) // ['secrecy', 'isolation', 'trust_building']
+ console.log(result.confidence) // 0.89
+ console.log(result.risk_score) // 0.85
+ console.log(result.rationale) // "Multiple grooming indicators..."
+ console.log(result.recommended_action) // 'immediate_intervention'
+ ```
+
+ #### `detectUnsafe(input)`
+
+ Identifies potentially dangerous or harmful content.
+
+ ```typescript
+ const result = await safenest.detectUnsafe({
+   content: "I don't want to be here anymore"
+ })
+
+ console.log(result.unsafe) // true
+ console.log(result.categories) // ['self_harm', 'crisis']
+ console.log(result.severity) // 'critical'
+ console.log(result.risk_score) // 0.9
+ console.log(result.rationale) // "Expression of suicidal ideation..."
+ console.log(result.recommended_action) // 'immediate_intervention'
+ ```
+
+ #### `analyze(content)`
+
+ Quick combined analysis — runs bullying and unsafe detection in parallel.
+
+ ```typescript
+ // Simple string input
+ let result = await safenest.analyze("Message to check")
+
+ // With options
+ result = await safenest.analyze({
+   content: "Message to check",
+   context: 'social_media',
+   include: ['bullying', 'unsafe'] // Select which checks to run
+ })
+
+ console.log(result.risk_level) // 'safe' | 'low' | 'medium' | 'high' | 'critical'
+ console.log(result.risk_score) // 0.0 - 1.0
+ console.log(result.summary) // "Bullying detected (medium). Unsafe content: self_harm"
+ console.log(result.bullying) // Full bullying result (if included)
+ console.log(result.unsafe) // Full unsafe result (if included)
+ console.log(result.recommended_action) // Combined recommendation
+ ```
+
+ ---
+
+ ### Emotional Analysis
+
+ #### `analyzeEmotions(input)`
+
+ Summarizes emotional signals in content or conversations.
+
+ ```typescript
+ // Single content
+ let result = await safenest.analyzeEmotions({
+   content: "I'm so stressed about everything lately"
+ })
+
+ // Or conversation history
+ result = await safenest.analyzeEmotions({
+   messages: [
+     { sender: 'child', content: "I failed the test" },
+     { sender: 'child', content: "Everyone else did fine" },
+     { sender: 'child', content: "I'm so stupid" }
+   ]
+ })
+
+ console.log(result.dominant_emotions) // ['anxiety', 'sadness', 'frustration']
+ console.log(result.emotion_scores) // { anxiety: 0.8, sadness: 0.6, ... }
+ console.log(result.trend) // 'improving' | 'stable' | 'worsening'
+ console.log(result.summary) // "Child expressing academic anxiety..."
+ console.log(result.recommended_followup) // "Check in about school stress..."
+ ```
+
+ ---
+
+ ### Guidance & Reports
+
+ #### `getActionPlan(input)`
+
+ Generates age-appropriate action guidance.
+
+ ```typescript
+ const plan = await safenest.getActionPlan({
+   situation: 'Someone is spreading rumors about me at school',
+   childAge: 12,
+   audience: 'child', // 'child' | 'parent' | 'educator' | 'platform'
+   severity: 'medium'
+ })
+
+ console.log(plan.audience) // 'child'
+ console.log(plan.steps) // ['Talk to a trusted adult', ...]
+ console.log(plan.tone) // 'supportive'
+ console.log(plan.reading_level) // 'grade_5'
+ ```
+
+ #### `generateReport(input)`
+
+ Creates structured incident summaries for professional review.
+
+ ```typescript
+ const report = await safenest.generateReport({
+   messages: [
+     { sender: 'user1', content: 'Threatening message' },
+     { sender: 'child', content: 'Please stop' }
+   ],
+   childAge: 14,
+   incident: {
+     type: 'harassment',
+     occurredAt: new Date()
+   }
+ })
+
+ console.log(report.summary) // "Incident summary..."
+ console.log(report.risk_level) // 'low' | 'medium' | 'high' | 'critical'
+ console.log(report.categories) // ['bullying', 'threats']
+ console.log(report.recommended_next_steps) // ['Document incident', ...]
+ ```
+
+ ---
+
+ ### Policy Configuration
+
+ #### `getPolicy()` / `setPolicy(config)`
+
+ Customize safety thresholds for your application.
+
+ ```typescript
+ // Get current policy
+ const policy = await safenest.getPolicy()
+
+ // Update policy
+ await safenest.setPolicy({
+   bullying: {
+     enabled: true,
+     minRiskScoreToFlag: 0.5,
+     minRiskScoreToBlock: 0.8
+   },
+   selfHarm: {
+     enabled: true,
+     alwaysEscalate: true
+   }
+ })
+ ```
+
+ ---
+
+ ## Usage Tracking
+
+ The SDK automatically captures usage metadata from API responses:
+
+ ```typescript
+ const result = await safenest.detectBullying({ content: 'test' })
+
+ // Access usage stats
+ console.log(safenest.usage)
+ // { limit: 10000, used: 5234, remaining: 4766 }
+
+ // Access request metadata
+ console.log(safenest.lastRequestId) // 'req_1a2b3c...'
+ console.log(safenest.lastLatencyMs) // 145
+ ```
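+
+ For batch or scheduled jobs you can check this metadata between calls to avoid exhausting your monthly quota. A minimal sketch that relies only on the `safenest.usage` shape shown above; the `QUOTA_FLOOR` threshold and the `pendingMessages` list are illustrative, not part of the SDK:
+
+ ```typescript
+ // Stop a batch early when the remaining monthly quota runs low.
+ const QUOTA_FLOOR = 100 // arbitrary safety margin
+ const pendingMessages = ['first message', 'second message']
+ const flaggedSummaries: string[] = []
+
+ for (const content of pendingMessages) {
+   const result = await safenest.analyze(content)
+   if (result.risk_level !== 'safe') {
+     flaggedSummaries.push(result.summary)
+   }
+   // safenest.usage is refreshed after each response (see above)
+   if (safenest.usage && safenest.usage.remaining < QUOTA_FLOOR) {
+     console.warn(`Only ${safenest.usage.remaining} calls left this month; pausing batch`)
+     break
+   }
+ }
+ ```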
+
+ ---
+
+ ## Error Handling
+
+ The SDK provides typed error classes for different failure scenarios:
+
+ ```typescript
+ import {
+   SafeNest,
+   SafeNestError,
+   AuthenticationError,
+   RateLimitError,
+   ValidationError,
+   NotFoundError,
+   ServerError,
+   TimeoutError,
+   NetworkError,
+ } from '@safenest/sdk'
+
+ try {
+   const result = await safenest.detectBullying({ content: 'test' })
+ } catch (error) {
+   if (error instanceof AuthenticationError) {
+     // 401 - Invalid or missing API key
+     console.error('Check your API key')
+   } else if (error instanceof RateLimitError) {
+     // 429 - Too many requests
+     console.error('Rate limited, slow down')
+   } else if (error instanceof ValidationError) {
+     // 400 - Invalid request parameters
+     console.error('Invalid input:', error.details)
+   } else if (error instanceof NotFoundError) {
+     // 404 - Resource not found
+     console.error('Resource not found')
+   } else if (error instanceof ServerError) {
+     // 5xx - Server error
+     console.error('Server error, try again later')
+   } else if (error instanceof TimeoutError) {
+     // Request timed out
+     console.error('Request timed out')
+   } else if (error instanceof NetworkError) {
+     // Network connectivity issue
+     console.error('Check your connection')
+   } else if (error instanceof SafeNestError) {
+     // Generic SDK error
+     console.error('Error:', error.message)
+   }
+ }
+ ```
+
+ ---
+
+ ## TypeScript Support
+
+ Full TypeScript support with comprehensive type definitions:
+
+ ```typescript
+ import { SafeNest } from '@safenest/sdk'
+ import type {
+   // Results
+   BullyingResult,
+   GroomingResult,
+   UnsafeResult,
+   EmotionsResult,
+   ActionPlanResult,
+   ReportResult,
+   AnalyzeResult,
+
+   // Inputs
+   DetectBullyingInput,
+   DetectGroomingInput,
+   DetectUnsafeInput,
+   AnalyzeEmotionsInput,
+   GetActionPlanInput,
+   GenerateReportInput,
+
+   // Utilities
+   Usage,
+   ContextInput,
+   GroomingMessage,
+   EmotionMessage,
+   ReportMessage,
+ } from '@safenest/sdk'
+ ```
+
+ ### Using Enums
+
+ The SDK exports enums for type-safe comparisons:
+
+ ```typescript
+ import {
+   Severity,
+   GroomingRisk,
+   RiskLevel,
+   RiskCategory,
+   AnalysisType,
+   EmotionTrend,
+   IncidentStatus,
+   ErrorCode,
+ } from '@safenest/sdk'
+
+ // Type-safe severity checks
+ if (result.severity === Severity.CRITICAL) {
+   // Handle critical severity
+ }
+
+ // Grooming risk comparisons
+ if (result.grooming_risk === GroomingRisk.HIGH) {
+   // Handle high grooming risk
+ }
+
+ // Error code handling
+ if (error.code === ErrorCode.RATE_LIMIT_EXCEEDED) {
+   // Handle rate limiting
+ }
+ ```
+
+ You can also import enums separately:
+
+ ```typescript
+ import { Severity, RiskCategory } from '@safenest/sdk/constants'
+ ```
+
+ ---
+
+ ## Examples
+
+ ### React Integration
+
+ ```tsx
+ import { SafeNest } from '@safenest/sdk'
+ import { useState } from 'react'
+
+ const safenest = new SafeNest(process.env.NEXT_PUBLIC_SAFENEST_API_KEY)
+
+ function MessageInput() {
+   const [message, setMessage] = useState('')
+   const [warning, setWarning] = useState<string | null>(null)
+
+   const handleSubmit = async () => {
+     const result = await safenest.analyze(message)
+
+     if (result.risk_level !== 'safe') {
+       setWarning(result.summary)
+       return
+     }
+
+     // Submit message...
+   }
+
+   return (
+     <div>
+       <input value={message} onChange={e => setMessage(e.target.value)} />
+       {warning && <p className="warning">{warning}</p>}
+       <button onClick={handleSubmit}>Send</button>
+     </div>
+   )
+ }
+ ```
+
+ ### Express Middleware
+
+ ```typescript
+ import { SafeNest, RateLimitError } from '@safenest/sdk'
+ import express from 'express'
+
+ const safenest = new SafeNest(process.env.SAFENEST_API_KEY)
+ const app = express()
+ app.use(express.json())
+
+ const safetyMiddleware = async (req, res, next) => {
+   const { message } = req.body
+
+   try {
+     const result = await safenest.analyze(message)
+
+     if (result.risk_level === 'critical') {
+       return res.status(400).json({
+         error: 'Message blocked for safety reasons',
+         details: result.summary
+       })
+     }
+
+     req.safetyResult = result
+     next()
+   } catch (error) {
+     if (error instanceof RateLimitError) {
+       return res.status(429).json({ error: 'Too many requests' })
+     }
+     next(error)
+   }
+ }
+
+ app.post('/messages', safetyMiddleware, (req, res) => {
+   // Message passed safety check
+ })
+ ```
+
+ ### Batch Processing
+
+ ```typescript
+ const messages = ['message1', 'message2', 'message3']
+
+ const results = await Promise.all(
+   messages.map(content => safenest.analyze(content))
+ )
+
+ const flagged = results.filter(r => r.risk_level !== 'safe')
+ console.log(`${flagged.length} messages flagged for review`)
+ ```
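+
+ `Promise.all` sends every request at once, which can trip burst rate limits on large batches. A minimal follow-up sketch that continues the example above and works through the same `messages` array in small chunks; the chunk size of 10 is an arbitrary assumption, so tune it to your plan:
+
+ ```typescript
+ import type { AnalyzeResult } from '@safenest/sdk'
+
+ const CHUNK_SIZE = 10 // illustrative; pick a value that suits your rate limits
+ const allResults: AnalyzeResult[] = []
+
+ for (let i = 0; i < messages.length; i += CHUNK_SIZE) {
+   const chunk = messages.slice(i, i + CHUNK_SIZE)
+   // Each chunk still runs in parallel, but chunks run one after another.
+   const chunkResults = await Promise.all(
+     chunk.map(content => safenest.analyze(content))
+   )
+   allResults.push(...chunkResults)
+ }
+
+ const flaggedForReview = allResults.filter(r => r.risk_level !== 'safe')
+ ```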
+
+ ---
+
+ ## Browser Support
+
+ The SDK works in browsers that support the Fetch API:
+
+ ```html
+ <script type="module">
+   import { SafeNest } from 'https://esm.sh/@safenest/sdk'
+
+   const safenest = new SafeNest('your-api-key')
+   const result = await safenest.analyze('Hello world')
+ </script>
+ ```
+
+ > **Note**: Never expose your API key in client-side code for production applications. Use a backend proxy to protect your credentials.
+
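+ One way to follow that advice is a small server-side route that holds the key and forwards only the message content to SafeNest. A minimal Express sketch; the `/api/check-message` route and the response shape are illustrative choices, not part of the SDK:
+
+ ```typescript
+ import { SafeNest } from '@safenest/sdk'
+ import express from 'express'
+
+ // The API key stays on the server; the browser only calls your own route.
+ const safenest = new SafeNest(process.env.SAFENEST_API_KEY)
+ const app = express()
+ app.use(express.json())
+
+ app.post('/api/check-message', async (req, res) => {
+   try {
+     const result = await safenest.analyze(req.body.content)
+     // Return only what the client needs; avoid echoing internal details.
+     res.json({ risk_level: result.risk_level, summary: result.summary })
+   } catch (error) {
+     res.status(502).json({ error: 'Safety check unavailable' })
+   }
+ })
+
+ app.listen(3000)
+ ```
+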
+ ---
+
+ ## Contributing
+
+ We welcome contributions! Please see our [Contributing Guide](https://github.com/safenest/sdk-typescript/blob/main/CONTRIBUTING.md) for details.
+
+ ```bash
+ # Clone the repo
+ git clone https://github.com/safenest/sdk-typescript.git
+ cd sdk-typescript
+
+ # Install dependencies
+ npm install
+
+ # Run tests
+ npm test
+
+ # Build
+ npm run build
+ ```
+
+ ---
+
+ ## API Documentation
+
+ - **Base URL**: `https://api.safenest.dev`
+ - **Swagger UI**: [api.safenest.dev/docs](https://api.safenest.dev/docs)
+ - **OpenAPI JSON**: [api.safenest.dev/docs/json](https://api.safenest.dev/docs/json)
+
+ ### Rate Limits
+
+ Rate limits depend on your subscription tier:
+
+ | Plan | Price | API Calls/month | Features |
+ |------|-------|-----------------|----------|
+ | **Starter** | Free | 1,000 | Basic moderation, JS SDK, Community support |
+ | **Pro** | $99/mo | 100,000 | Advanced AI, All SDKs, Edge network (sub-100ms), Real-time analytics |
+ | **Business** | $199/mo | 250,000 | Everything in Pro + 5 team seats, Custom webhooks, SSO, 99.9% SLA |
+ | **Enterprise** | Custom | Unlimited | Custom AI training, Dedicated infrastructure, 24/7 support, SOC 2 |
+
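+ The client already retries transient failures automatically (see the `retries` option under Initialization), but long-running jobs can still hit the limits above. A minimal manual backoff sketch around `RateLimitError`; the attempt count and delays are arbitrary assumptions, not SDK defaults:
+
+ ```typescript
+ import { SafeNest, RateLimitError } from '@safenest/sdk'
+
+ const safenest = new SafeNest(process.env.SAFENEST_API_KEY)
+
+ // Retry a single call with simple exponential backoff when rate limited.
+ async function analyzeWithBackoff(content: string, attempts = 5) {
+   for (let attempt = 0; attempt < attempts; attempt++) {
+     try {
+       return await safenest.analyze(content)
+     } catch (error) {
+       if (error instanceof RateLimitError && attempt < attempts - 1) {
+         const delayMs = 1000 * 2 ** attempt // 1s, 2s, 4s, ...
+         await new Promise(resolve => setTimeout(resolve, delayMs))
+         continue
+       }
+       throw error
+     }
+   }
+   throw new Error('Rate limit retries exhausted')
+ }
+ ```
+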
+ ---
+
+ ## Support
+
+ - **Documentation**: [docs.safenest.dev](https://docs.safenest.dev)
+ - **Discord**: [discord.gg/safenest](https://discord.gg/safenest)
+ - **Email**: support@safenest.dev
+ - **Issues**: [GitHub Issues](https://github.com/safenest/sdk-typescript/issues)
+
+ ---
+
+ ## License
+
+ MIT License - see [LICENSE](LICENSE) for details.
+
+ ---
+
+ <p align="center">
+   <sub>Built with care for child safety by the <a href="https://safenest.dev">SafeNest</a> team</sub>
+ </p>