graphlit-client 1.0.20250531005 → 1.0.20250610001
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +64 -0
- package/README.md +441 -53
- package/dist/client.d.ts +126 -1
- package/dist/client.js +1616 -1569
- package/dist/generated/graphql-documents.d.ts +1 -0
- package/dist/generated/graphql-documents.js +372 -310
- package/dist/generated/graphql-types.d.ts +207 -23
- package/dist/generated/graphql-types.js +299 -246
- package/dist/model-mapping.d.ts +18 -0
- package/dist/model-mapping.js +95 -0
- package/dist/stream-helpers.d.ts +106 -0
- package/dist/stream-helpers.js +237 -0
- package/dist/streaming/chunk-buffer.d.ts +25 -0
- package/dist/streaming/chunk-buffer.js +170 -0
- package/dist/streaming/llm-formatters.d.ts +64 -0
- package/dist/streaming/llm-formatters.js +187 -0
- package/dist/streaming/providers.d.ts +18 -0
- package/dist/streaming/providers.js +353 -0
- package/dist/streaming/ui-event-adapter.d.ts +52 -0
- package/dist/streaming/ui-event-adapter.js +288 -0
- package/dist/types/agent.d.ts +39 -0
- package/dist/types/agent.js +1 -0
- package/dist/types/streaming.d.ts +58 -0
- package/dist/types/streaming.js +7 -0
- package/dist/types/ui-events.d.ts +38 -0
- package/dist/types/ui-events.js +1 -0
- package/package.json +32 -5
package/CHANGELOG.md
ADDED
@@ -0,0 +1,64 @@
# Changelog

All notable changes to the Graphlit TypeScript Client will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [1.1.0] - 2025-01-06

### Added

- **Real-time streaming support** with the new `streamAgent` method
  - Native streaming integration with OpenAI, Anthropic, and Google Gemini models
  - Automatic fallback to regular API calls when streaming providers are not available
  - UI-optimized event stream with automatic message accumulation
  - Support for tool calling during streaming conversations
  - Configurable chunking strategies (character, word, sentence)
  - Smooth streaming with configurable delays to prevent UI flicker

- **Custom LLM client support**
  - `setOpenAIClient()` method to use custom OpenAI instances
  - `setAnthropicClient()` method to use custom Anthropic instances
  - `setGoogleClient()` method to use custom Google Generative AI instances
  - Support for proxy configurations and custom endpoints

- **Enhanced streaming features**
  - AbortController support for cancelling ongoing streams
  - Conversation continuity - continue streaming in existing conversations
  - Comprehensive error handling with recoverable error detection
  - Tool execution with streaming responses
  - Real-time token-by-token updates

- **New TypeScript types**
  - `UIStreamEvent` - High-level streaming events for UI integration
  - `StreamEvent` - Low-level streaming events for fine-grained control
  - `StreamAgentOptions` - Configuration options for streaming
  - `ToolHandler` - Type-safe tool handler functions
  - `AgentResult`, `ToolCallResult`, `UsageInfo` - Supporting types

### Changed

- LLM client libraries (openai, @anthropic-ai/sdk, @google/generative-ai) are now optional peer dependencies
- Improved TypeScript typing throughout the codebase
- Enhanced error messages for better debugging

### Fixed

- Message formatting issues with trailing whitespace
- Tool-calling message role assignment
- Conversation history handling in streaming mode
- Pre-aborted signal handling
- Google Gemini streaming text completeness

### Technical Improvements

- Queue-based chunk emission for consistent streaming behavior
- Unicode-aware text segmentation using Intl.Segmenter
- Proper cleanup of resources in test suites
- Comprehensive test coverage for streaming functionality

## [1.0.0] - Previous Release

### Initial Features

- GraphQL client for Graphlit API
- Support for all Graphlit operations (content, conversations, specifications, etc.)
- JWT-based authentication
- Environment variable configuration
- TypeScript support with generated types
package/README.md
CHANGED
@@ -1,53 +1,441 @@
# Node.js Client for Graphlit Platform

## Overview

The Graphlit Client for Node.js enables straightforward interaction with the Graphlit API, allowing developers to execute GraphQL queries and mutations against the Graphlit service. This document outlines the setup process and provides examples of using the client, including the new streaming capabilities.

## Prerequisites

Before you begin, ensure you have the following:

- Node.js installed on your system (version 18.x or higher recommended).
- An active account on the [Graphlit Platform](https://portal.graphlit.dev) with access to the API settings dashboard.

## Installation

### Basic Installation

To install the Graphlit Client with core functionality:

```bash
npm install graphlit-client
```

or

```bash
yarn add graphlit-client
```

### Streaming Support (Optional)

For real-time token streaming in LLM conversations, install the desired LLM clients as additional dependencies:

**All streaming providers:**

```bash
npm install graphlit-client openai @anthropic-ai/sdk @google/generative-ai
```

**Selective streaming providers:**

```bash
# OpenAI streaming only
npm install graphlit-client openai

# Anthropic streaming only
npm install graphlit-client @anthropic-ai/sdk

# Google streaming only
npm install graphlit-client @google/generative-ai
```

> **Note:** The LLM client libraries are optional peer dependencies. The base Graphlit client works without them, but streaming functionality will gracefully fall back to regular API calls.

## Configuration

The Graphlit Client supports the following environment variables for authentication and configuration:

- `GRAPHLIT_ENVIRONMENT_ID`: Your environment ID.
- `GRAPHLIT_ORGANIZATION_ID`: Your organization ID.
- `GRAPHLIT_JWT_SECRET`: Your JWT secret for signing the JWT token.

Alternatively, you can pass these values to the constructor of the Graphlit client.

You can find these values in the API settings dashboard on the [Graphlit Platform](https://portal.graphlit.dev).

### Setting Environment Variables

To set these environment variables on your system, you can place them in a `.env` file at the root of your project:

```env
GRAPHLIT_ENVIRONMENT_ID=your_environment_id_value
GRAPHLIT_ORGANIZATION_ID=your_organization_id_value
GRAPHLIT_JWT_SECRET=your_jwt_secret_value

# Optional: For native streaming support
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GOOGLE_API_KEY=your_google_api_key
```
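The two configuration paths (environment variables and explicit constructor values) can be thought of as a simple precedence rule. The sketch below illustrates that idea only; the `resolveConfig` helper is hypothetical and not part of graphlit-client, and it assumes explicit values win over environment variables:

```typescript
// Illustrative sketch of configuration precedence: explicit values win,
// otherwise fall back to environment variables.
// `resolveConfig` is a hypothetical helper, NOT part of graphlit-client.
interface GraphlitConfig {
  organizationId?: string;
  environmentId?: string;
  jwtSecret?: string;
}

function resolveConfig(overrides: GraphlitConfig = {}): GraphlitConfig {
  return {
    organizationId:
      overrides.organizationId ?? process.env.GRAPHLIT_ORGANIZATION_ID,
    environmentId:
      overrides.environmentId ?? process.env.GRAPHLIT_ENVIRONMENT_ID,
    jwtSecret: overrides.jwtSecret ?? process.env.GRAPHLIT_JWT_SECRET,
  };
}
```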

## Usage

### Basic Client Usage

```typescript
import { Graphlit } from "graphlit-client";

const client = new Graphlit();

// Ingest content by URI
const contentResponse = await client.ingestUri({
  uri: "https://example.com/document.pdf",
  name: "My Document"
});

// Query contents
const contents = await client.queryContents({
  filter: { name: { contains: "Document" } }
});
```

### Streaming Conversations with streamAgent

The new `streamAgent` method provides real-time streaming responses with automatic UI event handling:

```typescript
import { Graphlit, UIStreamEvent } from "graphlit-client";

const client = new Graphlit();

// Basic streaming conversation
await client.streamAgent(
  "Tell me about artificial intelligence",
  (event: UIStreamEvent) => {
    switch (event.type) {
      case "conversation_started":
        console.log(`Started conversation: ${event.conversationId}`);
        break;

      case "message_update":
        // Complete message text - automatically accumulated
        console.log(`Assistant: ${event.message.message}`);
        break;

      case "conversation_completed":
        console.log(`Completed: ${event.message.message}`);
        break;

      case "error":
        console.error(`Error: ${event.error.message}`);
        break;
    }
  }
);
```

### Tool Calling with Streaming

`streamAgent` supports tool calling with automatic execution:

```typescript
// Define tools
const tools = [{
  name: "get_weather",
  description: "Get weather for a city",
  schema: JSON.stringify({
    type: "object",
    properties: {
      city: { type: "string", description: "City name" }
    },
    required: ["city"]
  })
}];

// Tool handlers
const toolHandlers = {
  get_weather: async (args: { city: string }) => {
    // Your weather API call here
    return { temperature: 72, condition: "sunny" };
  }
};

// Stream with tools
await client.streamAgent(
  "What's the weather in San Francisco?",
  (event: UIStreamEvent) => {
    if (event.type === "tool_update") {
      console.log(`Tool ${event.toolCall.name}: ${event.status}`);
      if (event.result) {
        console.log(`Result: ${JSON.stringify(event.result)}`);
      }
    } else if (event.type === "conversation_completed") {
      console.log(`Final: ${event.message.message}`);
    }
  },
  undefined, // conversationId
  { id: "your-specification-id" }, // specification
  tools,
  toolHandlers
);
```

### Using Custom LLM Clients

You can provide your own configured LLM clients for streaming:

```typescript
import OpenAI from "openai";
import Anthropic from "@anthropic-ai/sdk";

// Configure custom clients
const openai = new OpenAI({
  apiKey: "your-api-key",
  baseURL: "https://your-proxy.com/v1" // Optional proxy
});

const anthropic = new Anthropic({
  apiKey: "your-api-key",
  baseURL: "https://your-proxy.com" // Optional proxy
});

// Set custom clients
client.setOpenAIClient(openai);
client.setAnthropicClient(anthropic);

// Now streaming will use your custom clients
await client.streamAgent("Hello!", (event) => {
  // Your event handler
});
```

### Streaming Options

Configure streaming behavior with options:

```typescript
const controller = new AbortController();

await client.streamAgent(
  "Your prompt",
  (event) => { /* handler */ },
  undefined, // conversationId
  { id: "spec-id" }, // specification
  undefined, // tools
  undefined, // toolHandlers
  {
    maxToolRounds: 10, // Maximum tool-calling rounds (default: 100)
    showTokenStream: true, // Show individual tokens (default: true)
    smoothingEnabled: true, // Enable smooth streaming (default: true)
    chunkingStrategy: 'word', // 'character' | 'word' | 'sentence' (default: 'word')
    smoothingDelay: 30, // Milliseconds between chunks (default: 30)
    abortSignal: controller.signal // AbortController signal for cancellation
  }
);
```

### Conversation Continuity

Continue existing conversations by passing the conversation ID:

```typescript
let conversationId: string | undefined;

// First message
await client.streamAgent(
  "Remember that my favorite color is blue",
  (event) => {
    if (event.type === "conversation_started") {
      conversationId = event.conversationId;
    }
  }
);

// Continue the conversation
await client.streamAgent(
  "What's my favorite color?",
  (event) => {
    if (event.type === "conversation_completed") {
      console.log(event.message.message); // Should mention "blue"
    }
  },
  conversationId // Pass the conversation ID
);
```

### Error Handling

The streaming client provides comprehensive error handling:

```typescript
await client.streamAgent(
  "Your prompt",
  (event) => {
    if (event.type === "error") {
      console.error(`Error: ${event.error.message}`);
      console.log(`Recoverable: ${event.error.recoverable}`);

      // Handle specific error types
      if (event.error.code === "RATE_LIMIT") {
        // Implement retry logic
      }
    }
  }
);
```
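One way to act on `recoverable` errors is to re-issue the prompt with exponential backoff. A minimal, generic sketch (the `withRetries` helper is illustrative, not part of the client; `attemptFn` stands in for a function that runs one `streamAgent` call and rejects with an object carrying `recoverable` on failure):

```typescript
// Generic retry-with-backoff sketch for recoverable errors.
// `attemptFn` is a hypothetical wrapper around one streamAgent call that
// rejects with { recoverable: boolean } on failure.
async function withRetries<T>(
  attemptFn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await attemptFn();
    } catch (err: any) {
      const recoverable = err?.recoverable === true;
      if (!recoverable || attempt >= maxAttempts) throw err;
      // Exponential backoff: baseDelayMs, 2x, 4x, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}
```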

### Cancelling Streams

Use an AbortController to cancel ongoing streams:

```typescript
const controller = new AbortController();

// Start streaming (not awaited, so it can be cancelled below)
client.streamAgent(
  "Write a long story...",
  (event) => {
    // Your handler
  },
  undefined,
  undefined,
  undefined,
  undefined,
  { abortSignal: controller.signal }
);

// Cancel after 5 seconds
setTimeout(() => controller.abort(), 5000);
```

## Stream Event Reference

### UI Stream Events

| Event Type | Description | Properties |
|------------|-------------|------------|
| `conversation_started` | Conversation initialized | `conversationId`, `timestamp` |
| `message_update` | Message text updated | `message` (complete text), `isStreaming` |
| `tool_update` | Tool execution status | `toolCall`, `status`, `result?`, `error?` |
| `conversation_completed` | Streaming finished | `message` (final) |
| `error` | Error occurred | `error` object with `message`, `code`, `recoverable` |

### Tool Execution Statuses

- `preparing` - Tool call detected, preparing to execute
- `executing` - Tool handler is running
- `completed` - Tool executed successfully
- `failed` - Tool execution failed
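The event types above form a discriminated union on `type`, so a `switch` narrows each case for exhaustive handling. The self-contained sketch below mirrors the shapes in the table; the real exported `UIStreamEvent` type may differ in detail:

```typescript
// Local sketch of the event union described in the table above.
// Field names follow the table; the generated UIStreamEvent type in
// graphlit-client may differ in detail.
type SketchEvent =
  | { type: "conversation_started"; conversationId: string; timestamp: string }
  | { type: "message_update"; message: { message: string }; isStreaming: boolean }
  | { type: "tool_update"; toolCall: { name: string }; status: string }
  | { type: "conversation_completed"; message: { message: string } }
  | { type: "error"; error: { message: string; code?: string; recoverable: boolean } };

function describeEvent(event: SketchEvent): string {
  switch (event.type) {
    case "conversation_started":
      return `started ${event.conversationId}`;
    case "message_update":
      return `partial: ${event.message.message}`;
    case "tool_update":
      return `${event.toolCall.name} is ${event.status}`;
    case "conversation_completed":
      return `final: ${event.message.message}`;
    case "error":
      return `error (recoverable=${event.error.recoverable}): ${event.error.message}`;
  }
}
```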

## Examples

### Basic Chat UI Integration

```typescript
// React example (assumes a `client` instance created as shown above)
import { useState } from "react";

function ChatComponent() {
  const [message, setMessage] = useState("");
  const [isLoading, setIsLoading] = useState(false);

  const handleSend = async (prompt: string) => {
    setIsLoading(true);
    setMessage("");

    await client.streamAgent(
      prompt,
      (event) => {
        if (event.type === "message_update") {
          setMessage(event.message.message);
        } else if (event.type === "conversation_completed") {
          setIsLoading(false);
        } else if (event.type === "error") {
          setMessage(`Error: ${event.error.message}`);
          setIsLoading(false);
        }
      }
    );
  };

  return (
    <div>
      <div>{message}</div>
      <button onClick={() => handleSend("Hello!")} disabled={isLoading}>
        Send
      </button>
    </div>
  );
}
```

### Multi-Model Support

```typescript
import { Graphlit, Types } from "graphlit-client";

const client = new Graphlit();

// Create specifications for different models
const gpt4Spec = await client.createSpecification({
  name: "GPT-4",
  type: Types.SpecificationTypes.Completion,
  serviceType: Types.ModelServiceTypes.OpenAi,
  openAI: {
    model: Types.OpenAiModels.Gpt4O_128K,
    temperature: 0.7
  }
});

const claudeSpec = await client.createSpecification({
  name: "Claude",
  type: Types.SpecificationTypes.Completion,
  serviceType: Types.ModelServiceTypes.Anthropic,
  anthropic: {
    model: Types.AnthropicModels.Claude_3_5Haiku,
    temperature: 0.7
  }
});

// Use different models
await client.streamAgent(
  "Hello!",
  handler,
  undefined,
  { id: gpt4Spec.createSpecification.id } // Use GPT-4
);

await client.streamAgent(
  "Hello!",
  handler,
  undefined,
  { id: claudeSpec.createSpecification.id } // Use Claude
);
```

## Migration Guide

If you're upgrading from `promptConversation` to `streamAgent`:

```typescript
// Before
const response = await client.promptConversation(
  "Your prompt",
  undefined,
  { id: specId }
);
console.log(response.promptConversation.message.message);

// After
await client.streamAgent(
  "Your prompt",
  (event) => {
    if (event.type === "conversation_completed") {
      console.log(event.message.message);
    }
  },
  undefined,
  { id: specId }
);
```

## Support

Please refer to the [Graphlit API Documentation](https://docs.graphlit.dev/).

For support with the Graphlit Client, please submit a [GitHub Issue](https://github.com/graphlit/graphlit-client-typescript/issues).

For further support with the Graphlit Platform, please join our [Discord](https://discord.gg/ygFmfjy3Qx) community.