@roboflow/inference-sdk 0.1.0 → 0.1.1

This diff shows the content of publicly released versions of the package as they appear in their public registries, and is provided for informational purposes only.
Files changed (2)
  1. package/README.md +14 -211
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -1,8 +1,6 @@
 # @roboflow/inference-sdk
 
-Lightweight client package for Roboflow inference via WebRTC streaming and hosted API.
-
-This package provides WebRTC streaming capabilities and hosted inference API access without bundling TensorFlow or local inference models, making it ideal for production applications.
+Lightweight client for Roboflow's hosted inference API with WebRTC streaming support for real-time computer vision in the browser.
 
 ## Installation
 
@@ -10,234 +8,39 @@ This package provides WebRTC streaming capabilities and hosted inference API acc
 npm install @roboflow/inference-sdk
 ```
 
-## Quick Start
-
-### Basic WebRTC Streaming Example
+## Quick Example
 
 ```typescript
-import { useStream } from '@roboflow/inference-sdk/webrtc';
-import { connectors } from '@roboflow/inference-sdk/api';
+import { useStream, connectors } from '@roboflow/inference-sdk';
 import { useCamera } from '@roboflow/inference-sdk/streams';
 
-// Create connector (use proxy for production!)
-const connector = connectors.withApiKey("your-api-key");
-
-// Get camera stream
-const stream = await useCamera({
-  video: {
-    facingMode: { ideal: "environment" }
-  }
-});
-
-// Start WebRTC connection
+const stream = await useCamera({ video: { facingMode: "environment" } });
 const connection = await useStream({
   source: stream,
-  connector,
-  wrtcParams: {
-    workflowSpec: {
-      // Your workflow specification
-      version: "1.0",
-      inputs: [{ type: "InferenceImage", name: "image" }],
-      steps: [/* ... */],
-      outputs: [/* ... */]
-    },
-    imageInputName: "image",
-    streamOutputNames: ["output"],
-    dataOutputNames: ["predictions"]
-  },
-  onData: (data) => {
-    // Receive real-time inference results
-    console.log("Inference results:", data);
-  }
-});
-
-// Display processed video
-const remoteStream = await connection.remoteStream();
-videoElement.srcObject = remoteStream;
-
-// Clean up when done
-await connection.cleanup();
-```
-
-## Security Best Practices
-
-### ⚠️ API Key Security
-
-**NEVER expose your API key in frontend code for production applications.**
-
-The `connectors.withApiKey()` method is convenient for demos and testing, but it exposes your API key in the browser. For production applications, always use a backend proxy:
-
-### Using a Backend Proxy (Recommended)
-
-**Frontend:**
-```typescript
-import { useStream } from '@roboflow/inference-sdk/webrtc';
-import { connectors } from '@roboflow/inference-sdk/api';
-
-// Use proxy endpoint instead of direct API key
-const connector = connectors.withProxyUrl('/api/init-webrtc');
-
-const connection = await useStream({
-  source: stream,
-  connector,
-  wrtcParams: {
-    workflowSpec: { /* ... */ },
-    imageInputName: "image",
-    streamOutputNames: ["output"]
-  }
-});
-```
-
-**Backend (Express example):**
-```typescript
-import { InferenceHTTPClient } from '@roboflow/inference-sdk/api';
-
-app.post('/api/init-webrtc', async (req, res) => {
-  const { offer, wrtcParams } = req.body;
-
-  // API key stays secure on the server
-  const client = InferenceHTTPClient.init({
-    apiKey: process.env.ROBOFLOW_API_KEY
-  });
-
-  const answer = await client.initializeWebrtcWorker({
-    offer,
-    workflowSpec: wrtcParams.workflowSpec,
-    workspaceName: wrtcParams.workspaceName,
-    workflowId: wrtcParams.workflowId,
-    config: {
-      imageInputName: wrtcParams.imageInputName,
-      streamOutputNames: wrtcParams.streamOutputNames,
-      dataOutputNames: wrtcParams.dataOutputNames,
-      threadPoolWorkers: wrtcParams.threadPoolWorkers
-    }
-  });
-
-  res.json(answer);
-});
-```
-
-## API Reference
-
-### WebRTC Functions
-
-#### `useStream(params)`
-
-Establishes a WebRTC connection for real-time video inference.
-
-**Parameters:**
-- `source: MediaStream` - Input video stream (from camera or other source)
-- `connector: Connector` - Connection method (withApiKey or withProxyUrl)
-- `wrtcParams: WebRTCParams` - Workflow configuration
-  - `workflowSpec?: WorkflowSpec` - Workflow specification object
-  - `workspaceName?: string` - Workspace name (alternative to workflowSpec)
-  - `workflowId?: string` - Workflow ID (alternative to workflowSpec)
-  - `imageInputName?: string` - Input image name (default: "image")
-  - `streamOutputNames?: string[]` - Output stream names
-  - `dataOutputNames?: string[]` - Output data names
-  - `threadPoolWorkers?: number` - Thread pool workers (default: 4)
-- `onData?: (data: any) => void` - Callback for data output
-- `options?: UseStreamOptions` - Additional options
-
-**Returns:** `Promise<RFWebRTCConnection>`
-
-### Connection Methods
-
-#### `connection.remoteStream()`
-
-Get the processed video stream from Roboflow.
-
-**Returns:** `Promise<MediaStream>`
-
-#### `connection.localStream()`
-
-Get the local input video stream.
-
-**Returns:** `MediaStream`
-
-#### `connection.cleanup()`
-
-Close the connection and clean up resources.
-
-**Returns:** `Promise<void>`
-
-#### `connection.reconfigureOutputs(config)`
-
-Dynamically change stream and data outputs at runtime without restarting the connection.
-
-**Parameters:**
-- `config.streamOutput?: string[] | null` - Stream output names
-  - `undefined` or not provided: Unchanged
-  - `[]`: Auto-detect first valid image output
-  - `["output_name"]`: Use specified output
-  - `null`: Unchanged
-- `config.dataOutput?: string[] | null` - Data output names
-  - `undefined` or not provided: Unchanged
-  - `[]`: Disable all data outputs
-  - `["output_name"]`: Use specified outputs
-  - `null`: Enable all data outputs
-
-**Examples:**
-```typescript
-// Change to different stream output
-connection.reconfigureOutputs({
-  streamOutput: ["annotated_image"]
-});
-
-// Enable all data outputs
-connection.reconfigureOutputs({
-  dataOutput: null
-});
-
-// Disable all data outputs
-connection.reconfigureOutputs({
-  dataOutput: []
+  connector: connectors.withProxyUrl('/api/init-webrtc'), // Use backend proxy
+  wrtcParams: { workflowSpec: { /* ... */ } },
+  onData: (data) => console.log("Inference results:", data)
 });
 
-// Change both at once
-connection.reconfigureOutputs({
-  streamOutput: ["visualization"],
-  dataOutput: ["predictions", "metadata"]
-});
+videoElement.srcObject = await connection.remoteStream();
 ```
 
-### Camera Functions
-
-#### `useCamera(constraints)`
-
-Access device camera with specified constraints.
-
-**Parameters:**
-- `constraints: MediaStreamConstraints` - Media constraints
-
-**Returns:** `Promise<MediaStream>`
-
-#### `stopStream(stream)`
-
-Stop a media stream and release camera.
+See the [sample app](https://github.com/roboflow/inferenceSampleApp) for a complete working example.
 
-**Parameters:**
-- `stream: MediaStream` - Stream to stop
+## Security Warning
 
-## When to Use This Package
+**Never expose your API key in frontend code.** Always use a backend proxy for production applications. The sample app demonstrates the recommended proxy pattern.
 
-### Use `@roboflow/inference-sdk` when:
-- Building production web applications
-- You need WebRTC streaming inference
-- You want a smaller bundle size
-- You're deploying to browsers
+## Get Started
 
-### Use the full `inferencejs` package when:
-- You need local inference with TensorFlow.js
-- You want to run models offline in the browser
-- You need both local and hosted inference options
+For a complete working example with backend proxy setup, see:
+**[github.com/roboflow/inferenceSampleApp](https://github.com/roboflow/inferenceSampleApp)**
 
 ## Resources
 
 - [Roboflow Documentation](https://docs.roboflow.com/)
 - [API Authentication Guide](https://docs.roboflow.com/api-reference/authentication)
 - [Workflows Documentation](https://docs.roboflow.com/workflows)
-- [GitHub Repository](https://github.com/roboflow/inferencejs)
 
 ## License
 
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@roboflow/inference-sdk",
-  "version": "0.1.0",
+  "version": "0.1.1",
   "description": "Lightweight client for Roboflow's hosted inference API with WebRTC streaming support",
   "keywords": [
     "roboflow",