@archetypeai/ds-cli 0.3.7 → 0.3.10
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +25 -67
- package/commands/create.js +5 -27
- package/commands/init.js +5 -27
- package/files/AGENTS.md +19 -3
- package/files/CLAUDE.md +21 -3
- package/files/rules/accessibility.md +49 -0
- package/files/rules/frontend-architecture.md +77 -0
- package/files/skills/apply-ds/SKILL.md +92 -80
- package/files/skills/apply-ds/scripts/audit.sh +169 -0
- package/files/skills/apply-ds/scripts/setup.sh +48 -166
- package/files/skills/create-dashboard/SKILL.md +12 -0
- package/files/skills/embedding-from-file/SKILL.md +415 -0
- package/files/skills/embedding-from-sensor/SKILL.md +406 -0
- package/files/skills/embedding-upload/SKILL.md +414 -0
- package/files/skills/fix-accessibility/SKILL.md +57 -9
- package/files/skills/newton-activity-monitor-lens-on-video/SKILL.md +817 -0
- package/files/skills/newton-camera-frame-analysis/SKILL.md +611 -0
- package/files/skills/newton-camera-frame-analysis/scripts/activity-monitor-frame.py +165 -0
- package/files/skills/newton-camera-frame-analysis/scripts/captures/logs/api_responses_20260206_105610.json +62 -0
- package/files/skills/newton-camera-frame-analysis/scripts/continuous_monitor.py +119 -0
- package/files/skills/newton-direct-query/SKILL.md +212 -0
- package/files/skills/newton-direct-query/scripts/direct_query.py +129 -0
- package/files/skills/newton-machine-state-from-file/SKILL.md +545 -0
- package/files/skills/newton-machine-state-from-sensor/SKILL.md +707 -0
- package/files/skills/newton-machine-state-upload/SKILL.md +986 -0
- package/lib/add-ds-ui-svelte.js +5 -2
- package/lib/scaffold-ds-svelte-project.js +25 -18
- package/package.json +13 -2
@@ -0,0 +1,406 @@
---
name: embedding-from-sensor
description: Run an Embedding Lens by streaming real-time data from a physical sensor (BLE, USB, UDP, or recording playback). Use when extracting live embeddings from sensor hardware for real-time visualization or clustering.
argument-hint: [source-type]
---

# Embedding Lens — Stream from Sensor

Generate a script that streams real-time IMU sensor data to the Archetype AI Embedding Lens for live embedding extraction. Supports both Python and JavaScript/Web.

> **Frontend architecture:** When building a web UI for this skill, decompose into components (sensor connection, status display, embedding visualization) rather than a monolithic page. Extract sensor and API logic into `$lib/api/`. See `@rules/frontend-architecture` for conventions and `@skills/create-dashboard` / `@skills/build-pattern` for layout and component patterns.

---

## Python Implementation

### Requirements

- `archetypeai` Python package
- `numpy`
- `bleak` (for BLE sources)
- `pyserial` (for USB sources)
- Environment variables: `ATAI_API_KEY`, optionally `ATAI_API_ENDPOINT`

### Supported Source Types

| Source | Description | Extra args |
|--------|-------------|------------|
| `ble` | Bluetooth Low Energy IMU device | None (auto-discovers) |
| `usb` | USB serial IMU device | `--sensor-port` |
| `udp` | UDP relay (from BLE relay server) | `--udp-port` |
| `recording` | Replay a CSV recording | `--file-path` |

### Architecture

#### 1. API Client Setup

```python
from archetypeai.api_client import ArchetypeAI

client = ArchetypeAI(api_key, api_endpoint=api_endpoint)
```

#### 2. Lens YAML Config

No n-shot files or KNN config — just the embedding processor:

```yaml
lens_name: Embedding Lens
lens_config:
  model_pipeline:
    - processor_name: lens_timeseries_embedding_processor
      processor_config: {}
  model_parameters:
    model_name: OmegaEncoder
    model_version: OmegaEncoder::omega_embeddings_01
    normalize_input: true
    buffer_size: {window_size}
    csv_configs:
      timestamp_column: timestamp
      data_columns: ['a1', 'a2', 'a3', 'a4']
      window_size: {window_size}
      step_size: {step_size}
  output_streams:
    - stream_type: server_sent_events_writer
```
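
The `{window_size}` / `{step_size}` placeholders above are filled in before the lens is registered. A minimal sketch of that substitution step, assuming plain `str.format` templating (the constant and helper names here are illustrative, not part of the SDK):

```python
# Illustrative: substitute concrete window/step sizes into the lens YAML
# template above. Double braces escape the literal {} in processor_config.
LENS_TEMPLATE = """\
lens_name: Embedding Lens
lens_config:
  model_pipeline:
    - processor_name: lens_timeseries_embedding_processor
      processor_config: {{}}
  model_parameters:
    model_name: OmegaEncoder
    model_version: OmegaEncoder::omega_embeddings_01
    normalize_input: true
    buffer_size: {window_size}
    csv_configs:
      timestamp_column: timestamp
      data_columns: ['a1', 'a2', 'a3', 'a4']
      window_size: {window_size}
      step_size: {step_size}
  output_streams:
    - stream_type: server_sent_events_writer
"""

def render_lens_config(window_size: int = 100, step_size: int = 100) -> str:
    """Return the lens YAML with concrete window/step sizes filled in."""
    return LENS_TEMPLATE.format(window_size=window_size, step_size=step_size)
```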

#### 3. ImuReceiver — Multi-Source Data Acquisition

Same `ImuReceiver` class as the machine state sensor skill:

```python
class ImuReceiver:
    def __init__(self, incoming_data, num_samples_per_packet=10,
                 num_sensor_packets_per_packets_out=10):
        self.packet_queue = queue.Queue()
        # ... source detection (recording/sensor/ble)

    def get_data(self):
        """Returns (packet_out, timestamp) or (None, None)"""
        if self.packet_queue.qsize() >= self.num_sensor_packets_per_packets_out:
            packets = [self.packet_queue.get()
                       for _ in range(self.num_sensor_packets_per_packets_out)]
            packet_out = np.vstack([p['data'] for p in packets]).tolist()
            return packet_out, packets[-1]['sensor_timestamp']
        return None, None
```

See the machine-state-from-sensor skill for full BLE/USB/UDP acquisition implementations.

#### 4. Real-Time Streaming with Buffering

```python
from collections import deque

data_buffer = deque(maxlen=window_size * 4)
embeddings = []
counter = 0

while not stop_event.is_set():
    packet_out, packet_timestamp = imu_receiver.get_data()

    if packet_out is not None:
        for row in packet_out:  # each row: (ax, ay, az, ...)
            ax, ay, az = int(row[0]), int(row[1]), int(row[2])
            mag = int((ax*ax + ay*ay + az*az) ** 0.5)
            data_buffer.append((ax, ay, az, mag))

    if len(data_buffer) >= window_size:
        window_rows = list(data_buffer)[:window_size]

        a1 = [r[0] for r in window_rows]
        a2 = [r[1] for r in window_rows]
        a3 = [r[2] for r in window_rows]
        a4 = [r[3] for r in window_rows]

        payload = {
            "type": "session.update",
            "event_data": {
                "type": "data.json",
                "event_data": {
                    "sensor_data": [a1, a2, a3, a4],
                    "sensor_metadata": {
                        "sensor_timestamp": packet_timestamp,
                        "sensor_id": f"live_sensor_{counter}"
                    }
                }
            }
        }
        client.lens.sessions.process_event(session_id, payload)
        counter += 1

        # Advance by step_size
        for _ in range(min(step_size, len(data_buffer))):
            data_buffer.popleft()
```

#### 5. SSE Event Listening — Collect Embeddings

```python
sse_reader = client.lens.sessions.create_sse_consumer(
    session_id, max_read_time_sec=max_run_time_sec
)

for event in sse_reader.read(block=True):
    if event.get("type") == "inference.result":
        embedding = event["event_data"].get("response")

        # Flatten 4×768 → 3072D
        if isinstance(embedding, list) and len(embedding) > 0:
            if isinstance(embedding[0], list):
                flat = [val for row in embedding for val in row]
            else:
                flat = embedding

            embeddings.append(flat)
            print(f"Embedding {len(embeddings)}: {len(flat)}D")
```

#### 6. Threading Model

```
Main Thread: session_callback → starts SSE listener
Thread 1:    ImuReceiver (BLE async / USB serial / UDP socket)
Thread 2:    Streaming loop (buffer → API)
```
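
The model above can be sketched as a runnable miniature with stand-in workers (packet contents, timings, and the `sent` sink are placeholders for the real acquisition and API calls):

```python
import queue
import threading
import time

stop_event = threading.Event()
packet_queue = queue.Queue()
sent = []

def receiver_loop():
    # Thread 1: stand-in for ImuReceiver (BLE async / USB serial / UDP socket).
    while not stop_event.is_set():
        packet_queue.put({"data": [0, 0, 0], "sensor_timestamp": time.time()})
        time.sleep(0.01)

def streaming_loop():
    # Thread 2: drain the queue and forward windows to the API.
    while not stop_event.is_set():
        try:
            packet = packet_queue.get(timeout=0.1)
        except queue.Empty:
            continue
        sent.append(packet)  # stand-in for client.lens.sessions.process_event(...)

threads = [threading.Thread(target=receiver_loop, daemon=True),
           threading.Thread(target=streaming_loop, daemon=True)]
for t in threads:
    t.start()

# Main thread: stand-in for the SSE listener; run briefly, then shut down.
time.sleep(0.3)
stop_event.set()
for t in threads:
    t.join(timeout=1)
```

The shared `stop_event` is what lets one Ctrl-C (or `max_run_time_sec` expiry) wind down both worker threads cleanly.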

### Embedding Response Structure

- `response`: nested list `(4, 768)` — one 768D vector per channel (a1, a2, a3, a4)
- Flatten to `3072D` by concatenating rows
- `query_metadata.sensor_id`: which sensor window this came from
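
The flatten rule above can be captured in a small helper (the function name is illustrative; the shape check mirrors the SSE handler):

```python
def flatten_embedding(embedding):
    """Concatenate a (4, 768) nested-list embedding into one 3072-D vector.

    Already-flat responses are passed through unchanged.
    """
    if isinstance(embedding, list) and embedding and isinstance(embedding[0], list):
        return [val for row in embedding for val in row]
    return embedding
```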

### CLI Arguments

```
--api-key            API key (fallback to ATAI_API_KEY env var)
--api-endpoint       API endpoint (default from SDK)
--source-type        {ble, usb, recording, udp} (required)
--file-path          Recording file path (for recording mode)
--sensor-port        USB serial port (default: /dev/tty.usbmodem1101)
--udp-port           UDP relay port (default: 5556)
--window-size        Window size in samples (default: 100)
--step-size          Step size in samples (default: 100)
--max-run-time-sec   Max runtime (default: 500)
--output-file        Path to save embeddings CSV (optional)
```

---

## Web / JavaScript Implementation

Uses direct `fetch` calls to the Archetype AI REST API, with the Web Bluetooth API (or a WebSocket relay) providing sensor data.

### API Reference

| Operation | Method | Endpoint | Body |
|-----------|--------|----------|------|
| Register lens | POST | `/lens/register` | `{ lens_config: config }` |
| Create session | POST | `/lens/sessions/create` | `{ lens_id }` |
| Process event | POST | `/lens/sessions/events/process` | `{ session_id, event }` |
| Delete lens | POST | `/lens/delete` | `{ lens_id }` |
| Destroy session | POST | `/lens/sessions/destroy` | `{ session_id }` |
| SSE consumer | GET | `/lens/sessions/consumer/{sessionId}` | — |

### Helper: API fetch wrapper

```typescript
const API_ENDPOINT = 'https://api.u1.archetypeai.app/v0.5'

async function apiPost<T>(path: string, apiKey: string, body: unknown, timeoutMs = 5000): Promise<T> {
  const controller = new AbortController()
  const timeoutId = setTimeout(() => controller.abort(), timeoutMs)

  try {
    const response = await fetch(`${API_ENDPOINT}${path}`, {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(body),
      signal: controller.signal,
    })

    if (!response.ok) {
      const errorBody = await response.json().catch(() => ({}))
      throw new Error(`API POST ${path} failed: ${response.status} - ${JSON.stringify(errorBody)}`)
    }

    return response.json()
  } finally {
    clearTimeout(timeoutId)
  }
}
```

### Step 1: Register embedding lens and create session

```typescript
const windowSize = 100
const stepSize = 100

const lensConfig = {
  lens_name: 'embedding_lens',
  lens_config: {
    model_pipeline: [
      { processor_name: 'lens_timeseries_embedding_processor', processor_config: {} },
    ],
    model_parameters: {
      model_name: 'OmegaEncoder',
      model_version: 'OmegaEncoder::omega_embeddings_01',
      normalize_input: true,
      buffer_size: windowSize,
      csv_configs: {
        timestamp_column: 'timestamp',
        data_columns: ['a1', 'a2', 'a3', 'a4'],
        window_size: windowSize,
        step_size: stepSize,
      },
    },
    output_streams: [
      { stream_type: 'server_sent_events_writer' },
    ],
  },
}

const registeredLens = await apiPost<{ lens_id: string }>(
  '/lens/register', apiKey, { lens_config: lensConfig }
)
const lensId = registeredLens.lens_id

const session = await apiPost<{ session_id: string }>(
  '/lens/sessions/create', apiKey, { lens_id: lensId }
)
const sessionId = session.session_id

await apiPost('/lens/delete', apiKey, { lens_id: lensId })

// Wait for session ready (same waitForSessionReady as machine state skills)
```

### Step 2: Acquire sensor data (Web Bluetooth)

```typescript
const IMU_SERVICE = '0000fff0-0000-1000-8000-00805f9b34fb'
const IMU_CHARACTERISTIC = '0000fff1-0000-1000-8000-00805f9b34fb'

const device = await navigator.bluetooth.requestDevice({
  filters: [{ services: [IMU_SERVICE] }],
})
const server = await device.gatt!.connect()
const service = await server.getPrimaryService(IMU_SERVICE)
const characteristic = await service.getCharacteristic(IMU_CHARACTERISTIC)

const dataBuffer: [number, number, number][] = []

characteristic.addEventListener('characteristicvaluechanged', (event) => {
  const value = (event.target as BluetoothRemoteGATTCharacteristic).value!
  const samples = new Int16Array(value.buffer)
  const payload = samples.slice(1)
  for (let i = 0; i + 2 < payload.length; i += 3) {
    dataBuffer.push([payload[i], payload[i + 1], payload[i + 2]])
  }
})

await characteristic.startNotifications()
```
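
For the WebSocket-relay alternative mentioned above, a hedged sketch of the parsing side, assuming the relay forwards the same binary layout as the BLE characteristic (one int16 header word, then interleaved `(ax, ay, az)` int16 samples); the relay URL and function name are illustrative:

```typescript
// Parse one relayed IMU packet into (ax, ay, az) rows, mirroring the
// BLE notification handler above.
function parseImuPacket(buffer: ArrayBufferLike): [number, number, number][] {
  const samples = new Int16Array(buffer)
  const payload = samples.slice(1) // drop the header word
  const rows: [number, number, number][] = []
  for (let i = 0; i + 2 < payload.length; i += 3) {
    rows.push([payload[i], payload[i + 1], payload[i + 2]])
  }
  return rows
}

// Hooking it up in the browser (sketch; relay URL is an assumption):
//   const ws = new WebSocket('ws://localhost:5556')
//   ws.binaryType = 'arraybuffer'
//   ws.onmessage = (e) => dataBuffer.push(...parseImuPacket(e.data))
```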

### Step 3: Stream buffered data in windows

```typescript
let counter = 0

const streamLoop = setInterval(async () => {
  if (dataBuffer.length < windowSize) return

  const window = dataBuffer.splice(0, windowSize)

  const a1 = window.map(r => r[0])
  const a2 = window.map(r => r[1])
  const a3 = window.map(r => r[2])
  const a4 = window.map(([ax, ay, az]) =>
    Math.floor(Math.sqrt(ax * ax + ay * ay + az * az))
  )

  await apiPost('/lens/sessions/events/process', apiKey, {
    session_id: sessionId,
    event: {
      type: 'session.update',
      event_data: {
        type: 'data.json',
        event_data: {
          sensor_data: [a1, a2, a3, a4],
          sensor_metadata: {
            sensor_timestamp: Date.now() / 1000,
            sensor_id: `web_ble_sensor_${counter++}`,
          },
        },
      },
    },
  }, 10000)
}, 200)
```

### Step 4: Consume SSE embedding results

```typescript
import { fetchEventSource } from '@microsoft/fetch-event-source'

interface EmbeddingResult {
  windowIndex: number
  embedding: number[]
}

const embeddings: EmbeddingResult[] = []
const abortController = new AbortController()

fetchEventSource(`${API_ENDPOINT}/lens/sessions/consumer/${sessionId}`, {
  headers: { Authorization: `Bearer ${apiKey}` },
  signal: abortController.signal,
  onmessage(event) {
    const parsed = JSON.parse(event.data)

    if (parsed.type === 'inference.result') {
      const response = parsed.event_data.response
      const flat = Array.isArray(response[0]) ? response.flat() : response

      embeddings.push({
        windowIndex: embeddings.length,
        embedding: flat,
      })
      console.log(`Embedding ${embeddings.length}: ${flat.length}D`)
    }
  },
})
```

### Step 5: Cleanup

```typescript
clearInterval(streamLoop)
abortController.abort()
device.gatt?.disconnect()
await apiPost('/lens/sessions/destroy', apiKey, { session_id: sessionId })
```

### Web Lifecycle Summary

```
1. Register lens             -> POST /lens/register { lens_config: config }
2. Create session            -> POST /lens/sessions/create { lens_id }
3. Wait for ready            -> POST /lens/sessions/events/process (poll)
4. Connect sensor (BLE / WS) -> Web Bluetooth API or WebSocket
5. Buffer + stream windows   -> POST /lens/sessions/events/process (loop)
6. Consume SSE embeddings    -> GET /lens/sessions/consumer/{sessionId}
7. Disconnect + destroy      -> POST /lens/sessions/destroy { session_id }
```

---

## Key Implementation Notes

- Default `window_size` and `step_size`: **100**
- Embeddings are `(4, 768)` per window — flatten to `3072D` for downstream use
- No n-shot files needed — this lens outputs raw embeddings, not classifications
- Use UMAP or t-SNE to reduce to 2D/3D for visualization
- Combine with machine state lens to overlay class labels on embedding plots
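
The reduction step in the notes above can be sketched without extra dependencies: a PCA projection via `numpy` (already a requirement) has the same shape contract as UMAP/t-SNE, and `umap.UMAP(n_components=2).fit_transform(...)` is the drop-in upgrade once `umap-learn` is installed. The helper name is illustrative:

```python
import numpy as np

def reduce_embeddings(embeddings, n_components=2):
    """Project flattened 3072-D embeddings to 2-D/3-D for plotting (PCA via SVD)."""
    X = np.asarray(embeddings, dtype=np.float64)
    X = X - X.mean(axis=0)              # center each dimension
    # SVD of the centered matrix: rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:n_components].T      # project onto the top components
```

PCA is linear, so it preserves global structure at the cost of the local cluster separation UMAP/t-SNE are chosen for; treat it as a quick first look.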