@archetypeai/ds-cli 0.3.7 → 0.3.10

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (28)
  1. package/README.md +25 -67
  2. package/commands/create.js +5 -27
  3. package/commands/init.js +5 -27
  4. package/files/AGENTS.md +19 -3
  5. package/files/CLAUDE.md +21 -3
  6. package/files/rules/accessibility.md +49 -0
  7. package/files/rules/frontend-architecture.md +77 -0
  8. package/files/skills/apply-ds/SKILL.md +92 -80
  9. package/files/skills/apply-ds/scripts/audit.sh +169 -0
  10. package/files/skills/apply-ds/scripts/setup.sh +48 -166
  11. package/files/skills/create-dashboard/SKILL.md +12 -0
  12. package/files/skills/embedding-from-file/SKILL.md +415 -0
  13. package/files/skills/embedding-from-sensor/SKILL.md +406 -0
  14. package/files/skills/embedding-upload/SKILL.md +414 -0
  15. package/files/skills/fix-accessibility/SKILL.md +57 -9
  16. package/files/skills/newton-activity-monitor-lens-on-video/SKILL.md +817 -0
  17. package/files/skills/newton-camera-frame-analysis/SKILL.md +611 -0
  18. package/files/skills/newton-camera-frame-analysis/scripts/activity-monitor-frame.py +165 -0
  19. package/files/skills/newton-camera-frame-analysis/scripts/captures/logs/api_responses_20260206_105610.json +62 -0
  20. package/files/skills/newton-camera-frame-analysis/scripts/continuous_monitor.py +119 -0
  21. package/files/skills/newton-direct-query/SKILL.md +212 -0
  22. package/files/skills/newton-direct-query/scripts/direct_query.py +129 -0
  23. package/files/skills/newton-machine-state-from-file/SKILL.md +545 -0
  24. package/files/skills/newton-machine-state-from-sensor/SKILL.md +707 -0
  25. package/files/skills/newton-machine-state-upload/SKILL.md +986 -0
  26. package/lib/add-ds-ui-svelte.js +5 -2
  27. package/lib/scaffold-ds-svelte-project.js +25 -18
  28. package/package.json +13 -2
@@ -0,0 +1,414 @@
---
name: embedding-upload
description: Run an Embedding Lens by uploading a CSV file for server-side processing. Use when you want to upload a file and get embeddings without local streaming.
argument-hint: [csv-file-path]
---

# Embedding Lens — Upload File (Server-Side Processing)

Generate a script that uploads a CSV file to the Archetype AI platform and extracts embeddings server-side. The server reads the file directly — no local streaming loop required. Supports both Python and JavaScript/Web.

> **Frontend architecture:** When building a web UI for this skill, decompose into components (file upload, status display, results view) rather than a monolithic page. Extract API logic into `$lib/api/`. See `@rules/frontend-architecture` for conventions and `@skills/create-dashboard` / `@skills/build-pattern` for layout and component patterns.

---

## Python Implementation

### Requirements

- `archetypeai` Python package
- Environment variables: `ATAI_API_KEY`, optionally `ATAI_API_ENDPOINT`

### Architecture

Uses `create_and_run_lens` with a YAML config. After the session is created, upload the data CSV and configure a `csv_file_reader` input stream for server-side reading.

#### 1. API Client Setup

```python
import os

from archetypeai.api_client import ArchetypeAI

api_key = os.getenv("ATAI_API_KEY")
api_endpoint = os.getenv("ATAI_API_ENDPOINT", ArchetypeAI.get_default_endpoint())
client = ArchetypeAI(api_key, api_endpoint=api_endpoint)
```

#### 2. Lens YAML Configuration

```yaml
lens_name: Embedding Lens
lens_config:
  model_pipeline:
    - processor_name: lens_timeseries_embedding_processor
      processor_config: {}
  model_parameters:
    model_name: OmegaEncoder
    model_version: OmegaEncoder::omega_embeddings_01
    normalize_input: true
    buffer_size: {window_size}
    csv_configs:
      timestamp_column: timestamp
      data_columns: ['a1', 'a2', 'a3', 'a4']
      window_size: {window_size}
      step_size: {step_size}
  output_streams:
    - stream_type: server_sent_events_writer
```

#### 3. Event Builders

```python
def build_input_event(file_id, window_size, step_size):
    return {
        "type": "input_stream.set",
        "event_data": {
            "stream_type": "csv_file_reader",
            "stream_config": {
                "file_id": file_id,
                "window_size": window_size,
                "step_size": step_size,
                "loop_recording": False,
                "output_format": ""
            }
        }
    }


def build_output_event():
    return {
        "type": "output_stream.set",
        "event_data": {
            "stream_type": "server_side_events_writer",
            "stream_config": {}
        }
    }
```

#### 4. Session Callback

```python
def session_callback(session_id, session_endpoint, client, args):
    print(f"Session created: {session_id}")

    # Upload the data CSV
    data_resp = client.files.local.upload(args["data_file_path"])
    data_file_id = data_resp["file_id"]

    # Tell the server to read the uploaded CSV
    client.lens.sessions.process_event(
        session_id,
        build_input_event(data_file_id, args["window_size"], args["step_size"])
    )
    client.lens.sessions.process_event(
        session_id,
        build_output_event()
    )

    # Collect embeddings via SSE
    sse_reader = client.lens.sessions.create_sse_consumer(
        session_id, max_read_time_sec=args["max_run_time_sec"]
    )

    embeddings = []
    stop_flag = False  # flip to True (e.g. from a signal handler) to stop early
    try:
        for event in sse_reader.read(block=True):
            if stop_flag:
                break
            if isinstance(event, dict) and event.get("type") == "inference.result":
                ed = event.get("event_data", {})
                embedding = ed.get("response")
                meta = ed.get("query_metadata", {})

                # Flatten 4×768 → 3072D
                if isinstance(embedding, list) and len(embedding) > 0:
                    if isinstance(embedding[0], list):
                        flat = [val for row in embedding for val in row]
                    else:
                        flat = embedding

                    embeddings.append({
                        "window_index": len(embeddings),
                        "query_timestamp": meta.get("query_timestamp", "N/A"),
                        "read_index": meta.get("query_metadata", {}).get("read_index", "N/A"),
                        "embedding": flat,
                    })
                    print(f"[{len(embeddings)}] Embedding: {len(flat)}D")
    finally:
        sse_reader.close()
        print(f"Collected {len(embeddings)} embeddings. Stopped.")
```
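
To honor the optional `--output-file` flag, the collected list can be written out as CSV. A minimal sketch — the column layout (`window_index`, `query_timestamp`, `read_index`, then `e0..eN`) is my choice, not a documented format:

```python
import csv


def save_embeddings_csv(embeddings, path):
    """Write embeddings (the dicts collected by the session callback)
    to a CSV file: one row per window, metadata columns first."""
    if not embeddings:
        return
    dim = len(embeddings[0]["embedding"])
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(
            ["window_index", "query_timestamp", "read_index"]
            + [f"e{i}" for i in range(dim)]
        )
        for item in embeddings:
            writer.writerow(
                [item["window_index"], item["query_timestamp"], item["read_index"]]
                + list(item["embedding"])
            )
```

Call it at the end of the callback when `args.get("output_file")` is set.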

#### 5. Create and Run Lens

```python
client.lens.create_and_run_lens(
    yaml_config, session_callback,
    client=client, args=args
)
```

### CLI Arguments

```
--api-key            API key (falls back to the ATAI_API_KEY env var)
--api-endpoint       API endpoint (default from SDK)
--data-file          Path to CSV file to analyze (required)
--window-size        Window size in samples (default: 100)
--step-size          Step size in samples (default: 100)
--max-run-time-sec   Max runtime in seconds (default: 600)
--output-file        Path to save embeddings CSV (optional)
```
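
The flag table above can be wired up with a minimal `argparse` sketch. The mapping from flags to `args` dict keys (e.g. `--data-file` → `data_file_path`) is an assumption chosen to match what the session callback reads:

```python
import argparse
import os


def parse_args(argv=None):
    parser = argparse.ArgumentParser(
        description="Upload a CSV and extract embeddings server-side"
    )
    parser.add_argument("--api-key", default=os.getenv("ATAI_API_KEY"))
    parser.add_argument("--api-endpoint", default=None)
    parser.add_argument("--data-file", required=True)
    parser.add_argument("--window-size", type=int, default=100)
    parser.add_argument("--step-size", type=int, default=100)
    parser.add_argument("--max-run-time-sec", type=int, default=600)
    parser.add_argument("--output-file", default=None)
    ns = parser.parse_args(argv)
    # Keys match what session_callback expects in `args`
    return {
        "data_file_path": ns.data_file,
        "window_size": ns.window_size,
        "step_size": ns.step_size,
        "max_run_time_sec": ns.max_run_time_sec,
        "output_file": ns.output_file,
    }
```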

---

## Web / JavaScript Implementation

Uses direct `fetch` calls to the Archetype AI REST API. This is the simplest embedding approach on the web — just upload the file and collect results.

### API Reference

| Operation | Method | Endpoint | Body |
|-----------|--------|----------|------|
| Upload file | POST | `/files` | `FormData` |
| Register lens | POST | `/lens/register` | `{ lens_config: config }` |
| Create session | POST | `/lens/sessions/create` | `{ lens_id }` |
| Process event | POST | `/lens/sessions/events/process` | `{ session_id, event }` |
| Delete lens | POST | `/lens/delete` | `{ lens_id }` |
| Destroy session | POST | `/lens/sessions/destroy` | `{ session_id }` |
| SSE consumer | GET | `/lens/sessions/consumer/{sessionId}` | — |

### Helper: API fetch wrapper

```typescript
const API_ENDPOINT = 'https://api.u1.archetypeai.app/v0.5'

async function apiPost<T>(path: string, apiKey: string, body: unknown, timeoutMs = 5000): Promise<T> {
  const controller = new AbortController()
  const timeoutId = setTimeout(() => controller.abort(), timeoutMs)

  try {
    const response = await fetch(`${API_ENDPOINT}${path}`, {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(body),
      signal: controller.signal,
    })

    if (!response.ok) {
      const errorBody = await response.json().catch(() => ({}))
      throw new Error(`API POST ${path} failed: ${response.status} - ${JSON.stringify(errorBody)}`)
    }

    return response.json()
  } finally {
    clearTimeout(timeoutId)
  }
}
```

### Step 1: Upload the data CSV

```typescript
const dataFormData = new FormData()
dataFormData.append('file', dataFile) // File from <input type="file">

const dataResponse = await fetch(`${API_ENDPOINT}/files`, {
  method: 'POST',
  headers: { Authorization: `Bearer ${apiKey}` },
  body: dataFormData,
})
const dataUpload = await dataResponse.json()
const dataFileId = dataUpload.file_id
```

### Step 2: Register embedding lens and create session

```typescript
const windowSize = 100
const stepSize = 100

const lensConfig = {
  lens_name: 'embedding_lens',
  lens_config: {
    model_pipeline: [
      { processor_name: 'lens_timeseries_embedding_processor', processor_config: {} },
    ],
    model_parameters: {
      model_name: 'OmegaEncoder',
      model_version: 'OmegaEncoder::omega_embeddings_01',
      normalize_input: true,
      buffer_size: windowSize,
      csv_configs: {
        timestamp_column: 'timestamp',
        data_columns: ['a1', 'a2', 'a3', 'a4'],
        window_size: windowSize,
        step_size: stepSize,
      },
    },
    output_streams: [
      { stream_type: 'server_sent_events_writer' },
    ],
  },
}

const registeredLens = await apiPost<{ lens_id: string }>(
  '/lens/register', apiKey, { lens_config: lensConfig }
)
const lensId = registeredLens.lens_id

const session = await apiPost<{ session_id: string }>(
  '/lens/sessions/create', apiKey, { lens_id: lensId }
)
const sessionId = session.session_id

// Delete the lens registration now that the session has been created
await apiPost('/lens/delete', apiKey, { lens_id: lensId })

// Wait for session ready (same waitForSessionReady pattern)
async function waitForSessionReady(sessionId: string, maxWaitMs = 30000): Promise<boolean> {
  const start = Date.now()
  while (Date.now() - start < maxWaitMs) {
    const status = await apiPost<{ session_status: string }>(
      '/lens/sessions/events/process', apiKey,
      { session_id: sessionId, event: { type: 'session.status' } },
      10000
    )
    if (status.session_status === 'LensSessionStatus.SESSION_STATUS_RUNNING' ||
        status.session_status === '3') return true
    if (status.session_status === 'LensSessionStatus.SESSION_STATUS_FAILED' ||
        status.session_status === '6') return false
    await new Promise(r => setTimeout(r, 500))
  }
  return false
}

await waitForSessionReady(sessionId)
```

### Step 3: Tell the server to read the uploaded CSV

```typescript
// Set input stream to CSV file reader
await apiPost('/lens/sessions/events/process', apiKey, {
  session_id: sessionId,
  event: {
    type: 'input_stream.set',
    event_data: {
      stream_type: 'csv_file_reader',
      stream_config: {
        file_id: dataFileId,
        window_size: windowSize,
        step_size: stepSize,
        loop_recording: false,
        output_format: '',
      },
    },
  },
}, 10000)

// Enable SSE output
await apiPost('/lens/sessions/events/process', apiKey, {
  session_id: sessionId,
  event: {
    type: 'output_stream.set',
    event_data: {
      stream_type: 'server_side_events_writer',
      stream_config: {},
    },
  },
}, 10000)
```

### Step 4: Consume SSE embedding results

```typescript
import { fetchEventSource } from '@microsoft/fetch-event-source'

interface EmbeddingResult {
  windowIndex: number
  queryTimestamp: string
  readIndex: number | string
  embedding: number[] // 3072D flattened
}

const embeddings: EmbeddingResult[] = []
const abortController = new AbortController()

fetchEventSource(`${API_ENDPOINT}/lens/sessions/consumer/${sessionId}`, {
  headers: { Authorization: `Bearer ${apiKey}` },
  signal: abortController.signal,
  onmessage(event) {
    const parsed = JSON.parse(event.data)

    if (parsed.type === 'inference.result') {
      const response = parsed.event_data.response
      const meta = parsed.event_data.query_metadata
      const queryMeta = meta?.query_metadata ?? {}

      // Flatten (4, 768) → 3072D when the response is nested
      const flat = Array.isArray(response[0]) ? response.flat() : response

      embeddings.push({
        windowIndex: embeddings.length,
        queryTimestamp: meta?.query_timestamp ?? 'N/A',
        readIndex: queryMeta.read_index ?? 'N/A',
        embedding: flat,
      })
      console.log(`[${embeddings.length}] Embedding: ${flat.length}D`)
    }

    if (parsed.type === 'sse.stream.end') {
      console.log(`Complete. ${embeddings.length} embeddings collected.`)
      abortController.abort()
    }
  },
})
```

### Step 5: Cleanup

```typescript
abortController.abort()
await apiPost('/lens/sessions/destroy', apiKey, { session_id: sessionId })
```

### Web Lifecycle Summary

```
1. Upload data CSV      -> POST /files (FormData)
2. Register lens        -> POST /lens/register { lens_config: config }
3. Create session       -> POST /lens/sessions/create { lens_id }
4. Wait for ready       -> POST /lens/sessions/events/process (poll)
5. Set input stream     -> POST /lens/sessions/events/process { session_id, event: input_stream.set }
6. Set output stream    -> POST /lens/sessions/events/process { session_id, event: output_stream.set }
7. Consume SSE results  -> GET /lens/sessions/consumer/{sessionId}
8. Destroy session      -> POST /lens/sessions/destroy { session_id }
```

---

## Embedding Response Structure

The `inference.result` response contains:

- `response`: nested list `(4, 768)` — one 768D embedding per input channel
- Flatten to `3072D` by concatenating: `[a1_768D, a2_768D, a3_768D, a4_768D]`
- `query_metadata.query_timestamp`: timestamp
- `query_metadata.query_metadata.read_index`: window position in file
- `query_metadata.query_metadata.file_id`: the file being analyzed

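The flattening step described above is pure list concatenation. A minimal sketch, using a dummy `(4, 768)` nested list in place of a real `response`:

```python
# Dummy response standing in for event_data["response"]: 4 channels × 768 dims
response = [[float(i) for i in range(768)] for _ in range(4)]

# Flatten (4, 768) → 3072D by concatenating the channel embeddings in order
flat = [val for row in response for val in row]

assert len(flat) == 4 * 768 == 3072
# flat[0:768] is channel a1's embedding, flat[768:1536] is a2's, and so on
```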
## Key Differences from Streaming Approaches

| | Upload (this skill) | Stream from File | Stream from Sensor |
|---|---|---|---|
| Data reading | Server-side `csv_file_reader` | Local pandas/JS + windowed push | Local sensor + buffered push |
| Local processing | None (just upload) | Window slicing | Sensor acquisition + buffering |
| Best for | Batch embedding extraction | Controlled local streaming | Real-time from hardware |

## Key Implementation Notes

- Default `window_size` and `step_size`: **100** (equal values give non-overlapping windows; set `step_size` below `window_size` for overlap)
- No n-shot files or KNN config — this is pure embedding extraction
- Embeddings are `(4, 768)` per window — flatten to `3072D` for downstream use
- Use UMAP/t-SNE for 2D/3D visualization
- Combine with machine state lens results for labeled embedding plots
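
As a sanity check on the windowing defaults, the number of embeddings to expect for an N-row CSV follows standard sliding-window arithmetic. A sketch — the exact server-side handling of a partial tail window is an assumption (dropped here):

```python
def expected_window_count(num_rows: int, window_size: int = 100, step_size: int = 100) -> int:
    """Number of full windows a window of `window_size` samples, advancing by
    `step_size`, produces over `num_rows` rows (partial tail windows dropped)."""
    if num_rows < window_size:
        return 0
    return (num_rows - window_size) // step_size + 1

# With the defaults (100/100), a 1000-row CSV yields 10 non-overlapping windows
```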
@@ -11,15 +11,18 @@ Audit and fix accessibility issues in projects built with the design system.
 
 Walk through the project in this order:
 
-1. **Icon-only buttons** — search for `<Button size="icon"` and similar patterns, verify each has `aria-label`
-2. **Decorative icons** — icons next to text labels should have `aria-hidden="true"`
-3. **Form inputs** — verify `aria-invalid` support for error states
-4. **Focus rings** — confirm all interactive elements have `focus-visible:ring-*` styles
-5. **Disabled states** — check `disabled:pointer-events-none disabled:opacity-50`
-6. **Lists and groups** — verify `role="list"`, `role="listitem"`, `role="group"` where appropriate
-7. **Screen reader text** — add `sr-only` spans where visual context is missing
-8. **Keyboard navigation** — tab through the entire UI, verify all controls are reachable
-9. **Dialog focus traps** — open dialogs, confirm focus is trapped and Escape closes them
+1. **Skip link** — verify the page has a skip-to-content link as the first focusable element. If missing, add `<a href="#main-content" class="sr-only focus:not-sr-only ...">Skip to content</a>` targeting `<main id="main-content">`.
+2. **Landmarks** — verify the page uses semantic HTML landmarks: `<main>`, `<nav>`, `<header>`. Replace generic `<div>` wrappers with the correct landmark element.
+3. **Heading hierarchy** — verify there is an `<h1>` on every page and headings don't skip levels (e.g. `h1` → `h3`). Add `sr-only` headings where the visual design omits them.
+4. **Icon-only buttons** — search for `<Button size="icon"` and similar patterns, verify each has `aria-label`
+5. **Decorative icons** — icons next to text labels should have `aria-hidden="true"`
+6. **Form inputs** — verify `aria-invalid` support for error states
+7. **Focus rings** — confirm all interactive elements have `focus-visible:ring-*` styles
+8. **Disabled states** — check `disabled:pointer-events-none disabled:opacity-50`
+9. **Lists and groups** — verify `role="list"`, `role="listitem"`, `role="group"` where appropriate
+10. **Screen reader text** — add `sr-only` spans where visual context is missing
+11. **Keyboard navigation** — tab through the entire UI, verify all controls are reachable
+12. **Dialog focus traps** — open dialogs, confirm focus is trapped and Escape closes them
 
 ## Common Issues and Fixes
 
@@ -135,6 +138,47 @@ Add visually hidden text where icons or visual cues carry meaning:
 </button>
 ```
 
+## Page Structure
+
+### Skip Link
+
+Every page should have a skip link as the first focusable element:
+
+```svelte
+<a
+  href="#main-content"
+  class="sr-only focus:not-sr-only focus:fixed focus:top-4 focus:left-4 focus:z-50 focus:rounded-md focus:bg-background focus:px-4 focus:py-2 focus:text-foreground focus:ring-2 focus:ring-ring"
+>
+  Skip to content
+</a>
+
+<main id="main-content">
+  <!-- page content -->
+</main>
+```
+
+### Semantic Landmarks
+
+```svelte
+<!-- Before -->
+<div class="header">...</div>
+<div class="content">...</div>
+
+<!-- After -->
+<header>...</header>
+<main id="main-content">...</main>
+```
+
+### Heading Hierarchy
+
+Every page needs an `<h1>`. If the visual design doesn't include one, add it as screen-reader-only:
+
+```svelte
+<h1 class="sr-only">Dashboard</h1>
+```
+
+Never skip heading levels (e.g. `<h1>` → `<h3>`). Use the correct level for the document outline.
+
 ## Keyboard Navigation for Custom Elements
 
 When building custom interactive elements (not using bits-ui primitives), ensure keyboard support:
@@ -168,6 +212,10 @@ If focus trapping is broken, check that the bits-ui primitive is used correctly
 
 After fixing, walk through the project and confirm:
 
+- [ ] Page has a skip-to-content link as the first focusable element
+- [ ] Page uses `<main>` landmark with `id="main-content"`
+- [ ] Page has an `<h1>` (visible or `sr-only`)
+- [ ] Heading hierarchy doesn't skip levels
 - [ ] All icon-only buttons have `aria-label`
 - [ ] All decorative icons have `aria-hidden="true"`
 - [ ] Form inputs support `aria-invalid` styling