systemlink-cli 1.3.1__py3-none-any.whl
- slcli/__init__.py +1 -0
- slcli/__main__.py +23 -0
- slcli/_version.py +4 -0
- slcli/asset_click.py +1289 -0
- slcli/cli_formatters.py +218 -0
- slcli/cli_utils.py +504 -0
- slcli/comment_click.py +602 -0
- slcli/completion_click.py +418 -0
- slcli/config.py +81 -0
- slcli/config_click.py +498 -0
- slcli/dff_click.py +979 -0
- slcli/dff_decorators.py +24 -0
- slcli/example_click.py +404 -0
- slcli/example_loader.py +274 -0
- slcli/example_provisioner.py +2777 -0
- slcli/examples/README.md +134 -0
- slcli/examples/_schema/schema-v1.0.json +169 -0
- slcli/examples/demo-complete-workflow/README.md +323 -0
- slcli/examples/demo-complete-workflow/config.yaml +638 -0
- slcli/examples/demo-test-plans/README.md +132 -0
- slcli/examples/demo-test-plans/config.yaml +154 -0
- slcli/examples/exercise-5-1-parametric-insights/README.md +101 -0
- slcli/examples/exercise-5-1-parametric-insights/config.yaml +1589 -0
- slcli/examples/exercise-7-1-test-plans/README.md +93 -0
- slcli/examples/exercise-7-1-test-plans/config.yaml +323 -0
- slcli/examples/spec-compliance-notebooks/README.md +140 -0
- slcli/examples/spec-compliance-notebooks/config.yaml +112 -0
- slcli/examples/spec-compliance-notebooks/notebooks/SpecAnalysis_ComplianceCalculation.ipynb +1553 -0
- slcli/examples/spec-compliance-notebooks/notebooks/SpecComplianceCalculation.ipynb +1577 -0
- slcli/examples/spec-compliance-notebooks/notebooks/SpecfileExtractionAndIngestion.ipynb +912 -0
- slcli/examples/spec-compliance-notebooks/spec_template.xlsx +0 -0
- slcli/feed_click.py +892 -0
- slcli/file_click.py +932 -0
- slcli/function_click.py +1400 -0
- slcli/function_templates.py +85 -0
- slcli/main.py +406 -0
- slcli/mcp_click.py +269 -0
- slcli/mcp_server.py +748 -0
- slcli/notebook_click.py +1770 -0
- slcli/platform.py +345 -0
- slcli/policy_click.py +679 -0
- slcli/policy_utils.py +411 -0
- slcli/profiles.py +411 -0
- slcli/response_handlers.py +359 -0
- slcli/routine_click.py +763 -0
- slcli/skill_click.py +253 -0
- slcli/skills/slcli/SKILL.md +713 -0
- slcli/skills/slcli/references/analysis-recipes.md +474 -0
- slcli/skills/slcli/references/filtering.md +236 -0
- slcli/skills/systemlink-webapp/SKILL.md +744 -0
- slcli/skills/systemlink-webapp/references/deployment.md +123 -0
- slcli/skills/systemlink-webapp/references/nimble-angular.md +380 -0
- slcli/skills/systemlink-webapp/references/systemlink-services.md +192 -0
- slcli/ssl_trust.py +93 -0
- slcli/system_click.py +2216 -0
- slcli/table_utils.py +124 -0
- slcli/tag_click.py +794 -0
- slcli/templates_click.py +599 -0
- slcli/testmonitor_click.py +1667 -0
- slcli/universal_handlers.py +305 -0
- slcli/user_click.py +1218 -0
- slcli/utils.py +832 -0
- slcli/web_editor.py +295 -0
- slcli/webapp_click.py +981 -0
- slcli/workflow_preview.py +287 -0
- slcli/workflows_click.py +988 -0
- slcli/workitem_click.py +2258 -0
- slcli/workspace_click.py +576 -0
- slcli/workspace_utils.py +206 -0
- systemlink_cli-1.3.1.dist-info/METADATA +20 -0
- systemlink_cli-1.3.1.dist-info/RECORD +74 -0
- systemlink_cli-1.3.1.dist-info/WHEEL +4 -0
- systemlink_cli-1.3.1.dist-info/entry_points.txt +7 -0
- systemlink_cli-1.3.1.dist-info/licenses/LICENSE +21 -0

+++ slcli/skills/slcli/references/analysis-recipes.md
@@ -0,0 +1,474 @@
# Analysis Recipes

Step-by-step recipes for answering common data analysis questions with `slcli`.
Each recipe maps to a real-world scenario and shows the exact commands needed.

---

## Recipe 1: Operator failure rate analysis

**Question:** Which operators have the highest failure rates and what should they
focus on?

**Steps:**

```bash
# Step 1: Get summary by operator to see total counts per operator
slcli testmonitor result list --summary --group-by operator -f json

# Step 2: Get failed counts per operator
slcli testmonitor result list --status FAILED --summary --group-by operator -f json

# Step 3: Deep-dive into a specific operator's failures
slcli testmonitor result list --operator "xli" --status FAILED -f json --take 500

# Step 4: See which programs the operator fails on most
slcli testmonitor result list --operator "xli" --status FAILED --summary --group-by programName -f json
```

**Post-processing with jq:**

```bash
# Pass/fail breakdown for a single operator in one call
slcli testmonitor result list --operator "xli" -f json --take 500 | \
  jq '[group_by(.status.statusType) | .[] | {status: .[0].status.statusType, count: length}]'
```
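
The step 1 and step 2 summaries give a total and a failed count per operator; turning those into a rate needs only shell arithmetic. A minimal sketch with hypothetical counts (shell arithmetic is integer-only, so multiply before dividing):

```shell
# Hypothetical counts read from the step 1 and step 2 summaries
total=250
failed=37

# Multiply first: integer division would otherwise truncate to 0
rate=$(( 100 * failed / total ))
echo "failure rate: ${rate}%"
```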

---

## Recipe 2: Calibration overdue tracking

**Question:** Which assets are most overdue for calibration and what test
capacity is at risk?

**Steps:**

```bash
# Step 1: List all assets past calibration due date
slcli asset list --calibration-status PAST_RECOMMENDED_DUE_DATE -f json --take 100

# Step 2: Get the fleet-wide calibration summary
slcli asset summary -f json

# Step 3: Check assets approaching due date (proactive)
slcli asset list --calibration-status APPROACHING_RECOMMENDED_DUE_DATE -f json

# Step 4: Get calibration history for a specific overdue asset
slcli asset calibration <ASSET_ID> -f json

# Step 5: Check which system the asset belongs to
slcli asset location-history <ASSET_ID> -f json
```

**Post-processing with jq:**

```bash
# Group overdue assets by model
slcli asset list --calibration-status PAST_RECOMMENDED_DUE_DATE -f json --take 200 | \
  jq '[group_by(.modelName) | .[] | {model: .[0].modelName, count: length}] | sort_by(-.count)'

# Find overdue assets in connected systems (blocking test execution)
slcli asset list --calibration-status PAST_RECOMMENDED_DUE_DATE --connected -f json --take 200 | \
  jq 'length'
```

---

## Recipe 3: Test time distribution and bottleneck identification

**Question:** What is the test time distribution across stations and where are
bottlenecks?

**Steps:**

```bash
# Step 1: Get results sorted by execution time, slowest first
slcli testmonitor result list -f json --take 500 --order-by TOTAL_TIME_IN_SECONDS --descending

# Step 2: Summarize by host to find busiest stations
slcli testmonitor result list --summary --group-by hostName -f json

# Step 3: Drill into the slowest host
slcli testmonitor result list --host-name "station-01" -f json --take 200 \
  --order-by TOTAL_TIME_IN_SECONDS --descending
```

**Post-processing with jq:**

```bash
# Calculate average test duration per host
slcli testmonitor result list -f json --take 1000 | \
  jq '[group_by(.hostName) | .[] | {
    host: .[0].hostName,
    count: length,
    avg_seconds: ([.[].totalTimeInSeconds] | add / length),
    max_seconds: ([.[].totalTimeInSeconds] | max)
  }] | sort_by(-.avg_seconds)'

# Find outlier tests (> 2x average duration)
slcli testmonitor result list -f json --take 1000 | \
  jq '(([.[].totalTimeInSeconds] | add / length) * 2) as $threshold |
    [.[] | select(.totalTimeInSeconds > $threshold)] | length'
```
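
The outlier-threshold logic is easy to sanity-check offline. A sketch with made-up durations, using awk to mirror the jq query (compute the mean, double it, count exceedances):

```shell
# Count durations above 2x the mean (sample values are hypothetical)
outliers=$(printf '%s\n' 10 12 11 40 | awk '
  { d[NR] = $1; sum += $1 }
  END {
    threshold = 2 * sum / NR   # mean 18.25 -> threshold 36.5
    for (i = 1; i <= NR; i++) if (d[i] > threshold) n++
    print n + 0
  }')
echo "$outliers outlier(s)"
```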

---

## Recipe 4: System capability discovery

**Question:** Which connected systems lack specific measurement capabilities?

**Steps:**

```bash
# Step 1: List all connected systems
slcli system list --state CONNECTED -f json --take 200

# Step 2: Get assets for a specific system (new: single command)
slcli system get <SYSTEM_ID> --include-assets -f json | jq '._assets.items'

# Step 3: Generate a hardware report
slcli system report --type HARDWARE -o hardware_report.csv

# Step 4: Check systems with specific packages
slcli system list --has-package "DAQmx" --state CONNECTED -f json
```

**Post-processing with jq:**

```bash
# List connected systems and their OS
slcli system list --state CONNECTED -f json --take 500 | \
  jq '[.[] | {alias, id, os: .grains.kernel}]'
```

---

## Recipe 4b: Full system health snapshot

**Question:** What is the current health of a specific system — assets, alarms, recent jobs, test results, scheduled work?

**Answer:** Use `--include-all` (or selective `--include-*` flags) to fetch all related
resources in a single parallel call, mirroring the web UI system detail page.

```bash
# Full snapshot in table format
slcli system get <SYSTEM_ID> --include-all

# Limit each section to 5 rows / extend work-item window to 60 days
slcli system get <SYSTEM_ID> --include-all -t 5 --workitem-days 60

# Machine-readable JSON — all sections embedded
slcli system get <SYSTEM_ID> --include-all -f json

# Extract just the active alarms
slcli system get <SYSTEM_ID> --include-alarms -f json | jq '._alarms.items'

# Show systems that have active alarms
slcli system list --state CONNECTED -f json --take 200 | \
  jq -r '.[].id' | \
  xargs -I{} sh -c \
    'slcli system get {} --include-alarms -f json | jq -e "._alarms.totalCount > 0" > /dev/null && echo {}'
```
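
One caveat on the last pipeline: `xargs -I{} sh -c '... {} ...'` splices each ID into the command string, so an ID containing quotes or spaces would be re-parsed as shell code. A safer variant hands the ID over as a positional parameter; sketched below with `echo` standing in for the `slcli` call:

```shell
# Each ID arrives as "$1" in the child shell and is never re-parsed
first=$(printf '%s\n' sys-1 sys-2 | \
  xargs -I{} sh -c 'echo "checking $1"' _ {} | head -n 1)
echo "$first"
```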

**JSON output shape** — each included section is nested under `_<section>`:

```json
{
  "id": "...",
  "_assets": { "totalCount": 3, "items": [...], "error": null },
  "_alarms": { "totalCount": 1, "items": [...], "error": null },
  "_jobs": { "totalCount": 5, "items": [...], "error": null },
  "_results": { "totalCount": 12, "items": [...], "error": null },
  "_states": { "totalCount": 2, "items": [...], "error": null },
  "_workitems": { "totalCount": 0, "items": [], "error": null }
}
```

> If a service is unavailable, `"error"` contains the error message and the
> other sections still render normally.

---

## Recipe 5: Product yield analysis

**Question:** How many units per product variant are past their first-pass yield
expectations?

**Steps:**

```bash
# Step 1: List products with battery part numbers
slcli testmonitor product list --part-number "BATT" -f json

# Step 2: Get summary for a specific product
slcli testmonitor product list --part-number "BATT-8" --summary -f json

# Step 3: Get pass/fail breakdown for a product
slcli testmonitor result list --part-number "BATT-8" --summary --group-by status -f json

# Step 4: Compare across all battery products
for i in $(seq 0 14); do
  echo "BATT-$i:"
  slcli testmonitor result list --part-number "BATT-$i" --summary --group-by status -f json
done
```

**Post-processing with jq:**

```bash
# Calculate yield rate for a part number
slcli testmonitor result list --part-number "BATT-8" -f json --take 1000 | \
  jq '{
    total: length,
    passed: [.[] | select(.status.statusType == "PASSED")] | length,
    failed: [.[] | select(.status.statusType == "FAILED")] | length
  } | . + {yield_pct: (100 * .passed / .total | round)}'
```
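
When you already have the counts from a `--summary` call, the yield percentage also falls out of plain shell arithmetic. A sketch with hypothetical counts; adding half the divisor before dividing rounds to nearest instead of truncating:

```shell
# Hypothetical pass/fail counts for one part number
passed=187
failed=13
total=$(( passed + failed ))

# Integer percentage, rounded half up (187/200 = 93.5% -> 94%)
yield=$(( (100 * passed + total / 2) / total ))
echo "yield: ${yield}%"
```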

---

## Recipe 6: Test program runtime analysis

**Question:** Which test programs have the longest runtimes?

**Steps:**

```bash
# Step 1: Summarize by program name
slcli testmonitor result list --summary --group-by programName -f json

# Step 2: Get detailed results for the longest program
slcli testmonitor result list --program-name "SOH_SCT_HPPC" -f json --take 200 \
  --order-by TOTAL_TIME_IN_SECONDS --descending

# Step 3: Compare program variants
slcli testmonitor result list --program-name "SOH_SCT_HPPC_0" -f json --take 100
slcli testmonitor result list --program-name "SOH_SCT_HPPC_5" -f json --take 100
```

**Post-processing with jq:**

```bash
# Top 5 programs by average runtime
slcli testmonitor result list -f json --take 2000 | \
  jq '[group_by(.programName) | .[] | {
    program: .[0].programName,
    count: length,
    avg_seconds: ([.[].totalTimeInSeconds] | add / length | . * 100 | round / 100),
    total_hours: ([.[].totalTimeInSeconds] | add / 3600 | . * 100 | round / 100)
  }] | sort_by(-.avg_seconds) | .[:5]'
```

---

## Recipe 7: Location-based failure analysis

**Question:** Are there facility-specific issues causing higher failure rates?

**Steps:**

```bash
# Step 1: List all systems to understand location naming conventions
slcli system list -f json --take 200 | jq '[.[].alias] | unique'

# Step 2: Get failure counts per system
slcli testmonitor result list --status FAILED --summary --group-by systemId -f json

# Step 3: Get total counts per system for rate calculation
slcli testmonitor result list --summary --group-by systemId -f json

# Step 4: Drill into a system with a high failure rate
slcli testmonitor result list --system-id "<SYSTEM_ID>" --status FAILED -f json --take 100
```

---

## Recipe 8: Fixture calibration and capacity planning

**Question:** Which PAtools fixtures need recalibration soon and what capacity
would we lose?

**Steps:**

```bash
# Step 1: List all PAtools fixtures
slcli asset list --filter 'VendorName.Contains("PAtools")' --asset-type FIXTURE -f json

# Step 2: Filter for those approaching calibration due date
slcli asset list --filter 'VendorName.Contains("PAtools")' --asset-type FIXTURE \
  --calibration-status APPROACHING_RECOMMENDED_DUE_DATE -f json

# Step 3: Get overall fixture summary
slcli asset summary -f json
```

**Post-processing with jq:**

```bash
# Count fixtures by calibration status
slcli asset list --filter 'VendorName.Contains("PAtools")' --asset-type FIXTURE -f json --take 100 | \
  jq '[group_by(.calibrationStatus) | .[] | {status: .[0].calibrationStatus, count: length}]'
```

---

## Recipe 9: Product family workload distribution

**Question:** How is test execution distributed across product families?

**Steps:**

```bash
# Step 1: List all products to see families
slcli testmonitor product list -f json --take 100

# Step 2: Summarize by product family
slcli testmonitor product list --summary -f json

# Step 3: Get result volume per program
slcli testmonitor result list --summary --group-by programName -f json
```

**Post-processing with jq:**

```bash
# Group products by family
slcli testmonitor product list -f json --take 100 | \
  jq '[group_by(.family) | .[] | {family: .[0].family, products: length}] | sort_by(-.products)'

# Get execution volume grouped by part-number prefix
slcli testmonitor result list -f json --take 2000 | \
  jq '[group_by(.partNumber | split("-")[0]) | .[] | {
    family: .[0].partNumber | split("-")[0],
    executions: length,
    total_hours: ([.[].totalTimeInSeconds] | add / 3600 | . * 10 | round / 10)
  }] | sort_by(-.executions)'
```
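
The `split("-")[0]` prefix trick has a one-line shell analogue, useful when a part number arrives in a variable rather than a JSON stream (`${var%%-*}` deletes from the first `-` to the end):

```shell
pn="BATT-8"          # hypothetical part number
family=${pn%%-*}     # everything before the first "-"
echo "$family"
```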

---

## Recipe 10: Environmental condition failure patterns

**Question:** Are failures concentrated in specific operating condition ranges?

**Steps:**

```bash
# Step 1: Get failed results with properties
slcli testmonitor result list --status FAILED -f json --take 500

# Step 2: Get detailed result with step data for environmental readings
slcli testmonitor result get <RESULT_ID> --include-steps -f json

# Step 3: Compare passed vs. failed property distributions
slcli testmonitor result list --status PASSED -f json --take 500
slcli testmonitor result list --status FAILED -f json --take 500
```

**Post-processing with jq:**

```bash
# Extract temperature and voltage from failed results
slcli testmonitor result list --status FAILED -f json --take 500 | \
  jq '[.[] | select(.properties != null) | {
    partNumber,
    programName,
    properties: (.properties | to_entries | map(select(.key | test("Temp|Volt|Capacity"; "i"))))
  }] | [.[] | select(.properties | length > 0)]'

# Duration statistics for failed runs of one part number
slcli testmonitor result list --status FAILED --part-number "BATT-8" -f json --take 200 | \
  jq '[.[].totalTimeInSeconds] | {min: min, max: max, avg: (add / length | round)}'
```

---

## Recipe 11: Create and schedule a work item on a specific fixture/slot and system

**Question:** "Create a work item for final validation testing, and schedule it on the
slot(s) belonging to Chamber B for all day tomorrow."

**Concept:** In SystemLink, a _slot_ is an asset with `assetType = FIXTURE`. "Chamber B"
is most likely a registered system (found via `slcli system list`) whose associated
fixture assets (the test slots inside it) are of type FIXTURE. The work item template
defines named resource requirement slots (e.g. "System Under Test", "Chamber") which are
filled by assigning real asset IDs or system IDs at scheduling time.

**Steps:**

```bash
# Step 1: Find Chamber B's system ID
slcli system list --alias "Chamber B" -f json | jq -r '.[0].id'
# → e.g. "chamber-b-minion-id"

# Step 2: Find fixture/slot assets belonging to Chamber B
# Fixtures are assets with assetType = FIXTURE located on the Chamber B system
slcli asset list --asset-type FIXTURE -f json | \
  jq '[.[] | select(.location.minionId == "chamber-b-minion-id")]'
# → e.g. [ { "id": "fixture-slot-1", "name": "Slot 1" }, ... ]

# Step 3: Find a suitable work item template (if creating from template)
slcli workitem template list -f json | \
  jq '[.[] | select(.name | test("validation"; "i")) | {id, name}]'
# → e.g. "template-abc"

# Step 4: Create the work item
slcli workitem create-from-template template-abc \
  --name "Final Validation - Chamber B" \
  --state NEW \
  -f json | jq -r '.id'
# → e.g. "wi-12345"

# Or create from scratch:
slcli workitem create \
  --name "Final Validation - Chamber B" \
  --type testplan \
  --state NEW \
  --part-number "P-FINAL-001" \
  -f json | jq -r '.id'

# Step 5: Schedule the work item — assign Chamber B system + fixture slots,
# and set tomorrow's date (00:00–23:59 UTC)
slcli workitem schedule wi-12345 \
  --start "2026-03-03T00:00:00Z" \
  --end "2026-03-03T23:59:59Z" \
  --system "chamber-b-minion-id" \
  --fixture "fixture-slot-1" \
  --fixture "fixture-slot-2"
```

**Multi-slot shorthand with shell expansion:**

```bash
# Find all fixture IDs for Chamber B and schedule them all at once
FIXTURE_FLAGS=$(
  slcli asset list --asset-type FIXTURE -f json | \
    jq -r '[.[] | select(.location.minionId == "chamber-b-minion-id") | "--fixture", .id] | .[]'
)
slcli workitem schedule wi-12345 \
  --start "2026-03-03T00:00:00Z" \
  --end "2026-03-03T23:59:59Z" \
  --system "chamber-b-minion-id" \
  $FIXTURE_FLAGS
```
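
The unquoted `$FIXTURE_FLAGS` expansion relies on word splitting, which misbehaves if an asset ID ever contains whitespace. A more robust POSIX sketch builds the repeated flags in the positional parameters instead (IDs below are hypothetical):

```shell
# Collect --fixture flags as discrete arguments, not one string
set --
for id in fixture-slot-1 fixture-slot-2; do
  set -- "$@" --fixture "$id"
done
count=$#

# Then pass them through quoted:
#   slcli workitem schedule wi-12345 ... "$@"
echo "${count} scheduling arguments prepared"
```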

**Key facts:**

- `--system` takes a system/minion ID (from `slcli system list`).
- `--fixture` takes an _asset_ ID of type FIXTURE (from `slcli asset list --asset-type FIXTURE`).
- `--dut` takes an _asset_ ID of type DEVICE_UNDER_TEST.
- All three flags are repeatable for multi-resource scheduling.
- Time and resource flags can be combined in a single `workitem schedule` call.

---

## General tips

- **Start with `--summary`** to understand data shape before fetching raw records.
- **Use `--group-by`** with `--summary` for aggregation: `status`, `programName`,
  `serialNumber`, `operator`, `hostName`, `systemId`.
- **Chain with `jq`** for complex transformations — always use `-f json`.
- **Use `--take` wisely** — start small (100–500), increase if needed.
- **For rate calculations**, query total count and filtered count separately, then divide.
- **Use `--order-by TOTAL_TIME_IN_SECONDS --descending`** to find slowest tests quickly.
- **Asset `--connected` flag** limits to assets in currently connected systems.
- **System `--has-package`** filter is client-side — use with `--state CONNECTED` to reduce volume.
+++ slcli/skills/slcli/references/filtering.md
@@ -0,0 +1,236 @@
# Filtering Reference

Detailed guide to filtering syntax across slcli command groups.

---

## Convenience filters

Convenience filters are named options that map to common query patterns. When
multiple convenience filters are specified, they are combined with AND logic.

### Test Monitor results

| Flag                   | Match type | Example                    |
| ---------------------- | ---------- | -------------------------- |
| `--status TEXT`        | Exact      | `--status FAILED`          |
| `--program-name TEXT`  | Contains   | `--program-name "SOH_SCT"` |
| `--serial-number TEXT` | Contains   | `--serial-number "1234"`   |
| `--part-number TEXT`   | Contains   | `--part-number "BATT-8"`   |
| `--operator TEXT`      | Contains   | `--operator "xli"`         |
| `--host-name TEXT`     | Contains   | `--host-name "station-01"` |
| `--system-id TEXT`     | Exact      | `--system-id "abc-123"`    |
| `--workspace, -w TEXT` | Name or ID | `--workspace "Production"` |

### Test Monitor products

| Flag                   | Match type | Example                    |
| ---------------------- | ---------- | -------------------------- |
| `--name TEXT`          | Contains   | `--name "Battery"`         |
| `--part-number TEXT`   | Contains   | `--part-number "BATT"`     |
| `--family TEXT`        | Contains   | `--family "Sensors"`       |
| `--workspace, -w TEXT` | Name or ID | `--workspace "Production"` |

### Assets

| Flag                          | Match type     | Example                                          |
| ----------------------------- | -------------- | ------------------------------------------------ |
| `--model TEXT`                | Contains       | `--model "4071"`                                 |
| `--serial-number TEXT`        | Exact          | `--serial-number "01BB877A"`                     |
| `--bus-type CHOICE`           | Exact enum     | `--bus-type PCI_PXI`                             |
| `--asset-type CHOICE`         | Exact enum     | `--asset-type FIXTURE`                           |
| `--calibration-status CHOICE` | Exact enum     | `--calibration-status PAST_RECOMMENDED_DUE_DATE` |
| `--connected`                 | Flag (boolean) | `--connected`                                    |
| `--calibratable`              | Flag (boolean) | `--calibratable`                                 |
| `--workspace, -w TEXT`        | Name or ID     | `--workspace "Production"`                       |

### Systems

| Flag                   | Match type             | Example                        |
| ---------------------- | ---------------------- | ------------------------------ |
| `--alias, -a TEXT`     | Contains               | `--alias "PXI"`                |
| `--state CHOICE`       | Exact enum             | `--state CONNECTED`            |
| `--os TEXT`            | Contains               | `--os "Windows"`               |
| `--host TEXT`          | Contains               | `--host "lab-01"`              |
| `--has-package TEXT`   | Contains (client-side) | `--has-package "DAQmx"`        |
| `--has-keyword TEXT`   | Exact (repeatable)     | `--has-keyword "production"`   |
| `--property TEXT`      | key=value (repeatable) | `--property "location=Austin"` |
| `--workspace, -w TEXT` | Name or ID             | `--workspace "Production"`     |

---

## Advanced filter expressions

Use `--filter` for complex queries that go beyond convenience filters.

### Test Monitor LINQ syntax

Test Monitor uses Dynamic LINQ filter expressions.

```bash
# Status filter
--filter 'status.statusType == "FAILED"'

# Combined filters
--filter 'partNumber.Contains("ABC") and programName == "TestProgram"'

# Date-based filtering
--filter 'startedAt > DateTime(2026, 1, 1)'

# Numeric comparison
--filter 'totalTimeInSeconds > 300'
```

**Important:** Always use parameterized queries with `--substitution`:

```bash
# Good — parameterized
--filter 'programName == @0' --substitution "SOH_SCT_HPPC_0"

# Bad — string interpolation (security risk)
--filter "programName = '${PROGRAM}'"
```

Multiple substitutions are positional:

```bash
--filter 'programName == @0 and operator == @1' \
  --substitution "SOH_SCT_HPPC_0" \
  --substitution "xli"
```
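
The difference between the two styles is easiest to see with a hostile value. A pure-shell sketch (`printf` and `set --` standing in for the CLI): interpolation splices the value into the query text, while a substitution travels as one opaque argument:

```shell
PROGRAM='x" or true or "'

# Interpolated: the quotes in the value become part of the filter syntax
interpolated="programName = \"${PROGRAM}\""
printf '%s\n' "$interpolated"

# Parameterized: the value stays a single untouched argument ($4 here)
set -- --filter 'programName == @0' --substitution "$PROGRAM"
printf '%s\n' "$4"
```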

### Product filtering with `--product-filter`

Filter results by properties of their associated product:

```bash
slcli testmonitor result list \
  --product-filter 'family == @0' \
  --product-substitution "Battery"
```

### Asset API expression syntax

Asset filtering uses the Asset API expression language:

```bash
# Property access with methods
--filter 'ModelName.Contains("PXI")'

# Exact match
--filter 'SerialNumber == "01BB877A"'

# Combined with logical operators
--filter 'BusType == "PCI_PXI" and CalibrationStatus == "OK"'

# Vendor filtering
--filter 'VendorName.Contains("PAtools")'

# Combined with convenience filters
slcli asset list --asset-type FIXTURE --filter 'VendorName.Contains("PAtools")'
```

### Systems Management filter syntax

System filtering uses nested property access:

```bash
# Connection state (nested property)
--filter 'connected.data.state == "CONNECTED"'

# OS filtering
--filter 'grains.data.kernel == "Windows"'

# Combined
--filter 'connected.data.state == "CONNECTED" and grains.data.kernel == "Windows"'

# Alias matching
--filter 'alias.Contains("PXI")'
```

---

## Sorting

Most list commands support `--order-by` and `--descending` / `--ascending`.

### Test Monitor result sort fields

`ID`, `STARTED_AT`, `UPDATED_AT`, `PROGRAM_NAME`, `SYSTEM_ID`, `HOST_NAME`,
`OPERATOR`, `SERIAL_NUMBER`, `PART_NUMBER`, `TOTAL_TIME_IN_SECONDS`, `PROPERTIES`

Default: `STARTED_AT` descending.

```bash
# Slowest tests first
slcli testmonitor result list --order-by TOTAL_TIME_IN_SECONDS --descending

# Oldest first
slcli testmonitor result list --order-by STARTED_AT --ascending
```

---

## Aggregation with --summary

The `--summary` flag returns aggregate counts instead of individual records.
Combine with `--group-by` for grouped aggregation.

```bash
# Total counts by status
slcli testmonitor result list --summary --group-by status -f json

# Counts by operator
slcli testmonitor result list --summary --group-by operator -f json

# Filtered summary — failures by program
slcli testmonitor result list --status FAILED --summary --group-by programName -f json

# Product summary
slcli testmonitor product list --summary -f json
```

Available `--group-by` values: `status`, `programName`, `serialNumber`,
`operator`, `hostName`, `systemId`.

---

## Pagination

- **Table output**: Paginated interactively (default 25 rows, Y/n prompt for next page).
- **JSON output**: Returns all results up to the `--take` limit in a single array.
- **`--take, -t`**: Controls maximum items. Default 25 for most commands, 100 for systems.

```bash
# Get up to 500 results as JSON
slcli testmonitor result list -f json --take 500

# Get the first 10 in a table
slcli testmonitor result list --take 10
```

---

## Enum values reference

### Status types (Test Monitor)

`PASSED`, `FAILED`, `RUNNING`, `ERRORED`, `TERMINATED`, `TIMEDOUT`, `WAITING`,
`SKIPPED`, `CUSTOM`

### Calibration status (Assets)

`OK`, `APPROACHING_RECOMMENDED_DUE_DATE`, `PAST_RECOMMENDED_DUE_DATE`,
`OUT_FOR_CALIBRATION`

### Asset types

`GENERIC`, `DEVICE_UNDER_TEST`, `FIXTURE`, `SYSTEM`

### Bus types (Assets)

`BUILT_IN_SYSTEM`, `PCI_PXI`, `USB`, `GPIB`, `VXI`, `SERIAL`, `TCP_IP`, `CRIO`

### System states

`CONNECTED`, `DISCONNECTED`, `VIRTUAL`, `APPROVED`