@yeyuan98/opencode-bioresearcher-plugin 1.3.1 → 1.4.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +14 -0
- package/dist/index.js +4 -1
- package/dist/misc-tools/index.d.ts +3 -0
- package/dist/misc-tools/index.js +3 -0
- package/dist/misc-tools/json-extract.d.ts +13 -0
- package/dist/misc-tools/json-extract.js +394 -0
- package/dist/misc-tools/json-infer.d.ts +13 -0
- package/dist/misc-tools/json-infer.js +199 -0
- package/dist/misc-tools/json-tools.d.ts +33 -0
- package/dist/misc-tools/json-tools.js +187 -0
- package/dist/misc-tools/json-validate.d.ts +13 -0
- package/dist/misc-tools/json-validate.js +228 -0
- package/dist/skills/bioresearcher-core/README.md +210 -0
- package/dist/skills/bioresearcher-core/SKILL.md +128 -0
- package/dist/skills/bioresearcher-core/examples/contexts.json +29 -0
- package/dist/skills/bioresearcher-core/examples/data-exchange-example.md +303 -0
- package/dist/skills/bioresearcher-core/examples/template.md +49 -0
- package/dist/skills/bioresearcher-core/patterns/calculator.md +215 -0
- package/dist/skills/bioresearcher-core/patterns/data-exchange.md +406 -0
- package/dist/skills/bioresearcher-core/patterns/json-tools.md +263 -0
- package/dist/skills/bioresearcher-core/patterns/progress.md +127 -0
- package/dist/skills/bioresearcher-core/patterns/retry.md +110 -0
- package/dist/skills/bioresearcher-core/patterns/shell-commands.md +79 -0
- package/dist/skills/bioresearcher-core/patterns/subagent-waves.md +186 -0
- package/dist/skills/bioresearcher-core/patterns/table-tools.md +260 -0
- package/dist/skills/bioresearcher-core/patterns/user-confirmation.md +187 -0
- package/dist/skills/bioresearcher-core/python/template.md +273 -0
- package/dist/skills/bioresearcher-core/python/template.py +323 -0
- package/dist/skills/long-table-summary/SKILL.md +437 -0
- package/dist/skills/long-table-summary/combine_outputs.py +336 -0
- package/dist/skills/long-table-summary/generate_prompts.py +211 -0
- package/dist/skills/long-table-summary/pyproject.toml +8 -0
- package/dist/skills/pubmed-weekly/SKILL.md +329 -329
- package/dist/skills/pubmed-weekly/pubmed_weekly.py +411 -411
- package/dist/skills/pubmed-weekly/pyproject.toml +8 -8
- package/package.json +7 -2

package/dist/skills/bioresearcher-core/patterns/json-tools.md
@@ -0,0 +1,263 @@

# JSON Tools Pattern

Guide for using JSON tools (jsonExtract, jsonValidate, jsonInfer) in BioResearcher workflows.

## Overview

These tools replace custom Python code for JSON operations:
- **jsonExtract**: Extract JSON from files (handles markdown code blocks, raw JSON)
- **jsonValidate**: Validate JSON against schemas (supports Draft-4, Draft-7, Draft-2020-12)
- **jsonInfer**: Infer schemas from JSON data

## Tool: jsonExtract

Extract JSON from files that may contain extra text.

### Signature
```
jsonExtract(file_path: string, return_all: boolean = false)
```

### Return Format
```json
{
  "success": true,
  "data": { ... } or [ ... ],
  "metadata": {
    "method": "json_code_block" | "code_block" | "object" | "array",
    "dataType": "object" | "array" | "mixed",
    "fileSize": 1234
  }
}
```

### Extraction Methods (in order)
1. **json_code_block**: Content in ```json ... ```
2. **code_block**: Content in ``` ... ```
3. **object**: First `{...}` with proper brace matching
4. **array**: First `[...]` with proper bracket matching

### Examples

```
# Extract from markdown with JSON code block
jsonExtract(file_path="outputs/batch001.md")

# Extract all JSON objects from file
jsonExtract(file_path="outputs/all_batches.md", return_all=true)
```

### Error Codes
| Code | Description |
|------|-------------|
| `FILE_NOT_FOUND` | File does not exist |
| `FILE_TOO_LARGE` | File exceeds 200MB limit |
| `BINARY_FILE` | File is binary format |
| `EMPTY_FILE` | File has no content |
| `NO_JSON_FOUND` | No valid JSON found |
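
The `object` fallback method amounts to brace matching that skips braces inside strings. A minimal Python sketch of the idea (illustrative only, not the plugin's actual implementation; the helper name is made up):

```python
import json

def extract_first_object(text: str):
    """Find the first balanced {...} span in mixed text and parse it as JSON.

    Scans for an opening brace, tracks nesting depth while ignoring braces
    inside string literals, and tries the next candidate if parsing fails.
    """
    start = text.find("{")
    while start != -1:
        depth, in_string, escape = 0, False, False
        for i in range(start, len(text)):
            ch = text[i]
            if in_string:
                if escape:
                    escape = False
                elif ch == "\\":
                    escape = True
                elif ch == '"':
                    in_string = False
            elif ch == '"':
                in_string = True
            elif ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    try:
                        return json.loads(text[start:i + 1])
                    except json.JSONDecodeError:
                        break  # balanced but not valid JSON; try next candidate
        start = text.find("{", start + 1)
    return None  # mirrors the NO_JSON_FOUND case

mixed = 'Summary follows. {"batch": 1, "rows": [1, 2]} Done.'
print(extract_first_object(mixed))  # → {'batch': 1, 'rows': [1, 2]}
```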

## Tool: jsonValidate

Validate JSON data against a JSON Schema.

### Signature
```
jsonValidate(data: string, schema: string)
```

- `data`: JSON string to validate
- `schema`: JSON Schema string OR file path (auto-detected)

### Return Format (Success)
```json
{
  "success": true,
  "data": { ... },
  "valid": true,
  "metadata": {
    "errorCount": 0,
    "schemaFeatures": {
      "validated": ["type", "properties", "required"],
      "ignored": ["uniqueItems"]
    }
  }
}
```

> **Note:** Always check `success` first, then `valid`. If `success` is true and `errorCount` is 0, the validation passed regardless of whether the `valid` field is present.

### Return Format (Validation Errors)
```json
{
  "success": true,
  "data": null,
  "valid": false,
  "errors": [
    {
      "path": "summaries.0.row_number",
      "message": "Expected number, received string",
      "code": "invalid_type",
      "expected": "number",
      "received": "string"
    }
  ],
  "metadata": {
    "errorCount": 1
  }
}
```

### Examples

```
# Validate with inline schema
jsonValidate(
    data='{"name": "test", "age": 30}',
    schema='{"type": "object", "properties": {"name": {"type": "string"}, "age": {"type": "number"}}}'
)

# Validate with schema file
jsonValidate(
    data='{"batch_number": 1, "summaries": [...]}',
    schema='./schemas/batch_output.schema.json'
)
```

### Unsupported Schema Features
- `not`, `unevaluatedItems`, `unevaluatedProperties`
- `if`, `then`, `else`, `dependentSchemas`

### Silently Ignored Features
- `uniqueItems`, `contains`, `minContains`, `maxContains`
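
For intuition, the core of what such a validator checks (`type`, `properties`, `required`) can be reduced to a stdlib-only sketch. This is an illustration of the keyword semantics, not the plugin's implementation, which supports many more keywords and a richer error format:

```python
import json

# JSON Schema type name -> accepted Python types (simplified)
TYPES = {"object": dict, "array": list, "string": str,
         "number": (int, float), "integer": int, "boolean": bool}

def validate(data, schema, path=""):
    """Collect errors for a tiny JSON Schema subset: type/properties/required."""
    errors = []
    expected = schema.get("type")
    if expected and not isinstance(data, TYPES[expected]):
        errors.append({"path": path or "(root)",
                       "message": f"Expected {expected}, received {type(data).__name__}"})
        return errors
    if expected == "object":
        for key in schema.get("required", []):
            if key not in data:
                errors.append({"path": f"{path}.{key}".lstrip("."),
                               "message": "Required field missing"})
        for key, sub in schema.get("properties", {}).items():
            if key in data:
                errors.extend(validate(data[key], sub, f"{path}.{key}".lstrip(".")))
    return errors

schema = json.loads('{"type": "object", "properties": {"name": {"type": "string"}, '
                    '"age": {"type": "number"}}, "required": ["name"]}')
print(validate({"name": "test", "age": 30}, schema))  # → []
print(validate({"age": "thirty"}, schema))            # missing "name", wrong type for "age"
```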

## Tool: jsonInfer

Generate a JSON Schema from example data.

### Signature
```
jsonInfer(data: string, strict: boolean = false)
```

- `data`: Example JSON string
- `strict`: If true, all fields are required at all levels; if false, only the top-level `required` array is omitted (nested objects may still have required fields)

### Return Format
```json
{
  "success": true,
  "data": {
    "$schema": "https://json-schema.org/draft/2020-12/schema",
    "type": "object",
    "properties": {
      "name": { "type": "string" },
      "age": { "type": "integer" }
    },
    "required": ["name", "age"]
  },
  "metadata": {
    "inferredType": "object",
    "strictMode": true
  }
}
```

### Examples

```
# Infer schema from example (optional fields)
jsonInfer(data='{"name": "Alice", "age": 30}', strict=false)

# Infer schema with required fields
jsonInfer(data='{"batch_number": 1, "summaries": [{"row": 1, "value": "test"}]}', strict=true)
```

### Warnings
- `MIXED_TYPE_ARRAY`: Array contains multiple types
- `EMPTY_ARRAY`: Cannot infer element type
- `PARTIAL_OBJECT_SCHEMA`: Object has many properties

### Generated Constraints

jsonInfer may add the following constraints automatically:

| Type | Constraint | Value | Purpose |
|------|------------|-------|---------|
| integer | minimum/maximum | ±9007199254740991 | JavaScript safe integer range |

> **Note:** These constraints ensure JSON compatibility but may be overly restrictive for your use case. Remove them from the generated schema if not needed.
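
The basic inference step is straightforward to sketch in Python. This toy version infers array element types from the first item only and omits `$schema`, safe-integer bounds, and warnings, all of which the real tool adds:

```python
import json

def infer(value, strict=True):
    """Infer a minimal schema for a JSON value (strict => every object key required)."""
    if isinstance(value, bool):  # check bool before int: bool subclasses int
        return {"type": "boolean"}
    if isinstance(value, int):
        return {"type": "integer"}
    if isinstance(value, float):
        return {"type": "number"}
    if isinstance(value, str):
        return {"type": "string"}
    if value is None:
        return {"type": "null"}
    if isinstance(value, list):
        # Simplification: infer element type from the first item only
        return {"type": "array", "items": infer(value[0], strict)} if value else {"type": "array"}
    schema = {"type": "object",
              "properties": {k: infer(v, strict) for k, v in value.items()}}
    if strict:
        schema["required"] = list(value.keys())
    return schema

example = json.loads('{"name": "Alice", "age": 30}')
print(json.dumps(infer(example)))
```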

## Common Workflows

### Validate Subagent Output

```
# 1. Extract JSON from output file
result = jsonExtract(file_path="./outputs/batch001.md")
if not result.success:
    log_error("Failed to extract JSON")
    return

# 2. Validate against expected schema
validation = jsonValidate(
    data=json.dumps(result.data),
    schema=batch_output_schema
)

if not validation.valid:
    log_error(f"Validation failed: {validation.errors}")
    return

# 3. Process validated data
process(result.data)
```

### Infer Schema from Example

```
# 1. Get first valid output
result = jsonExtract(file_path="./outputs/batch001.md")

# 2. Infer schema from example
schema_result = jsonInfer(
    data=json.dumps(result.data),
    strict=true
)

# 3. Use inferred schema for validation
inferred_schema = json.dumps(schema_result.data)

# 4. Validate other outputs
for file in other_files:
    data = jsonExtract(file_path=file)
    validation = jsonValidate(
        data=json.dumps(data.data),
        schema=inferred_schema
    )
```

### Combine Multiple JSON Files

```
# 1. Extract all JSON objects
all_data = []
for file in output_files:
    result = jsonExtract(file_path=file, return_all=true)
    if result.success:
        all_data.extend(result.data)

# 2. Create combined output
tableCreateFile(
    file_path="./combined.xlsx",
    sheet_name="Results",
    data=all_data
)
```

## Best Practices

1. **Always check success**: Check `result.success` before using `result.data`
2. **Handle errors gracefully**: Log errors and continue with other files
3. **Use strict mode for inference**: When you know all fields are required
4. **Validate early**: Validate before processing to catch errors early
5. **Keep schemas simple**: Avoid unsupported schema features
package/dist/skills/bioresearcher-core/patterns/progress.md
@@ -0,0 +1,127 @@

# Progress Pattern

Track and report progress at configurable intervals during batch operations.

## Overview

Use this pattern when processing multiple items to provide user feedback on completion status.

## Pattern Algorithm

```
1. Initialize: total = N, completed = 0, report_interval = M
2. For each item:
   a. Process item
   b. completed += 1
   c. If completed % report_interval == 0 OR completed == total:
      - Use calculator: percent = (completed / total) * 100
      - Report: "Progress: X/Y (Z%)"
```

## Parameters

| Parameter | Default | Description |
|-----------|---------|-------------|
| `total` | Required | Total number of items to process |
| `completed` | 0 | Items processed so far |
| `report_interval` | 3 | Report every N items |

## Tool: calculator

```
calculator(formula: string, precision: number = 3)
```

- Supported: +, -, *, /, ^, brackets, scientific notation
- **MUST** use explicit * for multiplication: `2*(3)` not `2(3)`

## Example: Batch Processing Progress

```
# Configuration
total = 100
completed = 0
report_interval = 10

# Processing loop
for item in items:
    process(item)
    completed += 1

    if completed % report_interval == 0 or completed == total:
        percent = calculator(formula="({completed} / {total}) * 100", precision=1)
        report("Progress: {completed}/{total} ({percent}%)")
```
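
The loop above also works as ordinary Python; a runnable sketch using `round` where the workflow would call the calculator tool (the helper name is illustrative):

```python
def report_progress(completed, total, report_interval=3, report=print):
    """Emit a progress line at every report_interval-th item and at completion."""
    if completed % report_interval == 0 or completed == total:
        percent = round(completed / total * 100, 1)
        report(f"Progress: {completed}/{total} ({percent}%)")

for done in range(1, 11):
    report_progress(done, 10, report_interval=3)
# Reports at items 3, 6, 9, and 10
```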

## Example: Wave-Based Progress

For subagent waves (3 subagents per wave):

```
# Configuration
total_waves = 4
completed_waves = 0
total_batches = 12
completed_batches = 0

# After each wave completes
completed_waves += 1
completed_batches += 3
percent = calculator(formula="({completed_batches} / {total_batches}) * 100")

report("Progress: {completed_batches}/{total_batches} batches ({percent}%)")
```

## Progress Report Format

Standard format for consistency:

```
Progress: X/Y batches completed (Z%)
```

Examples:
- "Progress: 3/10 batches completed (30%)"
- "Progress: 6/10 batches completed (60%)"
- "Progress: 10/10 batches completed (100%)"

## Percentage Calculation

Use the calculator tool:

```
calculator(formula="(45 / 100) * 100", precision=1)
# Returns: { "formula": "(45 / 100) * 100", "result": 45 }
```

## Example: Time Estimation

```
# Track start time and items per minute
start_time = current_time()
items_per_minute = 10

# Calculate remaining time
remaining_items = total - completed
remaining_minutes = calculator(
    formula="{remaining_items} / {items_per_minute}",
    precision=0
)

report("Progress: {completed}/{total} ({percent}%) - ~{remaining_minutes} min remaining")
```

## Integration with Other Patterns

| Pattern | Integration |
|---------|-------------|
| `retry.md` | Count retries in progress |
| `subagent-waves.md` | Report after each wave |
| `calculator.md` | Calculate percentages |

## Best Practices

1. **Report at meaningful intervals**: Not too frequent (spam) or too sparse (silent)
2. **Include totals**: Always show X/Y format
3. **Round percentages**: Use precision=0 or precision=1
4. **Final report**: Always report 100% completion
package/dist/skills/bioresearcher-core/patterns/retry.md
@@ -0,0 +1,110 @@

# Retry Pattern

Retry failed operations with configurable delays and backoff.

## Overview

Use this pattern when operations may fail transiently (network issues, API rate limits, temporary resource unavailability).

## Pattern Algorithm

```
1. Initialize: attempts = 0, max_attempts = N, delay = D, backoff_factor = B
2. Attempt operation
3. If success -> continue with workflow
4. If failure:
   a. attempts += 1
   b. If attempts < max_attempts:
      - Use blockingTimer(delay=D)
      - delay = delay * backoff_factor
      - Retry from step 2
   c. If attempts >= max_attempts:
      - Report failure or ask user via question tool
```

## Parameters

| Parameter | Default | Description |
|-----------|---------|-------------|
| `max_attempts` | 3 | Maximum retry attempts |
| `delay` | 2 | Initial wait time in seconds |
| `backoff_factor` | 1 | Multiply delay after each failure (1 = constant delay) |

## Tool: blockingTimer

```
blockingTimer(delay: number)
```

- Maximum delay: 300 seconds
- Returns: "Timer completed: waited X seconds (actual elapsed: Ys)"

## Example: Constant Delay Retry

```
# Retry configuration
max_attempts = 3
delay = 2
attempts = 0

# Attempt loop
while attempts < max_attempts:
    result = attempt_operation()
    if result.success:
        break
    attempts += 1
    if attempts < max_attempts:
        blockingTimer(delay=2)  # Wait 2 seconds
```

## Example: Exponential Backoff Retry

```
# Retry configuration
max_attempts = 4
delay = 1
backoff_factor = 2
attempts = 0

# Attempt loop
while attempts < max_attempts:
    result = attempt_operation()
    if result.success:
        break
    attempts += 1
    if attempts < max_attempts:
        blockingTimer(delay=delay)
        delay = delay * backoff_factor  # 1, 2, 4, 8...
```
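
As plain Python, the backoff loop might look like this sketch, with an injectable `sleep` standing in for blockingTimer and its 300-second cap reproduced (function names are illustrative):

```python
import time

def with_retry(operation, max_attempts=3, delay=2, backoff_factor=1, sleep=time.sleep):
    """Run `operation` until it succeeds or attempts are exhausted, then re-raise."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise
            sleep(min(delay, 300))  # blockingTimer caps at 300 seconds
            delay *= backoff_factor

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

# Succeeds on the third attempt; waits 1s then 2s between tries
print(with_retry(flaky, max_attempts=4, delay=1, backoff_factor=2, sleep=lambda s: None))  # → ok
```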

## When to Use

| Scenario | delay | backoff_factor | max_attempts |
|----------|-------|----------------|--------------|
| Network timeout | 2 | 1 | 3 |
| API rate limit | 5 | 2 | 4 |
| File lock | 1 | 1 | 5 |
| Resource unavailable | 2 | 2 | 4 |

## User Notification on Persistent Failure

After all retries are exhausted, use the `question` tool:

```
question(questions=[{
    "header": "Retry failed",
    "question": "Operation failed after 3 attempts. How would you like to proceed?",
    "options": [
        {"label": "Retry now", "description": "Try one more time immediately"},
        {"label": "Skip", "description": "Skip this item and continue"},
        {"label": "Abort", "description": "Stop the entire workflow"}
    ]
}])
```

## Important Notes

1. **Maximum delay**: blockingTimer caps at 300 seconds
2. **Backoff calculation**: Use the `calculator` tool if needed
3. **State tracking**: Track attempts in your workflow state
4. **Error logging**: Log failure reasons for debugging
package/dist/skills/bioresearcher-core/patterns/shell-commands.md
@@ -0,0 +1,79 @@

# Shell Commands Pattern

Generate shell commands using forward slashes for cross-platform compatibility.

## Overview

Forward slashes (`/`) work on all major platforms:
- Unix-like (Linux, macOS): Native support
- Windows (Git Bash): Native support
- Windows (Python): Native support via `os.path`
- Windows (cmd.exe): Supported in most contexts

## Recommendation

**Use forward slashes (`/`) universally** for maximum compatibility.

## Examples

### Python Script Execution
```bash
uv run python <skill_path>/script.py --input ./data/file.csv --output ./results/output.xlsx
```

### Directory Creation
```bash
mkdir -p .work/subdir1 .work/subdir2
```

### File Listing
```bash
ls -la ./work/outputs/
```

### For Loop
```bash
for file in file1.txt file2.txt file3.txt; do
    uv run python <skill_path>/process.py "$file"
done
```

### JSON String Arguments
```bash
uv run python script.py --data '{"key": "value"}'
```

### Environment Variables
```bash
export MY_VAR=value
uv run python script.py
```

## Windows Edge Cases

In rare cases, Windows cmd.exe may not support forward slashes:

| Command | Forward Slash | Alternative |
|---------|---------------|-------------|
| `dir` | Works | `dir ./folder` |
| `mkdir` | `mkdir -p` fails | Use Python: `python -c "import os; os.makedirs('./folder', exist_ok=True)"` |
| `copy` | Works | `copy ./src/file.txt ./dest/` |
| `del` | Works | `del ./folder/file.txt` |

> **Note:** If using pure cmd.exe (not Git Bash), the `-p` flag for `mkdir` is not supported. Use Python's `os.makedirs()` instead.
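
Delegating to Python sidesteps cmd.exe quirks entirely; a small sketch of the `os.makedirs` fallback using `pathlib` (the temporary directory is just for illustration):

```python
import tempfile
from pathlib import Path

# Forward-slash literals are normalized by pathlib to the host convention,
# so the same path string works on Linux, macOS, and Windows.
base = Path(tempfile.mkdtemp())
out_dir = base / "results/tables"           # forward slashes in the literal
out_dir.mkdir(parents=True, exist_ok=True)  # portable equivalent of mkdir -p

print(out_dir.exists())  # → True
```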

## Path Handling Guidelines

1. **Always use forward slashes** (`/`) in paths
2. **Quote paths with spaces** in all contexts
3. **Use relative paths** from the project root when possible
4. **Python paths**: Forward slashes work on all platforms
5. **Skill path placeholder**: Replace `<skill_path>` with the actual path from the skill tool output

## Best Practices

1. **Default to forward slashes** - works 99% of the time
2. **Test on the target platform** if edge cases are suspected
3. **Use Python alternatives** for complex file operations
4. **Quote all paths** to handle spaces safely
5. **Avoid cmd.exe-specific syntax** (like `^` line continuation)
|