deepwork-0.4.0-py3-none-any.whl → deepwork-0.7.0-py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- deepwork/__init__.py +1 -1
- deepwork/cli/hook.py +3 -4
- deepwork/cli/install.py +70 -117
- deepwork/cli/main.py +2 -2
- deepwork/cli/serve.py +133 -0
- deepwork/cli/sync.py +93 -58
- deepwork/core/adapters.py +91 -98
- deepwork/core/generator.py +19 -386
- deepwork/core/hooks_syncer.py +1 -1
- deepwork/core/parser.py +270 -1
- deepwork/hooks/README.md +0 -44
- deepwork/hooks/__init__.py +3 -6
- deepwork/hooks/check_version.sh +54 -21
- deepwork/mcp/__init__.py +23 -0
- deepwork/mcp/quality_gate.py +347 -0
- deepwork/mcp/schemas.py +263 -0
- deepwork/mcp/server.py +253 -0
- deepwork/mcp/state.py +422 -0
- deepwork/mcp/tools.py +394 -0
- deepwork/schemas/job.schema.json +347 -0
- deepwork/schemas/job_schema.py +27 -239
- deepwork/standard_jobs/deepwork_jobs/doc_specs/job_spec.md +9 -15
- deepwork/standard_jobs/deepwork_jobs/job.yml +146 -46
- deepwork/standard_jobs/deepwork_jobs/steps/define.md +100 -33
- deepwork/standard_jobs/deepwork_jobs/steps/errata.md +154 -0
- deepwork/standard_jobs/deepwork_jobs/steps/fix_jobs.md +207 -0
- deepwork/standard_jobs/deepwork_jobs/steps/fix_settings.md +177 -0
- deepwork/standard_jobs/deepwork_jobs/steps/implement.md +22 -138
- deepwork/standard_jobs/deepwork_jobs/steps/iterate.md +221 -0
- deepwork/standard_jobs/deepwork_jobs/steps/learn.md +2 -26
- deepwork/standard_jobs/deepwork_jobs/steps/test.md +154 -0
- deepwork/standard_jobs/deepwork_jobs/templates/job.yml.template +2 -0
- deepwork/templates/claude/AGENTS.md +38 -0
- deepwork/templates/claude/settings.json +16 -0
- deepwork/templates/claude/skill-deepwork.md.jinja +37 -0
- deepwork/templates/gemini/skill-deepwork.md.jinja +37 -0
- deepwork-0.7.0.dist-info/METADATA +317 -0
- deepwork-0.7.0.dist-info/RECORD +64 -0
- deepwork/cli/rules.py +0 -32
- deepwork/core/command_executor.py +0 -190
- deepwork/core/pattern_matcher.py +0 -271
- deepwork/core/rules_parser.py +0 -559
- deepwork/core/rules_queue.py +0 -321
- deepwork/hooks/rules_check.py +0 -759
- deepwork/schemas/rules_schema.py +0 -135
- deepwork/standard_jobs/deepwork_jobs/steps/review_job_spec.md +0 -208
- deepwork/standard_jobs/deepwork_jobs/templates/doc_spec.md.example +0 -86
- deepwork/standard_jobs/deepwork_rules/hooks/capture_prompt_work_tree.sh +0 -38
- deepwork/standard_jobs/deepwork_rules/hooks/global_hooks.yml +0 -8
- deepwork/standard_jobs/deepwork_rules/hooks/user_prompt_submit.sh +0 -16
- deepwork/standard_jobs/deepwork_rules/job.yml +0 -49
- deepwork/standard_jobs/deepwork_rules/rules/.gitkeep +0 -13
- deepwork/standard_jobs/deepwork_rules/rules/api-documentation-sync.md.example +0 -10
- deepwork/standard_jobs/deepwork_rules/rules/readme-documentation.md.example +0 -10
- deepwork/standard_jobs/deepwork_rules/rules/security-review.md.example +0 -11
- deepwork/standard_jobs/deepwork_rules/rules/skill-md-validation.md +0 -46
- deepwork/standard_jobs/deepwork_rules/rules/source-test-pairing.md.example +0 -13
- deepwork/standard_jobs/deepwork_rules/steps/define.md +0 -249
- deepwork/templates/claude/skill-job-meta.md.jinja +0 -77
- deepwork/templates/claude/skill-job-step.md.jinja +0 -251
- deepwork/templates/gemini/skill-job-meta.toml.jinja +0 -76
- deepwork/templates/gemini/skill-job-step.toml.jinja +0 -162
- deepwork-0.4.0.dist-info/METADATA +0 -381
- deepwork-0.4.0.dist-info/RECORD +0 -71
- {deepwork-0.4.0.dist-info → deepwork-0.7.0.dist-info}/WHEEL +0 -0
- {deepwork-0.4.0.dist-info → deepwork-0.7.0.dist-info}/entry_points.txt +0 -0
- {deepwork-0.4.0.dist-info → deepwork-0.7.0.dist-info}/licenses/LICENSE.md +0 -0
--- /dev/null
+++ b/deepwork/standard_jobs/deepwork_jobs/steps/iterate.md
@@ -0,0 +1,221 @@
+# Iterate on Workflow Design
+
+## Objective
+
+Review the test run conversation and improve the job definition based on what happened. This step closes the feedback loop by incorporating learnings from the test into the workflow itself, making future runs more efficient and producing better results.
+
+## Task
+
+Analyze the conversation history from the test step, identify areas for improvement, and update the job definition and step instructions accordingly.
+
+### Step 1: Review the Conversation History
+
+Carefully analyze the conversation from the test step, looking for:
+
+1. **Process Inefficiencies**
+   - Steps that took multiple attempts to complete
+   - Questions the agent had to ask that should have been in the instructions
+   - Unnecessary back-and-forth with the user
+   - Information that had to be repeated
+
+2. **Output Quality Issues**
+   - Issues identified during critique (from Step 3 of test)
+   - Corrections requested by the user
+   - Patterns in user feedback (what did they consistently want changed?)
+
+3. **Tool Usage Problems**
+   - Tools that didn't work as expected
+   - Missing tools that would have helped
+   - Inefficient tool sequences
+
+4. **Missing or Unclear Instructions**
+   - Ambiguities that led to wrong outputs
+   - Missing guidance that caused confusion
+   - Quality criteria that weren't clear enough
+
+### Step 2: Plan Improvements
+
+For each issue identified, determine the appropriate fix:
+
+| Issue Type | Solution Location |
+|------------|-------------------|
+| Process inefficiency | Update step instructions with clearer guidance |
+| Output quality | Update quality criteria or add examples |
+| Missing information | Add to step inputs or instructions |
+| Tool problems | Suggest different tools in instructions |
+| Unclear criteria | Rewrite quality criteria to be specific |
+
+**Prioritize improvements** that will have the most impact on future runs. Focus on:
+- Issues that caused multiple iterations
+- Problems that affected the final output quality
+- Confusion that could be eliminated with clearer instructions
+
+### Step 3: Update Step Instructions
+
+For each step that needs improvement:
+
+1. **Read the current instruction file** at `.deepwork/jobs/[job_name]/steps/[step_id].md`
+
+2. **Make targeted improvements**:
+   - Add missing context or clarification
+   - Include examples of good output (use what worked in the test)
+   - Clarify ambiguous instructions
+   - Add tool recommendations if a different approach would be better
+   - Update quality criteria to match user expectations
+
+3. **Keep instructions concise**:
+   - Avoid redundancy
+   - Be direct and actionable
+   - Use bullet points where appropriate
+
+### Step 4: Update Quality Criteria
+
+Review and update quality criteria in two places:
+
+1. **In step instruction files** - The "Quality Criteria" section should reflect what the user actually cared about during testing
+
+2. **In job.yml** - If steps have `quality_criteria` or `stop_hooks`, update them to:
+   - Remove criteria that weren't relevant
+   - Add criteria based on user feedback
+   - Make existing criteria more specific
+
+**Example improvement:**
+```yaml
+# Before
+quality_criteria:
+  - "Report is formatted correctly"
+
+# After
+quality_criteria:
+  - "Report uses distinct colors for each data series in charts"
+  - "Tables have sufficient padding and font size for readability"
+  - "Executive summary is understandable by non-technical readers"
+```
+
+### Step 5: Consider Alternative Tools
+
+If any tools didn't work well during the test:
+
+1. **Identify the problem** - What went wrong? (slow, wrong output, hard to use)
+
+2. **Research alternatives** - What other tools could accomplish the same goal?
+
+3. **Update instructions** - If a better tool exists, update the step instructions to recommend it
+
+Examples:
+- If web scraping was unreliable, suggest a specific browser automation approach
+- If data processing was slow, suggest a different method or tool
+- If file generation had issues, recommend a different library or format
+
+### Step 6: Update Job Version and Changelog
+
+After making improvements:
+
+1. **Bump the version** in `job.yml`:
+   - Patch version (x.x.1) for minor instruction tweaks
+   - Minor version (x.1.0) for quality criteria changes or significant improvements
+
+2. **Add a changelog entry** describing what was improved:
+   ```yaml
+   changelog:
+     - version: "1.0.1"
+       changes: "Improved [step] instructions based on test run: added examples, clarified criteria, updated tool recommendations"
+   ```
+
+### Step 7: Provide Recap
+
+Summarize the improvements made:
+
+1. **List the issues identified** from the test run
+
+2. **Describe the changes made** to address each issue
+
+3. **Explain the expected impact** on future runs
+
+Example recap format:
+```
+## Iteration Summary
+
+### Issues Identified
+1. Chart colors were hard to distinguish
+2. Tables were too small to read
+3. Executive summary was too technical
+
+### Changes Made
+
+**gather_data step:**
+- Added instruction to use distinct color palettes for charts
+- Updated quality criteria to require readable table formatting
+
+**generate_report step:**
+- Added example of good executive summary
+- Updated quality criteria: "Summary understandable by non-technical stakeholders"
+
+**job.yml:**
+- Updated version to 1.0.1
+- Added changelog entry
+
+### Expected Impact
+Future runs should produce reports with better visual design and clearer summaries, reducing the need for post-generation corrections.
+```
+
+## Example Dialog
+
+```
+Claude: Let me review our test run and improve the workflow based on what happened.
+
+**Analyzing Test Conversation...**
+
+I identified the following issues from the test run:
+
+### Issues Found
+
+1. **Chart readability (gather_data step)**: The instructions didn't specify how to handle color choices for visualizations. During the test, we had to fix colors that were too similar.
+
+2. **Missing month-over-month comparison (analyze_trends step)**: The user wanted comparisons to previous periods, but this wasn't in the instructions. We had to add it after the fact.
+
+3. **Technical executive summary (generate_report step)**: The summary was too technical for non-technical stakeholders. The quality criteria didn't specify the audience clearly.
+
+4. **Table formatting (generate_report step)**: Tables were hard to read. No guidance on sizing or padding.
+
+### Changes Made
+
+**`.deepwork/jobs/monthly_report/steps/gather_data.md`:**
+- Added: "Use distinct, high-contrast colors for chart data series. Avoid similar shades for different categories."
+- Added to Quality Criteria: "Charts use visually distinct colors that are accessible to colorblind readers"
+
+**`.deepwork/jobs/monthly_report/steps/analyze_trends.md`:**
+- Added: "Always include comparison to the same period in the previous month/quarter"
+- Added input: "previous_period_data" to formalize this requirement
+
+**`.deepwork/jobs/monthly_report/steps/generate_report.md`:**
+- Added example of good executive summary targeting non-technical readers
+- Added: "Format tables with sufficient padding (at least 8px) and readable font size (minimum 11pt)"
+- Updated Quality Criteria:
+  - "Executive summary is understandable by non-technical stakeholders"
+  - "Tables are readable with adequate padding and font size"
+
+**`.deepwork/jobs/monthly_report/job.yml`:**
+- Version bumped to 1.0.1
+- Added changelog: "Improved visual design guidance and audience-appropriate writing based on test feedback"
+
+Done! The workflow has been updated. Changes are immediately available through the MCP server.
+
+### Expected Impact
+
+Future monthly reports should:
+- Have better chart color choices from the start
+- Include month-over-month comparisons automatically
+- Have clearer executive summaries for leadership
+- Have more readable tables
+
+This should significantly reduce the number of corrections needed after the initial output.
+```
+
+## Important Guidelines
+
+1. **Be specific** - Don't just note "instructions were unclear" - explain exactly what was unclear and how you fixed it
+2. **Use test results as examples** - If something worked well in the test, add it as an example in the instructions
+3. **Don't over-engineer** - Only add improvements that address actual problems from the test
+4. **Maintain consistency** - Keep the same structure and style in instruction files
+5. **Test-driven improvement** - Every change should trace back to something that happened in the test
--- a/deepwork/standard_jobs/deepwork_jobs/steps/learn.md
+++ b/deepwork/standard_jobs/deepwork_jobs/steps/learn.md
@@ -233,15 +233,6 @@ If instruction files were modified:
 changes: "Improved [step] instructions based on execution learnings: [brief description]"
 ```

-### Step 7: Sync and Relay Instructions
-
-1. **Run deepwork sync** (if instructions were modified)
-   ```bash
-   deepwork sync
-   ```
-
-2. **If skills were regenerated**, look at the "To use the new skills" section in the `deepwork sync` output and **relay these exact reload instructions to the user** (e.g., "Type 'exit' then run 'claude --resume'" for Claude Code)
-
 ## File Reference Patterns

 When adding entries to AGENTS.md, prefer these patterns:
@@ -260,23 +251,10 @@ When adding entries to AGENTS.md, prefer these patterns:
 - Configuration requires these fields: name, version, ...
 ```

-## Quality Criteria
-
-- Conversation has been analyzed for job executions
-- Points of confusion and inefficiency are identified
-- Learnings are correctly classified (generalizable vs bespoke)
-- Job instructions updated for generalizable improvements
-- Instructions are concise - no redundancy or unnecessary verbosity
-- Shared/lengthy content extracted into referenced files where appropriate
-- AGENTS.md created/updated with bespoke learnings
-- File references used instead of duplicating content
-- AGENTS.md is in the correct folder (the deepest common folder for the topic)
-- When all criteria are met, include `<promise>✓ Quality Criteria Met</promise>`
-
 ## Example Dialog

 ```
-User:
+User: [Invokes the learn workflow via MCP]

 Claude: I'll analyze this conversation for learnings from DeepWork job executions.

@@ -326,11 +304,9 @@ I found the following job executions:

 3. Updated job.yml version to 1.0.1 with changelog entry

-4. Ran `deepwork sync`
-
 **Summary**

-Updated job instructions and created AGENTS.md with bespoke learnings.
+Updated job instructions and created AGENTS.md with bespoke learnings.
 ```

 ## Handling Edge Cases
--- /dev/null
+++ b/deepwork/standard_jobs/deepwork_jobs/steps/test.md
@@ -0,0 +1,154 @@
+# Test the New Workflow
+
+## Objective
+
+Run the newly created workflow on a real use case chosen by the user, critique the output, and iterate until the user is satisfied with the results. This step validates that the workflow works as intended before finalizing it.
+
+## Task
+
+Guide the user through testing their new workflow by running it on a real example, then critically evaluating the output and refining it based on user feedback.
+
+### Step 1: Announce Readiness and Gather Test Case
+
+The workflow is now implemented and ready to test. Use the AskUserQuestion tool to:
+
+1. **Inform the user** that the workflow is ready for a test run
+2. **Ask what they'd like to test it on** - Get a specific, real use case
+
+Example question to ask:
+```
+Your new workflow is ready to try out! What would you like to use it on for the first test run?
+
+Please describe a specific case you want to run through the workflow - ideally something you actually need done, so we can validate the workflow produces useful results.
+```
+
+**Important**: Get a concrete, specific test case. Vague responses like "just test it" should be followed up with clarifying questions to understand what inputs/context the workflow needs.
+
+### Step 2: Prepare and Run the Workflow
+
+1. **Prepare clean context** - Before invoking the workflow, consider compacting the conversation history (e.g., using `/compact` in Claude Code) to ensure the workflow starts with clean context focused on the test case.
+
+2. **Start the new workflow** - Use `start_workflow` through the DeepWork MCP server with the job name and workflow name to begin executing the workflow.
+
+3. **Complete the full workflow** - Continue through all steps of the workflow until it produces its final output. Use `finished_step` to progress through each step.
+
+4. **Note any issues during execution** - Pay attention to:
+   - Confusion or ambiguity in instructions
+   - Missing information that had to be asked for
+   - Steps that took longer than expected
+   - Awkward tool usage or process flow
+
+### Step 3: Critique the Output
+
+After the workflow completes, perform a self-critique of the output:
+
+1. **Review the final deliverable** - Read through all outputs produced by the workflow
+
+2. **Identify up to 3 top issues** - Look for problems such as:
+   - Missing information or sections
+   - Formatting issues (layout, structure, readability)
+   - Quality problems (vague content, errors, inconsistencies)
+   - Misalignment with what the user likely wanted
+   - Technical issues (broken links, malformed data, etc.)
+
+3. **Present each issue to the user** with a specific question asking if they want it fixed. For example:
+   ```
+   I noticed a few things we could improve:
+
+   1. **Text overlap**: The PDF has some text overlapping images in a few places - shall I correct that?
+
+   2. **Missing summary**: The report doesn't have an executive summary at the top - would you like me to add one?
+
+   3. **Data formatting**: The numbers aren't consistently formatted (some have commas, some don't) - should I standardize them?
+   ```
+
+4. **Make corrections** for any issues the user confirms they want fixed
+
+### Step 4: Gather Additional Feedback
+
+After addressing the identified issues:
+
+1. **Ask if the user has more comments** on the output:
+   ```
+   I've made those corrections. Is there anything else you'd like me to change about the output?
+   ```
+
+2. **Iterate on feedback** - Keep making corrections based on user feedback until they indicate they're satisfied
+
+3. **Confirm completion** - When the user says the output is good, confirm that testing is complete
+
+### Loop Behavior
+
+The feedback loop should continue until the user explicitly indicates satisfaction. Look for signals like:
+- "Looks good"
+- "That's fine"
+- "I'm happy with it"
+- "No more changes"
+- "Perfect"
+
+If the user provides more feedback, address it and ask again. Don't assume satisfaction without explicit confirmation.
+
+## Example Dialog
+
+```
+Claude: Your new workflow is ready to try out!
+
+The `monthly_report` workflow has been implemented with these steps:
+1. gather_data - Collects metrics from various sources
+2. analyze_trends - Identifies patterns and insights
+3. generate_report - Creates the final PDF report
+
+What would you like to use it on for the first test run? Please describe a specific report you need - for example, which month's data and any particular focus areas.
+
+User: Let's do the January 2026 engineering report, focusing on deployment frequency and incident response times.
+
+Claude: Great, let me run the workflow on your January 2026 engineering report.
+
+[Starting the monthly_report workflow via MCP...]
+
+[... workflow runs through all steps ...]
+
+Claude: The January 2026 Engineering Report has been generated. Let me review it for any issues.
+
+I noticed a few things we could improve:
+
+1. **Chart readability**: The deployment frequency chart uses very similar colors for different teams - shall I change to a more distinct color palette?
+
+2. **Missing context**: The incident response times don't include comparison to the previous month - would you like me to add month-over-month comparison?
+
+3. **Executive summary**: The summary is quite technical - should I add a simpler high-level overview for non-technical stakeholders?
+
+User: Yes to all three, please.
+
+Claude: Done! I've updated the report with:
+- Distinct color palette for the deployment chart
+- Month-over-month comparison for incident response times
+- Added a non-technical executive summary
+
+Is there anything else you'd like me to change about the report?
+
+User: Can you make the tables larger? They're a bit hard to read.
+
+Claude: I've increased the table sizes for better readability. The font is now larger and there's more padding around cells.
+
+Is there anything else you'd like me to adjust?
+
+User: No, that looks great!
+
+Claude: Excellent! The test run is complete. The workflow successfully generated your January 2026 Engineering Report with all the refinements applied.
+
+**Test Summary:**
+- Workflow ran successfully through all 3 steps
+- Output required adjustments to chart colors, comparisons, and table sizing
+- User is satisfied with the final result
+
+Ready to proceed to the iterate step to improve the workflow based on what we learned.
+```
+
+## Important Guidelines
+
+1. **Use real test cases** - Testing with actual data/needs validates the workflow better than hypothetical examples
+2. **Be specific in critiques** - Don't say "formatting issues" - say exactly what's wrong and where
+3. **Limit initial critique to 3 issues** - Too many issues at once is overwhelming; address more in the feedback loop
+4. **Don't assume what user wants fixed** - Always ask before making corrections
+5. **Iterate until satisfied** - Don't rush to completion; the user's satisfaction is the goal
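The step instructions above drive the whole test run through the new MCP server (`deepwork/mcp/server.py`). As a rough illustration of that call sequence — not part of the packaged files, and with the `deepwork serve` launch command, the `outputs` argument name, and the `monthly_report` workflow name assumed rather than taken from the package — a client built on the official `mcp` Python SDK might look like this:

```python
# Hypothetical sketch only; the server launch command and some argument names are assumptions.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def run_test_workflow() -> None:
    # Assumed CLI entry point for the MCP server added in 0.7.0 (deepwork/cli/serve.py).
    server = StdioServerParameters(command="deepwork", args=["serve"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # 1. Discover available workflows.
            workflows = await session.call_tool("get_workflows", arguments={})
            print(workflows.content)

            # 2. Start the workflow under test; goal/job_name/workflow_name are the
            #    parameters named in the generated /deepwork skill.
            step = await session.call_tool(
                "start_workflow",
                arguments={
                    "goal": "January 2026 engineering report",
                    "job_name": "monthly_report",
                    "workflow_name": "monthly_report",  # assumed to match the job name
                },
            )
            print(step.content)  # step instructions to follow

            # 3. Report outputs when a step is done; the quality gate replies with
            #    needs_work, next_step, or workflow_complete.
            result = await session.call_tool(
                "finished_step",
                arguments={"outputs": ["reports/january_2026_engineering_report.pdf"]},
            )
            print(result.content)


asyncio.run(run_test_workflow())
```

This mirrors the `mcp__deepwork__get_workflows` / `start_workflow` / `finished_step` permissions granted in the new `settings.json` template below; a coding agent such as Claude Code makes the same tool calls directly rather than through a standalone script.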
--- a/deepwork/standard_jobs/deepwork_jobs/templates/job.yml.template
+++ b/deepwork/standard_jobs/deepwork_jobs/templates/job.yml.template
@@ -34,6 +34,8 @@ steps:
     outputs:
       - [output_filename_or_path] # e.g., "report.md" or "reports/analysis.md"
     dependencies: [] # List of step IDs that must complete first
+    # Optional: Delegate to a specific agent type (uses context: fork)
+    # agent: general-purpose # or other agent type
     # Optional: Quality validation hooks
     hooks:
       after_agent:
--- /dev/null
+++ b/deepwork/templates/claude/AGENTS.md
@@ -0,0 +1,38 @@
+# Claude Templates - Agent Notes
+
+Notes for AI agents working on Claude Code jinja templates.
+
+## Prompt-Based Stop Hooks - NOT WORKING
+
+**IMPORTANT: Prompt-based stop hooks (`type: prompt` for Stop/SubagentStop events) do not currently work properly in Claude Code.**
+
+Reference: https://github.com/anthropics/claude-code/issues/20221
+
+### What This Means
+
+- Do NOT use `type: prompt` hooks for Stop or SubagentStop events in templates
+- The quality validation loop pattern that relies on prompt hooks will not function as expected
+- Instead, instruct agents to use a sub-agent (e.g., Haiku) to review their work against quality criteria
+
+### Workaround
+
+Instead of prompt-based stop hooks, templates should include explicit instructions in the command content directing the agent to:
+
+1. Have a sub-agent using Haiku review the work against the specified quality criteria
+2. Fix any valid issues raised by the sub-agent
+3. Have the sub-agent review again until all valid feedback is handled
+
+### Future Reversal
+
+If prompt-based stop hooks are fixed in Claude Code (check the issue above for updates), this guidance should be reversed and prompt hooks can be re-enabled in templates.
+
+## Historical Context (Prompt Hooks - When They Work)
+
+The following guidance applies IF prompt hooks start working again:
+
+When writing prompt-based hooks (e.g., Stop hooks with `type: prompt`):
+
+- **Do NOT include instructions on how to return responses** (e.g., "respond with JSON", "return `{"ok": true}`"). Claude Code's internal instructions already specify the expected response format for prompt hooks.
+- Adding redundant response format instructions can cause conflicts or confusion with the built-in behavior. i.e. the hook will not block the agent from stopping.
+
+Reference: https://github.com/anthropics/claude-code/issues/11786
--- /dev/null
+++ b/deepwork/templates/claude/settings.json
@@ -0,0 +1,16 @@
+{
+  "permissions": {
+    "allow": [
+      "Read(./.deepwork/**)",
+      "Edit(./.deepwork/**)",
+      "Write(./.deepwork/**)",
+      "Bash(deepwork:*)",
+      "Bash(./.deepwork/jobs/deepwork_jobs/make_new_job.sh:*)",
+      "WebSearch",
+      "mcp__deepwork__get_workflows",
+      "mcp__deepwork__start_workflow",
+      "mcp__deepwork__finished_step",
+      "mcp__deepwork__abort_workflow"
+    ]
+  }
+}
--- /dev/null
+++ b/deepwork/templates/claude/skill-deepwork.md.jinja
@@ -0,0 +1,37 @@
+{#
+Template: skill-deepwork.md.jinja
+Purpose: Generates the main /deepwork skill that instructs agents to use MCP tools
+
+This template is used to create the entry-point skill for DeepWork.
+Instead of containing step instructions, it directs agents to use the
+DeepWork MCP server tools.
+#}
+---
+name: deepwork
+description: "Start or continue DeepWork workflows using MCP tools"
+---
+
+# DeepWork Workflow Manager
+
+Execute multi-step workflows with quality gate checkpoints.
+
+> **IMPORTANT**: Use the DeepWork MCP server tools. All workflow operations
+> are performed through MCP tool calls and following the instructions they return,
+> not by reading instructions from files.
+
+## How to Use
+
+1. Call `get_workflows` to discover available workflows
+2. Call `start_workflow` with goal, job_name, and workflow_name
+3. Follow the step instructions returned
+4. Call `finished_step` with your outputs when done
+5. Handle the response: `needs_work`, `next_step`, or `workflow_complete`
+
+## Intent Parsing
+
+When the user invokes `/deepwork`, parse their intent:
+1. **ALWAYS**: Call `get_workflows` to discover available workflows
+2. Based on the available flows and what the user said in their request, proceed:
+   - **Explicit workflow**: `/deepwork <a workflow name>` → start the `<a workflow name>` workflow
+   - **General request**: `/deepwork <a request>` → infer best match from available workflows
+   - **No context**: `/deepwork` alone → ask user to choose from available workflows
--- /dev/null
+++ b/deepwork/templates/gemini/skill-deepwork.md.jinja
@@ -0,0 +1,37 @@
+{#
+Template: skill-deepwork.md.jinja
+Purpose: Generates the main /deepwork skill that instructs agents to use MCP tools
+
+This template is used to create the entry-point skill for DeepWork.
+Instead of containing step instructions, it directs agents to use the
+DeepWork MCP server tools.
+#}
++++
+name = "deepwork"
+description = "Start or continue DeepWork workflows using MCP tools"
++++
+
+# DeepWork Workflow Manager
+
+Execute multi-step workflows with quality gate checkpoints.
+
+> **IMPORTANT**: Use the DeepWork MCP server tools. All workflow operations
+> are performed through MCP tool calls and following the instructions they return,
+> not by reading instructions from files.
+
+## How to Use
+
+1. Call `get_workflows` to discover available workflows
+2. Call `start_workflow` with goal, job_name, and workflow_name
+3. Follow the step instructions returned
+4. Call `finished_step` with your outputs when done
+5. Handle the response: `needs_work`, `next_step`, or `workflow_complete`
+
+## Intent Parsing
+
+When the user invokes `/deepwork`, parse their intent:
+1. **ALWAYS**: Call `get_workflows` to discover available workflows
+2. Based on the available flows and what the user said in their request, proceed:
+   - **Explicit workflow**: `/deepwork <a workflow name>` → start the `<a workflow name>` workflow
+   - **General request**: `/deepwork <a request>` → infer best match from available workflows
+   - **No context**: `/deepwork` alone → ask user to choose from available workflows
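The skill's "How to Use" list ends with handling the three `finished_step` outcomes. Purely as a hedged sketch — the real response payload is defined by `deepwork/mcp/schemas.py`, which is not shown in this diff, so the field names below are assumptions and only the three status names come from the skill text — the branching a client performs could look like:

```python
# Hypothetical response handling; "status", "feedback", and "instructions" are assumed field names.
import json


def handle_finished_step(raw_text: str) -> str:
    """Decide what to do after a finished_step tool call."""
    response = json.loads(raw_text)  # assumes the tool returns a JSON text block
    status = response.get("status")

    if status == "needs_work":
        # The quality gate rejected the outputs: revise and call finished_step again.
        return f"Revise this step: {response.get('feedback', 'no feedback provided')}"
    if status == "next_step":
        # Follow the instructions for the next step in the workflow.
        return f"Continue with: {response.get('instructions', '')}"
    if status == "workflow_complete":
        return "Workflow finished; summarize the results for the user."
    return f"Unexpected status: {status!r}"
```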