wogiflow 1.0.34 → 1.0.36

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -89,4 +89,74 @@ Recommended models for code generation:
  - **Qwen3-Coder 30B** - Best code quality
  - **DeepSeek Coder** - Good balance

+ ## Hybrid Mode Intelligence (v2.1)
+
+ Hybrid mode now includes intelligent features that learn from each execution:
+
+ ### Model Learning Profiles
+
+ Each executor model gets its own learning profile at `.workflow/state/model-profiles/`.
+ The system learns:
+ - What context each model needs for success
+ - Common failure patterns to avoid
+ - Optimal example count and instruction richness
+
+ ```bash
+ # View model profiles
+ node scripts/flow-model-profile.js list
+
+ # Get profile for a specific model
+ node scripts/flow-model-profile.js get qwen3-coder
+
+ # Get instruction richness recommendation
+ node scripts/flow-model-profile.js richness qwen3-coder create --json
+ ```
+
+ ### Task Type Classification
+
+ Tasks are automatically classified as:
+ - **create** - New files/components
+ - **modify** - Edit existing files
+ - **refactor** - Structural changes
+ - **fix** - Bug fixes
+ - **integrate** - Connect systems
+
+ Each type loads specific context and follows learned patterns.
+
+ ```bash
+ # Classify a task
+ node scripts/flow-task-classifier.js classify "Add user authentication"
+
+ # Get context for a task type
+ node scripts/flow-task-classifier.js context create
+ ```
+
+ ### Failure Learning
+
+ When execution fails, the system:
+ 1. Asks the executor what information was missing
+ 2. Updates the model profile with learnings
+ 3. Retries with enhanced context
+
+ ```bash
+ # View learning statistics
+ node scripts/flow-failure-learning.js stats
+
+ # View recent learnings
+ node scripts/flow-failure-learning.js recent qwen3-coder
+ ```
+
+ ### Cheaper Context Generation
+
+ Context is generated using the cheapest appropriate model:
+ - **Scripts**: File listing, export extraction (free)
+ - **Haiku**: Import mapping, PIN generation (cheap)
+ - **Sonnet**: Pattern identification (moderate)
+ - **Opus**: Architecture analysis (only when needed)
+
+ ```bash
+ # Generate project context
+ node scripts/flow-context-generator.js generate --verbose
+ ```
+
  Let me detect your local LLM setup now...
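The tier list above is effectively a cost-ordered router: each context-generation task maps to the cheapest tier that can handle it. A minimal sketch of that idea (the task names and the `pickTier` helper are illustrative, not the package's actual API):

```javascript
// Cost-ordered tiers, cheapest first (mirrors the tier list above).
const TIERS = ['scripts', 'haiku', 'sonnet', 'opus'];

// Illustrative mapping of context tasks to the cheapest capable tier.
const TASK_TIER = {
  'file-listing': 'scripts',
  'export-extraction': 'scripts',
  'import-mapping': 'haiku',
  'pin-generation': 'haiku',
  'pattern-identification': 'sonnet',
  'architecture-analysis': 'opus',
};

// Unknown tasks fall back to the most capable tier.
function pickTier(task) {
  return TASK_TIER[task] ?? TIERS[TIERS.length - 1];
}

console.log(pickTier('file-listing'));   // scripts
console.log(pickTier('import-mapping')); // haiku
```

Routing unrecognized work to the most capable tier trades cost for safety; defaulting the other way would silently produce weak context.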
@@ -143,6 +143,39 @@ Ask user about default preferences:
  }
  ```

+ ### Step 3.5: Include Claude in Peer Reviews (Optional)
+
+ Ask if Claude should also participate as a reviewer:
+
+ ```javascript
+ {
+   question: "Include Claude as a peer reviewer?",
+   header: "Claude Review",
+   multiSelect: false,
+   options: [
+     {
+       label: "Yes (Recommended)",
+       description: "Claude also reviews alongside external models for an extra perspective"
+     },
+     {
+       label: "No",
+       description: "Only use external models for peer review"
+     }
+   ]
+ }
+ ```
+
+ Save to config:
+ ```javascript
+ const modelConfig = require('./scripts/flow-model-config');
+ modelConfig.setIncludeClaude(userSelectedYes);
+ ```
+
+ **Why include Claude?**
+ - Provides additional perspective from the orchestrating model
+ - Can leverage full conversation context when reviewing
+ - Catches things external models might miss due to context limitations
+
  ### Step 4: Summary

  Display configuration summary:
@@ -161,6 +194,7 @@ API Keys stored in: .env
  Config saved to: .workflow/config.json

  Default for peer review: gpt-4o, gemini-2.0-flash
+ Include Claude in reviews: Yes ✓
  Default for hybrid mode: local:qwen2.5-coder

  You can now use:
@@ -12,12 +12,16 @@ const modelConfig = require('./scripts/flow-model-config');
  // Run migration if needed (handles old config formats)
  modelConfig.migrateOldConfig();

+ // Check if Claude should also review
+ const includeClaude = modelConfig.shouldIncludeClaude();
+
  // Check if models already selected this session
  const sessionModels = modelConfig.getSessionModels('peerReview');

  if (sessionModels && sessionModels.length > 0 && !args.includes('--select-models')) {
    // Use session models - show brief note
-   console.log(`Using models: ${sessionModels.join(', ')}`);
+   const claudeNote = includeClaude ? ' + Claude' : '';
+   console.log(`Using models: ${sessionModels.join(', ')}${claudeNote}`);
    console.log(`(Run with --select-models to change)`);
    // Proceed with review using sessionModels
  } else {
@@ -56,6 +60,8 @@ Show selection dialog when:
    header: "Models",
    multiSelect: true,
    options: [
+     // Claude option (when includeClaude is enabled in config)
+     { label: "Claude (current session)", description: "Reviews using current conversation context" },
      // Dynamically populated from configured models
      { label: "openai:gpt-4o", description: "Best quality reasoning" },
      { label: "openai:gpt-4o-mini", description: "Faster, cheaper" },
@@ -66,7 +72,11 @@ Show selection dialog when:
  }
  ```

- **Show only models that:**
+ **Show the Claude option when:**
+ - `modelConfig.shouldIncludeClaude()` returns `true`
+ - Claude is listed first (recommended), since it has full conversation context
+
+ **Show external models that:**
  1. Are configured in `models.providers`
  2. Have `enabled: true`
  3. Have an API key set (check `process.env[apiKeyEnv]`) or are local
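The three criteria above amount to a single filter over the configured providers. A sketch assuming a simplified provider shape (`enabled`, `apiKeyEnv`, `local`, and a `models` list; the package's real config may carry more fields):

```javascript
// Filter providers down to selectable models per the criteria above:
// configured, enabled, and either local or holding an API key.
function selectableModels(providers, env) {
  return Object.entries(providers)
    .filter(([, p]) => p.enabled)                    // 2. enabled: true
    .filter(([, p]) => p.local || env[p.apiKeyEnv])  // 3. API key set, or local
    .flatMap(([name, p]) => p.models.map(m => `${name}:${m}`));
}

// Example config in the simplified shape assumed above.
const providers = {
  openai: { enabled: true, apiKeyEnv: 'OPENAI_API_KEY', models: ['gpt-4o', 'gpt-4o-mini'] },
  google: { enabled: false, apiKeyEnv: 'GOOGLE_API_KEY', models: ['gemini-2.0-flash'] },
  local:  { enabled: true, local: true, models: ['qwen2.5-coder'] },
};

console.log(selectableModels(providers, { OPENAI_API_KEY: 'sk-...' }));
// → [ 'openai:gpt-4o', 'openai:gpt-4o-mini', 'local:qwen2.5-coder' ]
```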
@@ -84,13 +94,24 @@ Then proceed with the review using selected models.
 
  ## How It Works

- 1. **Primary model (Claude)** reviews the changes for improvements
- 2. **Secondary model(s)** review the same changes
- 3. **Findings are compared** and disagreements surfaced
- 4. **Primary model responds** to peer feedback:
+ ### When `includeClaude` is enabled (recommended):
+ 1. **Claude reviews first** using the same improvement-focused prompt as external models
+ 2. **External model(s)** review the same changes via API
+ 3. **All findings compared** (Claude + external models)
+ 4. **Claude synthesizes** all perspectives and responds to disagreements
+
+ ### When `includeClaude` is disabled:
+ 1. **External model(s)** review the changes via API
+ 2. **Findings are compared** across external models
+ 3. **Claude synthesizes** findings and responds to peer feedback:
    - Defends decisions with context
    - OR acknowledges valid alternatives

+ **Why include Claude?**
+ - Provides additional perspective alongside external models
+ - Has full conversation context (knows why certain decisions were made)
+ - Catches things external models might miss due to context limitations
+
  ## Key Difference from `/wogi-review`

  | `/wogi-review` | `/wogi-peer-review` |
@@ -156,7 +177,8 @@ Models are configured in `.workflow/config.json` under `models`:
      }
    },
    "defaults": {
-     "peerReview": ["openai:gpt-4o", "google:gemini-2.0-flash"]
+     "peerReview": ["openai:gpt-4o", "google:gemini-2.0-flash"],
+     "includeClaude": true
    }
  }
  }
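With the `defaults` block above, a `shouldIncludeClaude()` check reduces to a guarded lookup with a safe fallback. A minimal sketch (the actual `flow-model-config` implementation may differ):

```javascript
// Read the includeClaude flag from a parsed config object,
// defaulting to false when the key (or any parent object) is absent.
function shouldIncludeClaude(config) {
  return config?.models?.defaults?.includeClaude === true;
}

const config = {
  models: {
    defaults: {
      peerReview: ['openai:gpt-4o', 'google:gemini-2.0-flash'],
      includeClaude: true,
    },
  },
};

console.log(shouldIncludeClaude(config)); // true
console.log(shouldIncludeClaude({}));     // false (missing key is a safe default)
```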
@@ -202,18 +224,53 @@ When manual:
  ├────────────────────────────────────────────────────────┤
  │ 1. Collect code changes (git diff or specified files)  │
  │ 2. Generate improvement-focused prompt                 │
- │ 3. Claude reviews for improvements                     │
- │ 4. Secondary model(s) review                           │
- │ 5. Compare findings:                                   │
- │    • Both agree → Strong suggestion                    │
- │    • Disagree → Present both perspectives              │
- │ 6. Claude responds to peer feedback:                   │
+ │ 3. If includeClaude enabled:                           │
+ │    • Launch Claude review (Task agent, Explore type)   │
+ │    • Claude reviews using same prompt as external      │
+ │ 4. External model(s) review via API                    │
+ │ 5. Collect all results                                 │
+ │ 6. Compare findings:                                   │
+ │    • All agree → Strong suggestion                     │
+ │    • Partial agree → Present perspectives              │
+ │    • Disagree → Surface disagreement                   │
+ │ 7. Claude synthesizes and responds to feedback:        │
  │    • "I have more context, here's why X is better..."  │
  │    • "Valid point, Y would be an improvement..."       │
- │ 7. Output final synthesis                              │
+ │ 8. Output final synthesis                              │
  └────────────────────────────────────────────────────────┘
  ```

+ ### Claude Review Implementation
+
+ When `includeClaude` is enabled, launch a Task agent to perform Claude's review:
+
+ ```javascript
+ // In wogi-peer-review execution
+ const modelConfig = require('./scripts/flow-model-config');
+
+ if (modelConfig.shouldIncludeClaude()) {
+   // Launch Task agent with subagent_type=Explore
+   // Use the same improvement-focused prompt as external models
+   // The agent reviews the code and returns findings
+   // Add Claude's results to the comparison alongside external model results
+ }
+ ```
+
+ **Task agent prompt for Claude review:**
+ ```
+ Review this code for IMPROVEMENT OPPORTUNITIES (not bugs):
+
+ 1. Optimization: Can this be faster/more efficient?
+ 2. Alternatives: Are there better approaches?
+ 3. Patterns: Does this follow best practices?
+ 4. Readability: Could this be clearer/simpler?
+ 5. Extensibility: Will this be easy to extend?
+
+ [code changes]
+
+ Return structured findings with specific suggestions.
+ ```
+
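The compare step in the flow above boils down to grouping each finding by how many reviewers raised it: unanimous findings become strong suggestions, shared ones partial agreement, and single-reviewer findings surface as unique insights. A sketch with illustrative data shapes (not the package's internal representation):

```javascript
// Group findings by how many reviewers raised them, per the
// "Compare findings" step above. Reviewer/finding shapes are illustrative.
function compareFindings(reviews) {
  const raisedBy = new Map();
  for (const { reviewer, findings } of reviews) {
    for (const f of findings) {
      if (!raisedBy.has(f)) raisedBy.set(f, []);
      raisedBy.get(f).push(reviewer);
    }
  }
  const total = reviews.length;
  const result = { agreement: [], partial: [], unique: [] };
  for (const [finding, who] of raisedBy) {
    if (who.length === total) result.agreement.push(finding);       // all agree
    else if (who.length > 1) result.partial.push(finding);          // partial agree
    else result.unique.push(`[${who[0]}] ${finding}`);              // unique insight
  }
  return result;
}

const out = compareFindings([
  { reviewer: 'Claude', findings: ['early return', 'input validation'] },
  { reviewer: 'GPT-4o', findings: ['early return', 'memoization'] },
  { reviewer: 'Gemini', findings: ['early return', 'input validation'] },
]);
console.log(out.agreement); // [ 'early return' ]
console.log(out.partial);   // [ 'input validation' ]
console.log(out.unique);    // [ '[GPT-4o] memoization' ]
```

Genuine disagreements (reviewers holding conflicting opinions on the same point, as in the Disagreement section of the output format below) still need Claude's synthesis step; counting alone cannot resolve them.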
  ## Review Prompt Template

  The peer review focuses on improvements, not correctness:
@@ -243,21 +300,30 @@ Respond with:
  🔍 Peer Review Results
  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

- ✅ Agreement (2/2 models):
+ Reviewers: Claude, GPT-4o, Gemini 2.0 Flash
+
+ ✅ Agreement (3/3 models):
  • Consider using early return for readability
  • Extract repeated logic to helper function

+ ⚖️ Partial Agreement (2/3 models):
+ • Claude + Gemini: Add input validation at boundary
+ • GPT-4o: Not necessary for internal function
+
  ⚖️ Disagreement:
  • Claude: Prefer inline styling for this case
- • GPT-4: Recommend extracting to CSS module
+ • GPT-4o: Recommend extracting to CSS module
+ • Gemini: No strong opinion
  → Resolution: Context-dependent, current approach is valid

  💡 Unique Insights:
- • [GPT-4] Consider memoization for expensive computation
  • [Claude] Current architecture handles edge case X well
+ • [GPT-4o] Consider memoization for expensive computation
+ • [Gemini] Similar pattern used in popular library Y

  📊 Summary:
- 3 actionable improvements identified
+ Reviewers: 3 (Claude + 2 external)
+ 4 actionable improvements identified
  1 disagreement resolved
  Code quality: Good, with minor optimization opportunities
  ```
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "wogiflow",
-   "version": "1.0.34",
+   "version": "1.0.36",
    "description": "AI-powered development workflow management system with multi-model support",
    "main": "lib/index.js",
    "bin": {