claude-mpm 5.6.9__py3-none-any.whl → 5.6.11__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
claude_mpm/VERSION CHANGED
@@ -1 +1 @@
- 5.6.9
+ 5.6.11
@@ -294,6 +294,8 @@ If you're about to run ANY other command, stop and delegate instead.
  - Grep (>1), Glob (investigation) → Delegate to research
  - `mcp__mcp-ticketer__*` → Delegate to ticketing
  - `mcp__chrome-devtools__*` → Delegate to web-qa
+ - `mcp__claude-in-chrome__*` → Delegate to web-qa
+ - `mcp__playwright__*` → Delegate to web-qa

  ## Agent Deployment Architecture

@@ -358,7 +360,7 @@ These are EXAMPLES of routing, not an exhaustive list. **Default to delegation f
  | **Research** | Understanding codebase, investigating approaches, analyzing files | Grep, Glob, Read multiple files, WebSearch | Investigation tools |
  | **Engineer** | Writing/modifying code, implementing features, refactoring | Edit, Write, codebase knowledge, testing workflows | - |
  | **Ops** (local-ops) | Deploying apps, managing infrastructure, starting servers, port/process management | Environment config, deployment procedures | Use `local-ops` for localhost/PM2/docker |
- | **QA** (web-qa, api-qa) | Testing implementations, verifying deployments, regression tests, browser testing | Playwright (web), fetch (APIs), verification protocols | For browser: use **web-qa** (never use chrome-devtools directly) |
+ | **QA** (web-qa, api-qa) | Testing implementations, verifying deployments, regression tests, browser testing | Playwright (web), fetch (APIs), verification protocols | For browser: use **web-qa** (never use chrome-devtools, claude-in-chrome, or playwright directly) |
  | **Documentation** | Creating/updating docs, README, API docs, guides | Style consistency, organization standards | - |
  | **Ticketing** | ALL ticket operations (CRUD, search, hierarchy, comments) | Direct mcp-ticketer access | PM never uses `mcp__mcp-ticketer__*` directly |
  | **Version Control** | Creating PRs, managing branches, complex git ops | PR workflows, branch management | Check git user for main branch access (bobmatnyc@users.noreply.github.com only) |
@@ -728,7 +730,7 @@ Circuit breakers automatically detect and enforce delegation requirements. All c
  | 3 | Unverified Assertions | PM claiming status without agent evidence | Require verification evidence | [Details](#circuit-breaker-3-unverified-assertions) |
  | 4 | File Tracking | PM marking task complete without tracking new files | Run git tracking sequence | [Details](#circuit-breaker-4-file-tracking-enforcement) |
  | 5 | Delegation Chain | PM claiming completion without full workflow delegation | Execute missing phases | [Details](#circuit-breaker-5-delegation-chain) |
- | 6 | Forbidden Tool Usage | PM using ticketing/browser MCP tools directly | Delegate to specialist agent | [Details](#circuit-breaker-6-forbidden-tool-usage) |
+ | 6 | Forbidden Tool Usage | PM using ticketing/browser MCP tools (ticketer, chrome-devtools, claude-in-chrome, playwright) directly | Delegate to specialist agent | [Details](#circuit-breaker-6-forbidden-tool-usage) |
  | 7 | Verification Commands | PM using curl/lsof/ps/wget/nc | Delegate to local-ops or QA | [Details](#circuit-breaker-7-verification-command-detection) |
  | 8 | QA Verification Gate | PM claiming work complete without QA delegation | BLOCK - Delegate to QA now | [Details](#circuit-breaker-8-qa-verification-gate) |
  | 9 | User Delegation | PM instructing user to run commands | Delegate to appropriate agent | [Details](#circuit-breaker-9-user-delegation-detection) |
@@ -747,6 +749,9 @@ Circuit breakers automatically detect and enforce delegation requirements. All c
  - "It works" / "It's deployed" → Circuit Breaker #3
  - Marks todo complete without `git status` → Circuit Breaker #4
  - Uses `mcp__mcp-ticketer__*` → Circuit Breaker #6
+ - Uses `mcp__chrome-devtools__*` → Circuit Breaker #6
+ - Uses `mcp__claude-in-chrome__*` → Circuit Breaker #6
+ - Uses `mcp__playwright__*` → Circuit Breaker #6
  - Uses curl/lsof directly → Circuit Breaker #7
  - Claims complete without QA → Circuit Breaker #8
  - "You'll need to run..." → Circuit Breaker #9
@@ -782,7 +787,7 @@ When the user says "just do it" or "handle it", delegate to the full workflow pi

  When the user says "verify", "check", or "test", delegate to the QA agent with specific verification criteria.

- When the user mentions "browser", "screenshot", "click", "navigate", "DOM", "console errors", delegate to web-qa agent for browser testing (NEVER use chrome-devtools tools directly).
+ When the user mentions "browser", "screenshot", "click", "navigate", "DOM", "console errors", "tabs", "window", delegate to web-qa agent for browser testing (NEVER use chrome-devtools, claude-in-chrome, or playwright tools directly).

  When the user mentions "localhost", "local server", or "PM2", delegate to **local-ops** as the primary choice for local development operations.

@@ -0,0 +1,10 @@
+ """Core coordination components for MPM Commander.
+
+ This module provides core components that coordinate between different
+ subsystems like events, work execution, and session management.
+ """
+
+ from .block_manager import BlockManager
+ from .response_manager import ResponseManager, ResponseRoute
+
+ __all__ = ["BlockManager", "ResponseManager", "ResponseRoute"]
@@ -0,0 +1,325 @@
+ """BlockManager for coordinating work blocking with events.
+
+ This module provides BlockManager which automatically blocks/unblocks
+ work items based on blocking event detection and resolution.
+ """
+
+ import logging
+ from typing import Dict, List, Optional, Set
+
+ from ..events.manager import EventManager
+ from ..models.events import Event
+ from ..models.work import WorkState
+ from ..work.executor import WorkExecutor
+ from ..work.queue import WorkQueue
+
+ logger = logging.getLogger(__name__)
+
+
+ class BlockManager:
+     """Coordinates blocking events with work execution.
+
+     Monitors blocking events and automatically blocks/unblocks work items
+     based on event lifecycle. Tracks event-to-work relationships for precise
+     unblocking when events are resolved.
+
+     Attributes:
+         event_manager: EventManager for querying blocking events
+         work_queues: Dict mapping project_id -> WorkQueue
+         work_executors: Dict mapping project_id -> WorkExecutor
+
+     Example:
+         >>> manager = BlockManager(event_manager, work_queues, work_executors)
+         >>> blocked = await manager.check_and_block(event)
+         >>> unblocked = await manager.check_and_unblock(event_id)
+     """
+
+     def __init__(
+         self,
+         event_manager: EventManager,
+         work_queues: Dict[str, WorkQueue],
+         work_executors: Dict[str, WorkExecutor],
+     ):
+         """Initialize BlockManager.
+
+         Args:
+             event_manager: EventManager instance
+             work_queues: Dict mapping project_id -> WorkQueue
+             work_executors: Dict mapping project_id -> WorkExecutor
+
+         Raises:
+             ValueError: If any required parameter is None
+         """
+         if event_manager is None:
+             raise ValueError("EventManager cannot be None")
+         if work_queues is None:
+             raise ValueError("work_queues cannot be None")
+         if work_executors is None:
+             raise ValueError("work_executors cannot be None")
+
+         self.event_manager = event_manager
+         self.work_queues = work_queues
+         self.work_executors = work_executors
+
+         # Track event-to-work mapping: event_id -> set of work_ids
+         self._event_work_mapping: Dict[str, Set[str]] = {}
+
+         logger.debug("BlockManager initialized")
+
+     async def check_and_block(self, event: Event) -> List[str]:
+         """Check if event is blocking and block affected work.
+
+         When a blocking event is detected:
+         1. Determine blocking scope (project or all)
+         2. Find all in-progress work items in scope
+         3. Block each work item via WorkExecutor
+         4. Track event-to-work mapping for later unblocking
+
+         Args:
+             event: Event to check for blocking
+
+         Returns:
+             List of work item IDs that were blocked
+
+         Example:
+             >>> event = Event(type=EventType.ERROR, ...)
+             >>> blocked = await manager.check_and_block(event)
+             >>> print(f"Blocked {len(blocked)} work items")
+         """
+         if not event.is_blocking:
+             logger.debug("Event %s is not blocking, no action needed", event.id)
+             return []
+
+         logger.info(
+             "Processing blocking event %s (scope: %s): %s",
+             event.id,
+             event.blocking_scope,
+             event.title,
+         )
+
+         blocked_work_ids = []
+
+         # Determine which projects to block based on scope
+         if event.blocking_scope == "all":
+             # Block all projects
+             target_projects = list(self.work_queues.keys())
+             logger.info("Event %s blocks ALL projects", event.id)
+         elif event.blocking_scope == "project":
+             # Block only this project
+             target_projects = [event.project_id]
+             logger.info("Event %s blocks project %s only", event.id, event.project_id)
+         else:
+             logger.warning(
+                 "Unknown blocking scope '%s' for event %s",
+                 event.blocking_scope,
+                 event.id,
+             )
+             return []
+
+         # Block in-progress work in target projects
+         for project_id in target_projects:
+             queue = self.work_queues.get(project_id)
+             if not queue:
+                 logger.debug("No work queue for project %s", project_id)
+                 continue
+
+             executor = self.work_executors.get(project_id)
+             if not executor:
+                 logger.debug("No work executor for project %s", project_id)
+                 continue
+
+             # Get in-progress work items
+             in_progress = queue.list(WorkState.IN_PROGRESS)
+
+             for work_item in in_progress:
+                 # Block the work item
+                 block_reason = f"Event {event.id}: {event.title}"
+                 success = await executor.handle_block(work_item.id, block_reason)
+
+                 if success:
+                     blocked_work_ids.append(work_item.id)
+                     logger.info(
+                         "Blocked work item %s for project %s: %s",
+                         work_item.id,
+                         project_id,
+                         block_reason,
+                     )
+                 else:
+                     logger.warning(
+                         "Failed to block work item %s for project %s",
+                         work_item.id,
+                         project_id,
+                     )
+
+         # Track event-to-work mapping
+         if blocked_work_ids:
+             self._event_work_mapping[event.id] = set(blocked_work_ids)
+             logger.info(
+                 "Event %s blocked %d work items: %s",
+                 event.id,
+                 len(blocked_work_ids),
+                 blocked_work_ids,
+             )
+
+         return blocked_work_ids
+
+     async def check_and_unblock(self, event_id: str) -> List[str]:
+         """Unblock work items when event is resolved.
+
+         When a blocking event is resolved:
+         1. Look up which work items were blocked by this event
+         2. Unblock each work item via WorkExecutor
+         3. Remove event-to-work mapping
+
+         Args:
+             event_id: ID of resolved event
+
+         Returns:
+             List of work item IDs that were unblocked
+
+         Example:
+             >>> unblocked = await manager.check_and_unblock("evt_123")
+             >>> print(f"Unblocked {len(unblocked)} work items")
+         """
+         # Get work items blocked by this event
+         work_ids = self._event_work_mapping.pop(event_id, set())
+
+         if not work_ids:
+             logger.debug("No work items blocked by event %s", event_id)
+             return []
+
+         logger.info(
+             "Unblocking %d work items for resolved event %s", len(work_ids), event_id
+         )
+
+         unblocked_work_ids = []
+
+         # Unblock each work item
+         for work_id in work_ids:
+             # Find which project this work belongs to
+             project_id = self._find_work_project(work_id)
+             if not project_id:
+                 logger.warning("Cannot find project for work item %s", work_id)
+                 continue
+
+             executor = self.work_executors.get(project_id)
+             if not executor:
+                 logger.warning("No executor for project %s", project_id)
+                 continue
+
+             # Unblock the work item
+             success = await executor.handle_unblock(work_id)
+
+             if success:
+                 unblocked_work_ids.append(work_id)
+                 logger.info("Unblocked work item %s", work_id)
+             else:
+                 logger.warning("Failed to unblock work item %s", work_id)
+
+         return unblocked_work_ids
+
+     def _find_work_project(self, work_id: str) -> Optional[str]:
+         """Find which project a work item belongs to.
+
+         Args:
+             work_id: Work item ID to search for
+
+         Returns:
+             Project ID if found, None otherwise
+         """
+         for project_id, queue in self.work_queues.items():
+             work_item = queue.get(work_id)
+             if work_item:
+                 return project_id
+         return None
+
+     def get_blocked_work(self, event_id: str) -> Set[str]:
+         """Get work items blocked by a specific event.
+
+         Args:
+             event_id: Event ID to check
+
+         Returns:
+             Set of work item IDs blocked by this event
+
+         Example:
+             >>> work_ids = manager.get_blocked_work("evt_123")
+         """
+         return self._event_work_mapping.get(event_id, set()).copy()
+
+     def get_blocking_events(self, work_id: str) -> List[str]:
+         """Get events that are blocking a specific work item.
+
+         Args:
+             work_id: Work item ID to check
+
+         Returns:
+             List of event IDs blocking this work item
+
+         Example:
+             >>> events = manager.get_blocking_events("work-123")
+         """
+         blocking_events = []
+         for event_id, work_ids in self._event_work_mapping.items():
+             if work_id in work_ids:
+                 blocking_events.append(event_id)
+         return blocking_events
+
+     def is_work_blocked(self, work_id: str) -> bool:
+         """Check if a work item is currently blocked.
+
+         Args:
+             work_id: Work item ID to check
+
+         Returns:
+             True if work item is blocked by any event, False otherwise
+
+         Example:
+             >>> if manager.is_work_blocked("work-123"):
+             ...     print("Work is blocked")
+         """
+         return len(self.get_blocking_events(work_id)) > 0
+
+     def clear_project_mappings(self, project_id: str) -> int:
+         """Clear all event-work mappings for a project.
+
+         Called when a project is shut down or reset.
+
+         Args:
+             project_id: Project ID to clear
+
+         Returns:
+             Number of work items that had mappings removed
+
+         Example:
+             >>> count = manager.clear_project_mappings("proj_123")
+         """
+         queue = self.work_queues.get(project_id)
+         if not queue:
+             return 0
+
+         # Get all work IDs for this project
+         all_work = queue.list()
+         project_work_ids = {w.id for w in all_work}
+
+         removed_count = 0
+
+         # Remove work items from event mappings
+         for event_id in list(self._event_work_mapping.keys()):
+             work_ids = self._event_work_mapping[event_id]
+             original_len = len(work_ids)
+
+             # Remove project work items
+             work_ids.difference_update(project_work_ids)
+
+             removed_count += original_len - len(work_ids)
+
+             # Remove empty mappings
+             if not work_ids:
+                 del self._event_work_mapping[event_id]
+
+         logger.info(
+             "Cleared %d work item mappings for project %s", removed_count, project_id
+         )
+
+         return removed_count
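The event-to-work bookkeeping that BlockManager layers on top of the executors can be sketched standalone. This is a minimal sketch under assumptions, not the claude_mpm API: `MappingTracker` is a hypothetical name, and the executor/queue interactions are omitted so only the mapping lifecycle (block, query, idempotent unblock) remains.

```python
from typing import Dict, List, Set


class MappingTracker:
    """Hypothetical minimal sketch of BlockManager's event-to-work mapping."""

    def __init__(self) -> None:
        # event_id -> set of work item IDs blocked by that event
        self._event_work_mapping: Dict[str, Set[str]] = {}

    def block(self, event_id: str, work_ids: List[str]) -> None:
        # Only record a mapping when something was actually blocked
        if work_ids:
            self._event_work_mapping[event_id] = set(work_ids)

    def unblock(self, event_id: str) -> List[str]:
        # pop() with a default makes resolution idempotent:
        # resolving the same event twice is a harmless no-op
        return sorted(self._event_work_mapping.pop(event_id, set()))

    def blocking_events(self, work_id: str) -> List[str]:
        # Reverse lookup: which events currently hold this work item
        return [e for e, w in self._event_work_mapping.items() if work_id in w]


tracker = MappingTracker()
tracker.block("evt_1", ["work-a", "work-b"])
print(tracker.blocking_events("work-a"))  # ['evt_1']
print(tracker.unblock("evt_1"))           # ['work-a', 'work-b']
print(tracker.unblock("evt_1"))           # []
```

The `pop(event_id, set())` idiom mirrors `check_and_unblock` above: mapping removal and retrieval happen in one step, so a resolved event can never double-unblock work.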
@@ -0,0 +1,323 @@
+ """ResponseManager for centralized response routing and validation.
+
+ This module provides ResponseManager which handles response validation,
+ routing, and delivery to runtime sessions.
+ """
+
+ import logging
+ from dataclasses import dataclass, field
+ from datetime import datetime, timezone
+ from typing import Any, Dict, List, Optional, Tuple
+
+ from ..events.manager import EventManager
+ from ..models.events import Event, EventType
+ from ..runtime.executor import RuntimeExecutor
+
+ logger = logging.getLogger(__name__)
+
+
+ def _utc_now() -> datetime:
+     """Return current UTC time with timezone info."""
+     return datetime.now(timezone.utc)
+
+
+ @dataclass
+ class ResponseRoute:
+     """Encapsulates a validated response ready for delivery.
+
+     Attributes:
+         event: Event being responded to
+         response: User's response text
+         valid: Whether validation passed
+         validation_errors: List of validation error messages
+         timestamp: When the route was created
+         delivered: Whether response has been delivered
+         delivery_timestamp: When the response was delivered
+     """
+
+     event: Event
+     response: str
+     valid: bool
+     validation_errors: List[str] = field(default_factory=list)
+     timestamp: datetime = field(default_factory=_utc_now)
+     delivered: bool = False
+     delivery_timestamp: Optional[datetime] = None
+
+
+ class ResponseManager:
+     """Centralizes response validation, routing, and delivery.
+
+     Provides centralized response handling with validation and routing
+     capabilities for event responses.
+
+     Attributes:
+         event_manager: EventManager for retrieving events
+         runtime_executor: Optional RuntimeExecutor for response delivery
+         _response_history: History of all response attempts per event
+
+     Example:
+         >>> manager = ResponseManager(event_manager, runtime_executor)
+         >>> valid, errors = manager.validate_response(event, "staging")
+         >>> if valid:
+         ...     route = manager.validate_and_route(event_id, "staging")
+         ...     success = await manager.deliver_response(route)
+     """
+
+     def __init__(
+         self,
+         event_manager: EventManager,
+         runtime_executor: Optional[RuntimeExecutor] = None,
+     ) -> None:
+         """Initialize ResponseManager.
+
+         Args:
+             event_manager: EventManager instance for retrieving events
+             runtime_executor: Optional RuntimeExecutor for response delivery
+
+         Raises:
+             ValueError: If event_manager is None
+         """
+         if event_manager is None:
+             raise ValueError("EventManager cannot be None")
+
+         self.event_manager = event_manager
+         self.runtime_executor = runtime_executor
+         self._response_history: Dict[str, List[ResponseRoute]] = {}
+
+         logger.debug(
+             "ResponseManager initialized (runtime_executor: %s)",
+             "enabled" if runtime_executor else "disabled",
+         )
+
+     def validate_response(self, event: Event, response: str) -> Tuple[bool, List[str]]:
+         """Validate response against event constraints.
+
+         Validation rules:
+         1. Empty responses: Not allowed for blocking events
+         2. DECISION_NEEDED options: Response must match one of the options
+         3. Response whitespace: Stripped before validation
+
+         Args:
+             event: Event being responded to
+             response: User's response
+
+         Returns:
+             Tuple of (is_valid, list_of_error_messages)
+
+         Example:
+             >>> valid, errors = manager.validate_response(event, "staging")
+             >>> if not valid:
+             ...     for error in errors:
+             ...         print(f"Validation error: {error}")
+         """
+         errors: List[str] = []
+
+         # Strip whitespace for validation
+         response_stripped = response.strip()
+
+         # Rule 1: Empty responses not allowed for blocking events
+         if event.is_blocking and not response_stripped:
+             errors.append("Response cannot be empty for blocking events")
+
+         # Rule 2: DECISION_NEEDED events must use one of the provided options
+         if event.type == EventType.DECISION_NEEDED and event.options:
+             if response_stripped not in event.options:
+                 errors.append(
+                     f"Response must be one of: {', '.join(event.options)}. "
+                     f"Got: '{response_stripped}'"
+                 )
+
+         # Future validation rules can be added here:
+         # - Max length check
+         # - Format validation (e.g., regex patterns)
+         # - Custom validators per event type
+         # - Conditional validation based on event context
+
+         is_valid = len(errors) == 0
+         return is_valid, errors
+
+     def validate_and_route(
+         self, event_id: str, response: str
+     ) -> Optional[ResponseRoute]:
+         """Create a validated ResponseRoute for an event.
+
+         Retrieves the event, validates the response, and creates a ResponseRoute
+         with validation results.
+
+         Args:
+             event_id: ID of event to respond to
+             response: User's response
+
+         Returns:
+             ResponseRoute with validation results, or None if event not found
+
+         Example:
+             >>> route = manager.validate_and_route("evt_123", "staging")
+             >>> if route and route.valid:
+             ...     await manager.deliver_response(route)
+             >>> elif route:
+             ...     print(f"Validation failed: {route.validation_errors}")
+         """
+         # Get the event
+         event = self.event_manager.get(event_id)
+         if not event:
+             logger.warning("Event not found: %s", event_id)
+             return None
+
+         # Validate response
+         valid, errors = self.validate_response(event, response)
+
+         # Create route
+         route = ResponseRoute(
+             event=event,
+             response=response,
+             valid=valid,
+             validation_errors=errors,
+         )
+
+         logger.debug(
+             "Created route for event %s: valid=%s, errors=%s",
+             event_id,
+             valid,
+             errors,
+         )
+
+         return route
+
+     async def deliver_response(self, route: ResponseRoute) -> bool:
+         """Deliver a validated response to the runtime.
+
+         Records the response in event history and attempts delivery to the
+         runtime executor if available.
+
+         Args:
+             route: ResponseRoute to deliver
+
+         Returns:
+             True if delivery successful, False otherwise
+
+         Raises:
+             ValueError: If route validation failed
+
+         Example:
+             >>> route = manager.validate_and_route("evt_123", "yes")
+             >>> if route and route.valid:
+             ...     success = await manager.deliver_response(route)
+             ...     if success:
+             ...         print("Response delivered successfully")
+         """
+         if not route.valid:
+             error_msg = "; ".join(route.validation_errors)
+             raise ValueError(f"Cannot deliver invalid response: {error_msg}")
+
+         # Mark route as delivered
+         route.delivered = True
+         route.delivery_timestamp = _utc_now()
+
+         # Track in history
+         self._add_to_history(route)
+
+         # For non-blocking events, no runtime delivery needed
+         if not route.event.is_blocking:
+             logger.debug(
+                 "Event %s is non-blocking, no runtime delivery needed",
+                 route.event.id,
+             )
+             return True
+
+         # Deliver to runtime if executor available
+         if not self.runtime_executor:
+             logger.warning(
+                 "No runtime executor available, cannot deliver response for event %s",
+                 route.event.id,
+             )
+             return False
+
+         # Note: Actual delivery is handled by EventHandler which has session context
+         # ResponseManager just validates and tracks responses
+         # The EventHandler will call executor.send_message() with session's active_pane
+         logger.info(
+             "Response validated and ready for delivery (event %s): %s",
+             route.event.id,
+             route.response[:50],
+         )
+         return True
+
+     def _add_to_history(self, route: ResponseRoute) -> None:
+         """Add response route to history tracking.
+
+         Args:
+             route: ResponseRoute to record
+         """
+         event_id = route.event.id
+         if event_id not in self._response_history:
+             self._response_history[event_id] = []
+
+         self._response_history[event_id].append(route)
+         logger.debug(
+             "Added response to history for event %s (total: %d)",
+             event_id,
+             len(self._response_history[event_id]),
+         )
+
+     def get_response_history(self, event_id: str) -> List[ResponseRoute]:
+         """Get all response attempts for an event (for audit trail).
+
+         Args:
+             event_id: Event ID to query
+
+         Returns:
+             List of ResponseRoute objects for this event (chronological order)
+
+         Example:
+             >>> history = manager.get_response_history("evt_123")
+             >>> for i, route in enumerate(history, 1):
+             ...     status = "valid" if route.valid else "invalid"
+             ...     print(f"Attempt {i} ({status}): {route.response}")
+         """
+         return self._response_history.get(event_id, []).copy()
+
+     def clear_history(self, event_id: str) -> int:
+         """Clear response history for an event.
+
+         Args:
+             event_id: Event ID to clear
+
+         Returns:
+             Number of history entries removed
+
+         Example:
+             >>> removed = manager.clear_history("evt_123")
+             >>> print(f"Cleared {removed} history entries")
+         """
+         history = self._response_history.pop(event_id, [])
+         count = len(history)
+         if count > 0:
+             logger.debug("Cleared %d history entries for event %s", count, event_id)
+         return count
+
+     def get_stats(self) -> Dict[str, Any]:
+         """Get statistics about response history.
+
+         Returns:
+             Dict with statistics about tracked responses
+
+         Example:
+             >>> stats = manager.get_stats()
+             >>> print(f"Total events with history: {stats['total_events']}")
+             >>> print(f"Total response attempts: {stats['total_responses']}")
+         """
+         total_events = len(self._response_history)
+         total_responses = sum(len(routes) for routes in self._response_history.values())
+         valid_responses = sum(
+             sum(1 for route in routes if route.valid)
+             for routes in self._response_history.values()
+         )
+         invalid_responses = total_responses - valid_responses
+
+         return {
+             "total_events": total_events,
+             "total_responses": total_responses,
+             "valid_responses": valid_responses,
+             "invalid_responses": invalid_responses,
+         }
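The two validation rules in `validate_response` can be sketched as a free function. This is a hedged standalone sketch, not the claude_mpm API: the `Event` object is replaced by plain `is_blocking`/`options` arguments, and the DECISION_NEEDED type check is folded into "options were provided".

```python
from typing import List, Optional, Tuple


def validate_response(
    response: str,
    is_blocking: bool,
    options: Optional[List[str]] = None,
) -> Tuple[bool, List[str]]:
    """Sketch of ResponseManager's rules: strip, require non-empty for
    blocking events, and require an exact option match when options exist."""
    errors: List[str] = []
    stripped = response.strip()

    # Rule 1: blocking events require a non-empty response
    if is_blocking and not stripped:
        errors.append("Response cannot be empty for blocking events")

    # Rule 2: when options are offered, the response must match one exactly
    if options and stripped not in options:
        errors.append(
            f"Response must be one of: {', '.join(options)}. Got: '{stripped}'"
        )

    return (len(errors) == 0, errors)


print(validate_response("staging", True, ["staging", "prod"]))  # (True, [])
print(validate_response("   ", True)[0])                        # False
print(validate_response("dev", False, ["staging", "prod"])[0])  # False
```

Returning `(is_valid, errors)` rather than raising keeps all failures collectable in one pass, which is what lets `ResponseRoute` carry the full error list back to the user.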
@@ -15,12 +15,18 @@ from .api.app import (
      app,
  )
  from .config import DaemonConfig
+ from .core.block_manager import BlockManager
  from .events.manager import EventManager
  from .inbox import Inbox
+ from .parsing.output_parser import OutputParser
  from .persistence import EventStore, StateStore
  from .project_session import ProjectSession, SessionState
  from .registry import ProjectRegistry
+ from .runtime.monitor import RuntimeMonitor
  from .tmux_orchestrator import TmuxOrchestrator
+ from .work.executor import WorkExecutor
+ from .work.queue import WorkQueue
+ from .workflow.event_handler import EventHandler

  logger = logging.getLogger(__name__)

38
44
  event_manager: Event manager
39
45
  inbox: Event inbox
40
46
  sessions: Active project sessions by project_id
47
+ work_queues: Work queues by project_id
48
+ work_executors: Work executors by project_id
49
+ block_manager: Block manager for automatic work blocking
50
+ runtime_monitor: Runtime monitor for output monitoring
51
+ event_handler: Event handler for blocking event workflow
41
52
  state_store: StateStore for project/session persistence
42
53
  event_store: EventStore for event queue persistence
43
54
  running: Whether daemon is currently running
@@ -68,6 +79,8 @@ class CommanderDaemon:
          self.event_manager = EventManager()
          self.inbox = Inbox(self.event_manager, self.registry)
          self.sessions: Dict[str, ProjectSession] = {}
+         self.work_queues: Dict[str, WorkQueue] = {}
+         self.work_executors: Dict[str, WorkExecutor] = {}
          self._running = False
          self._server_task: Optional[asyncio.Task] = None
          self._main_loop_task: Optional[asyncio.Task] = None
@@ -76,6 +89,30 @@ class CommanderDaemon:
          self.state_store = StateStore(config.state_dir)
          self.event_store = EventStore(config.state_dir)

+         # Initialize BlockManager with work queues and executors
+         self.block_manager = BlockManager(
+             event_manager=self.event_manager,
+             work_queues=self.work_queues,
+             work_executors=self.work_executors,
+         )
+
+         # Initialize RuntimeMonitor with BlockManager
+         parser = OutputParser(self.event_manager)
+         self.runtime_monitor = RuntimeMonitor(
+             orchestrator=self.orchestrator,
+             parser=parser,
+             event_manager=self.event_manager,
+             poll_interval=config.poll_interval,
+             block_manager=self.block_manager,
+         )
+
+         # Initialize EventHandler with BlockManager
+         self.event_handler = EventHandler(
+             inbox=self.inbox,
+             session_manager=self.sessions,
+             block_manager=self.block_manager,
+         )
+
          # Configure logging
          logging.basicConfig(
              level=getattr(logging, config.log_level.upper()),
@@ -171,6 +208,16 @@ class CommanderDaemon:
              except Exception as e:
                  logger.error(f"Error stopping session {project_id}: {e}")

+         # Clear BlockManager project mappings
+         for project_id in list(self.work_queues.keys()):
+             try:
+                 removed = self.block_manager.clear_project_mappings(project_id)
+                 logger.debug(
+                     f"Cleared {removed} work mappings for project {project_id}"
+                 )
+             except Exception as e:
+                 logger.error(f"Error clearing mappings for {project_id}: {e}")
+
          # Cancel main loop task
          if self._main_loop_task and not self._main_loop_task.done():
              self._main_loop_task.cancel()
@@ -282,7 +329,26 @@ class CommanderDaemon:
          if project is None:
              raise ValueError(f"Project not found: {project_id}")

-         session = ProjectSession(project, self.orchestrator)
+         # Create work queue for project if not exists
+         if project_id not in self.work_queues:
+             self.work_queues[project_id] = WorkQueue(project_id)
+             logger.debug(f"Created work queue for project {project_id}")
+
+         # Create work executor for project if not exists
+         if project_id not in self.work_executors:
+             from .runtime.executor import RuntimeExecutor
+
+             runtime_executor = RuntimeExecutor(self.orchestrator)
+             self.work_executors[project_id] = WorkExecutor(
+                 runtime=runtime_executor, queue=self.work_queues[project_id]
+             )
+             logger.debug(f"Created work executor for project {project_id}")
+
+         session = ProjectSession(
+             project=project,
+             orchestrator=self.orchestrator,
+             monitor=self.runtime_monitor,
+         )
          self.sessions[project_id] = session

          logger.info(f"Created new session for project {project_id}")
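The session-creation hunk above uses a create-on-first-use pattern: per-project queues and executors exist only once a session needs them, and repeated calls reuse the same instance. A minimal sketch of that pattern, with a hypothetical `LazyRegistry` standing in for the daemon's dicts:

```python
from typing import Dict, List


class LazyRegistry:
    """Sketch of the daemon's lazy per-project state creation."""

    def __init__(self) -> None:
        self.queues: Dict[str, List[str]] = {}

    def get_queue(self, project_id: str) -> List[str]:
        # Create the queue only when a session first needs it;
        # later calls return the same object, never a fresh one
        if project_id not in self.queues:
            self.queues[project_id] = []
        return self.queues[project_id]


reg = LazyRegistry()
q1 = reg.get_queue("proj_1")
q2 = reg.get_queue("proj_1")
print(q1 is q2)  # True
```

Reusing the same instance matters here because BlockManager holds a reference to the shared `work_queues` dict: queues created lazily after BlockManager's construction are still visible to it.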
@@ -6,13 +6,16 @@ and detects events using OutputParser.

  import asyncio
  import logging
- from typing import Dict, List, Optional
+ from typing import TYPE_CHECKING, Dict, List, Optional

  from ..events.manager import EventManager
  from ..models.events import Event
  from ..parsing.output_parser import OutputParser
  from ..tmux_orchestrator import TmuxOrchestrator

+ if TYPE_CHECKING:
+     from ..core.block_manager import BlockManager
+
  logger = logging.getLogger(__name__)


@@ -44,6 +47,7 @@ class RuntimeMonitor:
          event_manager: EventManager,
          poll_interval: float = 2.0,
          capture_lines: int = 1000,
+         block_manager: Optional["BlockManager"] = None,
      ):
          """Initialize runtime monitor.

@@ -53,6 +57,7 @@
              event_manager: EventManager for emitting events
              poll_interval: Seconds between polls (default: 2.0)
              capture_lines: Number of lines to capture (default: 1000)
+             block_manager: Optional BlockManager for automatic work blocking

          Raises:
              ValueError: If any required parameter is None
@@ -69,15 +74,17 @@ class RuntimeMonitor:
  self.event_manager = event_manager
  self.poll_interval = poll_interval
  self.capture_lines = capture_lines
+ self.block_manager = block_manager

  # Track active monitors: pane_target -> (project_id, task, last_output_hash)
  self._monitors: Dict[str, tuple[str, Optional[asyncio.Task], int]] = {}
  self._running = False

  logger.debug(
-     "RuntimeMonitor initialized (interval: %.2fs, lines: %d)",
+     "RuntimeMonitor initialized (interval: %.2fs, lines: %d, block_manager: %s)",
      poll_interval,
      capture_lines,
+     "enabled" if block_manager else "disabled",
  )

  async def start_monitoring(self, pane_target: str, project_id: str) -> None:
@@ -284,6 +291,29 @@ class RuntimeMonitor:
      pane_target,
  )

+ # Automatically block work for blocking events
+ if self.block_manager:
+     for parse_result in parse_results:
+         # Get the created event from EventManager
+         # Events are created with matching titles, so find by title
+         pending_events = self.event_manager.get_pending(project_id)
+         for event in pending_events:
+             if (
+                 event.title == parse_result.title
+                 and event.is_blocking
+             ):
+                 blocked_work = (
+                     await self.block_manager.check_and_block(event)
+                 )
+                 if blocked_work:
+                     logger.info(
+                         "Event %s blocked %d work items: %s",
+                         event.id,
+                         len(blocked_work),
+                         blocked_work,
+                     )
+                 break
+
  except Exception as e:
      logger.error(
          "Error in monitor loop for pane %s: %s",
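The monitor hunk above locates the pending event that matches a parse result by title (and blocking status) before handing it to the block manager. The lookup can be sketched in isolation; the `Event` dataclass and `pending` list here are hypothetical stand-ins for the real `EventManager.get_pending()` data:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Event:
    id: str
    title: str
    is_blocking: bool


# Hypothetical pending events; the real list comes from EventManager.get_pending()
pending = [
    Event("evt-1", "Build finished", is_blocking=False),
    Event("evt-2", "Waiting for approval", is_blocking=True),
]


def find_blocking(title: str) -> Optional[Event]:
    # Events are created with matching titles, so find by title;
    # non-blocking events are skipped even on a title match
    for event in pending:
        if event.title == title and event.is_blocking:
            return event
    return None


print(find_blocking("Waiting for approval").id)  # evt-2
print(find_blocking("Build finished"))           # None
```

As in the diff, the first blocking match wins (the loop `break`s), so duplicate titles resolve to the earliest pending event.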
@@ -155,7 +155,7 @@ class WorkExecutor:
  else:
      logger.warning(f"Failed to mark work item {work_id} as failed")

- async def handle_block(self, work_id: str, reason: str) -> None:
+ async def handle_block(self, work_id: str, reason: str) -> bool:
  """Handle work being blocked by an event.

  Called when RuntimeMonitor detects a blocking event.
@@ -164,15 +164,19 @@ class WorkExecutor:
  work_id: Work item ID that is blocked
  reason: Reason for blocking (e.g., "Waiting for approval")

+ Returns:
+     True if work was successfully blocked, False otherwise
+
  Example:
-     >>> await executor.handle_block("work-123", "Decision needed")
+     >>> success = await executor.handle_block("work-123", "Decision needed")
  """
  if self.queue.block(work_id, reason):
      logger.info(f"Work item {work_id} blocked: {reason}")
-     else:
-         logger.warning(f"Failed to mark work item {work_id} as blocked")
+     return True
+ logger.warning(f"Failed to mark work item {work_id} as blocked")
+ return False

- async def handle_unblock(self, work_id: str) -> None:
+ async def handle_unblock(self, work_id: str) -> bool:
  """Handle work being unblocked after event resolution.

  Called when EventHandler resolves a blocking event.
@@ -180,10 +184,14 @@ class WorkExecutor:
  Args:
      work_id: Work item ID to unblock

+ Returns:
+     True if work was successfully unblocked, False otherwise
+
  Example:
-     >>> await executor.handle_unblock("work-123")
+     >>> success = await executor.handle_unblock("work-123")
  """
  if self.queue.unblock(work_id):
      logger.info(f"Work item {work_id} unblocked, resuming execution")
-     else:
-         logger.warning(f"Failed to unblock work item {work_id}")
+     return True
+ logger.warning(f"Failed to unblock work item {work_id}")
+ return False
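The two hunks above change `handle_block`/`handle_unblock` from returning `None` to reporting success as a `bool`, so callers (such as the new BlockManager) can tell whether the queue actually transitioned. A runnable sketch of the pattern with a minimal in-memory queue (the `Queue` class here is a toy, not `commander/work/queue.py`):

```python
import asyncio
import logging

logger = logging.getLogger("executor")


class Queue:
    """Toy queue: tracks which known work items are currently blocked."""

    def __init__(self) -> None:
        self.known = {"work-123"}
        self.blocked: dict[str, str] = {}

    def block(self, work_id: str, reason: str) -> bool:
        if work_id not in self.known or work_id in self.blocked:
            return False
        self.blocked[work_id] = reason
        return True

    def unblock(self, work_id: str) -> bool:
        return self.blocked.pop(work_id, None) is not None


class WorkExecutor:
    def __init__(self, queue: Queue) -> None:
        self.queue = queue

    async def handle_block(self, work_id: str, reason: str) -> bool:
        if self.queue.block(work_id, reason):
            logger.info(f"Work item {work_id} blocked: {reason}")
            return True
        logger.warning(f"Failed to mark work item {work_id} as blocked")
        return False

    async def handle_unblock(self, work_id: str) -> bool:
        if self.queue.unblock(work_id):
            logger.info(f"Work item {work_id} unblocked, resuming execution")
            return True
        logger.warning(f"Failed to unblock work item {work_id}")
        return False


async def main() -> None:
    ex = WorkExecutor(Queue())
    print(await ex.handle_block("work-123", "Decision needed"))  # True
    print(await ex.handle_block("work-999", "unknown item"))     # False
    print(await ex.handle_unblock("work-123"))                   # True


asyncio.run(main())
```

Returning early on success (instead of the old `if`/`else`) keeps the failure path flat, which is why the diff drops the `else:` branch.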
@@ -5,12 +5,15 @@ user input and coordinates session pause/resume.
  """

  import logging
- from typing import Dict, List, Optional
+ from typing import TYPE_CHECKING, Dict, List, Optional

  from ..inbox import Inbox
  from ..models.events import BLOCKING_EVENTS, Event, EventStatus
  from ..project_session import ProjectSession

+ if TYPE_CHECKING:
+     from ..core.block_manager import BlockManager
+
  logger = logging.getLogger(__name__)

@@ -32,13 +35,17 @@ class EventHandler:
  """

  def __init__(
-     self, inbox: Inbox, session_manager: Dict[str, ProjectSession]
+     self,
+     inbox: Inbox,
+     session_manager: Dict[str, ProjectSession],
+     block_manager: Optional["BlockManager"] = None,
  ) -> None:
  """Initialize event handler.

  Args:
      inbox: Inbox instance for event access
      session_manager: Dict mapping project_id -> ProjectSession
+     block_manager: Optional BlockManager for automatic work unblocking

  Raises:
      ValueError: If inbox or session_manager is None
@@ -51,8 +58,12 @@ class EventHandler:
  self.inbox = inbox
  self.session_manager = session_manager
  self._event_manager = inbox.events
+ self.block_manager = block_manager

- logger.debug("EventHandler initialized")
+ logger.debug(
+     "EventHandler initialized (block_manager: %s)",
+     "enabled" if block_manager else "disabled",
+ )

  async def process_event(self, event: Event) -> None:
  """Process an event - pause session if blocking.
@@ -137,6 +148,17 @@ class EventHandler:
  # Mark event as resolved
  self._event_manager.respond(event_id, response)

+ # Automatically unblock work items if BlockManager is available
+ if self.block_manager and was_blocking:
+     unblocked_work = await self.block_manager.check_and_unblock(event_id)
+     if unblocked_work:
+         logger.info(
+             "Event %s resolution unblocked %d work items: %s",
+             event_id,
+             len(unblocked_work),
+             unblocked_work,
+         )
+
  # If event was NOT blocking, no need to resume
  if not was_blocking:
      logger.debug("Event %s was non-blocking, no resume needed", event_id)
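This diff adds `core/block_manager.py` but does not show its implementation, only the two calls: `check_and_block(event)` on detection and `check_and_unblock(event_id)` on resolution, each returning the affected work-item ids. A plausible sketch of that contract, assuming the manager keeps an event-id → work-id map (the signatures here are guesses and simplify `check_and_block` to take ids rather than an `Event`):

```python
import asyncio
from typing import Dict, List


class BlockManager:
    """Hypothetical sketch; the real core/block_manager.py may differ."""

    def __init__(self) -> None:
        # event_id -> ids of work items blocked by that event
        self._blocked_by_event: Dict[str, List[str]] = {}

    async def check_and_block(self, event_id: str, work_ids: List[str]) -> List[str]:
        # Record which work items this blocking event stalls
        self._blocked_by_event.setdefault(event_id, []).extend(work_ids)
        return list(work_ids)

    async def check_and_unblock(self, event_id: str) -> List[str]:
        # Resolving an event releases everything it was blocking
        return self._blocked_by_event.pop(event_id, [])


async def demo() -> List[str]:
    bm = BlockManager()
    await bm.check_and_block("evt-1", ["work-123", "work-456"])
    return await bm.check_and_unblock("evt-1")


print(asyncio.run(demo()))  # ['work-123', 'work-456']
```

Returning the released ids is what lets the `EventHandler` hunk above log how many work items each resolution unblocked.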
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: claude-mpm
- Version: 5.6.9
+ Version: 5.6.11
  Summary: Claude Multi-Agent Project Manager - Orchestrate Claude with agent delegation and ticket tracking
  Author-email: Bob Matsuoka <bob@matsuoka.com>
  Maintainer: Claude MPM Team
@@ -1,5 +1,5 @@
  claude_mpm/BUILD_NUMBER,sha256=9JfxhnDtr-8l3kCP2U5TVXSErptHoga8m7XA8zqgGOc,4
- claude_mpm/VERSION,sha256=nJST2g_21xhxfQk5QgAosXHOMTc1xwppJdB3d85to-g,6
+ claude_mpm/VERSION,sha256=S81MrAwnIgULw4v1GrJed7APLxpXKTpjomdjKgbSLgo,7
  claude_mpm/__init__.py,sha256=AGfh00BHKvLYD-UVFw7qbKtl7NMRIzRXOWw7vEuZ-h4,2214
  claude_mpm/__main__.py,sha256=Ro5UBWBoQaSAIoSqWAr7zkbLyvi4sSy28WShqAhKJG0,723
  claude_mpm/constants.py,sha256=pz3lTrZZR5HhV3eZzYtIbtBwWo7iM6pkBHP_ixxmI6Y,6827
@@ -11,7 +11,7 @@ claude_mpm/agents/CLAUDE_MPM_OUTPUT_STYLE.md,sha256=C61nb8szGeeGaXQd9-VPpL1t79G2
  claude_mpm/agents/CLAUDE_MPM_RESEARCH_OUTPUT_STYLE.md,sha256=OIVDU0Ypw5qrXix9rmMDG4u0V4ePnYDlsu5AoTmfZzs,13294
  claude_mpm/agents/CLAUDE_MPM_TEACHER_OUTPUT_STYLE.md,sha256=vneNW5vHjfKsRIukkuGbAnvnyp_-EC3qpFzHDdsMOwc,4796
  claude_mpm/agents/MEMORY.md,sha256=V1mGx5oEaLdN9AYLX3Xetslmr2Ja6vnLPATeEoR6bvw,3303
- claude_mpm/agents/PM_INSTRUCTIONS.md,sha256=S4xzeFlMSfBRbzu1zJzcpB7jC04Bh4yVs_ZRK7gBhRc,34728
+ claude_mpm/agents/PM_INSTRUCTIONS.md,sha256=4rhuCHW18GsdGJbXlMS-QOxL1sZmuWyMALDKAlRlGrM,35132
  claude_mpm/agents/WORKFLOW.md,sha256=jKs3V9RUrE1Tn5_CtwZGKRDSUBgD6USaCBxAnxv3-Zc,3487
  claude_mpm/agents/__init__.py,sha256=3cCQh2Hf_-2F9XDT5In533Bw7oKuGIqZvOdBW7af6dY,3403
  claude_mpm/agents/agent-template.yaml,sha256=mRlz5Yd0SmknTeoJWgFkZXzEF5T7OmGBJGs2-KPT93k,1969
@@ -154,7 +154,7 @@ claude_mpm/cli_module/migration_example.py,sha256=DtQ59RyoBD6r8FIfrjKXCQ8-xnUiOq
  claude_mpm/commander/__init__.py,sha256=8NjmTnvUzWm_UrTunHj3Gt3PFUq0XNtNj1o1p1puiTo,2186
  claude_mpm/commander/config.py,sha256=b9HUNN7LY8tHU4XkLzpuoVdHUZcgC-3by39fRYOg32Q,1583
  claude_mpm/commander/config_loader.py,sha256=H2ASh19-Nu1Ej4_ojhuIQMU9fR4sMHTsA8fiXocoosE,3736
- claude_mpm/commander/daemon.py,sha256=xQQzcJCTNGT3XqHn_Yi8MReALzOJW09TfqI43yVPHTQ,13429
+ claude_mpm/commander/daemon.py,sha256=6cD5KFx97CetYxusC-oGpdYnBaHPz7PejRv4VI4L66U,16186
  claude_mpm/commander/instance_manager.py,sha256=H37wjQkeeIQV5l-0q_ycDk1theU3eT1gg3b-Lbncirw,10790
  claude_mpm/commander/project_session.py,sha256=z_vhKcvla8WPmXS1MBl-Iki6oFxNug-YUdHMm15r6H0,9356
  claude_mpm/commander/registry.py,sha256=WcPUgZQnCWahrYvl_8GL8xvgikhlPjyJihylhWCIyvc,12878
@@ -178,6 +178,9 @@ claude_mpm/commander/chat/__init__.py,sha256=5Iiya2YPkF54OvtZgL4NNT0zp5PCsZnnE7D
  claude_mpm/commander/chat/cli.py,sha256=mHWEXjDll7OFIi2frTwfMs0wnJ71aFkFHX3vJr_iEGM,3151
  claude_mpm/commander/chat/commands.py,sha256=0Lvc4XT1k-0gpmLxhzgwVNw7IXc40kgZ9YqTVF0vxxk,2440
  claude_mpm/commander/chat/repl.py,sha256=c7Qi4qBg32b-JQyBKSNGadSWmmUrU7vBpFOkCV94QwU,10999
+ claude_mpm/commander/core/__init__.py,sha256=BVtJoH9hn9LtlmtqPBybPowbPfiKNaNgtotLV82JRQk,357
+ claude_mpm/commander/core/block_manager.py,sha256=UhjzH59eezWWdTLECVhiq7gFVI2LofeJx52wBoLMZV4,10725
+ claude_mpm/commander/core/response_manager.py,sha256=hOciRaiOmi-MFimaOUyyH0UCKfrIqcnTA_z2iisiTGE,10921
  claude_mpm/commander/events/__init__.py,sha256=NtUCo8eQfX4D3G9I2U10SRuuU4zMEIySKZGPxyUELzw,478
  claude_mpm/commander/events/manager.py,sha256=T-gXJ6DWIodxTdfFLv3u4kuks2ilJLjtlBZnmCkRSDA,10352
  claude_mpm/commander/frameworks/__init__.py,sha256=tOdMc4XNASVDrhpIPA1e7FKtNM7GQW5AEZXXW5NJK9I,280
@@ -213,16 +216,16 @@ claude_mpm/commander/proxy/output_handler.py,sha256=eIbz6KstcxmOUM42dKDQ5LibdcXw
  claude_mpm/commander/proxy/relay.py,sha256=8ma4e8OLpCKx8rG5YgfhH4NMRWayLaRpBLgKVQ0NH9s,5029
  claude_mpm/commander/runtime/__init__.py,sha256=0n-bPme0BlApM5ElXI-Qcbt5RqB8tiYbtvUHBjcBcyo,325
  claude_mpm/commander/runtime/executor.py,sha256=7o_CVSeKEM5UZ85eL2jhKByKMB_kRxDl0YE4yhhU4cU,6654
- claude_mpm/commander/runtime/monitor.py,sha256=yX2sabcXqwr7AgHJqfBoP7xrsxkSgojiXdL6DGzzlhY,10866
+ claude_mpm/commander/runtime/monitor.py,sha256=C4QROkmARkZ-gh_cELLf-pLI8U-3_gqJuTnpytnybvw,12500
  claude_mpm/commander/session/__init__.py,sha256=MfKCDPGqEZdM_MuQm3GM3OuqiigQMpgPKTL8rM7Fm68,177
  claude_mpm/commander/session/context.py,sha256=_P6VbFFjLSETxOgNDVCxhfCAdP3zeIbW7qxZk51nIwM,2405
  claude_mpm/commander/session/manager.py,sha256=RiCq2zAUFGURtmyiYP8NTb447dvBIG65GhX4uQa_KO0,1675
  claude_mpm/commander/web/__init__.py,sha256=QStUHljiyh5KUycukpjtX08O0HxA9e_6_hhVZybNBRo,39
  claude_mpm/commander/work/__init__.py,sha256=XuOjTHv0dZPQwOg3NOnmNLW4pM1PWKr03egIUWeaOH8,962
- claude_mpm/commander/work/executor.py,sha256=5FewUKBYYBRIEtS-AcaOdroRhLLPrRUFkXdTpz_5ZZM,6123
+ claude_mpm/commander/work/executor.py,sha256=JVAlqWk9E6xkslQofc9gTxMxU5nBZtnhrNCaFQOxfF4,6369
  claude_mpm/commander/work/queue.py,sha256=GrfNRhY8uaxzZ6Q3I0YjjCqssQOKx-oyREdIKV2mvjQ,11895
  claude_mpm/commander/workflow/__init__.py,sha256=_eLy6z3rUj99gINqVHf0abapw8miyKRqgT3j7DAM0ZM,933
- claude_mpm/commander/workflow/event_handler.py,sha256=vWrbeQt-1GNMm9i4x5lQ2oAlw5pF8Q7NVZYi83im1pY,7448
+ claude_mpm/commander/workflow/event_handler.py,sha256=BO8kKAOJRpmY5OfibHID4fkvgqL46jUnDzJknhnE3YE,8285
  claude_mpm/commander/workflow/notifier.py,sha256=-XpR9ioRQeGn2bre4i8lMk5zFEMs_a5PIiD3DEkp_-I,4330
  claude_mpm/commands/__init__.py,sha256=paX5Ub5-UmRgiQ8UgKWIKwU2-RjLu67OmNJND-fVtjg,588
  claude_mpm/commands/mpm-config.md,sha256=GMQIsXSzKUmaIpfrohBUA7d4lga4JfYDxJqgqYQV7uM,960
@@ -1109,10 +1112,10 @@ claude_mpm/utils/subprocess_utils.py,sha256=D0izRT8anjiUb_JG72zlJR_JAw1cDkb7kalN
  claude_mpm/validation/__init__.py,sha256=YZhwE3mhit-lslvRLuwfX82xJ_k4haZeKmh4IWaVwtk,156
  claude_mpm/validation/agent_validator.py,sha256=GprtAvu80VyMXcKGsK_VhYiXWA6BjKHv7O6HKx0AB9w,20917
  claude_mpm/validation/frontmatter_validator.py,sha256=YpJlYNNYcV8u6hIOi3_jaRsDnzhbcQpjCBE6eyBKaFY,7076
- claude_mpm-5.6.9.dist-info/licenses/LICENSE,sha256=ca3y_Rk4aPrbF6f62z8Ht5MJM9OAvbGlHvEDcj9vUQ4,3867
- claude_mpm-5.6.9.dist-info/licenses/LICENSE-FAQ.md,sha256=TxfEkXVCK98RzDOer09puc7JVCP_q_bN4dHtZKHCMcM,5104
- claude_mpm-5.6.9.dist-info/METADATA,sha256=zFOkKrO7QsszgLAusCpXjQiG-wsHRWCsCsxOIYntIL0,14983
- claude_mpm-5.6.9.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
- claude_mpm-5.6.9.dist-info/entry_points.txt,sha256=n-Uk4vwHPpuvu-g_I7-GHORzTnN_m6iyOsoLveKKD0E,228
- claude_mpm-5.6.9.dist-info/top_level.txt,sha256=1nUg3FEaBySgm8t-s54jK5zoPnu3_eY6EP6IOlekyHA,11
- claude_mpm-5.6.9.dist-info/RECORD,,
+ claude_mpm-5.6.11.dist-info/licenses/LICENSE,sha256=ca3y_Rk4aPrbF6f62z8Ht5MJM9OAvbGlHvEDcj9vUQ4,3867
+ claude_mpm-5.6.11.dist-info/licenses/LICENSE-FAQ.md,sha256=TxfEkXVCK98RzDOer09puc7JVCP_q_bN4dHtZKHCMcM,5104
+ claude_mpm-5.6.11.dist-info/METADATA,sha256=sTUt9uZf2xCDms1yhicNEuEdsHI9kHzMdNnIUv_c4jg,14984
+ claude_mpm-5.6.11.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
+ claude_mpm-5.6.11.dist-info/entry_points.txt,sha256=n-Uk4vwHPpuvu-g_I7-GHORzTnN_m6iyOsoLveKKD0E,228
+ claude_mpm-5.6.11.dist-info/top_level.txt,sha256=1nUg3FEaBySgm8t-s54jK5zoPnu3_eY6EP6IOlekyHA,11
+ claude_mpm-5.6.11.dist-info/RECORD,,