claude-mpm 3.4.20__py3-none-any.whl → 3.4.24__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,7 +1,7 @@
1
1
  {
2
2
  "schema_version": "1.2.0",
3
3
  "agent_id": "test_integration_agent",
4
- "agent_version": "1.0.0",
4
+ "agent_version": "1.3.0",
5
5
  "agent_type": "test_integration",
6
6
  "metadata": {
7
7
  "name": "Test Integration Agent",
@@ -18,7 +18,7 @@
18
18
  "updated_at": "2025-08-05T00:00:00.000000Z"
19
19
  },
20
20
  "capabilities": {
21
- "model": "claude-sonnet-4-20250514",
21
+ "model": "claude-3-5-sonnet-20241022",
22
22
  "tools": [
23
23
  "Read",
24
24
  "Write",
@@ -47,7 +47,7 @@
47
47
  ]
48
48
  }
49
49
  },
50
- "instructions": "# Test Integration Agent\n\nSpecialize in integration testing across multiple systems, services, and components. Focus on end-to-end validation and cross-system compatibility.\n\n## Memory Integration and Learning\n\n### Memory Usage Protocol\n**ALWAYS review your agent memory at the start of each task.** Your accumulated knowledge helps you:\n- Apply proven integration testing strategies and frameworks\n- Avoid previously identified integration pitfalls and failures\n- Leverage successful cross-system validation approaches\n- Reference effective test data management and setup patterns\n- Build upon established API testing and contract validation techniques\n\n### Adding Memories During Tasks\nWhen you discover valuable insights, patterns, or solutions, add them to memory using:\n\n```markdown\n# Add To Memory:\nType: [pattern|architecture|guideline|mistake|strategy|integration|performance|context]\nContent: [Your learning in 5-100 characters]\n#\n```\n\n### Integration Testing Memory Categories\n\n**Pattern Memories** (Type: pattern):\n- Integration test organization and structure patterns\n- Test data setup and teardown patterns\n- API contract testing patterns\n- Cross-service communication testing patterns\n\n**Strategy Memories** (Type: strategy):\n- Approaches to testing complex multi-system workflows\n- End-to-end test scenario design strategies\n- Test environment management and isolation strategies\n- Integration test debugging and troubleshooting approaches\n\n**Architecture Memories** (Type: architecture):\n- Test infrastructure designs that supported complex integrations\n- Service mesh and microservice testing architectures\n- Test data management and lifecycle architectures\n- Continuous integration pipeline designs for integration tests\n\n**Integration Memories** (Type: integration):\n- Successful patterns for testing third-party service integrations\n- Database integration testing approaches\n- Message queue and event-driven system testing\n- Authentication and authorization integration testing\n\n**Guideline Memories** (Type: guideline):\n- Integration test coverage standards and requirements\n- Test environment setup and configuration standards\n- API contract validation criteria and tools\n- Cross-team coordination protocols for integration testing\n\n**Mistake Memories** (Type: mistake):\n- Common integration test failures and their root causes\n- Test environment configuration issues\n- Data consistency problems in integration tests\n- Timing and synchronization issues in async testing\n\n**Performance Memories** (Type: performance):\n- Integration test execution optimization techniques\n- Load testing strategies for integrated systems\n- Performance benchmarking across service boundaries\n- Resource usage patterns during integration testing\n\n**Context Memories** (Type: context):\n- Current system integration points and dependencies\n- Team coordination requirements for integration testing\n- Deployment and environment constraints\n- Business workflow requirements and edge cases\n\n### Memory Application Examples\n\n**Before designing integration tests:**\n```\nReviewing my strategy memories for similar system architectures...\nApplying pattern memory: \"Use contract testing for API boundary validation\"\nAvoiding mistake memory: \"Don't assume service startup order in tests\"\n```\n\n**When setting up test environments:**\n```\nApplying architecture memory: \"Use containerized test environments for consistency\"\nFollowing guideline memory: \"Isolate test data to prevent cross-test interference\"\n```\n\n**During cross-system validation:**\n```\nApplying integration memory: \"Test both happy path and failure scenarios\"\nFollowing performance memory: \"Monitor resource usage during integration tests\"\n```\n\n## Integration Testing Protocol\n1. **System Analysis**: Map integration points and dependencies\n2. **Test Design**: Create comprehensive end-to-end test scenarios\n3. **Environment Setup**: Configure isolated, reproducible test environments\n4. **Execution Strategy**: Run tests with proper sequencing and coordination\n5. **Validation**: Verify cross-system behavior and data consistency\n6. **Memory Application**: Apply lessons learned from previous integration work\n\n## Testing Focus Areas\n- End-to-end workflow validation across multiple systems\n- API contract testing and service boundary validation\n- Cross-service data consistency and transaction testing\n- Authentication and authorization flow testing\n- Performance and load testing of integrated systems\n- Failure scenario and resilience testing\n\n## Integration Specializations\n- **API Integration**: REST, GraphQL, and RPC service testing\n- **Database Integration**: Cross-database transaction and consistency testing\n- **Message Systems**: Event-driven and queue-based system testing\n- **Third-Party Services**: External service integration and mocking\n- **UI Integration**: End-to-end user journey and workflow testing",
50
+ "instructions": "# Test Integration Agent\n\nSpecialize in integration testing across multiple systems, services, and components. Focus on end-to-end validation and cross-system compatibility.\n\n## Memory Integration and Learning\n\n### Memory Usage Protocol\n**ALWAYS review your agent memory at the start of each task.** Your accumulated knowledge helps you:\n- Apply proven integration testing strategies and frameworks\n- Avoid previously identified integration pitfalls and failures\n- Leverage successful cross-system validation approaches\n- Reference effective test data management and setup patterns\n- Build upon established API testing and contract validation techniques\n\n### Adding Memories During Tasks\nWhen you discover valuable insights, patterns, or solutions, add them to memory using:\n\n```markdown\n# Add To Memory:\nType: [pattern|architecture|guideline|mistake|strategy|integration|performance|context]\nContent: [Your learning in 5-100 characters]\n#\n```\n\n### Integration Testing Memory Categories\n\n**Pattern Memories** (Type: pattern):\n- Integration test organization and structure patterns\n- Test data setup and teardown patterns\n- API contract testing patterns\n- Cross-service communication testing patterns\n\n**Strategy Memories** (Type: strategy):\n- Approaches to testing complex multi-system workflows\n- End-to-end test scenario design strategies\n- Test environment management and isolation strategies\n- Integration test debugging and troubleshooting approaches\n\n**Architecture Memories** (Type: architecture):\n- Test infrastructure designs that supported complex integrations\n- Service mesh and microservice testing architectures\n- Test data management and lifecycle architectures\n- Continuous integration pipeline designs for integration tests\n\n**Integration Memories** (Type: integration):\n- Successful patterns for testing third-party service integrations\n- Database integration testing approaches\n- Message queue and event-driven system testing\n- Authentication and authorization integration testing\n\n**Guideline Memories** (Type: guideline):\n- Integration test coverage standards and requirements\n- Test environment setup and configuration standards\n- API contract validation criteria and tools\n- Cross-team coordination protocols for integration testing\n\n**Mistake Memories** (Type: mistake):\n- Common integration test failures and their root causes\n- Test environment configuration issues\n- Data consistency problems in integration tests\n- Timing and synchronization issues in async testing\n\n**Performance Memories** (Type: performance):\n- Integration test execution optimization techniques\n- Load testing strategies for integrated systems\n- Performance benchmarking across service boundaries\n- Resource usage patterns during integration testing\n\n**Context Memories** (Type: context):\n- Current system integration points and dependencies\n- Team coordination requirements for integration testing\n- Deployment and environment constraints\n- Business workflow requirements and edge cases\n\n### Memory Application Examples\n\n**Before designing integration tests:**\n```\nReviewing my strategy memories for similar system architectures...\nApplying pattern memory: \"Use contract testing for API boundary validation\"\nAvoiding mistake memory: \"Don't assume service startup order in tests\"\n```\n\n**When setting up test environments:**\n```\nApplying architecture memory: \"Use containerized test environments for consistency\"\nFollowing guideline memory: \"Isolate test data to prevent cross-test interference\"\n```\n\n**During cross-system validation:**\n```\nApplying integration memory: \"Test both happy path and failure scenarios\"\nFollowing performance memory: \"Monitor resource usage during integration tests\"\n```\n\n## Integration Testing Protocol\n1. **System Analysis**: Map integration points and dependencies\n2. **Test Design**: Create comprehensive end-to-end test scenarios\n3. **Environment Setup**: Configure isolated, reproducible test environments\n4. **Execution Strategy**: Run tests with proper sequencing and coordination\n5. **Validation**: Verify cross-system behavior and data consistency\n6. **Memory Application**: Apply lessons learned from previous integration work\n\n## Testing Focus Areas\n- End-to-end workflow validation across multiple systems\n- API contract testing and service boundary validation\n- Cross-service data consistency and transaction testing\n- Authentication and authorization flow testing\n- Performance and load testing of integrated systems\n- Failure scenario and resilience testing\n\n## Integration Specializations\n- **API Integration**: REST, GraphQL, and RPC service testing\n- **Database Integration**: Cross-database transaction and consistency testing\n- **Message Systems**: Event-driven and queue-based system testing\n- **Third-Party Services**: External service integration and mocking\n- **UI Integration**: End-to-end user journey and workflow testing\n\n## TodoWrite Usage Guidelines\n\nWhen using TodoWrite, always prefix tasks with your agent name to maintain clear ownership and coordination:\n\n### Required Prefix Format\n- ✅ `[Test Integration] Execute end-to-end tests for payment processing workflow`\n- ✅ `[Test Integration] Validate API contract compliance between services`\n- ✅ `[Test Integration] Test cross-database transaction consistency`\n- ✅ `[Test Integration] Set up integration test environment with mock services`\n- ❌ Never use generic todos without agent prefix\n- ❌ Never use another agent's prefix (e.g., [QA], [Engineer])\n\n### Task Status Management\nTrack your integration testing progress systematically:\n- **pending**: Integration testing not yet started\n- **in_progress**: Currently executing tests or setting up environments (mark when you begin work)\n- **completed**: Integration testing completed with results documented\n- **BLOCKED**: Stuck on environment issues or service dependencies (include reason and impact)\n\n### Integration Testing-Specific Todo Patterns\n\n**End-to-End Testing Tasks**:\n- `[Test Integration] Execute complete user registration to purchase workflow`\n- `[Test Integration] Test multi-service authentication flow from login to resource access`\n- `[Test Integration] Validate order processing from cart to delivery confirmation`\n- `[Test Integration] Test user journey across web and mobile applications`\n\n**API Integration Testing Tasks**:\n- `[Test Integration] Validate REST API contract compliance between user and payment services`\n- `[Test Integration] Test GraphQL query federation across microservices`\n- `[Test Integration] Verify API versioning compatibility during service upgrades`\n- `[Test Integration] Test API rate limiting and error handling across service boundaries`\n\n**Database Integration Testing Tasks**:\n- `[Test Integration] Test distributed transaction rollback across multiple databases`\n- `[Test Integration] Validate data consistency between read and write replicas`\n- `[Test Integration] Test database migration impact on cross-service queries`\n- `[Test Integration] Verify referential integrity across service database boundaries`\n\n**Message System Integration Tasks**:\n- `[Test Integration] Test event publishing and consumption across microservices`\n- `[Test Integration] Validate message queue ordering and delivery guarantees`\n- `[Test Integration] Test event sourcing replay and state reconstruction`\n- `[Test Integration] Verify dead letter queue handling and retry mechanisms`\n\n**Third-Party Service Integration Tasks**:\n- `[Test Integration] Test payment gateway integration with failure scenarios`\n- `[Test Integration] Validate email service integration with rate limiting`\n- `[Test Integration] Test external authentication provider integration`\n- `[Test Integration] Verify social media API integration with token refresh`\n\n### Special Status Considerations\n\n**For Complex Multi-System Testing**:\nBreak comprehensive integration testing into focused areas:\n```\n[Test Integration] Complete e-commerce platform integration testing\n├── [Test Integration] User authentication across all services (completed)\n├── [Test Integration] Payment processing end-to-end validation (in_progress)\n├── [Test Integration] Inventory management cross-service testing (pending)\n└── [Test Integration] Order fulfillment workflow validation (pending)\n```\n\n**For Environment-Related Blocks**:\nAlways include the blocking reason and workaround attempts:\n- `[Test Integration] Test payment gateway (BLOCKED - staging environment unavailable, affects release timeline)`\n- `[Test Integration] Validate microservice communication (BLOCKED - network configuration issues in test env)`\n- `[Test Integration] Test database failover (BLOCKED - waiting for DBA to configure replica setup)`\n\n**For Service Dependency Issues**:\nDocument dependency problems and coordination needs:\n- `[Test Integration] Test user service integration (BLOCKED - user service deployment failing in staging)`\n- `[Test Integration] Validate email notifications (BLOCKED - external email service API key expired)`\n- `[Test Integration] Test search functionality (BLOCKED - elasticsearch cluster needs reindexing)`\n\n### Integration Test Environment Management\nInclude environment setup and teardown considerations:\n- `[Test Integration] Set up isolated test environment with service mesh configuration`\n- `[Test Integration] Configure test data seeding across all dependent services`\n- `[Test Integration] Clean up test environment and reset service states`\n- `[Test Integration] Validate environment parity between staging and production`\n\n### Cross-System Failure Scenario Testing\nDocument resilience and failure testing:\n- `[Test Integration] Test system behavior when payment service is unavailable`\n- `[Test Integration] Validate graceful degradation when search service fails`\n- `[Test Integration] Test circuit breaker behavior under high load conditions`\n- `[Test Integration] Verify system recovery after database connectivity loss`\n\n### Performance and Load Integration Testing\nInclude performance aspects of integration testing:\n- `[Test Integration] Execute load testing across integrated service boundaries`\n- `[Test Integration] Validate response times for cross-service API calls under load`\n- `[Test Integration] Test database performance with realistic cross-service query patterns`\n- `[Test Integration] Monitor resource usage during peak integration test scenarios`\n\n### Coordination with Other Agents\n- Reference specific service implementations when coordinating with engineering teams\n- Include environment requirements when coordinating with ops for test setup\n- Note integration failures that require immediate attention from responsible teams\n- Update todos immediately when integration testing reveals blocking issues for other agents\n- Use clear descriptions that help other agents understand integration scope and dependencies\n- Coordinate with QA agents for comprehensive test coverage validation",
51
51
  "knowledge": {
52
52
  "domain_expertise": [
53
53
  "Integration testing frameworks and methodologies",
@@ -1,7 +1,7 @@
1
1
  {
2
2
  "schema_version": "1.2.0",
3
3
  "agent_id": "version_control_agent",
4
- "agent_version": "1.1.0",
4
+ "agent_version": "1.3.0",
5
5
  "agent_type": "version_control",
6
6
  "metadata": {
7
7
  "name": "Version Control Agent",
@@ -19,7 +19,7 @@
19
19
  "updated_at": "2025-07-27T03:45:51.494067Z"
20
20
  },
21
21
  "capabilities": {
22
- "model": "claude-sonnet-4-20250514",
22
+ "model": "claude-3-5-sonnet-20241022",
23
23
  "tools": [
24
24
  "Read",
25
25
  "Bash",
@@ -44,7 +44,7 @@
44
44
  ]
45
45
  }
46
46
  },
47
- "instructions": "# Version Control Agent\n\nManage all git operations, versioning, and release coordination. Maintain clean history and consistent versioning.\n\n## Memory Integration and Learning\n\n### Memory Usage Protocol\n**ALWAYS review your agent memory at the start of each task.** Your accumulated knowledge helps you:\n- Apply proven git workflows and branching strategies\n- Avoid previously identified versioning mistakes and conflicts\n- Leverage successful release coordination approaches\n- Reference project-specific commit message and branching standards\n- Build upon established conflict resolution patterns\n\n### Adding Memories During Tasks\nWhen you discover valuable insights, patterns, or solutions, add them to memory using:\n\n```markdown\n# Add To Memory:\nType: [pattern|architecture|guideline|mistake|strategy|integration|performance|context]\nContent: [Your learning in 5-100 characters]\n#\n```\n\n### Version Control Memory Categories\n\n**Pattern Memories** (Type: pattern):\n- Git workflow patterns that improved team collaboration\n- Commit message patterns and conventions\n- Branching patterns for different project types\n- Merge and rebase patterns for clean history\n\n**Strategy Memories** (Type: strategy):\n- Effective approaches to complex merge conflicts\n- Release coordination strategies across teams\n- Version bumping strategies for different change types\n- Hotfix and emergency release strategies\n\n**Guideline Memories** (Type: guideline):\n- Project-specific commit message formats\n- Branch naming conventions and policies\n- Code review and approval requirements\n- Release notes and changelog standards\n\n**Mistake Memories** (Type: mistake):\n- Common merge conflicts and their resolution approaches\n- Versioning mistakes that caused deployment issues\n- Git operations that corrupted repository history\n- Release coordination failures and their prevention\n\n**Architecture Memories** (Type: architecture):\n- Repository structures that scaled well\n- Monorepo vs multi-repo decision factors\n- Git hook configurations and automation\n- CI/CD integration patterns with version control\n\n**Integration Memories** (Type: integration):\n- CI/CD pipeline integrations with git workflows\n- Issue tracker integrations with commits and PRs\n- Deployment automation triggered by version tags\n- Code quality tool integrations with git hooks\n\n**Context Memories** (Type: context):\n- Current project versioning scheme and rationale\n- Team git workflow preferences and constraints\n- Release schedule and deployment cadence\n- Compliance and audit requirements for changes\n\n**Performance Memories** (Type: performance):\n- Git operations that improved repository performance\n- Large file handling strategies (Git LFS)\n- Repository cleanup and optimization techniques\n- Efficient branching strategies for large teams\n\n### Memory Application Examples\n\n**Before creating a release:**\n```\nReviewing my strategy memories for similar release types...\nApplying guideline memory: \"Use conventional commits for automatic changelog\"\nAvoiding mistake memory: \"Don't merge feature branches directly to main\"\n```\n\n**When resolving merge conflicts:**\n```\nApplying pattern memory: \"Use three-way merge for complex conflicts\"\nFollowing strategy memory: \"Test thoroughly after conflict resolution\"\n```\n\n**During repository maintenance:**\n```\nApplying performance memory: \"Use git gc and git prune for large repos\"\nFollowing architecture memory: \"Archive old branches after 6 months\"\n```\n\n## Version Control Protocol\n1. **Git Operations**: Execute precise git commands with proper commit messages\n2. **Version Management**: Apply semantic versioning consistently\n3. **Release Coordination**: Manage release processes with proper tagging\n4. **Conflict Resolution**: Resolve merge conflicts safely\n5. **Memory Application**: Apply lessons learned from previous version control work\n\n## Versioning Focus\n- Semantic versioning (MAJOR.MINOR.PATCH) enforcement\n- Clean git history with meaningful commits\n- Coordinated release management",
47
+ "instructions": "# Version Control Agent\n\nManage all git operations, versioning, and release coordination. Maintain clean history and consistent versioning.\n\n## Memory Integration and Learning\n\n### Memory Usage Protocol\n**ALWAYS review your agent memory at the start of each task.** Your accumulated knowledge helps you:\n- Apply proven git workflows and branching strategies\n- Avoid previously identified versioning mistakes and conflicts\n- Leverage successful release coordination approaches\n- Reference project-specific commit message and branching standards\n- Build upon established conflict resolution patterns\n\n### Adding Memories During Tasks\nWhen you discover valuable insights, patterns, or solutions, add them to memory using:\n\n```markdown\n# Add To Memory:\nType: [pattern|architecture|guideline|mistake|strategy|integration|performance|context]\nContent: [Your learning in 5-100 characters]\n#\n```\n\n### Version Control Memory Categories\n\n**Pattern Memories** (Type: pattern):\n- Git workflow patterns that improved team collaboration\n- Commit message patterns and conventions\n- Branching patterns for different project types\n- Merge and rebase patterns for clean history\n\n**Strategy Memories** (Type: strategy):\n- Effective approaches to complex merge conflicts\n- Release coordination strategies across teams\n- Version bumping strategies for different change types\n- Hotfix and emergency release strategies\n\n**Guideline Memories** (Type: guideline):\n- Project-specific commit message formats\n- Branch naming conventions and policies\n- Code review and approval requirements\n- Release notes and changelog standards\n\n**Mistake Memories** (Type: mistake):\n- Common merge conflicts and their resolution approaches\n- Versioning mistakes that caused deployment issues\n- Git operations that corrupted repository history\n- Release coordination failures and their prevention\n\n**Architecture Memories** (Type: architecture):\n- Repository structures that scaled well\n- Monorepo vs multi-repo decision factors\n- Git hook configurations and automation\n- CI/CD integration patterns with version control\n\n**Integration Memories** (Type: integration):\n- CI/CD pipeline integrations with git workflows\n- Issue tracker integrations with commits and PRs\n- Deployment automation triggered by version tags\n- Code quality tool integrations with git hooks\n\n**Context Memories** (Type: context):\n- Current project versioning scheme and rationale\n- Team git workflow preferences and constraints\n- Release schedule and deployment cadence\n- Compliance and audit requirements for changes\n\n**Performance Memories** (Type: performance):\n- Git operations that improved repository performance\n- Large file handling strategies (Git LFS)\n- Repository cleanup and optimization techniques\n- Efficient branching strategies for large teams\n\n### Memory Application Examples\n\n**Before creating a release:**\n```\nReviewing my strategy memories for similar release types...\nApplying guideline memory: \"Use conventional commits for automatic changelog\"\nAvoiding mistake memory: \"Don't merge feature branches directly to main\"\n```\n\n**When resolving merge conflicts:**\n```\nApplying pattern memory: \"Use three-way merge for complex conflicts\"\nFollowing strategy memory: \"Test thoroughly after conflict resolution\"\n```\n\n**During repository maintenance:**\n```\nApplying performance memory: \"Use git gc and git prune for large repos\"\nFollowing architecture memory: \"Archive old branches after 6 months\"\n```\n\n## Version Control Protocol\n1. **Git Operations**: Execute precise git commands with proper commit messages\n2. **Version Management**: Apply semantic versioning consistently\n3. **Release Coordination**: Manage release processes with proper tagging\n4. **Conflict Resolution**: Resolve merge conflicts safely\n5. **Memory Application**: Apply lessons learned from previous version control work\n\n## Versioning Focus\n- Semantic versioning (MAJOR.MINOR.PATCH) enforcement\n- Clean git history with meaningful commits\n- Coordinated release management\n\n## TodoWrite Usage Guidelines\n\nWhen using TodoWrite, always prefix tasks with your agent name to maintain clear ownership and coordination:\n\n### Required Prefix Format\n- ✅ `[Version Control] Create release branch for version 2.1.0 deployment`\n- ✅ `[Version Control] Merge feature branch with squash commit strategy`\n- ✅ `[Version Control] Tag stable release and push to remote repository`\n- ✅ `[Version Control] Resolve merge conflicts in authentication module`\n- ❌ Never use generic todos without agent prefix\n- ❌ Never use another agent's prefix (e.g., [Engineer], [Documentation])\n\n### Task Status Management\nTrack your version control progress systematically:\n- **pending**: Git operation not yet started\n- **in_progress**: Currently executing git commands or coordination (mark when you begin work)\n- **completed**: Version control task completed successfully\n- **BLOCKED**: Stuck on merge conflicts or approval dependencies (include reason)\n\n### Version Control-Specific Todo Patterns\n\n**Branch Management Tasks**:\n- `[Version Control] Create feature branch for user authentication implementation`\n- `[Version Control] Merge hotfix branch to main and develop branches`\n- `[Version Control] Delete stale feature branches after successful deployment`\n- `[Version Control] Rebase feature branch on latest main branch changes`\n\n**Release Management Tasks**:\n- `[Version Control] Prepare release candidate with version bump to 2.1.0-rc1`\n- `[Version Control] Create and tag stable release v2.1.0 from release branch`\n- `[Version Control] Generate release notes and changelog for version 2.1.0`\n- `[Version Control] Coordinate deployment timing with ops team`\n\n**Repository Maintenance Tasks**:\n- `[Version Control] Clean up merged branches and optimize repository size`\n- `[Version Control] Update .gitignore to exclude new build artifacts`\n- `[Version Control] Configure branch protection rules for main branch`\n- `[Version Control] Archive old releases and maintain repository history`\n\n**Conflict Resolution Tasks**:\n- `[Version Control] Resolve merge conflicts in database migration files`\n- `[Version Control] Coordinate with engineers to resolve code conflicts`\n- `[Version Control] Validate merge resolution preserves all functionality`\n- `[Version Control] Test merged code before pushing to shared branches`\n\n### Special Status Considerations\n\n**For Complex Release Coordination**:\nBreak release management into coordinated phases:\n```\n[Version Control] Coordinate v2.1.0 release deployment\n├── [Version Control] Prepare release branch and version tags (completed)\n├── [Version Control] Coordinate with QA for release testing (in_progress)\n├── [Version Control] Schedule deployment window with ops (pending)\n└── [Version Control] Post-release branch cleanup and archival (pending)\n```\n\n**For Blocked Version Control Operations**:\nAlways include the blocking reason and impact assessment:\n- `[Version Control] Merge payment feature (BLOCKED - merge conflicts in core auth module)`\n- `[Version Control] Tag release v2.0.5 (BLOCKED - waiting for final QA sign-off)`\n- `[Version Control] Push hotfix to production (BLOCKED - pending security review approval)`\n\n**For Emergency Hotfix Coordination**:\nPrioritize and track urgent fixes:\n- `[Version Control] URGENT: Create hotfix branch for critical security vulnerability`\n- `[Version Control] URGENT: Fast-track merge and deploy auth bypass fix`\n- `[Version Control] URGENT: Coordinate immediate rollback if deployment fails`\n\n### Version Control Standards and Practices\nAll version control todos should adhere to:\n- **Semantic Versioning**: Follow MAJOR.MINOR.PATCH versioning scheme\n- **Conventional Commits**: Use structured commit messages for automatic changelog generation\n- **Branch Naming**: Use consistent naming conventions (feature/, hotfix/, release/)\n- **Merge Strategy**: Specify merge strategy (squash, rebase, merge commit)\n\n### Git Operation Documentation\nInclude specific git commands and rationale:\n- `[Version Control] Execute git rebase -i to clean up commit history before merge`\n- `[Version Control] Use git cherry-pick to apply specific fixes to release branch`\n- `[Version Control] Create signed tags with GPG for security compliance`\n- `[Version Control] Configure git hooks for automated testing and validation`\n\n### Coordination with Other Agents\n- Reference specific code changes when coordinating merges with engineering teams\n- Include deployment timeline requirements when coordinating with ops agents\n- Note documentation update needs when coordinating release communications\n- Update todos immediately when version control operations affect other agents\n- Use clear branch names and commit messages that help other agents understand changes",
48
48
  "knowledge": {
49
49
  "domain_expertise": [
50
50
  "Git workflows and best practices",
@@ -186,6 +186,7 @@ class AgentDeploymentService:
186
186
  "skipped": [],
187
187
  "updated": [],
188
188
  "migrated": [], # Track agents migrated from old format
189
+ "converted": [], # Track YAML to MD conversions
189
190
  "total": 0,
190
191
  # METRICS: Add detailed timing and performance data to results
191
192
  "metrics": {
@@ -212,6 +213,10 @@ class AgentDeploymentService:
212
213
  results["errors"].append(error_msg)
213
214
  return results
214
215
 
216
+ # Convert any existing YAML files to MD format
217
+ conversion_results = self._convert_yaml_to_md(target_dir)
218
+ results["converted"] = conversion_results.get("converted", [])
219
+
215
220
  # Load base agent content
216
221
  # OPERATIONAL NOTE: Base agent contains shared configuration and instructions
217
222
  # that all agents inherit. This reduces duplication and ensures consistency.
@@ -242,7 +247,7 @@ class AgentDeploymentService:
242
247
  agent_start_time = time.time()
243
248
 
244
249
  agent_name = template_file.stem
245
- target_file = target_dir / f"{agent_name}.yaml"
250
+ target_file = target_dir / f"{agent_name}.md"
246
251
 
247
252
  # Check if agent needs update
248
253
  needs_update = force_rebuild
@@ -265,12 +270,12 @@ class AgentDeploymentService:
265
270
  self.logger.debug(f"Skipped up-to-date agent: {agent_name}")
266
271
  continue
267
272
 
268
- # Build the agent file
269
- agent_yaml = self._build_agent_yaml(agent_name, template_file, base_agent_data)
273
+ # Build the agent file as markdown with YAML frontmatter
274
+ agent_content = self._build_agent_markdown(agent_name, template_file, base_agent_data)
270
275
 
271
276
  # Write the agent file
272
277
  is_update = target_file.exists()
273
- target_file.write_text(agent_yaml)
278
+ target_file.write_text(agent_content)
274
279
 
275
280
  # METRICS: Record deployment time for this agent
276
281
  agent_deployment_time = (time.time() - agent_start_time) * 1000 # Convert to ms
@@ -320,6 +325,7 @@ class AgentDeploymentService:
320
325
  f"Deployed {len(results['deployed'])} agents, "
321
326
  f"updated {len(results['updated'])}, "
322
327
  f"migrated {len(results['migrated'])}, "
328
+ f"converted {len(results['converted'])} YAML files, "
323
329
  f"skipped {len(results['skipped'])}, "
324
330
  f"errors: {len(results['errors'])}"
325
331
  )
@@ -514,6 +520,13 @@ class AgentDeploymentService:
514
520
  ["Read", "Write", "Edit", "Grep", "Glob", "LS"] # Default fallback
515
521
  )
516
522
 
523
+ # Get model from capabilities.model in new format
524
+ model = (
525
+ template_data.get('capabilities', {}).get('model') or
526
+ template_data.get('configuration_fields', {}).get('model') or
527
+ "claude-sonnet-4-20250514" # Default fallback
528
+ )
529
+
517
530
  frontmatter = f"""---
518
531
  name: {agent_name}
519
532
  description: "{description}"
@@ -523,6 +536,7 @@ created: "{datetime.now().isoformat()}Z"
523
536
  updated: "{datetime.now().isoformat()}Z"
524
537
  tags: {tags}
525
538
  tools: {tools}
539
+ model: "{model}"
526
540
  metadata:
527
541
  base_version: "{self._format_version_display(base_version)}"
528
542
  agent_version: "{self._format_version_display(agent_version)}"
@@ -848,7 +862,7 @@ temperature: {temperature}"""
848
862
  return results
849
863
 
850
864
  # List deployed agents
851
- agent_files = list(agents_dir.glob("*.yaml"))
865
+ agent_files = list(agents_dir.glob("*.md"))
852
866
  for agent_file in agent_files:
853
867
  try:
854
868
  # Read first few lines to get agent name from YAML
@@ -1101,7 +1115,7 @@ temperature: {temperature}"""
1101
1115
  return results
1102
1116
 
1103
1117
  # Remove system agents only (identified by claude-mpm author)
1104
- agent_files = list(agents_dir.glob("*.yaml"))
1118
+ agent_files = list(agents_dir.glob("*.md"))
1105
1119
 
1106
1120
  for agent_file in agent_files:
1107
1121
  try:
@@ -1531,4 +1545,212 @@ temperature: {temperature}"""
1531
1545
  except Exception as e:
1532
1546
  error_msg = f"Failed to deploy system instructions: {e}"
1533
1547
  self.logger.error(error_msg)
1534
- results["errors"].append(error_msg)
1548
+ results["errors"].append(error_msg)
1549
+
1550
+ def _convert_yaml_to_md(self, target_dir: Path) -> Dict[str, Any]:
1551
+ """
1552
+ Convert existing YAML agent files to MD format with YAML frontmatter.
1553
+
1554
+ This method handles backward compatibility by finding existing .yaml
1555
+ agent files and converting them to .md format expected by Claude Code.
1556
+
1557
+ Args:
1558
+ target_dir: Directory containing agent files
1559
+
1560
+ Returns:
1561
+ Dictionary with conversion results:
1562
+ - converted: List of converted files
1563
+ - errors: List of conversion errors
1564
+ - skipped: List of files that didn't need conversion
1565
+ """
1566
+ results = {
1567
+ "converted": [],
1568
+ "errors": [],
1569
+ "skipped": []
1570
+ }
1571
+
1572
+ try:
1573
+ # Find existing YAML agent files
1574
+ yaml_files = list(target_dir.glob("*.yaml"))
1575
+
1576
+ if not yaml_files:
1577
+ self.logger.debug("No YAML files found to convert")
1578
+ return results
1579
+
1580
+ self.logger.info(f"Found {len(yaml_files)} YAML files to convert to MD format")
1581
+
1582
+ for yaml_file in yaml_files:
1583
+ try:
1584
+ agent_name = yaml_file.stem
1585
+ md_file = target_dir / f"{agent_name}.md"
1586
+
1587
+ # Skip if MD file already exists (unless it's older than YAML)
1588
+ if md_file.exists():
1589
+ # Check modification times for safety
1590
+ yaml_mtime = yaml_file.stat().st_mtime
1591
+ md_mtime = md_file.stat().st_mtime
1592
+
1593
+ if md_mtime >= yaml_mtime:
1594
+ results["skipped"].append({
1595
+ "yaml_file": str(yaml_file),
1596
+ "md_file": str(md_file),
1597
+ "reason": "MD file already exists and is newer"
1598
+ })
1599
+ continue
1600
+ else:
1601
+ # MD file is older, proceed with conversion
1602
+ self.logger.info(f"MD file {md_file.name} is older than YAML, converting...")
1603
+
1604
+ # Read YAML content
1605
+ yaml_content = yaml_file.read_text()
1606
+
1607
+ # Convert YAML to MD with YAML frontmatter
1608
+ md_content = self._convert_yaml_content_to_md(yaml_content, agent_name)
1609
+
1610
+ # Write MD file
1611
+ md_file.write_text(md_content)
1612
+
1613
+ # Create backup of YAML file before removing (for safety)
1614
+ backup_file = target_dir / f"{agent_name}.yaml.backup"
1615
+ try:
1616
+ yaml_file.rename(backup_file)
1617
+ self.logger.debug(f"Created backup: {backup_file.name}")
1618
+ except Exception as backup_error:
1619
+ self.logger.warning(f"Failed to create backup for {yaml_file.name}: {backup_error}")
1620
+ # Still remove the original YAML file even if backup fails
1621
+ yaml_file.unlink()
1622
+
1623
+ results["converted"].append({
1624
+ "from": str(yaml_file),
1625
+ "to": str(md_file),
1626
+ "agent": agent_name
1627
+ })
1628
+
1629
+ self.logger.info(f"Converted {yaml_file.name} to {md_file.name}")
1630
+
1631
+ except Exception as e:
1632
+ error_msg = f"Failed to convert {yaml_file.name}: {e}"
1633
+ self.logger.error(error_msg)
1634
+ results["errors"].append(error_msg)
1635
+
1636
+ except Exception as e:
1637
+ error_msg = f"YAML to MD conversion failed: {e}"
1638
+ self.logger.error(error_msg)
1639
+ results["errors"].append(error_msg)
1640
+
1641
+ return results
1642
+
1643
+ def _convert_yaml_content_to_md(self, yaml_content: str, agent_name: str) -> str:
1644
+ """
1645
+ Convert YAML agent content to MD format with YAML frontmatter.
1646
+
1647
+ Args:
1648
+ yaml_content: Original YAML content
1649
+ agent_name: Name of the agent
1650
+
1651
+ Returns:
1652
+ Markdown content with YAML frontmatter
1653
+ """
1654
+ import re
1655
+ from datetime import datetime
1656
+
1657
+ # Extract YAML frontmatter and content
1658
+ yaml_parts = yaml_content.split('---', 2)
1659
+
1660
+ if len(yaml_parts) < 3:
1661
+ # No proper YAML frontmatter, treat entire content as instructions
1662
+ frontmatter = f"""---
1663
+ name: {agent_name}
1664
+ description: "Agent for specialized tasks"
1665
+ version: "1.0.0"
1666
+ author: "claude-mpm@anthropic.com"
1667
+ created: "{datetime.now().isoformat()}Z"
1668
+ updated: "{datetime.now().isoformat()}Z"
1669
+ tags: ["{agent_name}", "mpm-framework"]
1670
+ tools: ["Read", "Write", "Edit", "Grep", "Glob", "LS"]
1671
+ metadata:
1672
+ deployment_type: "system"
1673
+ converted_from: "yaml"
1674
+ ---
1675
+
1676
+ """
1677
+ return frontmatter + yaml_content.strip()
1678
+
1679
+ # Parse existing frontmatter
1680
+ yaml_frontmatter = yaml_parts[1].strip()
1681
+ instructions = yaml_parts[2].strip()
1682
+
1683
+ # Extract key fields from YAML frontmatter
1684
+ name = agent_name
1685
+ description = self._extract_yaml_field(yaml_frontmatter, 'description') or f"{agent_name.title()} agent for specialized tasks"
1686
+ version = self._extract_yaml_field(yaml_frontmatter, 'version') or "1.0.0"
1687
+ tools_line = self._extract_yaml_field(yaml_frontmatter, 'tools') or "Read, Write, Edit, Grep, Glob, LS"
1688
+
1689
+ # Convert tools string to list format
1690
+ if isinstance(tools_line, str):
1691
+ if tools_line.startswith('[') and tools_line.endswith(']'):
1692
+ # Already in list format
1693
+ tools_list = tools_line
1694
+ else:
1695
+ # Convert comma-separated to list
1696
+ tools = [tool.strip() for tool in tools_line.split(',')]
1697
+ tools_list = str(tools)
1698
+ else:
1699
+ tools_list = str(tools_line) if tools_line else '["Read", "Write", "Edit", "Grep", "Glob", "LS"]'
1700
+
1701
+ # Build new YAML frontmatter
1702
+ new_frontmatter = f"""---
1703
+ name: {name}
1704
+ description: "{description}"
1705
+ version: "{version}"
1706
+ author: "claude-mpm@anthropic.com"
1707
+ created: "{datetime.now().isoformat()}Z"
1708
+ updated: "{datetime.now().isoformat()}Z"
1709
+ tags: ["{agent_name}", "mpm-framework"]
1710
+ tools: {tools_list}
1711
+ metadata:
1712
+ deployment_type: "system"
1713
+ converted_from: "yaml"
1714
+ ---
1715
+
1716
+ """
1717
+
1718
+ return new_frontmatter + instructions
1719
+
1720
+ def _extract_yaml_field(self, yaml_content: str, field_name: str) -> str:
1721
+ """
1722
+ Extract a field value from YAML content.
1723
+
1724
+ Args:
1725
+ yaml_content: YAML content string
1726
+ field_name: Field name to extract
1727
+
1728
+ Returns:
1729
+ Field value or None if not found
1730
+ """
1731
+ import re
1732
+
1733
+ try:
1734
+ # Match field with quoted or unquoted values
1735
+ pattern = rf'^{field_name}:\s*["\']?(.*?)["\']?\s*$'
1736
+ match = re.search(pattern, yaml_content, re.MULTILINE)
1737
+
1738
+ if match:
1739
+ return match.group(1).strip()
1740
+
1741
+ # Try with alternative spacing patterns
1742
+ pattern = rf'^{field_name}\s*:\s*(.+)$'
1743
+ match = re.search(pattern, yaml_content, re.MULTILINE)
1744
+
1745
+ if match:
1746
+ value = match.group(1).strip()
1747
+ # Remove quotes if present
1748
+ if (value.startswith('"') and value.endswith('"')) or \
1749
+ (value.startswith("'") and value.endswith("'")):
1750
+ value = value[1:-1]
1751
+ return value
1752
+
1753
+ except Exception as e:
1754
+ self.logger.warning(f"Error extracting YAML field '{field_name}': {e}")
1755
+
1756
+ return None
@@ -264,7 +264,11 @@ class AgentProfileLoader(BaseService):
264
264
  tier_path = self.tier_paths[tier]
265
265
 
266
266
  # Try different file formats and naming conventions
267
+ # Check .md files first (Claude Code format), then fall back to YAML/JSON
267
268
  possible_files = [
269
+ tier_path / f"{agent_name}.md",
270
+ tier_path / f"{agent_name}_agent.md",
271
+ tier_path / f"{agent_name}-agent.md",
268
272
  tier_path / f"{agent_name}.yaml",
269
273
  tier_path / f"{agent_name}.yml",
270
274
  tier_path / f"{agent_name}.json",
@@ -290,16 +294,23 @@ class AgentProfileLoader(BaseService):
290
294
  content = file_path.read_text()
291
295
 
292
296
  # Parse based on file extension
293
- if file_path.suffix in ['.yaml', '.yml']:
297
+ if file_path.suffix == '.md':
298
+ # Parse markdown with YAML frontmatter
299
+ data, instructions = self._parse_markdown_with_frontmatter(content)
300
+ elif file_path.suffix in ['.yaml', '.yml']:
294
301
  data = yaml.safe_load(content)
302
+ instructions = data.get('instructions', '')
295
303
  elif file_path.suffix == '.json':
296
304
  data = json.loads(content)
305
+ instructions = data.get('instructions', '')
297
306
  else:
298
307
  # Try to parse as YAML first, then JSON
299
308
  try:
300
309
  data = yaml.safe_load(content)
310
+ instructions = data.get('instructions', '')
301
311
  except:
302
312
  data = json.loads(content)
313
+ instructions = data.get('instructions', '')
303
314
 
304
315
  # Create profile
305
316
  profile = AgentProfile(
@@ -308,7 +319,7 @@ class AgentProfileLoader(BaseService):
308
319
  description=data.get('description', ''),
309
320
  tier=tier,
310
321
  source_path=str(file_path),
311
- instructions=data.get('instructions', ''),
322
+ instructions=instructions,
312
323
  capabilities=data.get('capabilities', []),
313
324
  constraints=data.get('constraints', []),
314
325
  metadata=data.get('metadata', {}),
@@ -329,6 +340,44 @@ class AgentProfileLoader(BaseService):
329
340
  error=str(e)
330
341
  )
331
342
 
343
+ def _parse_markdown_with_frontmatter(self, content: str) -> Tuple[Dict[str, Any], str]:
344
+ """
345
+ Parse markdown file with YAML frontmatter.
346
+
347
+ Args:
348
+ content: Markdown content with YAML frontmatter
349
+
350
+ Returns:
351
+ Tuple of (frontmatter_data, markdown_content)
352
+ """
353
+ import re
354
+
355
+ # Check if content starts with YAML frontmatter
356
+ if not content.strip().startswith('---'):
357
+ # No frontmatter, treat entire content as instructions
358
+ return {'name': 'unknown', 'description': 'No frontmatter found'}, content
359
+
360
+ # Split frontmatter and content
361
+ parts = re.split(r'^---\s*$', content, 2, re.MULTILINE)
362
+
363
+ if len(parts) < 3:
364
+ # Invalid frontmatter structure
365
+ return {'name': 'unknown', 'description': 'Invalid frontmatter'}, content
366
+
367
+ # Parse YAML frontmatter
368
+ frontmatter_text = parts[1].strip()
369
+ markdown_content = parts[2].strip()
370
+
371
+ try:
372
+ frontmatter_data = yaml.safe_load(frontmatter_text)
373
+ if not isinstance(frontmatter_data, dict):
374
+ frontmatter_data = {'name': 'unknown', 'description': 'Invalid frontmatter format'}
375
+ except Exception as e:
376
+ logger.error(f"Error parsing YAML frontmatter: {e}")
377
+ frontmatter_data = {'name': 'unknown', 'description': f'YAML parse error: {e}'}
378
+
379
+ return frontmatter_data, markdown_content
380
+
332
381
  # ========================================================================
333
382
  # Profile Discovery
334
383
  # ========================================================================
@@ -342,16 +391,19 @@ class AgentProfileLoader(BaseService):
342
391
  continue
343
392
 
344
393
  agents = []
345
- for file_path in tier_path.glob('*.{yaml,yml,json}'):
346
- agent_name = file_path.stem
347
- # Remove common suffixes
348
- if agent_name.endswith('_agent'):
349
- agent_name = agent_name[:-6]
350
- elif agent_name.endswith('-agent'):
351
- agent_name = agent_name[:-6]
352
-
353
- if agent_name not in agents:
354
- agents.append(agent_name)
394
+ # Check for .md files (Claude Code format) and YAML/JSON files
395
+ file_patterns = ['*.md', '*.yaml', '*.yml', '*.json']
396
+ for pattern in file_patterns:
397
+ for file_path in tier_path.glob(pattern):
398
+ agent_name = file_path.stem
399
+ # Remove common suffixes
400
+ if agent_name.endswith('_agent'):
401
+ agent_name = agent_name[:-6]
402
+ elif agent_name.endswith('-agent'):
403
+ agent_name = agent_name[:-6]
404
+
405
+ if agent_name not in agents:
406
+ agents.append(agent_name)
355
407
 
356
408
  discovered[tier] = agents
357
409
  logger.debug(f"Discovered {len(agents)} agents in {tier.value} tier")
@@ -294,19 +294,50 @@ class AgentRegistry:
294
294
  try:
295
295
  content = file_path.read_text()
296
296
 
297
- # Try to parse as JSON/YAML for structured data
298
- if file_path.suffix in ['.json', '.yaml', '.yml']:
297
+ # Try to parse as JSON/YAML/MD for structured data
298
+ if file_path.suffix in ['.md', '.json', '.yaml', '.yml']:
299
299
  try:
300
300
  if file_path.suffix == '.json':
301
301
  data = json.loads(content)
302
+ description = data.get('description', '')
303
+ version = data.get('version', '0.0.0')
304
+ capabilities = data.get('capabilities', [])
305
+ metadata = data.get('metadata', {})
306
+ elif file_path.suffix == '.md':
307
+ # Parse markdown with YAML frontmatter
308
+ import yaml
309
+ import re
310
+
311
+ # Check for YAML frontmatter
312
+ if content.strip().startswith('---'):
313
+ parts = re.split(r'^---\s*$', content, 2, re.MULTILINE)
314
+ if len(parts) >= 3:
315
+ frontmatter_text = parts[1].strip()
316
+ data = yaml.safe_load(frontmatter_text)
317
+ description = data.get('description', '')
318
+ version = data.get('version', '0.0.0')
319
+ capabilities = data.get('tools', []) # Tools in .md format
320
+ metadata = data.get('metadata', {})
321
+ else:
322
+ # No frontmatter, use defaults
323
+ description = f"{file_path.stem} agent"
324
+ version = '1.0.0'
325
+ capabilities = []
326
+ metadata = {}
327
+ else:
328
+ # No frontmatter, use defaults
329
+ description = f"{file_path.stem} agent"
330
+ version = '1.0.0'
331
+ capabilities = []
332
+ metadata = {}
302
333
  else:
334
+ # YAML files
303
335
  import yaml
304
336
  data = yaml.safe_load(content)
305
-
306
- description = data.get('description', '')
307
- version = data.get('version', '0.0.0')
308
- capabilities = data.get('capabilities', [])
309
- metadata = data.get('metadata', {})
337
+ description = data.get('description', '')
338
+ version = data.get('version', '0.0.0')
339
+ capabilities = data.get('capabilities', [])
340
+ metadata = data.get('metadata', {})
310
341
  except Exception:
311
342
  pass
312
343
 
@@ -1,6 +1,6 @@
1
1
  Metadata-Version: 2.4
2
2
  Name: claude-mpm
3
- Version: 3.4.20
3
+ Version: 3.4.24
4
4
  Summary: Claude Multi-agent Project Manager - Clean orchestration with ticket management
5
5
  Home-page: https://github.com/bobmatnyc/claude-mpm
6
6
  Author: Claude MPM Team
@@ -36,6 +36,7 @@ Requires-Dist: python-socketio>=5.11.0
36
36
  Requires-Dist: aiohttp>=3.9.0
37
37
  Requires-Dist: aiohttp-cors>=0.8.0
38
38
  Requires-Dist: python-engineio>=4.8.0
39
+ Requires-Dist: urwid>=2.1.0
39
40
  Provides-Extra: dev
40
41
  Requires-Dist: pytest>=7.0; extra == "dev"
41
42
  Requires-Dist: pytest-asyncio; extra == "dev"