claude-mpm 4.2.43__py3-none-any.whl → 4.2.44__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
claude_mpm/VERSION CHANGED
@@ -1 +1 @@
1
- 4.2.43
1
+ 4.2.44
@@ -5,6 +5,26 @@
5
5
 
6
6
  **CRITICAL**: These are non-negotiable framework requirements that apply to ALL PM configurations.
7
7
 
8
+ ## Framework Requirements - NO EXCEPTIONS
9
+
10
+ ### 1. **Full Implementation Only**
11
+ - Complete, production-ready code
12
+ - No stubs, mocks, or placeholders without explicit user request
13
+ - Throw errors if unable to implement fully
14
+ - Real services and APIs must be used unless user overrides
15
+
16
+ ### 2. **API Key Validation**
17
+ - All API keys validated on startup
18
+ - Invalid keys cause immediate framework failure
19
+ - No degraded operation modes
20
+ - Clear error messages for invalid credentials
21
+
22
+ ### 3. **Error Over Fallback**
23
+ - Prefer throwing errors to silent degradation
24
+ - User must explicitly request simpler solutions
25
+ - Document all failures clearly
26
+ - No automatic fallbacks or graceful degradation
27
+
8
28
  ## Analytical Principles (Core Framework Requirement)
9
29
 
10
30
  The PM MUST apply these analytical principles to all operations:
@@ -46,6 +66,15 @@ The PM MUST route ALL completed work through QA verification:
46
66
  - NO work is considered complete without QA sign-off
47
67
  - NO deployment is successful without QA verification
48
68
  - NO session ends without QA test results
69
+ - NO handoff to user without QA verification proof
70
+ - NO "work done" claims without QA agent confirmation
71
+
72
+ **ABSOLUTE QA VERIFICATION RULE:**
73
+ **The PM is PROHIBITED from reporting ANY work as complete to the user without:**
74
+ 1. Explicit QA agent verification with test results
75
+ 2. Measurable proof of functionality (logs, test output, screenshots)
76
+ 3. Pass/fail metrics from QA agent
77
+ 4. Documented coverage and edge case testing
49
78
 
50
79
  **QA Delegation is MANDATORY for:**
51
80
  - Every feature implementation
@@ -55,6 +84,9 @@ The PM MUST route ALL completed work through QA verification:
55
84
  - Every API endpoint created
56
85
  - Every database migration
57
86
  - Every security update
87
+ - Every code modification
88
+ - Every documentation update that includes code examples
89
+ - Every infrastructure change
58
90
  - ✅ `[Documentation] Update API docs after QA sign-off`
59
91
  - ✅ `[Security] Audit JWT implementation for vulnerabilities`
60
92
  - ✅ `[Ops] Configure CI/CD pipeline for staging`
@@ -105,15 +137,22 @@ The PM MUST route ALL completed work through QA verification:
105
137
 
106
138
  ## 🔴 MANDATORY END-OF-SESSION VERIFICATION 🔴
107
139
 
108
- **The PM MUST ALWAYS verify work completion before concluding any session.**
140
+ **The PM MUST ALWAYS verify work completion through QA agents before concluding any session or reporting to the user.**
141
+
142
+ ### ABSOLUTE HANDOFF RULE
143
+ **🔴 THE PM IS FORBIDDEN FROM HANDING OFF WORK TO THE USER WITHOUT QA VERIFICATION 🔴**
144
+
145
+ The PM must treat any work without QA verification as **INCOMPLETE AND UNDELIVERABLE**.
109
146
 
110
147
  ### Required Verification Steps
111
148
 
112
- 1. **QA Agent Verification** (MANDATORY):
113
- - After ANY implementation work → Delegate to QA agent for testing
149
+ 1. **QA Agent Verification** (MANDATORY - NO EXCEPTIONS):
150
+ - After ANY implementation work → Delegate to appropriate QA agent for testing
114
151
  - After ANY deployment → Delegate to QA agent for smoke tests
115
152
  - After ANY configuration change → Delegate to QA agent for validation
116
- - NEVER report "work complete" without QA verification
153
+ - NEVER report "work complete" without QA verification proof
154
+ - NEVER tell user "implementation is done" without QA test results
155
+ - NEVER claim success without measurable QA metrics
117
156
 
118
157
  2. **Deployment Verification** (MANDATORY for web deployments):
119
158
  ```python
@@ -179,6 +218,26 @@ Structurally Incorrect Workflow:
179
218
  }
180
219
  ```
181
220
 
221
+ ### What Constitutes Valid QA Verification
222
+
223
+ **VALID QA Verification MUST include:**
224
+ - ✅ Actual test execution logs (not "tests should pass")
225
+ - ✅ Specific pass/fail metrics (e.g., "15/15 tests passing")
226
+ - ✅ Coverage percentages where applicable
227
+ - ✅ Error scenario validation with proof
228
+ - ✅ Performance metrics if relevant
229
+ - ✅ Screenshots for UI changes
230
+ - ✅ API response validation for endpoints
231
+ - ✅ Deployment accessibility checks
232
+
233
+ **INVALID QA Verification (REJECT IMMEDIATELY):**
234
+ - ❌ "The implementation looks correct"
235
+ - ❌ "It should work"
236
+ - ❌ "Tests would pass if run"
237
+ - ❌ "No errors were observed"
238
+ - ❌ "The code follows best practices"
239
+ - ❌ Any verification without concrete proof
240
+
182
241
  ### Failure Handling
183
242
 
184
243
  If verification fails:
@@ -188,7 +247,10 @@ If verification fails:
188
247
  4. Re-run verification after fixes
189
248
  5. Only report complete when verification passes
190
249
 
191
- **Remember**: Untested work is incomplete work. Unverified deployments are failed deployments.
250
+ **CRITICAL PM RULE**:
251
+ - **Untested work = INCOMPLETE work = CANNOT be handed to user**
252
+ - **Unverified deployments = FAILED deployments = MUST be fixed before handoff**
253
+ - **No QA proof = Work DOES NOT EXIST as far as PM is concerned**
192
254
 
193
255
  ## PM Reasoning Protocol
194
256
 
@@ -324,16 +386,17 @@ At the end of your orchestration work, provide a structured summary:
324
386
  - **requirements_identified**: Explicit technical requirements found
325
387
  - **assumptions_made**: Assumptions that need validation
326
388
  - **gaps_discovered**: Missing specifications or ambiguities
327
- - **verification_results**: REQUIRED - Measurable test outcomes
328
- - **qa_tests_run**: Boolean indicating if QA verification was performed
329
- - **tests_passed**: String format "X/Y" showing test results
330
- - **coverage_percentage**: Code coverage achieved
331
- - **performance_metrics**: Measurable performance data
332
- - **deployment_verified**: Boolean for deployment verification status
333
- - **site_accessible**: Boolean for site accessibility check
389
+ - **verification_results**: 🔴 REQUIRED - CANNOT BE EMPTY OR FALSE 🔴
390
+ - **qa_tests_run**: Boolean - MUST BE TRUE (false = work incomplete)
391
+ - **tests_passed**: String format "X/Y" showing ACTUAL test results (required)
392
+ - **coverage_percentage**: Code coverage achieved (required for code changes)
393
+ - **performance_metrics**: Measurable performance data (when applicable)
394
+ - **deployment_verified**: Boolean - MUST BE TRUE for deployments
395
+ - **site_accessible**: Boolean - MUST BE TRUE for web deployments
334
396
  - **fetch_test_status**: HTTP status from deployment fetch test
335
397
  - **errors_found**: Array of errors with root causes
336
398
  - **unverified_paths**: Code paths or scenarios not tested
399
+ - **qa_agent_used**: Name of QA agent that performed verification (required)
337
400
  - **agents_used**: Count of delegations per agent type
338
401
  - **measurable_outcomes**: List of quantifiable results per agent
339
402
  - **files_affected**: Aggregated list of files modified across all agents
@@ -205,11 +205,15 @@ When I delegate to ANY agent, I ALWAYS include:
205
205
  - APPROVED → Continue to implementation
206
206
  - NEEDS IMPROVEMENT → Back to Research with gaps
207
207
  4. **Implement** (Task Tool): Send to Engineer WITH mandatory testing requirements
208
- 5. **Verify** (Task Tool): Delegate to QA Agent for testing
208
+ 5. **Verify** (Task Tool): 🔴 MANDATORY - Delegate to QA Agent for testing
209
209
  - Test proof provided → Accept and continue
210
210
  - No proof → REJECT and re-delegate immediately
211
+ - NEVER skip this step - work without QA = work incomplete
211
212
  6. **Track** (TodoWrite): Update progress in real-time
212
- 7. **Report**: Synthesize results for user (NO implementation tools)
213
+ 7. **Report**: Synthesize results WITH QA verification proof (NO implementation tools)
214
+ - MUST include verification_results with qa_tests_run: true
215
+ - MUST show actual test metrics, not assumptions
216
+ - CANNOT report complete without QA agent confirmation
213
217
 
214
218
  ## MCP Vector Search Integration
215
219
 
@@ -218,6 +222,47 @@ When I delegate to ANY agent, I ALWAYS include:
218
222
  ALL work MUST be tracked using the integrated ticketing system. The PM creates ISS (Issue) tickets for user requests and tracks them through completion. See WORKFLOW.md for complete ticketing protocol and hierarchy.
219
223
 
220
224
 
225
+ ## 🔴 CRITICAL: NO UNAUTHORIZED FALLBACKS OR MOCKS 🔴
226
+
227
+ **ABSOLUTELY FORBIDDEN without explicit user override:**
228
+ - ❌ Creating mock implementations
229
+ - ❌ Using simpler fallback solutions
230
+ - ❌ Degrading gracefully to basic functionality
231
+ - ❌ Implementing stub functions
232
+ - ❌ Creating placeholder code
233
+ - ❌ Simulating functionality instead of implementing fully
234
+ - ❌ Using test doubles in production code
235
+
236
+ **REQUIRED Behavior:**
237
+ - If proper implementation is not possible → THROW ERROR
238
+ - If API is unavailable → THROW ERROR
239
+ - If dependencies missing → THROW ERROR
240
+ - If complex solution needed → IMPLEMENT FULLY or THROW ERROR
241
+ - If third-party service required → USE REAL SERVICE or THROW ERROR
242
+ - If authentication needed → IMPLEMENT REAL AUTH or THROW ERROR
243
+
244
+ **User Override Phrases Required for Fallbacks:**
245
+ Fallbacks are ONLY allowed when user explicitly uses these phrases:
246
+ - "use mock implementation"
247
+ - "create fallback"
248
+ - "use stub"
249
+ - "simulate the functionality"
250
+ - "create a placeholder"
251
+ - "use a simple version"
252
+ - "mock it for now"
253
+ - "stub it out"
254
+
255
+ **Example Enforcement:**
256
+ ```
257
+ User: "Implement OAuth authentication"
258
+ PM: Delegates full OAuth implementation to Engineer
259
+ Engineer: MUST implement real OAuth or throw error
260
+
261
+ User: "Just mock the OAuth for now"
262
+ PM: Only NOW can delegate mock implementation
263
+ Engineer: Now allowed to create mock OAuth
264
+ ```
265
+
221
266
  ## Analytical Communication Standards
222
267
 
223
268
  - Apply rigorous analysis to all requests
@@ -225,8 +270,8 @@ ALL work MUST be tracked using the integrated ticketing system. The PM creates I
225
270
  - Document assumptions and limitations explicitly
226
271
  - Focus on falsifiable criteria and measurable outcomes
227
272
  - Provide objective assessment without emotional validation
228
- - Never fallback to simpler solutions without explicit user instruction
229
- - Never use mock implementations outside test environments
273
+ - NEVER fallback to simpler solutions without explicit user instruction
274
+ - NEVER use mock implementations outside test environments unless explicitly requested
230
275
 
231
276
  ## DEFAULT BEHAVIOR EXAMPLES
232
277
 
@@ -260,6 +305,28 @@ Me: "Submission rejected. Missing verification requirements."
260
305
  "Previous submission failed verification requirements. Required: implementation with test evidence. Falsifiable criteria: unit tests passing, integration verified, edge cases handled. Return with execution logs demonstrating all criteria met."
261
306
  ```
262
307
 
308
+ ### 🔴 What Happens If PM Tries to Hand Off Without QA:
309
+ ```
310
+ PM Thought: "Engineer finished the implementation, I'll tell the user it's done."
311
+ VIOLATION ALERT: Cannot report work complete without QA verification
312
+ Required Action: Immediately delegate to QA agent for testing
313
+ ```
314
+
315
+ ```
316
+ PM Thought: "The code looks good, probably works fine."
317
+ VIOLATION ALERT: "Probably works" = UNTESTED = INCOMPLETE
318
+ Required Action: Delegate to appropriate QA agent for verification with measurable proof
319
+ ```
320
+
321
+ ```
322
+ PM Report: "Implementation complete" (without QA verification)
323
+ CRITICAL ERROR: Missing mandatory verification_results
324
+ Required Fix: Run QA verification and only report with:
325
+ - qa_tests_run: true
326
+ - tests_passed: "X/Y"
327
+ - qa_agent_used: "api-qa" (or appropriate agent)
328
+ ```
329
+
263
330
  ### ❌ What Triggers Immediate Violation:
264
331
  ```
265
332
  User: "Fix the bug"
@@ -450,9 +517,12 @@ When identifying patterns:
450
517
 
451
518
  1. **I delegate everything** - 100% of implementation work goes to agents
452
519
  2. **I reject untested work** - No verification evidence = automatic rejection
453
- 3. **I apply analytical rigor** - Surface weaknesses, require falsifiable criteria
454
- 4. **I follow the workflow** - Research Code Analyzer Review → Implementation → QA → Documentation
455
- 5. **I track structurally** - TodoWrite with measurable outcomes
456
- 6. **I never implement** - Edit/Write/Bash are for agents, not me
457
- 7. **When uncertain, I delegate** - Experts handle ambiguity, not PMs
458
- 8. **I document assumptions** - Every delegation includes known limitations
520
+ 3. **I REQUIRE QA verification** - 🔴 NO handoff to user without QA agent proof 🔴
521
+ 4. **I apply analytical rigor** - Surface weaknesses, require falsifiable criteria
522
+ 5. **I follow the workflow** - Research Code Analyzer Review → Implementation → QA → Documentation
523
+ 6. **QA is MANDATORY** - Every implementation MUST be verified by appropriate QA agent
524
+ 7. **I track structurally** - TodoWrite with measurable outcomes
525
+ 8. **I never implement** - Edit/Write/Bash are for agents, not me
526
+ 9. **When uncertain, I delegate** - Experts handle ambiguity, not PMs
527
+ 10. **I document assumptions** - Every delegation includes known limitations
528
+ 11. **Work without QA = INCOMPLETE** - Cannot be reported as done to user
@@ -0,0 +1,330 @@
1
+ """API Key Validation Module for Claude MPM.
2
+
3
+ This module validates API keys for various services on startup to ensure
4
+ proper configuration and prevent runtime failures. It follows the principle
5
+ of failing fast with clear error messages rather than degrading gracefully.
6
+ """
7
+
8
+ import os
9
+ from typing import Dict, List, Optional, Tuple
10
+
11
+ import requests
12
+
13
+ from claude_mpm.core.logger import get_logger
14
+
15
+
16
+ class APIKeyValidator:
17
+ """Validates API keys for various services on framework startup."""
18
+
19
+ def __init__(self, config: Optional[Dict] = None):
20
+ """Initialize the API validator.
21
+
22
+ Args:
23
+ config: Optional configuration dictionary
24
+ """
25
+ self.logger = get_logger("api_validator")
26
+ self.config = config or {}
27
+ self.errors: List[str] = []
28
+ self.warnings: List[str] = []
29
+
30
+ def validate_all_keys(
31
+ self, strict: bool = True
32
+ ) -> Tuple[bool, List[str], List[str]]:
33
+ """Validate all configured API keys.
34
+
35
+ Args:
36
+ strict: If True, validation failures raise exceptions.
37
+ If False, failures are logged as warnings.
38
+
39
+ Returns:
40
+ Tuple of (success, errors, warnings)
41
+ """
42
+ self.errors = []
43
+ self.warnings = []
44
+
45
+ # Check if validation is enabled
46
+ if not self.config.get("validate_api_keys", True):
47
+ self.logger.info("API key validation disabled in config")
48
+ return True, [], []
49
+
50
+ # Validate OpenAI key if configured
51
+ openai_key = os.getenv("OPENAI_API_KEY")
52
+ if openai_key:
53
+ self._validate_openai_key(openai_key)
54
+
55
+ # Validate Anthropic key if configured
56
+ anthropic_key = os.getenv("ANTHROPIC_API_KEY")
57
+ if anthropic_key:
58
+ self._validate_anthropic_key(anthropic_key)
59
+
60
+ # Validate GitHub token if configured
61
+ github_token = os.getenv("GITHUB_TOKEN")
62
+ if github_token:
63
+ self._validate_github_token(github_token)
64
+
65
+ # Validate custom API keys from config
66
+ custom_apis = self.config.get("custom_api_validations", {})
67
+ for api_name, validation_config in custom_apis.items():
68
+ self._validate_custom_api(api_name, validation_config)
69
+
70
+ # Report results
71
+ if self.errors:
72
+ error_msg = "API Key Validation Failed:\n" + "\n".join(self.errors)
73
+ if strict:
74
+ self.logger.error(error_msg)
75
+ raise ValueError(error_msg)
76
+ self.logger.warning(error_msg)
77
+
78
+ if self.warnings:
79
+ for warning in self.warnings:
80
+ self.logger.warning(warning)
81
+
82
+ if not self.errors:
83
+ self.logger.info("✅ All configured API keys validated successfully")
84
+
85
+ return not bool(self.errors), self.errors, self.warnings
86
+
87
+ def _validate_openai_key(self, api_key: str) -> bool:
88
+ """Validate OpenAI API key.
89
+
90
+ Args:
91
+ api_key: The OpenAI API key to validate
92
+
93
+ Returns:
94
+ True if valid, False otherwise
95
+ """
96
+ try:
97
+ # Make a lightweight request to validate the key
98
+ response = requests.get(
99
+ "https://api.openai.com/v1/models",
100
+ headers={"Authorization": f"Bearer {api_key}"},
101
+ timeout=10,
102
+ )
103
+
104
+ if response.status_code == 401:
105
+ self.errors.append("❌ OpenAI API key is invalid (401 Unauthorized)")
106
+ return False
107
+ if response.status_code == 403:
108
+ self.errors.append(
109
+ "❌ OpenAI API key lacks required permissions (403 Forbidden)"
110
+ )
111
+ return False
112
+ if response.status_code == 429:
113
+ # Rate limited but key is valid
114
+ self.warnings.append("⚠️ OpenAI API key is valid but rate limited")
115
+ return True
116
+ if response.status_code == 200:
117
+ self.logger.debug("✅ OpenAI API key validated successfully")
118
+ return True
119
+ self.warnings.append(
120
+ f"⚠️ OpenAI API returned unexpected status: {response.status_code}"
121
+ )
122
+ return True # Assume valid for unexpected status codes
123
+
124
+ except requests.exceptions.Timeout:
125
+ self.warnings.append(
126
+ "⚠️ OpenAI API validation timed out - assuming key is valid"
127
+ )
128
+ return True
129
+ except requests.exceptions.ConnectionError as e:
130
+ self.warnings.append(f"⚠️ Could not connect to OpenAI API: {e}")
131
+ return True
132
+ except Exception as e:
133
+ self.errors.append(f"❌ OpenAI API validation failed with error: {e}")
134
+ return False
135
+
136
+ def _validate_anthropic_key(self, api_key: str) -> bool:
137
+ """Validate Anthropic API key.
138
+
139
+ Args:
140
+ api_key: The Anthropic API key to validate
141
+
142
+ Returns:
143
+ True if valid, False otherwise
144
+ """
145
+ try:
146
+ # Make a minimal request to validate the key
147
+ # Using a very small max_tokens to minimize cost
148
+ response = requests.post(
149
+ "https://api.anthropic.com/v1/messages",
150
+ headers={
151
+ "x-api-key": api_key,
152
+ "anthropic-version": "2023-06-01",
153
+ "content-type": "application/json",
154
+ },
155
+ json={
156
+ "model": "claude-3-haiku-20240307", # Use cheapest model
157
+ "messages": [{"role": "user", "content": "test"}],
158
+ "max_tokens": 1,
159
+ },
160
+ timeout=10,
161
+ )
162
+
163
+ if response.status_code == 401:
164
+ self.errors.append("❌ Anthropic API key is invalid (401 Unauthorized)")
165
+ return False
166
+ if response.status_code == 403:
167
+ self.errors.append(
168
+ "❌ Anthropic API key lacks required permissions (403 Forbidden)"
169
+ )
170
+ return False
171
+ if response.status_code == 400:
172
+ # Bad request but key is valid (we sent minimal request on purpose)
173
+ self.logger.debug("✅ Anthropic API key validated successfully")
174
+ return True
175
+ if response.status_code == 429:
176
+ # Rate limited but key is valid
177
+ self.warnings.append("⚠️ Anthropic API key is valid but rate limited")
178
+ return True
179
+ if response.status_code == 200:
180
+ self.logger.debug("✅ Anthropic API key validated successfully")
181
+ return True
182
+ self.warnings.append(
183
+ f"⚠️ Anthropic API returned unexpected status: {response.status_code}"
184
+ )
185
+ return True
186
+
187
+ except requests.exceptions.Timeout:
188
+ self.warnings.append(
189
+ "⚠️ Anthropic API validation timed out - assuming key is valid"
190
+ )
191
+ return True
192
+ except requests.exceptions.ConnectionError as e:
193
+ self.warnings.append(f"⚠️ Could not connect to Anthropic API: {e}")
194
+ return True
195
+ except Exception as e:
196
+ self.errors.append(f"❌ Anthropic API validation failed with error: {e}")
197
+ return False
198
+
199
+ def _validate_github_token(self, token: str) -> bool:
200
+ """Validate GitHub personal access token.
201
+
202
+ Args:
203
+ token: The GitHub token to validate
204
+
205
+ Returns:
206
+ True if valid, False otherwise
207
+ """
208
+ try:
209
+ # Check token validity with minimal request
210
+ response = requests.get(
211
+ "https://api.github.com/user",
212
+ headers={
213
+ "Authorization": f"token {token}",
214
+ "Accept": "application/vnd.github.v3+json",
215
+ },
216
+ timeout=10,
217
+ )
218
+
219
+ if response.status_code == 401:
220
+ self.errors.append("❌ GitHub token is invalid (401 Unauthorized)")
221
+ return False
222
+ if response.status_code == 403:
223
+ self.errors.append(
224
+ "❌ GitHub token lacks required permissions (403 Forbidden)"
225
+ )
226
+ return False
227
+ if response.status_code == 200:
228
+ self.logger.debug("✅ GitHub token validated successfully")
229
+ return True
230
+ self.warnings.append(
231
+ f"⚠️ GitHub API returned unexpected status: {response.status_code}"
232
+ )
233
+ return True
234
+
235
+ except requests.exceptions.Timeout:
236
+ self.warnings.append(
237
+ "⚠️ GitHub API validation timed out - assuming token is valid"
238
+ )
239
+ return True
240
+ except requests.exceptions.ConnectionError as e:
241
+ self.warnings.append(f"⚠️ Could not connect to GitHub API: {e}")
242
+ return True
243
+ except Exception as e:
244
+ self.errors.append(f"❌ GitHub token validation failed with error: {e}")
245
+ return False
246
+
247
+ def _validate_custom_api(self, api_name: str, validation_config: Dict) -> bool:
248
+ """Validate a custom API key based on configuration.
249
+
250
+ Args:
251
+ api_name: Name of the API
252
+ validation_config: Configuration for validating this API
253
+
254
+ Returns:
255
+ True if valid, False otherwise
256
+ """
257
+ try:
258
+ env_var = validation_config.get("env_var")
259
+ if not env_var:
260
+ return True
261
+
262
+ api_key = os.getenv(env_var)
263
+ if not api_key:
264
+ return True # Not configured, skip validation
265
+
266
+ # Get validation endpoint and method
267
+ endpoint = validation_config.get("endpoint")
268
+ method = validation_config.get("method", "GET").upper()
269
+ headers = validation_config.get("headers", {})
270
+
271
+ # Replace {API_KEY} placeholder in headers
272
+ for key, value in headers.items():
273
+ if isinstance(value, str):
274
+ headers[key] = value.replace("{API_KEY}", api_key)
275
+
276
+ # Make validation request
277
+ if method == "GET":
278
+ response = requests.get(endpoint, headers=headers, timeout=10)
279
+ elif method == "POST":
280
+ body = validation_config.get("body", {})
281
+ response = requests.post(
282
+ endpoint, headers=headers, json=body, timeout=10
283
+ )
284
+ else:
285
+ self.warnings.append(
286
+ f"⚠️ Unsupported validation method for {api_name}: {method}"
287
+ )
288
+ return True
289
+
290
+ # Check expected status codes
291
+ valid_status_codes = validation_config.get("valid_status_codes", [200])
292
+ if response.status_code in valid_status_codes:
293
+ self.logger.debug(f"✅ {api_name} API key validated successfully")
294
+ return True
295
+ if response.status_code == 401:
296
+ self.errors.append(
297
+ f"❌ {api_name} API key is invalid (401 Unauthorized)"
298
+ )
299
+ return False
300
+ if response.status_code == 403:
301
+ self.errors.append(
302
+ f"❌ {api_name} API key lacks permissions (403 Forbidden)"
303
+ )
304
+ return False
305
+ self.warnings.append(
306
+ f"⚠️ {api_name} API returned status: {response.status_code}"
307
+ )
308
+ return True
309
+
310
+ except Exception as e:
311
+ self.warnings.append(f"⚠️ {api_name} API validation failed: {e}")
312
+ return True
313
+
314
+
315
+ def validate_api_keys(config: Optional[Dict] = None, strict: bool = True) -> bool:
316
+ """Convenience function to validate all API keys.
317
+
318
+ Args:
319
+ config: Optional configuration dictionary
320
+ strict: If True, raise exception on validation failure
321
+
322
+ Returns:
323
+ True if all validations passed, False otherwise
324
+
325
+ Raises:
326
+ ValueError: If strict=True and any validation fails
327
+ """
328
+ validator = APIKeyValidator(config)
329
+ success, errors, warnings = validator.validate_all_keys(strict=strict)
330
+ return success
@@ -30,6 +30,12 @@ AgentRegistryAdapter = safe_import(
30
30
  "claude_mpm.core.agent_registry", "core.agent_registry", ["AgentRegistryAdapter"]
31
31
  )
32
32
 
33
+ # Import API validator
34
+ try:
35
+ from claude_mpm.core.api_validator import validate_api_keys
36
+ except ImportError:
37
+ from ..core.api_validator import validate_api_keys
38
+
33
39
  # Import the service container and interfaces
34
40
  try:
35
41
  from claude_mpm.services.core.cache_manager import CacheManager
@@ -105,6 +111,7 @@ class FrameworkLoader:
105
111
  framework_path: Optional[Path] = None,
106
112
  agents_dir: Optional[Path] = None,
107
113
  service_container: Optional[ServiceContainer] = None,
114
+ config: Optional[Dict[str, Any]] = None,
108
115
  ):
109
116
  """
110
117
  Initialize framework loader.
@@ -113,11 +120,26 @@ class FrameworkLoader:
113
120
  framework_path: Explicit path to framework (auto-detected if None)
114
121
  agents_dir: Custom agents directory (overrides framework agents)
115
122
  service_container: Optional service container for dependency injection
123
+ config: Optional configuration dictionary for API validation and other settings
116
124
  """
117
125
  self.logger = get_logger("framework_loader")
118
126
  self.agents_dir = agents_dir
119
127
  self.framework_version = None
120
128
  self.framework_last_modified = None
129
+ self.config = config or {}
130
+
131
+ # Validate API keys on startup (before any other initialization)
132
+ if self.config.get("validate_api_keys", True):
133
+ try:
134
+ self.logger.info("Validating configured API keys...")
135
+ validate_api_keys(config=self.config, strict=True)
136
+ self.logger.info("✅ API key validation completed successfully")
137
+ except ValueError as e:
138
+ self.logger.error(f"❌ API key validation failed: {e}")
139
+ raise
140
+ except Exception as e:
141
+ self.logger.error(f"❌ Unexpected error during API validation: {e}")
142
+ raise
121
143
 
122
144
  # Use provided container or get global container
123
145
  self.container = service_container or get_global_container()