@jaguilar87/gaia-ops 2.2.3 → 2.3.0

package/CHANGELOG.md CHANGED
@@ -5,6 +5,54 @@ All notable changes to the CLAUDE.md orchestrator instructions are documented in
  The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
  and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+ ## [2.3.0] - 2025-11-11
+
+ ### Added - Phase 0 Clarification Module
+ - **NEW:** `tools/clarification/` module for intelligent ambiguity detection before routing
+   - `clarification/engine.py`: Core clarification engine (refactored from `clarify_engine.py`)
+   - `clarification/patterns.py`: Ambiguity detection patterns (ServiceAmbiguityPattern, NamespaceAmbiguityPattern, etc.)
+   - `clarification/workflow.py`: High-level helper functions for orchestrators (`execute_workflow()`)
+   - `clarification/__init__.py`: Clean public API
+ - **Protocol G** in `agents/gaia.md`: Clarification system analysis and troubleshooting guide
+ - **Rule 5.0.1** in `templates/CLAUDE.template.md`: Phase 0 implementation guide with code examples
+ - **Phase 0 integration** in the `/speckit.specify` command
+ - **Regression tests** in `tests/integration/test_phase_0_regression.py`
+ - **Clarification metrics** added to Key System Metrics (target: 20-30% clarification rate)
+
+ ### Changed - Module Restructuring (BREAKING)
+ - **BREAKING:** `clarify_engine.py` and `clarify_patterns.py` moved into the `clarification/` module
+   - **Old imports:** `from clarify_engine import request_clarification`
+   - **New imports:** `from clarification import execute_workflow, request_clarification`
+ - Updated the `application_services` structure in `project-context.json`:
+   - Added `tech_stack` field (replaces `technology`)
+   - Added `namespace` field for service location
+   - **Removed** the `status` field (dynamic state must be verified in real time, not stored in the SSOT)
+ - Service metadata now shows only static information: `tech_stack | namespace | port`
+
+ ### Fixed
+ - Import paths in `tests/tools/test_clarify_engine.py` updated to the new module structure
+ - Service metadata test updated to reflect removal of the dynamic `status` field
+ - All 20 unit tests passing with the new module structure
+
+ ### Documentation
+ - Added comprehensive Phase 0 implementation guide
+ - Added troubleshooting guide for the clarification system
+ - Updated speckit.specify.md with Phase 0 workflow integration
+ - Added Protocol G diagnostic steps in gaia.md
+
+ ### Migration Guide for v2.3.0
+ ```python
+ # Before (v2.2.x)
+ from clarify_engine import request_clarification, process_clarification
+
+ # After (v2.3.0)
+ from clarification import execute_workflow
+
+ # Simple usage
+ result = execute_workflow(user_prompt)
+ enriched_prompt = result["enriched_prompt"]
+ ```
+
  ## [2.2.3] - 2025-11-11
 
  ### Fixed - Deterministic Project Context Location
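For reference, the reshaped `application_services` entry described in the 2.3.0 Changed section above looks roughly like this — a sketch with illustrative values taken from this release's test fixtures; the exact location of `project-context.json` may vary by install:

```python
import json

# Sketch of a v2.3.0 `application_services` entry in project-context.json.
# Values (tcm-api, NestJS, tcm-non-prod, 3001) are illustrative, taken from
# the test fixtures in this release.
project_context = {
    "application_services": {
        "tcm-api": {
            "tech_stack": "NestJS",       # replaces the old `technology` field
            "namespace": "tcm-non-prod",  # new: service location
            "port": 3001
            # No `status` field: dynamic state is verified in real time,
            # never stored in the SSOT.
        }
    }
}
print(json.dumps(project_context, indent=2))
```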
package/agents/gaia.md CHANGED
@@ -113,6 +113,8 @@ When `npx @jaguilar87/gaia-ops init` (or `gaia-init` after a global install) run
  ### Key System Metrics (What to Track)
 
  - **Routing Accuracy:** Target 92.7% (from tests)
+ - **Clarification Rate:** Target 20-30% (Phase 0 effectiveness)
+ - **Clarification Effectiveness:** Routing accuracy improvement post-enrichment
  - **Context Efficiency:** 79-85% token savings (via context_provider.py)
  - **Test Coverage:** 55+ tests, 100% pass rate
  - **Production Uptime:** Track via logs/
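The first new metric is computed by the Protocol G script later in this file; the second is a simple before/after delta. A minimal sketch of that computation — the baseline figure here is hypothetical, only the 92.7% target comes from the routing tests:

```python
# Clarification Effectiveness: routing accuracy improvement post-enrichment.
# baseline_accuracy is a hypothetical pre-Phase-0 measurement; the enriched
# figure uses the 92.7% routing accuracy target from the test suite.
baseline_accuracy = 0.893   # hypothetical: accuracy on raw prompts
enriched_accuracy = 0.927   # target: accuracy on enriched prompts
improvement_pp = (enriched_accuracy - baseline_accuracy) * 100
print(f"Clarification effectiveness: +{improvement_pp:.1f} percentage points")
```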
@@ -472,6 +474,88 @@ You:
 
  **Output:** Feature RFC (Request for Comments)
 
+ ### Protocol G: Clarification System Analysis
+
+ **Trigger:** "¿Por qué no se activó clarificación?" ("Why wasn't clarification triggered?"), "Ambiguity detection issues", or "Analyze Phase 0"
+
+ **Steps:**
+ 1. Review clarification logs:
+ ```bash
+ cat .claude/logs/clarifications.jsonl | jq .
+ ```
+
+ 2. Check configuration:
+ ```bash
+ cat .claude/config/clarification_rules.json
+ jq '.global_settings' .claude/config/clarification_rules.json
+ ```
+
+ 3. Test detection manually:
+ ```python
+ import sys
+ sys.path.insert(0, '.claude/tools')
+ from clarification import request_clarification
+
+ result = request_clarification("Check the API")
+ print(f"Needs clarification: {result['needs_clarification']}")
+ print(f"Ambiguity score: {result.get('ambiguity_score', 0)}")
+ print(f"Patterns detected: {[a['pattern'] for a in result.get('ambiguity_points', [])]}")
+ ```
+
+ 4. Review pattern definitions:
+ ```bash
+ cat .claude/tools/clarification/patterns.py
+ ```
+
+ 5. Analyze recent clarifications:
+ ```bash
+ cat .claude/logs/clarifications.jsonl | \
+   jq -r '[.timestamp, .ambiguity_score, .original_prompt] | @csv' | \
+   tail -20
+ ```
+
+ 6. Benchmark effectiveness:
+ - **Clarification rate:** Target 20-30%
+ - **User satisfaction:** No complaints about "too many questions"
+ - **Routing accuracy improvement:** Measure before/after enrichment
+
+ 7. Check module structure:
+ ```bash
+ ls -la .claude/tools/clarification/
+ # Should show: __init__.py, engine.py, patterns.py, workflow.py
+ ```
+
+ **Output:** Clarification effectiveness report + tuning recommendations
+
+ **Common Issues:**
+
+ | Issue | Symptom | Fix |
+ |-------|---------|-----|
+ | Threshold too high | Ambiguity not detected | Lower from 30 to 20 in `clarification_rules.json` |
+ | Threshold too low | Too many questions | Raise from 30 to 40 |
+ | Missing patterns | New ambiguous terms not caught | Add to `patterns.py` (ServiceAmbiguityPattern.keywords) |
+ | Spanish keywords missing | Spanish prompts not detected | Add to keywords list in patterns |
+ | Import errors | Module not found | Check symlinks to the gaia-ops package |
+ | No services found | Tests failing | Verify `project-context.json` has an `application_services` section |
+
+ **Metrics to Track:**
+
+ ```python
+ # Calculate clarification rate
+ import json
+
+ with open('.claude/logs/clarifications.jsonl') as f:
+     logs = [json.loads(line) for line in f]
+
+ total_requests = len(logs)
+ clarified = sum(1 for log in logs if log.get('ambiguity_score', 0) > 30)
+ rate = (clarified / total_requests * 100) if total_requests > 0 else 0
+
+ print(f"Clarification rate: {rate:.1f}%")
+ print(f"Target: 20-30%")
+ print(f"Status: {'✅ Good' if 20 <= rate <= 30 else '⚠️ Needs tuning'}")
+ ```
+
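As a companion to the rate calculation above, a per-pattern breakdown of the same log can be sketched as follows. This assumes each JSONL record carries a `patterns_detected` list, as written by `execute_workflow`'s `clarification_data`; the sample records are inlined so the sketch is self-contained:

```python
import json
from collections import Counter

# Inline sample records standing in for .claude/logs/clarifications.jsonl
sample_lines = [
    '{"ambiguity_score": 45, "patterns_detected": ["service_ambiguity"]}',
    '{"ambiguity_score": 90, "patterns_detected": ["environment_ambiguity"]}',
    '{"ambiguity_score": 40, "patterns_detected": ["service_ambiguity", "namespace_ambiguity"]}',
]

# Count how often each ambiguity pattern fires across all records
counts = Counter(
    pattern
    for line in sample_lines
    for pattern in json.loads(line).get("patterns_detected", [])
)
for pattern, n in counts.most_common():
    print(f"{pattern}: {n}")
```

A breakdown like this tells you which pattern to tune first when the overall rate drifts out of the 20-30% band.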
  ## Research Guidelines (WebSearch Usage)
 
  When researching, follow this pattern:
@@ -31,32 +31,23 @@ Given those arguments, do this:
 
  ```python
  import sys
- sys.path.insert(0, '/home/jaguilar/aaxis/rnd/repositories/.claude/tools')
- from clarify_engine import request_clarification, process_clarification
+ sys.path.insert(0, '.claude/tools')
+ from clarification import execute_workflow
 
- # Detect ambiguity in feature description
- clarification_data = request_clarification(
+ # Detect and resolve ambiguity in feature description
+ result = execute_workflow(
      user_prompt=feature_description,
-     command_context={"command": "speckit.specify"}
+     command_context={"command": "speckit.specify"},
+     ask_user_question_func=AskUserQuestion  # Claude Code tool
  )
 
- if clarification_data["needs_clarification"]:
-     # Present summary
-     print(clarification_data["summary"])
+ # Use enriched description for remaining steps
+ feature_description = result["enriched_prompt"]
 
-     # Ask questions (AskUserQuestion tool)
-     response = AskUserQuestion(**clarification_data["question_config"])
-
-     # Enrich feature description
-     result = process_clarification(
-         clarification_data["engine_instance"],
-         feature_description,
-         response["answers"],
-         clarification_data["clarification_context"]
-     )
-
-     # Use enriched description for remaining steps
-     feature_description = result["enriched_prompt"]
+ # Log clarification if it occurred
+ if result["clarification_occurred"]:
+     from clarification import get_clarification_summary
+     print(get_clarification_summary(result["clarification_data"]))
  ```
 
  **What gets clarified**:
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@jaguilar87/gaia-ops",
-   "version": "2.2.3",
+   "version": "2.3.0",
    "description": "Multi-agent orchestration system for Claude Code - DevOps automation toolkit",
    "main": "index.js",
    "type": "module",
@@ -27,7 +27,7 @@
 
  | Phase | Action | Tool | Mandatory |
  |-------|--------|------|-----------|
- | 0 | Clarification (if ambiguous) | `clarify_engine.py` | Conditional |
+ | 0 | Clarification (if ambiguous) | `clarification` module | Conditional |
  | 1 | Route to agent | `agent_router.py` | Yes |
  | 2 | Provision context | `context_provider.py` | Yes |
  | 3 | Invoke (Planning) | `Task` tool | Yes |
@@ -37,6 +37,55 @@
 
  **See:** `.claude/config/orchestration-workflow.md` for complete details.
 
+ ### Rule 5.0.1 [P0]: Phase 0 Implementation
+
+ **When to invoke Phase 0:**
+ - User prompt contains generic terms: "the service", "the API", "the cluster"
+ - User mentions "production" but project-context says "non-prod"
+ - User references a resource without specifying which (Redis, DB, namespace)
+ - Ambiguity score > 30 (threshold configurable in `.claude/config/clarification_rules.json`)
+
+ **When to skip Phase 0:**
+ - User prompt is specific: "tcm-api in tcm-non-prod"
+ - Read-only queries: "show me logs"
+ - Simple commands: "/help", "/status"
+
+ **Code Integration:**
+
+ ```python
+ import sys
+ sys.path.insert(0, '.claude/tools')
+ from clarification import execute_workflow
+
+ # At orchestrator entry point
+ result = execute_workflow(
+     user_prompt=user_prompt,
+     command_context={"command": "general_prompt"}
+ )
+
+ enriched_prompt = result["enriched_prompt"]
+
+ # Then proceed to Phase 1 with enriched_prompt
+ # agent = route_to_agent(enriched_prompt)
+ ```
+
+ **Manual Mode (custom UX):**
+
+ ```python
+ from clarification import execute_workflow
+
+ result = execute_workflow(user_prompt)  # No ask_user_question_func
+
+ if result.get("needs_manual_questioning"):
+     # Show summary and questions to user
+     print(result["summary"])
+
+     # Get user responses with a custom UI,
+     # then call process_clarification manually
+ ```
+
+ **See:** `.claude/config/orchestration-workflow.md` lines 25-150 for the complete Phase 0 protocol.
+
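The invoke/skip rules above can be sketched as a standalone heuristic. This is illustrative only, not the package implementation (the real check is `should_skip_clarification()` in `clarification/workflow.py`), and the keyword list here is an assumption:

```python
# Illustrative Phase 0 skip heuristic; NOT the package implementation.
# Keyword list is an assumption mirroring the skip rules above.
READ_ONLY_KEYWORDS = {"show", "get", "list", "view", "ver", "mostrar", "listar"}

def looks_skippable(prompt: str) -> bool:
    """Rough pre-filter: True if Phase 0 clarification can likely be skipped."""
    text = prompt.strip().lower()
    if text.startswith("/"):  # system commands: /help, /status
        return True
    words = text.split()
    # Read-only queries get a pass in this simplified sketch
    return bool(words) and words[0] in READ_ONLY_KEYWORDS

print(looks_skippable("/help"))                        # → True
print(looks_skippable("show me logs"))                 # → True
print(looks_skippable("valida el servicio de tcm"))    # → False
```

Note the real implementation does not skip read-only queries outright; it re-runs detection with a higher ambiguity threshold (50 instead of 30).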
  ### Rule 5.1 [P0]: Approval Gate Enforcement
  - Phase 4 CANNOT be skipped for T3 operations
  - Phase 5 requires `validation["approved"] == True`
@@ -0,0 +1,175 @@
+ """
+ Phase 0 Regression Tests
+
+ Regression tests for specific cases that should trigger clarification.
+ These tests validate real-world scenarios where ambiguity detection is critical.
+ """
+
+ import pytest
+ import sys
+ import os
+
+ sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..', 'tools'))
+
+ from clarification import request_clarification, execute_workflow
+
+
+ def test_regression_valida_servicio_tcm():
+     """
+     Regression test for: "valida el servicio de tcm" ("validate the tcm service")
+
+     This SHOULD trigger clarification because:
+     1. "el servicio" ("the service") is ambiguous (ServiceAmbiguityPattern)
+     2. "tcm" is not a specific service name (could be tcm-api, tcm-web, tcm-bot, tcm-jobs)
+     3. "validar" ("validate") is generic (no specific action)
+
+     Expected behavior:
+     - Detect service ambiguity
+     - Ambiguity score >= 30 (at or above threshold)
+     - Offer options: tcm-api, tcm-web, tcm-bot, tcm-jobs
+     """
+     user_request = "valida el servicio de tcm"
+
+     # Phase 0: Should detect ambiguity
+     clarification = request_clarification(user_request)
+
+     # Assertions
+     assert clarification["needs_clarification"] == True, \
+         "Should need clarification for ambiguous service reference"
+
+     assert clarification["ambiguity_score"] >= 30, \
+         f"Ambiguity score {clarification['ambiguity_score']} should be >= 30"
+
+     # Should detect service ambiguity pattern
+     service_ambiguity = next(
+         (a for a in clarification.get("ambiguity_points", [])
+          if "service" in a["pattern"]),
+         None
+     )
+
+     assert service_ambiguity is not None, \
+         "Should detect service ambiguity pattern"
+
+     # Should offer service options containing "tcm"
+     available_options = service_ambiguity.get("available_options", [])
+     tcm_services = [opt for opt in available_options if "tcm" in opt.lower()]
+
+     assert len(tcm_services) > 0, \
+         f"Should offer TCM services, got: {available_options}"
+
+
+ def test_regression_check_the_api():
+     """
+     Regression test for: "Check the API"
+
+     A generic reference to "the API" when multiple APIs exist should trigger clarification.
+     """
+     clarification = request_clarification("Check the API")
+
+     assert clarification["needs_clarification"] == True
+     assert clarification["ambiguity_score"] > 30
+
+     # Should detect service ambiguity
+     patterns = [a["pattern"] for a in clarification.get("ambiguity_points", [])]
+     assert "service_ambiguity" in patterns
+
+
+ def test_regression_deploy_to_cluster():
+     """
+     Regression test for: "Deploy to cluster"
+
+     A missing namespace specification should trigger clarification.
+     """
+     clarification = request_clarification("Deploy to cluster")
+
+     assert clarification["needs_clarification"] == True
+
+     # Should detect namespace ambiguity
+     patterns = [a["pattern"] for a in clarification.get("ambiguity_points", [])]
+     assert "namespace_ambiguity" in patterns
+
+
+ def test_regression_deploy_to_production():
+     """
+     Regression test for: "Deploy to production"
+
+     When project-context says "non-prod" but the user mentions "production",
+     an environment warning should be triggered.
+     """
+     clarification = request_clarification("Deploy to production")
+
+     assert clarification["needs_clarification"] == True
+
+     # Should detect environment mismatch
+     patterns = [a["pattern"] for a in clarification.get("ambiguity_points", [])]
+     assert "environment_ambiguity" in patterns
+
+     # Should have high weight (90)
+     env_ambiguity = next(
+         (a for a in clarification.get("ambiguity_points", [])
+          if "environment" in a["pattern"]),
+         None
+     )
+     assert env_ambiguity is not None, "Should detect environment ambiguity"
+     assert env_ambiguity["weight"] >= 80, \
+         "Environment mismatch should have high weight"
+
+
+ def test_no_clarification_for_specific_prompt():
+     """
+     Test that specific prompts do NOT trigger clarification.
+
+     "Check tcm-api service in tcm-non-prod namespace" is fully qualified
+     and should not need clarification.
+     """
+     clarification = request_clarification(
+         "Check tcm-api service in tcm-non-prod namespace"
+     )
+
+     assert clarification["needs_clarification"] == False, \
+         "Specific prompt should not need clarification"
+
+     assert clarification.get("ambiguity_score", 0) <= 30, \
+         "Specific prompt should have low ambiguity score"
+
+
+ def test_spanish_keywords_detection():
+     """
+     Test that Spanish keywords are properly detected.
+
+     "Chequea el servicio" ("Check the service") should trigger the same
+     clarification as its English equivalent.
+     """
+     clarification = request_clarification("Chequea el servicio")
+
+     assert clarification["needs_clarification"] == True
+
+     # Should detect service ambiguity
+     patterns = [a["pattern"] for a in clarification.get("ambiguity_points", [])]
+     assert "service_ambiguity" in patterns
+
+
+ def test_execute_workflow_without_ask_function():
+     """
+     Test execute_workflow in manual mode (no ask_user_question_func).
+
+     Should return questions for manual handling.
+     """
+     result = execute_workflow("Check the API")
+
+     assert result.get("needs_manual_questioning") == True
+     assert "questions" in result
+     assert len(result["questions"]) > 0
+     assert "summary" in result
+
+     # Original prompt not enriched yet
+     assert result["enriched_prompt"] == "Check the API"
+
+
+ if __name__ == "__main__":
+     pytest.main([__file__, "-v"])
@@ -1,5 +1,5 @@
  """
- Unit tests for clarify_engine.py
+ Unit tests for clarification module
  """
 
  import pytest
@@ -11,7 +11,7 @@ import os
  tools_path = os.path.join(os.path.dirname(__file__), '..', '..', 'tools')
  sys.path.insert(0, tools_path)
 
- from clarify_engine import ClarificationEngine, request_clarification, process_clarification
+ from clarification import ClarificationEngine, request_clarification, process_clarification
 
 
  @pytest.fixture
@@ -338,8 +338,7 @@ def test_get_option_metadata_service(engine_with_mock_context):
          "tcm-api": {
              "tech_stack": "NestJS",
              "namespace": "tcm-non-prod",
-             "port": 3001,
-             "status": "running"
+             "port": 3001
          }
      }
  }
@@ -349,7 +348,7 @@ def test_get_option_metadata_service(engine_with_mock_context):
  assert "NestJS" in metadata
  assert "tcm-non-prod" in metadata
  assert "3001" in metadata
- assert "✅" in metadata  # Running status emoji
+ # Status is NOT in project-context (verified in real-time only)
@@ -0,0 +1,52 @@
+ """
+ Clarification Module
+
+ Intelligent ambiguity detection and prompt enrichment for Phase 0 of the
+ orchestration workflow.
+
+ This module detects ambiguous user prompts (e.g., "check the API" when multiple
+ APIs exist) and generates targeted clarification questions with rich options
+ from project-context.json.
+
+ Public API:
+     - execute_workflow(): High-level function for orchestrators
+     - request_clarification(): Detect ambiguity and generate questions
+     - process_clarification(): Enrich prompt with user responses
+     - ClarificationEngine: Core engine class
+     - detect_all_ambiguities(): Pattern-based detection
+
+ Example usage:
+     from clarification import execute_workflow
+
+     result = execute_workflow("check the API")
+     enriched_prompt = result["enriched_prompt"]
+ """
+
+ from .engine import (
+     ClarificationEngine,
+     request_clarification,
+     process_clarification
+ )
+
+ from .patterns import detect_all_ambiguities
+
+ from .workflow import (
+     execute_workflow,
+     should_skip_clarification,
+     get_clarification_summary
+ )
+
+ __all__ = [
+     # Main workflow function (recommended for orchestrators)
+     'execute_workflow',
+     'should_skip_clarification',
+     'get_clarification_summary',
+
+     # Lower-level functions (for advanced usage)
+     'ClarificationEngine',
+     'request_clarification',
+     'process_clarification',
+     'detect_all_ambiguities',
+ ]
+
+ __version__ = '1.0.0'
@@ -52,7 +52,7 @@ class ClarificationEngine:
      }
      """
      # Import patterns here to avoid circular dependency
-     from clarify_patterns import detect_all_ambiguities
+     from .patterns import detect_all_ambiguities
 
      # Step 1: Detect all ambiguities using patterns
      ambiguities = detect_all_ambiguities(user_prompt, self.project_context)
@@ -330,10 +330,10 @@ class ClarificationEngine:
      tech_stack = svc.get("tech_stack", "N/A")
      namespace = svc.get("namespace", "N/A")
      port = svc.get("port", "N/A")
-     status = svc.get("status", "unknown")
-     status_emoji = "✅" if status == "running" else "⏸️"
 
-     return f"{tech_stack} | Namespace: {namespace} | Puerto: {port} | Estado: {status_emoji} {status}"
+     # Status is NOT stored in project-context (must be verified in real-time)
+     # Only show static metadata
+     return f"{tech_stack} | Namespace: {namespace} | Puerto: {port}"
 
  # Namespace metadata
  elif pattern == "namespace_ambiguity" and "namespace_metadata" in ambiguity:
@@ -0,0 +1,205 @@
+ """
+ Clarification Workflow Functions
+
+ High-level helper functions for orchestrators to integrate Phase 0 clarification
+ with minimal code changes.
+ """
+
+ import sys
+ from typing import Any, Callable, Dict, Optional
+
+ from .engine import request_clarification, process_clarification
+
+
+ def execute_workflow(
+     user_prompt: str,
+     command_context: Optional[Dict[str, Any]] = None,
+     ask_user_question_func: Optional[Callable] = None
+ ) -> Dict[str, Any]:
+     """
+     Execute the complete Phase 0 clarification workflow.
+
+     This is the main entry point for orchestrators. It handles:
+     1. Ambiguity detection
+     2. Question generation
+     3. User interaction (if an ask function is provided)
+     4. Prompt enrichment
+
+     Args:
+         user_prompt: Original user request
+         command_context: Optional context (e.g., {"command": "speckit.specify"})
+         ask_user_question_func: Function to ask questions (e.g., the AskUserQuestion tool).
+             If None, returns questions for manual handling.
+
+     Returns:
+         {
+             "enriched_prompt": str,              # Enriched or original prompt
+             "clarification_occurred": bool,      # True if clarification happened
+             "clarification_data": Dict or None,  # Full clarification data (for logging)
+             "questions": List[Dict],             # Questions asked (if any)
+
+             # Only if ask_user_question_func is None:
+             "needs_manual_questioning": bool,    # True if manual question handling needed
+             "summary": str                       # Clarification summary to show user
+         }
+
+     Example (automatic mode - with AskUserQuestion):
+         from clarification import execute_workflow
+
+         result = execute_workflow(
+             user_prompt="check the API",
+             ask_user_question_func=AskUserQuestion  # Claude Code tool
+         )
+         enriched_prompt = result["enriched_prompt"]
+
+     Example (manual mode - for custom UX):
+         result = execute_workflow("check the API")
+
+         if result.get("needs_manual_questioning"):
+             print(result["summary"])
+             for q in result["questions"]:
+                 # Show question to user with custom UI
+                 # Get user response
+                 # Then call process_clarification manually
+     """
+     # Step 1: Detect ambiguity
+     clarification = request_clarification(
+         user_prompt=user_prompt,
+         command_context=command_context or {"command": "general_prompt"}
+     )
+
+     # Step 2: Decision point - no clarification needed
+     if not clarification["needs_clarification"]:
+         return {
+             "enriched_prompt": user_prompt,
+             "clarification_occurred": False,
+             "clarification_data": None,
+             "questions": []
+         }
+
+     # Step 3: If no ask function provided, return questions for manual handling
+     if ask_user_question_func is None:
+         return {
+             "enriched_prompt": user_prompt,  # Not enriched yet
+             "clarification_occurred": False,
+             "clarification_data": clarification,
+             "questions": clarification["question_config"]["questions"],
+             "summary": clarification["summary"],
+             "needs_manual_questioning": True
+         }
+
+     # Step 4: Ask user questions (using the provided function)
+     try:
+         user_responses = ask_user_question_func(**clarification["question_config"])
+     except Exception as e:
+         # If questioning fails, fall back to the original prompt
+         print(f"Warning: Failed to ask clarification questions: {e}", file=sys.stderr)
+         return {
+             "enriched_prompt": user_prompt,
+             "clarification_occurred": False,
+             "clarification_data": None,
+             "questions": [],
+             "error": str(e)
+         }
+
+     # Step 5: Enrich prompt with responses
+     result = process_clarification(
+         engine_instance=clarification["engine_instance"],
+         original_prompt=user_prompt,
+         user_responses=user_responses.get("answers", {}),
+         clarification_context=clarification["clarification_context"]
+     )
+
+     return {
+         "enriched_prompt": result["enriched_prompt"],
+         "clarification_occurred": True,
+         "clarification_data": {
+             "original_prompt": user_prompt,
+             "ambiguity_score": clarification.get("ambiguity_score", 0),
+             "patterns_detected": [a["pattern"] for a in clarification.get("ambiguity_points", [])],
+             "user_responses": user_responses.get("answers", {})
+         },
+         "questions": clarification["question_config"]["questions"]
+     }
+
+
+ def should_skip_clarification(
+     user_prompt: str,
+     command_context: Optional[Dict[str, Any]] = None
+ ) -> bool:
+     """
+     Quick check whether Phase 0 should be skipped for this prompt.
+
+     Skip clarification for:
+     - System commands (starting with "/")
+     - Read-only queries with low ambiguity ("show", "get", "list")
+     - Already-specific prompts (service name, namespace, etc. included)
+
+     Args:
+         user_prompt: User's request
+         command_context: Optional command context
+
+     Returns:
+         True if Phase 0 should be skipped, False otherwise
+
+     Example:
+         if should_skip_clarification("/help"):
+             ...  # Skip Phase 0
+         else:
+             ...  # Execute Phase 0
+     """
+     # Skip system commands
+     if user_prompt.strip().startswith("/"):
+         return True
+
+     # For read-only queries, use a higher ambiguity threshold
+     read_only_keywords = ["show", "get", "list", "ver", "mostrar", "listar", "view"]
+     if any(keyword in user_prompt.lower() for keyword in read_only_keywords):
+         clarification = request_clarification(user_prompt, command_context)
+         # Higher threshold (50 instead of 30) for read-only operations
+         return clarification.get("ambiguity_score", 0) < 50
+
+     return False
+
+
+ def get_clarification_summary(clarification_data: Dict[str, Any]) -> str:
+     """
+     Generate a human-readable summary of the clarification that occurred.
+
+     Useful for logging or displaying to the user.
+
+     Args:
+         clarification_data: Clarification data from execute_workflow()
+
+     Returns:
+         String summary (multi-line)
+
+     Example:
+         result = execute_workflow("check the API")
+         if result["clarification_occurred"]:
+             print(get_clarification_summary(result["clarification_data"]))
+     """
+     if not clarification_data:
+         return "No clarification needed"
+
+     original = clarification_data.get("original_prompt", "N/A")
+     score = clarification_data.get("ambiguity_score", 0)
+     patterns = clarification_data.get("patterns_detected", [])
+     responses = clarification_data.get("user_responses", {})
+
+     lines = []
+     lines.append("=" * 60)
+     lines.append("PHASE 0 CLARIFICATION SUMMARY")
+     lines.append("=" * 60)
+     lines.append(f"Original prompt: {original}")
+     lines.append(f"Ambiguity score: {score}/100")
+     lines.append(f"Patterns detected: {', '.join(patterns)}")
+     lines.append("\nUser responses:")
+     for question_id, answer in responses.items():
+         lines.append(f"  {question_id}: {answer}")
+     lines.append("=" * 60)
+
+     return "\n".join(lines)