tech-hub-skills 1.2.0 → 1.5.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/{LICENSE → .claude/LICENSE} +21 -21
- package/.claude/README.md +291 -0
- package/.claude/bin/cli.js +266 -0
- package/{bin → .claude/bin}/copilot.js +182 -182
- package/{bin → .claude/bin}/postinstall.js +42 -42
- package/{tech_hub_skills/skills → .claude/commands}/README.md +336 -336
- package/{tech_hub_skills/skills → .claude/commands}/ai-engineer.md +104 -104
- package/{tech_hub_skills/skills → .claude/commands}/aws.md +143 -143
- package/{tech_hub_skills/skills → .claude/commands}/azure.md +149 -149
- package/{tech_hub_skills/skills → .claude/commands}/backend-developer.md +108 -108
- package/{tech_hub_skills/skills → .claude/commands}/code-review.md +399 -399
- package/{tech_hub_skills/skills → .claude/commands}/compliance-automation.md +747 -747
- package/{tech_hub_skills/skills → .claude/commands}/compliance-officer.md +108 -108
- package/{tech_hub_skills/skills → .claude/commands}/data-engineer.md +113 -113
- package/{tech_hub_skills/skills → .claude/commands}/data-governance.md +102 -102
- package/{tech_hub_skills/skills → .claude/commands}/data-scientist.md +123 -123
- package/{tech_hub_skills/skills → .claude/commands}/database-admin.md +109 -109
- package/{tech_hub_skills/skills → .claude/commands}/devops.md +160 -160
- package/{tech_hub_skills/skills → .claude/commands}/docker.md +160 -160
- package/{tech_hub_skills/skills → .claude/commands}/enterprise-dashboard.md +613 -613
- package/{tech_hub_skills/skills → .claude/commands}/finops.md +184 -184
- package/{tech_hub_skills/skills → .claude/commands}/frontend-developer.md +108 -108
- package/{tech_hub_skills/skills → .claude/commands}/gcp.md +143 -143
- package/{tech_hub_skills/skills → .claude/commands}/ml-engineer.md +115 -115
- package/{tech_hub_skills/skills → .claude/commands}/mlops.md +187 -187
- package/{tech_hub_skills/skills → .claude/commands}/network-engineer.md +109 -109
- package/{tech_hub_skills/skills → .claude/commands}/optimization-advisor.md +329 -329
- package/{tech_hub_skills/skills → .claude/commands}/orchestrator.md +623 -623
- package/{tech_hub_skills/skills → .claude/commands}/platform-engineer.md +102 -102
- package/{tech_hub_skills/skills → .claude/commands}/process-automation.md +226 -226
- package/{tech_hub_skills/skills → .claude/commands}/process-changelog.md +184 -184
- package/{tech_hub_skills/skills → .claude/commands}/process-documentation.md +484 -484
- package/{tech_hub_skills/skills → .claude/commands}/process-kanban.md +324 -324
- package/{tech_hub_skills/skills → .claude/commands}/process-versioning.md +214 -214
- package/{tech_hub_skills/skills → .claude/commands}/product-designer.md +104 -104
- package/{tech_hub_skills/skills → .claude/commands}/project-starter.md +443 -443
- package/{tech_hub_skills/skills → .claude/commands}/qa-engineer.md +109 -109
- package/{tech_hub_skills/skills → .claude/commands}/security-architect.md +135 -135
- package/{tech_hub_skills/skills → .claude/commands}/sre.md +109 -109
- package/{tech_hub_skills/skills → .claude/commands}/system-design.md +126 -126
- package/{tech_hub_skills/skills → .claude/commands}/technical-writer.md +101 -101
- package/.claude/package.json +46 -0
- package/{tech_hub_skills → .claude}/roles/ai-engineer/skills/01-prompt-engineering/README.md +252 -252
- package/.claude/roles/ai-engineer/skills/01-prompt-engineering/prompt_ab_tester.py +356 -0
- package/.claude/roles/ai-engineer/skills/01-prompt-engineering/prompt_template_manager.py +274 -0
- package/.claude/roles/ai-engineer/skills/01-prompt-engineering/token_cost_estimator.py +324 -0
- package/{tech_hub_skills → .claude}/roles/ai-engineer/skills/02-rag-pipeline/README.md +448 -448
- package/.claude/roles/ai-engineer/skills/02-rag-pipeline/document_chunker.py +336 -0
- package/.claude/roles/ai-engineer/skills/02-rag-pipeline/rag_pipeline.sql +213 -0
- package/{tech_hub_skills → .claude}/roles/ai-engineer/skills/03-agent-orchestration/README.md +599 -599
- package/{tech_hub_skills → .claude}/roles/ai-engineer/skills/04-llm-guardrails/README.md +735 -735
- package/{tech_hub_skills → .claude}/roles/ai-engineer/skills/05-vector-embeddings/README.md +711 -711
- package/{tech_hub_skills → .claude}/roles/ai-engineer/skills/06-llm-evaluation/README.md +777 -777
- package/{tech_hub_skills → .claude}/roles/azure/skills/01-infrastructure-fundamentals/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/azure/skills/02-data-factory/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/azure/skills/03-synapse-analytics/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/azure/skills/04-databricks/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/azure/skills/05-functions/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/azure/skills/06-kubernetes-service/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/azure/skills/07-openai-service/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/azure/skills/08-machine-learning/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/azure/skills/09-storage-adls/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/azure/skills/10-networking/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/azure/skills/11-sql-cosmos/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/azure/skills/12-event-hubs/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/code-review/skills/01-automated-code-review/README.md +394 -394
- package/{tech_hub_skills → .claude}/roles/code-review/skills/02-pr-review-workflow/README.md +427 -427
- package/{tech_hub_skills → .claude}/roles/code-review/skills/03-code-quality-gates/README.md +518 -518
- package/{tech_hub_skills → .claude}/roles/code-review/skills/04-reviewer-assignment/README.md +504 -504
- package/{tech_hub_skills → .claude}/roles/code-review/skills/05-review-analytics/README.md +540 -540
- package/{tech_hub_skills → .claude}/roles/data-engineer/skills/01-lakehouse-architecture/README.md +550 -550
- package/.claude/roles/data-engineer/skills/01-lakehouse-architecture/bronze_ingestion.py +337 -0
- package/.claude/roles/data-engineer/skills/01-lakehouse-architecture/medallion_queries.sql +300 -0
- package/{tech_hub_skills → .claude}/roles/data-engineer/skills/02-etl-pipeline/README.md +580 -580
- package/{tech_hub_skills → .claude}/roles/data-engineer/skills/03-data-quality/README.md +579 -579
- package/{tech_hub_skills → .claude}/roles/data-engineer/skills/04-streaming-pipelines/README.md +608 -608
- package/{tech_hub_skills → .claude}/roles/data-engineer/skills/05-performance-optimization/README.md +547 -547
- package/{tech_hub_skills → .claude}/roles/data-governance/skills/01-data-catalog/README.md +112 -112
- package/{tech_hub_skills → .claude}/roles/data-governance/skills/02-data-lineage/README.md +129 -129
- package/{tech_hub_skills → .claude}/roles/data-governance/skills/03-data-quality-framework/README.md +182 -182
- package/{tech_hub_skills → .claude}/roles/data-governance/skills/04-access-control/README.md +39 -39
- package/{tech_hub_skills → .claude}/roles/data-governance/skills/05-master-data-management/README.md +40 -40
- package/{tech_hub_skills → .claude}/roles/data-governance/skills/06-compliance-privacy/README.md +46 -46
- package/{tech_hub_skills → .claude}/roles/data-scientist/skills/01-eda-automation/README.md +230 -230
- package/.claude/roles/data-scientist/skills/01-eda-automation/eda_generator.py +446 -0
- package/{tech_hub_skills → .claude}/roles/data-scientist/skills/02-statistical-modeling/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/data-scientist/skills/03-feature-engineering/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/data-scientist/skills/04-predictive-modeling/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/data-scientist/skills/05-customer-analytics/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/data-scientist/skills/06-campaign-analysis/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/data-scientist/skills/07-experimentation/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/data-scientist/skills/08-data-visualization/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/devops/skills/01-cicd-pipeline/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/devops/skills/02-container-orchestration/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/devops/skills/03-infrastructure-as-code/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/devops/skills/04-gitops/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/devops/skills/05-environment-management/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/devops/skills/06-automated-testing/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/devops/skills/07-release-management/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/devops/skills/08-monitoring-alerting/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/devops/skills/09-devsecops/README.md +265 -265
- package/{tech_hub_skills → .claude}/roles/finops/skills/01-cost-visibility/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/finops/skills/02-resource-tagging/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/finops/skills/03-budget-management/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/finops/skills/04-reserved-instances/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/finops/skills/05-spot-optimization/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/finops/skills/06-storage-tiering/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/finops/skills/07-compute-rightsizing/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/finops/skills/08-chargeback/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/ml-engineer/skills/01-mlops-pipeline/README.md +566 -566
- package/{tech_hub_skills → .claude}/roles/ml-engineer/skills/02-feature-engineering/README.md +655 -655
- package/{tech_hub_skills → .claude}/roles/ml-engineer/skills/03-model-training/README.md +704 -704
- package/{tech_hub_skills → .claude}/roles/ml-engineer/skills/04-model-serving/README.md +845 -845
- package/{tech_hub_skills → .claude}/roles/ml-engineer/skills/05-model-monitoring/README.md +874 -874
- package/{tech_hub_skills → .claude}/roles/mlops/skills/01-ml-pipeline-orchestration/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/mlops/skills/02-experiment-tracking/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/mlops/skills/03-model-registry/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/mlops/skills/04-feature-store/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/mlops/skills/05-model-deployment/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/mlops/skills/06-model-observability/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/mlops/skills/07-data-versioning/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/mlops/skills/08-ab-testing/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/mlops/skills/09-automated-retraining/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/platform-engineer/skills/01-internal-developer-platform/README.md +153 -153
- package/{tech_hub_skills → .claude}/roles/platform-engineer/skills/02-self-service-infrastructure/README.md +57 -57
- package/{tech_hub_skills → .claude}/roles/platform-engineer/skills/03-slo-sli-management/README.md +59 -59
- package/{tech_hub_skills → .claude}/roles/platform-engineer/skills/04-developer-experience/README.md +57 -57
- package/{tech_hub_skills → .claude}/roles/platform-engineer/skills/05-incident-management/README.md +73 -73
- package/{tech_hub_skills → .claude}/roles/platform-engineer/skills/06-capacity-management/README.md +59 -59
- package/{tech_hub_skills → .claude}/roles/product-designer/skills/01-requirements-discovery/README.md +407 -407
- package/{tech_hub_skills → .claude}/roles/product-designer/skills/02-user-research/README.md +382 -382
- package/{tech_hub_skills → .claude}/roles/product-designer/skills/03-brainstorming-ideation/README.md +437 -437
- package/{tech_hub_skills → .claude}/roles/product-designer/skills/04-ux-design/README.md +496 -496
- package/{tech_hub_skills → .claude}/roles/product-designer/skills/05-product-market-fit/README.md +376 -376
- package/{tech_hub_skills → .claude}/roles/product-designer/skills/06-stakeholder-management/README.md +412 -412
- package/{tech_hub_skills → .claude}/roles/security-architect/skills/01-pii-detection/README.md +319 -319
- package/{tech_hub_skills → .claude}/roles/security-architect/skills/02-threat-modeling/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/security-architect/skills/03-infrastructure-security/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/security-architect/skills/04-iam/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/security-architect/skills/05-application-security/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/security-architect/skills/06-secrets-management/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/security-architect/skills/07-security-monitoring/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/system-design/skills/01-architecture-patterns/README.md +337 -337
- package/{tech_hub_skills → .claude}/roles/system-design/skills/02-requirements-engineering/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/system-design/skills/03-scalability/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/system-design/skills/04-high-availability/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/system-design/skills/05-cost-optimization-design/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/system-design/skills/06-api-design/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/system-design/skills/07-observability-architecture/README.md +264 -264
- package/{tech_hub_skills → .claude}/roles/system-design/skills/08-process-automation/PROCESS_TEMPLATE.md +336 -336
- package/{tech_hub_skills → .claude}/roles/system-design/skills/08-process-automation/README.md +521 -521
- package/.claude/roles/system-design/skills/08-process-automation/ai_prompt_generator.py +744 -0
- package/.claude/roles/system-design/skills/08-process-automation/automation_recommender.py +688 -0
- package/.claude/roles/system-design/skills/08-process-automation/plan_generator.py +679 -0
- package/.claude/roles/system-design/skills/08-process-automation/process_analyzer.py +528 -0
- package/.claude/roles/system-design/skills/08-process-automation/process_parser.py +684 -0
- package/.claude/roles/system-design/skills/08-process-automation/role_matcher.py +615 -0
- package/.claude/skills/README.md +336 -0
- package/.claude/skills/ai-engineer.md +104 -0
- package/.claude/skills/aws.md +143 -0
- package/.claude/skills/azure.md +149 -0
- package/.claude/skills/backend-developer.md +108 -0
- package/.claude/skills/code-review.md +399 -0
- package/.claude/skills/compliance-automation.md +747 -0
- package/.claude/skills/compliance-officer.md +108 -0
- package/.claude/skills/data-engineer.md +113 -0
- package/.claude/skills/data-governance.md +102 -0
- package/.claude/skills/data-scientist.md +123 -0
- package/.claude/skills/database-admin.md +109 -0
- package/.claude/skills/devops.md +160 -0
- package/.claude/skills/docker.md +160 -0
- package/.claude/skills/enterprise-dashboard.md +613 -0
- package/.claude/skills/finops.md +184 -0
- package/.claude/skills/frontend-developer.md +108 -0
- package/.claude/skills/gcp.md +143 -0
- package/.claude/skills/ml-engineer.md +115 -0
- package/.claude/skills/mlops.md +187 -0
- package/.claude/skills/network-engineer.md +109 -0
- package/.claude/skills/optimization-advisor.md +329 -0
- package/.claude/skills/orchestrator.md +623 -0
- package/.claude/skills/platform-engineer.md +102 -0
- package/.claude/skills/process-automation.md +226 -0
- package/.claude/skills/process-changelog.md +184 -0
- package/.claude/skills/process-documentation.md +484 -0
- package/.claude/skills/process-kanban.md +324 -0
- package/.claude/skills/process-versioning.md +214 -0
- package/.claude/skills/product-designer.md +104 -0
- package/.claude/skills/project-starter.md +443 -0
- package/.claude/skills/qa-engineer.md +109 -0
- package/.claude/skills/security-architect.md +135 -0
- package/.claude/skills/sre.md +109 -0
- package/.claude/skills/system-design.md +126 -0
- package/.claude/skills/technical-writer.md +101 -0
- package/.gitattributes +2 -0
- package/GITHUB_COPILOT.md +106 -0
- package/README.md +192 -291
- package/package.json +16 -46
- package/bin/cli.js +0 -241

@@ -1,735 +1,735 @@

# Skill 4: LLM Guardrails & Safety

## 🎯 Overview
Implement comprehensive safety mechanisms for LLM applications, including content filtering, bias detection, hallucination prevention, and compliance controls for production deployments.

## 🔗 Connections
- **Data Engineer**: Training data filtering, safety metrics storage (de-01, de-03)
- **Security Architect**: PII detection, prompt injection prevention (sa-01, sa-08)
- **ML Engineer**: Safety model fine-tuning and deployment (ml-03, ml-04)
- **MLOps**: Safety metrics monitoring, guardrail versioning (mo-01, mo-04)
- **FinOps**: Guardrail execution cost optimization (fo-01, fo-07)
- **DevOps**: Guardrail service deployment, failover mechanisms (do-01, do-03)
- **Data Scientist**: Bias analysis, safety model evaluation (ds-01, ds-08)

## 🛠️ Tools Included

### 1. `content_filter.py`
Multi-layer content filtering for harmful, toxic, and inappropriate outputs with custom policies.

### 2. `hallucination_detector.py`
Fact-checking and source verification system to detect and prevent hallucinations.

### 3. `bias_detector.py`
Identify and mitigate demographic, gender, and cultural biases in model outputs.

### 4. `prompt_injection_guard.py`
Defense against prompt injection, jailbreaking, and adversarial attacks.

### 5. `compliance_checker.py`
Industry-specific compliance validation (HIPAA, GDPR, financial regulations).

## 📊 Key Metrics
- Content filter accuracy (precision/recall)
- Hallucination detection rate
- Bias score across demographics
- Prompt injection block rate
- Compliance violation prevention rate

## 🚀 Quick Start

```python
from llm_guardrails import GuardrailPipeline
from anthropic import Anthropic

# Initialize guardrails
guardrails = GuardrailPipeline(
    filters=["toxicity", "pii", "bias", "hallucination"],
    compliance_standards=["gdpr", "hipaa"],
    strictness_level="high"
)

client = Anthropic()

# Wrap LLM calls with guardrails
def safe_llm_call(prompt: str, user_context: dict = None):
    # Pre-processing guardrails
    validated_prompt = guardrails.validate_input(
        prompt=prompt,
        user_context=user_context
    )

    if not validated_prompt.safe:
        return {
            "blocked": True,
            "reason": validated_prompt.violation_reason,
            "severity": validated_prompt.severity
        }

    # Make LLM call
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        messages=[{"role": "user", "content": validated_prompt.sanitized_prompt}]
    )

    # Post-processing guardrails
    validated_output = guardrails.validate_output(
        output=response.content[0].text,
        prompt=prompt,
        context=user_context
    )

    if not validated_output.safe:
        return {
            "blocked": True,
            "reason": validated_output.violation_reason,
            "alternative": validated_output.safe_alternative
        }

    return {
        "blocked": False,
        "response": validated_output.sanitized_output,
        "safety_score": validated_output.safety_score
    }

# Use with safety guarantees
result = safe_llm_call("Explain the treatment protocol for diabetes")
print(result["response"])
```

## 📚 Best Practices

### Cost Optimization (FinOps Integration)

1. **Optimize Guardrail Execution Order** (see the sketch after this list)
   - Run cheap filters first (regex, keyword matching)
   - Use ML-based filters only when needed
   - Implement early termination for obvious violations
   - Cache guardrail results for similar inputs
   - Reference: FinOps fo-07 (AI/ML Cost Optimization)

2. **Batch Guardrail Processing**
   - Process multiple inputs in batches
   - Amortize model loading costs
   - Use batch APIs for classification models
   - Implement async processing for non-blocking checks
   - Reference: FinOps fo-03 (Budget Management)

3. **Tiered Guardrail Strategies**
   - Light filtering for low-risk applications
   - Comprehensive checks for high-risk domains
   - Dynamic filtering based on user trust scores
   - Cost-aware filter selection
   - Reference: FinOps fo-01 (Cost Monitoring)

4. **Cache Guardrail Results**
   - Cache validation results with semantic similarity
   - Reuse PII detection results
   - Cache compliance check outcomes
   - Monitor cache hit rates for optimization
   - Reference: ai-01 (Prompt Caching)
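
The execution-order guidance in practice 1 can be made concrete with a minimal sketch. Everything here — filter names, patterns, and per-check costs — is an illustrative placeholder rather than part of this package; the point is simply that the cheap regex tier runs first and a violation there terminates the chain before the expensive ML tier is ever invoked.

```python
import re
from typing import Optional

def regex_profanity(text: str) -> Optional[str]:
    # Tier 1: cheap pattern check; returns a violation reason or None
    return "profanity" if re.search(r"\b(denylisted|terms)\b", text, re.I) else None

def ml_toxicity(text: str) -> Optional[str]:
    # Tier 2: stands in for an expensive ML classifier call
    return None

# Ordered cheapest → most expensive, with illustrative per-check costs
FILTERS = [
    ("regex_profanity", regex_profanity, 0.0001),
    ("ml_toxicity", ml_toxicity, 0.001),
]

def validate(text: str) -> dict:
    spent = 0.0
    for name, check, cost in FILTERS:
        spent += cost
        reason = check(text)
        if reason:
            # Early termination: obvious violations never reach the ML tier
            return {"safe": False, "filter": name, "reason": reason, "cost": spent}
    return {"safe": True, "cost": spent}
```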

### Security & Privacy (Security Architect Integration)

5. **PII Detection & Redaction**
   - Scan all inputs and outputs for PII
   - Redact or mask sensitive information
   - Maintain audit trail of PII handling
   - Comply with data protection regulations
   - Reference: Security Architect sa-01 (PII Detection)

6. **Prompt Injection Prevention**
   - Detect and block prompt injection attempts
   - Implement input sanitization
   - Use structured prompts with clear boundaries
   - Monitor for jailbreaking patterns
   - Reference: Security Architect sa-08 (LLM Security)

7. **Access Control & Audit Logging** (see the sketch after this list)
   - Log all guardrail violations
   - Implement RBAC for guardrail configuration
   - Track user-level safety metrics
   - Alert on suspicious patterns
   - Reference: Security Architect sa-02 (IAM), sa-06 (Data Governance)
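
For practice 7, a minimal violation-logging sketch using only the standard library; the event fields are illustrative and should be adapted to your SIEM or audit schema:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
violation_log = logging.getLogger("guardrail.violations")

def log_violation(user_id: str, filter_name: str, severity: str, detail: str) -> None:
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,  # enables user-level safety metrics and alerting
        "filter": filter_name,
        "severity": severity,
        "detail": detail,
    }
    # Structured JSON so downstream alerting can match on fields
    violation_log.warning(json.dumps(event))

log_violation("u-123", "prompt_injection", "high", "pattern: ignore previous instructions")
```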

### Data Quality & Governance (Data Engineer Integration)

8. **Training Data Filtering**
   - Apply guardrails to training datasets
   - Remove toxic and biased examples
   - Validate data quality before fine-tuning
   - Track data lineage for safety-critical data
   - Reference: Data Engineer de-03 (Data Quality)

9. **Safety Metrics Storage** (see the sketch after this list)
   - Persist guardrail execution results
   - Store violation patterns for analysis
   - Track safety metrics over time
   - Enable historical safety audits
   - Reference: Data Engineer de-01 (Data Ingestion)
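
For practice 9, a minimal persistence sketch; SQLite stands in for whatever store your de-01 ingestion lands in, and the schema is illustrative:

```python
import sqlite3
from datetime import datetime, timezone
from typing import Optional

conn = sqlite3.connect("safety_metrics.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS guardrail_results "
    "(ts TEXT, filter TEXT, safe INTEGER, violation TEXT)"
)

def record_result(filter_name: str, safe: bool, violation: Optional[str]) -> None:
    conn.execute(
        "INSERT INTO guardrail_results VALUES (?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), filter_name, int(safe), violation),
    )
    conn.commit()

record_result("toxicity_classifier", False, "hate_speech")

# Violation rate per filter over time enables historical safety audits
for name, rate in conn.execute(
    "SELECT filter, AVG(1 - safe) FROM guardrail_results GROUP BY filter"
):
    print(name, rate)
```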

### Model Lifecycle Management (MLOps Integration)

10. **Guardrail Model Versioning**
    - Version all safety models in registry
    - Track guardrail model performance
    - A/B test new guardrail versions
    - Rollback capability for safety regressions
    - Reference: MLOps mo-01 (Model Registry), mo-03 (Versioning)

11. **Safety Metrics Monitoring**
    - Track false positive/negative rates
    - Monitor guardrail execution latency
    - Alert on guardrail failures or bypasses
    - Dashboard for real-time safety metrics
    - Reference: MLOps mo-04 (Monitoring)

12. **Guardrail Drift Detection** (see the sketch after this list)
    - Monitor changes in violation patterns
    - Detect emerging attack vectors
    - Track effectiveness degradation
    - Retrain safety models as needed
    - Reference: MLOps mo-05 (Drift Detection)
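
For practice 12, a minimal drift-check sketch: compare the violation rate over a rolling window against a historical baseline. The window size, baseline, and tolerance are illustrative:

```python
from collections import deque

class ViolationDriftMonitor:
    def __init__(self, baseline_rate: float, window: int = 500, tolerance: float = 2.0):
        self.baseline = baseline_rate       # e.g. measured over the last month
        self.recent = deque(maxlen=window)  # rolling window of outcomes
        self.tolerance = tolerance          # alert if the rate doubles

    def observe(self, violated: bool) -> bool:
        """Record one guardrail outcome; return True if drift is suspected."""
        self.recent.append(violated)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough data yet
        rate = sum(self.recent) / len(self.recent)
        return rate > self.baseline * self.tolerance

monitor = ViolationDriftMonitor(baseline_rate=0.02)
if monitor.observe(violated=True):
    print("Violation pattern drift — consider retraining safety models")
```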

### Deployment & Operations (DevOps Integration)

13. **Deploy Guardrails as Microservices**
    - Separate service for each guardrail type
    - Independent scaling based on load
    - Circuit breakers for guardrail failures
    - Health checks and monitoring
    - Reference: DevOps do-03 (Containerization)

14. **CI/CD for Guardrail Updates**
    - Automated testing for guardrail changes
    - Canary deployments for new filters
    - Rollback on increased false positives
    - Continuous safety benchmarking
    - Reference: DevOps do-01 (CI/CD)

15. **High Availability for Safety Systems** (see the fail-closed sketch after this list)
    - Multi-region guardrail deployment
    - Fallback to conservative filtering on failures
    - Load balancing across guardrail instances
    - Zero-downtime updates
    - Reference: DevOps do-04 (High Availability)
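
For practices 13 and 15, a minimal fail-closed circuit-breaker sketch: when the guardrail service keeps erroring, requests are blocked rather than allowed through unchecked. Thresholds are illustrative:

```python
import time

class GuardrailCircuitBreaker:
    def __init__(self, max_failures: int = 3, reset_after_s: float = 30.0):
        self.failures = 0
        self.max_failures = max_failures
        self.reset_after_s = reset_after_s
        self.opened_at = None

    def call(self, check, text: str) -> dict:
        if self.opened_at and time.time() - self.opened_at < self.reset_after_s:
            # Circuit open: conservative fallback — block by default
            return {"safe": False, "reason": "guardrail unavailable (fail-closed)"}
        try:
            result = check(text)
            self.failures, self.opened_at = 0, None
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()  # trip the breaker
            return {"safe": False, "reason": "guardrail error (fail-closed)"}

def toxicity_check(text: str) -> dict:
    raise TimeoutError("guardrail service down")  # simulate an outage

breaker = GuardrailCircuitBreaker()
print(breaker.call(toxicity_check, "hello"))  # blocked, not passed through
```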

### Azure-Specific Best Practices

16. **Azure AI Content Safety** (see the sketch after this list)
    - Integrate Azure Content Safety API
    - Use managed safety models
    - Enable custom categories for domain-specific filtering
    - Monitor via Azure Monitor
    - Reference: Azure az-04 (AI/ML Services)

17. **Azure OpenAI Safety Features**
    - Enable content filtering in Azure OpenAI
    - Use content filtering configurations
    - Implement custom blocklists
    - Monitor safety events in Application Insights
    - Reference: Azure az-05 (Azure OpenAI)
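
For practice 16, a minimal sketch assuming the `azure-ai-contentsafety` Python SDK (1.x, `pip install azure-ai-contentsafety`); the endpoint and key are placeholders read from the environment:

```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

response = client.analyze_text(AnalyzeTextOptions(text="Text to screen"))

# Each analyzed category (hate, self-harm, sexual, violence) carries a severity
for item in response.categories_analysis:
    print(item.category, item.severity)
```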

## 💰 Cost Optimization Examples

### Tiered Guardrail Strategy
```python
from llm_guardrails import GuardrailPipeline, FilterLevel

class CostOptimizedGuardrails:
    def __init__(self):
        # Define tiered filtering strategies
        self.light_filters = GuardrailPipeline(
            filters=["regex_profanity", "keyword_blocklist"],
            level=FilterLevel.LIGHT
        )

        self.standard_filters = GuardrailPipeline(
            filters=["toxicity_classifier", "pii_detection"],
            level=FilterLevel.STANDARD
        )

        self.comprehensive_filters = GuardrailPipeline(
            filters=[
                "toxicity_classifier",
                "pii_detection",
                "bias_detector",
                "hallucination_checker",
                "compliance_validator"
            ],
            level=FilterLevel.COMPREHENSIVE
        )

    def select_filters(self, user_trust_score: float, content_risk: str):
        """Select appropriate filter level based on context."""
        if user_trust_score > 0.9 and content_risk == "low":
            return self.light_filters  # $0.0001 per request

        elif user_trust_score > 0.7 and content_risk in ["low", "medium"]:
            return self.standard_filters  # $0.001 per request

        else:
            return self.comprehensive_filters  # $0.005 per request

    def validate(self, prompt: str, user_context: dict):
        filters = self.select_filters(
            user_trust_score=user_context.get("trust_score", 0.5),
            content_risk=user_context.get("risk_level", "high")
        )

        return filters.validate_input(prompt)

# Usage
guardrails = CostOptimizedGuardrails()

# Low-risk user, low-risk content → cheap filtering
result = guardrails.validate(
    prompt="What's the weather today?",
    user_context={"trust_score": 0.95, "risk_level": "low"}
)

# High-risk content → comprehensive filtering
result = guardrails.validate(
    prompt="Provide medical advice for my condition",
    user_context={"trust_score": 0.5, "risk_level": "high"}
)
```

### Cached Guardrail Results
```python
from llm_guardrails import GuardrailPipeline
from semantic_cache import SemanticCache

class CachedGuardrailPipeline:
    def __init__(self):
        self.guardrails = GuardrailPipeline()
        self.semantic_cache = SemanticCache(
            similarity_threshold=0.95,
            ttl_seconds=3600
        )

    def validate_input(self, prompt: str, user_context: dict = None):
        # Check semantic cache for similar prompts
        cached_result = self.semantic_cache.get(prompt)
        if cached_result:
            print("✅ Cache hit - guardrail cost saved!")
            return cached_result

        # Run guardrails
        result = self.guardrails.validate_input(prompt, user_context)

        # Cache the result
        if result.safe:  # Only cache safe results
            self.semantic_cache.set(prompt, result)

        return result

# Track cost savings
guardrails = CachedGuardrailPipeline()

# Generate cost report
savings_report = guardrails.semantic_cache.get_savings_report()
print(f"Cache hit rate: {savings_report.hit_rate:.2%}")
print(f"Requests saved: {savings_report.hits}")
print(f"Cost savings: ${savings_report.cost_saved:.4f}")
```

### Batch Processing for Guardrails
```python
import asyncio
from typing import List

from llm_guardrails import BatchGuardrailPipeline

class BatchGuardrails:
    def __init__(self):
        self.pipeline = BatchGuardrailPipeline(
            batch_size=32,   # Process 32 items at once
            max_wait_ms=100  # Wait up to 100ms to fill batch
        )

    async def validate_async(self, prompts: List[str]):
        """Validate multiple prompts efficiently."""
        # Batch processing reduces per-item cost by 70%
        results = await self.pipeline.validate_batch(prompts)

        return [
            {
                "prompt": prompt,
                "safe": result.safe,
                "violations": result.violations
            }
            for prompt, result in zip(prompts, results)
        ]

# Usage for high-throughput applications
guardrails = BatchGuardrails()

# Validate 100 prompts in batches of 32
prompts = [f"User query {i}" for i in range(100)]
results = asyncio.run(guardrails.validate_async(prompts))

# Cost comparison:
# Individual processing: 100 requests × $0.001 = $0.10
# Batch processing: 4 batches × $0.008 = $0.032 (68% savings)
```

## 🔒 Security Best Practices Examples

### Comprehensive PII Detection
```python
import logging
from datetime import datetime, timezone

from pii_detector import PIIDetector  # from sa-01
from data_anonymizer import DataAnonymizer

audit_logger = logging.getLogger("pii_audit")

class PIIGuardrail:
    def __init__(self):
        self.detector = PIIDetector()
        self.anonymizer = DataAnonymizer()

    def validate_and_sanitize(self, text: str, mode: str = "redact"):
        """Detect and handle PII in text."""
        # Detect PII
        findings = self.detector.analyze_text(text)

        if not findings:
            return {
                "safe": True,
                "sanitized_text": text,
                "pii_found": False
            }

        # Handle based on mode
        if mode == "redact":
            sanitized = self.anonymizer.redact_pii(text, findings)
        elif mode == "mask":
            sanitized = self.anonymizer.mask_pii(text, findings)
        elif mode == "block":
            return {
                "safe": False,
                "reason": "PII detected - request blocked",
                "pii_types": [f.entity_type for f in findings]
            }

        # Log for compliance
        self._log_pii_handling({
            "pii_types": [f.entity_type for f in findings],
            "action": mode,
            "timestamp": datetime.now(timezone.utc)
        })

        return {
            "safe": True,
            "sanitized_text": sanitized,
            "pii_found": True,
            "pii_types": [f.entity_type for f in findings]
        }

    def _log_pii_handling(self, event: dict) -> None:
        # Minimal audit trail; persist to durable storage in production
        audit_logger.info("PII handled: %s", event)

# Integration with LLM pipeline (llm_call is a placeholder for your model client)
pii_guard = PIIGuardrail()

def safe_llm_call_with_pii_protection(prompt: str):
    # Check input
    input_result = pii_guard.validate_and_sanitize(prompt, mode="redact")

    if not input_result["safe"]:
        return {"blocked": True, "reason": input_result["reason"]}

    # Make LLM call with sanitized prompt
    response = llm_call(input_result["sanitized_text"])

    # Check output
    output_result = pii_guard.validate_and_sanitize(response, mode="mask")

    return {
        "response": output_result["sanitized_text"],
        "pii_found": input_result["pii_found"] or output_result["pii_found"]
    }
```

### Prompt Injection Defense
```python
import logging
import re

from prompt_injection_detector import PromptInjectionDetector  # from sa-08

security_logger = logging.getLogger("security")

class PromptInjectionGuardrail:
    def __init__(self):
        self.detector = PromptInjectionDetector()
        self.attack_patterns = [
            r"ignore previous instructions",
            r"disregard all prior",
            r"new instructions:",
            r"system:",
            r"<\|im_start\|>",
            # Add more patterns
        ]

    def validate(self, prompt: str):
        """Detect prompt injection attempts."""
        # Rule-based detection (fast)
        for pattern in self.attack_patterns:
            if re.search(pattern, prompt, re.IGNORECASE):
                return {
                    "safe": False,
                    "threat": "prompt_injection",
                    "confidence": 0.95,
                    "attack_type": "pattern_match",
                    "pattern_matched": pattern
                }

        # ML-based detection (comprehensive)
        ml_result = self.detector.analyze(prompt)

        if ml_result.injection_score > 0.7:
            return {
                "safe": False,
                "threat": "prompt_injection",
                "confidence": ml_result.injection_score,
                "attack_type": ml_result.attack_type
            }

        return {"safe": True, "confidence": 1 - ml_result.injection_score}

    def sanitize(self, prompt: str):
        """Sanitize potentially malicious prompts."""
        # Remove special tokens
        sanitized = re.sub(r'<\|.*?\|>', '', prompt)

        # Collapse double newlines used to break prompt structure
        sanitized = sanitized.replace('\n\n', ' ')

        # Enforce length limits
        if len(sanitized) > 2000:
            sanitized = sanitized[:2000]

        return sanitized

# Usage (llm_call is a placeholder for your model client)
injection_guard = PromptInjectionGuardrail()

def protected_llm_call(user_prompt: str):
    # Validate input
    validation = injection_guard.validate(user_prompt)

    if not validation["safe"]:
        # Log attack attempt
        security_logger.warning(
            f"Prompt injection blocked: {validation['attack_type']} "
            f"(confidence: {validation['confidence']:.2%})"
        )
        return {"blocked": True, "reason": "Security violation detected"}

    # Sanitize and proceed
    safe_prompt = injection_guard.sanitize(user_prompt)
    return llm_call(safe_prompt)
```

### Hallucination Detection
```python
from typing import List

from hallucination_detector import HallucinationDetector

class HallucinationGuardrail:
    def __init__(self):
        self.detector = HallucinationDetector()

    def validate_output(self, output: str, context: dict, sources: List[str]):
        """Check if LLM output is grounded in provided sources."""
        # Extract claims from output
        claims = self.detector.extract_claims(output)

        unverified_claims = []
        for claim in claims:
            # Check if claim is supported by sources
            verification = self.detector.verify_claim(
                claim=claim,
                sources=sources,
                context=context
            )

            if not verification.supported:
                unverified_claims.append({
                    "claim": claim,
                    "confidence": verification.confidence,
                    "reason": verification.reason
                })

        # Calculate hallucination score
        hallucination_score = len(unverified_claims) / len(claims) if claims else 0

        if hallucination_score > 0.3:  # More than 30% unverified
            return {
                "safe": False,
                "reason": "High hallucination risk",
                "score": hallucination_score,
                "unverified_claims": unverified_claims
            }

        return {
            "safe": True,
            "score": hallucination_score,
            "verified_claims": len(claims) - len(unverified_claims)
        }

# Integration with RAG pipeline (rag_pipeline and llm_generate are placeholders)
hallucination_guard = HallucinationGuardrail()

def rag_with_hallucination_check(query: str):
    # Retrieve context
    sources = rag_pipeline.retrieve(query, top_k=5)

    # Generate response
    response = llm_generate(query, sources)

    # Validate response
    validation = hallucination_guard.validate_output(
        output=response,
        context={"query": query},
        sources=sources
    )

    if not validation["safe"]:
        # Return conservative response or request more sources
        return {
            "response": "I don't have enough reliable information to answer this.",
            "reason": validation["reason"]
        }

    return {"response": response, "safety_score": 1 - validation["score"]}
```

## 📊 Enhanced Metrics & Monitoring

| Metric Category | Metric | Target | Tool |
|-----------------|--------|--------|------|
| **Content Safety** | Toxic content block rate | 100% | Azure Content Safety |
| | False positive rate | <2% | Custom evaluator |
| | Filter accuracy | >0.98 | MLflow |
| | Response time (p95) | <200ms | Azure Monitor |
| **PII Protection** | PII detection recall | >0.99 | Custom evaluator |
| | PII detection precision | >0.95 | Custom evaluator |
| | Redaction accuracy | >0.98 | Audit logs |
| **Prompt Injection** | Injection detection rate | >0.95 | Security monitor |
| | False positive rate | <5% | Security logs |
| | Attack pattern coverage | >90% | Security audit |
| **Hallucination** | Hallucination detection rate | >0.85 | Custom evaluator |
| | Fact-check accuracy | >0.90 | MLflow |
| | Source grounding score | >0.85 | Custom metric |
| **Costs** | Cost per guardrail check | <$0.002 | FinOps dashboard |
| | Cache hit rate | >60% | App Insights |
| **Compliance** | GDPR compliance rate | 100% | Compliance tracker |
| | HIPAA violation prevention | 100% | Audit logs |
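
Several targets above rely on a "custom evaluator". A minimal sketch of one, computing precision, recall, and the false-positive rate from labeled guardrail outcomes (the sample data is illustrative):

```python
def filter_metrics(labels: list, flagged: list) -> dict:
    # labels: ground truth "should block"; flagged: what the filter decided
    tp = sum(l and f for l, f in zip(labels, flagged))
    fp = sum((not l) and f for l, f in zip(labels, flagged))
    fn = sum(l and (not f) for l, f in zip(labels, flagged))
    tn = sum((not l) and (not f) for l, f in zip(labels, flagged))
    return {
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,  # target <2%
    }

print(filter_metrics([True, False, True, False], [True, False, False, False]))
```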

## 🚀 Deployment Pipeline

### CI/CD for Guardrail System
```yaml
# .github/workflows/guardrails-deployment.yml
name: Guardrails Deployment

on:
  push:
    paths:
      - 'guardrails/**'
    branches:
      - main

jobs:
  test-guardrails:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Unit test all guardrails
        run: pytest tests/test_guardrails.py -v

      - name: Test PII detection accuracy
        run: pytest tests/test_pii_detection.py --min-recall 0.99

      - name: Test prompt injection detection
        run: pytest tests/test_injection_detection.py --min-accuracy 0.95

      - name: Benchmark guardrail performance
        run: python scripts/benchmark_guardrails.py

      - name: Test false positive rates
        run: pytest tests/test_false_positives.py --max-fp-rate 0.02

  security-validation:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Validate security policies
        run: python scripts/validate_security_policies.py

      - name: Test adversarial examples
        run: pytest tests/test_adversarial.py

      - name: Compliance check
        run: python scripts/check_compliance.py --standards gdpr,hipaa

  deploy-guardrails:
    needs: [test-guardrails, security-validation]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build guardrail service
        run: docker build -t myregistry.azurecr.io/guardrails-service:${{ github.sha }} .

      - name: Push to Azure Container Registry
        run: |
          az acr login --name myregistry
          docker push myregistry.azurecr.io/guardrails-service:${{ github.sha }}

      - name: Deploy to AKS
        run: |
          kubectl set image deployment/guardrails-service \
            guardrails=myregistry.azurecr.io/guardrails-service:${{ github.sha }}

      - name: Run smoke tests
        run: python scripts/smoke_test_guardrails.py

      - name: Monitor guardrail metrics
        run: python scripts/monitor_guardrails.py --duration 1h --alert-on-regression
```

## 🔄 Integration Workflow

### End-to-End Guardrail Pipeline with All Roles
```
1. User Input Received
   ↓
2. Input Length & Format Validation
   ↓
3. Prompt Injection Detection (sa-08)
   ↓
4. PII Detection in Input (sa-01)
   ↓
5. Input Sanitization
   ↓
6. Cost-Optimized Filter Selection (fo-07)
   ↓
7. LLM Processing with Caching (ai-01)
   ↓
8. Output Content Safety Check (sa-08)
   ↓
9. Hallucination Detection (ai-04)
   ↓
10. Bias Detection (ds-01)
   ↓
11. PII Detection in Output (sa-01)
   ↓
12. Compliance Validation (sa-06)
   ↓
13. Output Sanitization
   ↓
14. Safety Metrics Logging (mo-04)
   ↓
15. Cost Attribution (fo-01)
   ↓
16. Guardrail Performance Monitoring (mo-04)
   ↓
17. Safe Response Delivery
```

## 🎯 Quick Wins

1. **Enable Azure Content Safety** - Instant toxic content filtering with managed service
2. **Implement PII detection** - Prevent data leakage and compliance violations
3. **Add prompt injection defense** - Block jailbreaking and adversarial attacks
4. **Cache guardrail results** - 60%+ cost reduction on repeated checks
5. **Use tiered filtering** - Balance cost and safety based on risk level
6. **Set up safety monitoring** - Real-time alerts on guardrail failures
7. **Implement hallucination detection** - Improve output factuality for RAG systems
8. **Enable compliance validation** - Automated GDPR/HIPAA checks before deployment
|
|
1
|
+
# Skill 4: LLM Guardrails & Safety
|
|
2
|
+
|
|
3
|
+
## 🎯 Overview
|
|
4
|
+
Implement comprehensive safety mechanisms for LLM applications including content filtering, bias detection, hallucination prevention, and compliance controls for production deployments.
|
|
5
|
+
|
|
6
|
+
## 🔗 Connections
|
|
7
|
+
- **Data Engineer**: Training data filtering, safety metrics storage (de-01, de-03)
|
|
8
|
+
- **Security Architect**: PII detection, prompt injection prevention (sa-01, sa-08)
|
|
9
|
+
- **ML Engineer**: Safety model fine-tuning and deployment (ml-03, ml-04)
|
|
10
|
+
- **MLOps**: Safety metrics monitoring, guardrail versioning (mo-01, mo-04)
|
|
11
|
+
- **FinOps**: Guardrail execution cost optimization (fo-01, fo-07)
|
|
12
|
+
- **DevOps**: Guardrail service deployment, failover mechanisms (do-01, do-03)
|
|
13
|
+
- **Data Scientist**: Bias analysis, safety model evaluation (ds-01, ds-08)
|
|
14
|
+
|
|
15
|
+
## 🛠️ Tools Included
|
|
16
|
+
|
|
17
|
+
### 1. `content_filter.py`
|
|
18
|
+
Multi-layer content filtering for harmful, toxic, and inappropriate outputs with custom policies.
|
|
19
|
+
|
|
20
|
+
### 2. `hallucination_detector.py`
|
|
21
|
+
Fact-checking and source verification system to detect and prevent hallucinations.
|
|
22
|
+
|
|
23
|
+
### 3. `bias_detector.py`
|
|
24
|
+
Identify and mitigate demographic, gender, and cultural biases in model outputs.
|
|
25
|
+
|
|
26
|
+
### 4. `prompt_injection_guard.py`
|
|
27
|
+
Defense against prompt injection, jailbreaking, and adversarial attacks.
|
|
28
|
+
|
|
29
|
+
### 5. `compliance_checker.py`
|
|
30
|
+
Industry-specific compliance validation (HIPAA, GDPR, financial regulations).
|
|
31
|
+
|
|
32
|
+
## 📊 Key Metrics
|
|
33
|
+
- Content filter accuracy (precision/recall)
|
|
34
|
+
- Hallucination detection rate
|
|
35
|
+
- Bias score across demographics
|
|
36
|
+
- Prompt injection block rate
|
|
37
|
+
- Compliance violation prevention rate
|
|
38
|
+
|
|
39
|
+
## 🚀 Quick Start
|
|
40
|
+
|
|
41
|
+
```python
|
|
42
|
+
from llm_guardrails import GuardrailPipeline
|
|
43
|
+
from anthropic import Anthropic
|
|
44
|
+
|
|
45
|
+
# Initialize guardrails
|
|
46
|
+
guardrails = GuardrailPipeline(
|
|
47
|
+
filters=["toxicity", "pii", "bias", "hallucination"],
|
|
48
|
+
compliance_standards=["gdpr", "hipaa"],
|
|
49
|
+
strictness_level="high"
|
|
50
|
+
)
|
|
51
|
+
|
|
52
|
+
client = Anthropic()
|
|
53
|
+
|
|
54
|
+
# Wrap LLM calls with guardrails
|
|
55
|
+
def safe_llm_call(prompt: str, user_context: dict = None):
|
|
56
|
+
# Pre-processing guardrails
|
|
57
|
+
validated_prompt = guardrails.validate_input(
|
|
58
|
+
prompt=prompt,
|
|
59
|
+
user_context=user_context
|
|
60
|
+
)
|
|
61
|
+
|
|
62
|
+
if not validated_prompt.safe:
|
|
63
|
+
return {
|
|
64
|
+
"blocked": True,
|
|
65
|
+
"reason": validated_prompt.violation_reason,
|
|
66
|
+
"severity": validated_prompt.severity
|
|
67
|
+
}
|
|
68
|
+
|
|
69
|
+
# Make LLM call
|
|
70
|
+
response = client.messages.create(
|
|
71
|
+
model="claude-3-5-sonnet-20241022",
|
|
72
|
+
max_tokens=1024,
|
|
73
|
+
messages=[{"role": "user", "content": validated_prompt.sanitized_prompt}]
|
|
74
|
+
)
|
|
75
|
+
|
|
76
|
+
# Post-processing guardrails
|
|
77
|
+
validated_output = guardrails.validate_output(
|
|
78
|
+
output=response.content[0].text,
|
|
79
|
+
prompt=prompt,
|
|
80
|
+
context=user_context
|
|
81
|
+
)
|
|
82
|
+
|
|
83
|
+
if not validated_output.safe:
|
|
84
|
+
return {
|
|
85
|
+
"blocked": True,
|
|
86
|
+
"reason": validated_output.violation_reason,
|
|
87
|
+
"alternative": validated_output.safe_alternative
|
|
88
|
+
}
|
|
89
|
+
|
|
90
|
+
return {
|
|
91
|
+
"blocked": False,
|
|
92
|
+
"response": validated_output.sanitized_output,
|
|
93
|
+
"safety_score": validated_output.safety_score
|
|
94
|
+
}
|
|
95
|
+
|
|
96
|
+
# Use with safety guarantees
|
|
97
|
+
result = safe_llm_call("Explain the treatment protocol for diabetes")
|
|
98
|
+
print(result["response"])
|
|
99
|
+
```
|
|
100
|
+
|
|
101
|
+
## 📚 Best Practices
|
|
102
|
+
|
|
103
|
+
### Cost Optimization (FinOps Integration)
|
|
104
|
+
|
|
105
|
+
1. **Optimize Guardrail Execution Order**
|
|
106
|
+
- Run cheap filters first (regex, keyword matching)
|
|
107
|
+
- Use ML-based filters only when needed
|
|
108
|
+
- Implement early termination for obvious violations
|
|
109
|
+
- Cache guardrail results for similar inputs
|
|
110
|
+
- Reference: FinOps fo-07 (AI/ML Cost Optimization)
|
|
111
|
+
|
|
112
|
+
2. **Batch Guardrail Processing**
|
|
113
|
+
- Process multiple inputs in batches
|
|
114
|
+
- Amortize model loading costs
|
|
115
|
+
- Use batch APIs for classification models
|
|
116
|
+
- Implement async processing for non-blocking checks
|
|
117
|
+
- Reference: FinOps fo-03 (Budget Management)
|
|
118
|
+
|
|
119
|
+
3. **Tiered Guardrail Strategies**
|
|
120
|
+
- Light filtering for low-risk applications
|
|
121
|
+
- Comprehensive checks for high-risk domains
|
|
122
|
+
- Dynamic filtering based on user trust scores
|
|
123
|
+
- Cost-aware filter selection
|
|
124
|
+
- Reference: FinOps fo-01 (Cost Monitoring)
|
|
125
|
+
|
|
126
|
+
4. **Cache Guardrail Results**
|
|
127
|
+
- Cache validation results with semantic similarity
|
|
128
|
+
- Reuse PII detection results
|
|
129
|
+
- Cache compliance check outcomes
|
|
130
|
+
- Monitor cache hit rates for optimization
|
|
131
|
+
- Reference: ai-01 (Prompt Caching)
|
|
132
|
+
|
|
133
|
+
### Security & Privacy (Security Architect Integration)
|
|
134
|
+
|
|
135
|
+
5. **PII Detection & Redaction**
|
|
136
|
+
- Scan all inputs and outputs for PII
|
|
137
|
+
- Redact or mask sensitive information
|
|
138
|
+
- Maintain audit trail of PII handling
|
|
139
|
+
- Comply with data protection regulations
|
|
140
|
+
- Reference: Security Architect sa-01 (PII Detection)
|
|
141
|
+
|
|
142
|
+
6. **Prompt Injection Prevention**
|
|
143
|
+
- Detect and block prompt injection attempts
|
|
144
|
+
- Implement input sanitization
|
|
145
|
+
- Use structured prompts with clear boundaries
|
|
146
|
+
- Monitor for jailbreaking patterns
|
|
147
|
+
- Reference: Security Architect sa-08 (LLM Security)
|
|
148
|
+
|
|
149
|
+
7. **Access Control & Audit Logging**
|
|
150
|
+
- Log all guardrail violations
|
|
151
|
+
- Implement RBAC for guardrail configuration
|
|
152
|
+
- Track user-level safety metrics
|
|
153
|
+
- Alert on suspicious patterns
|
|
154
|
+
- Reference: Security Architect sa-02 (IAM), sa-06 (Data Governance)
|
|
155
|
+
|
|
156
|
+
### Data Quality & Governance (Data Engineer Integration)
|
|
157
|
+
|
|
158
|
+
8. **Training Data Filtering**
|
|
159
|
+
- Apply guardrails to training datasets
|
|
160
|
+
- Remove toxic and biased examples
|
|
161
|
+
- Validate data quality before fine-tuning
|
|
162
|
+
- Track data lineage for safety-critical data
|
|
163
|
+
- Reference: Data Engineer de-03 (Data Quality)
|
|
164
|
+
|
|
165
|
+
9. **Safety Metrics Storage**
|
|
166
|
+
- Persist guardrail execution results
|
|
167
|
+
- Store violation patterns for analysis
|
|
168
|
+
- Track safety metrics over time
|
|
169
|
+
- Enable historical safety audits
|
|
170
|
+
- Reference: Data Engineer de-01 (Data Ingestion)
|
|
171
|
+
|
|
172
|
+
### Model Lifecycle Management (MLOps Integration)
|
|
173
|
+
|
|
174
|
+
10. **Guardrail Model Versioning**
|
|
175
|
+
- Version all safety models in registry
|
|
176
|
+
- Track guardrail model performance
|
|
177
|
+
- A/B test new guardrail versions
|
|
178
|
+
- Rollback capability for safety regressions
|
|
179
|
+
- Reference: MLOps mo-01 (Model Registry), mo-03 (Versioning)
|
|
180
|
+
|
|
181
|
+
11. **Safety Metrics Monitoring**
|
|
182
|
+
- Track false positive/negative rates
|
|
183
|
+
- Monitor guardrail execution latency
|
|
184
|
+
- Alert on guardrail failures or bypasses
|
|
185
|
+
- Dashboard for real-time safety metrics
|
|
186
|
+
- Reference: MLOps mo-04 (Monitoring)
|
|
187
|
+
|
|
188
|
+
12. **Guardrail Drift Detection**
|
|
189
|
+
- Monitor changes in violation patterns
|
|
190
|
+
- Detect emerging attack vectors
|
|
191
|
+
- Track effectiveness degradation
|
|
192
|
+
- Retrain safety models as needed
|
|
193
|
+
- Reference: MLOps mo-05 (Drift Detection)
|
|
194
|
+
|
|
195
|
+
### Deployment & Operations (DevOps Integration)
|
|
196
|
+
|
|
197
|
+
13. **Deploy Guardrails as Microservices**
|
|
198
|
+
- Separate service for each guardrail type
|
|
199
|
+
- Independent scaling based on load
|
|
200
|
+
- Circuit breakers for guardrail failures
|
|
201
|
+
- Health checks and monitoring
|
|
202
|
+
- Reference: DevOps do-03 (Containerization)
|
|
203
|
+
|
|
204
|
+
14. **CI/CD for Guardrail Updates**
|
|
205
|
+
- Automated testing for guardrail changes
|
|
206
|
+
- Canary deployments for new filters
|
|
207
|
+
- Rollback on increased false positives
|
|
208
|
+
- Continuous safety benchmarking
|
|
209
|
+
- Reference: DevOps do-01 (CI/CD)
|
|
210
|
+
|
|
211
|
+
15. **High Availability for Safety Systems**
|
|
212
|
+
- Multi-region guardrail deployment
|
|
213
|
+
- Fallback to conservative filtering on failures
|
|
214
|
+
- Load balancing across guardrail instances
|
|
215
|
+
- Zero-downtime updates
|
|
216
|
+
- Reference: DevOps do-04 (High Availability)
|
|
217
|
+
|
|
218
|
+
### Azure-Specific Best Practices
|
|
219
|
+
|
|
220
|
+
16. **Azure AI Content Safety**
|
|
221
|
+
- Integrate Azure Content Safety API
|
|
222
|
+
- Use managed safety models
|
|
223
|
+
- Enable custom categories for domain-specific filtering
|
|
224
|
+
- Monitor via Azure Monitor
|
|
225
|
+
- Reference: Azure az-04 (AI/ML Services)
|
|
226
|
+
|
|
227
|
+
17. **Azure OpenAI Safety Features**
|
|
228
|
+
- Enable content filtering in Azure OpenAI
|
|
229
|
+
- Use content filtering configurations
|
|
230
|
+
- Implement custom blocklists
|
|
231
|
+
- Monitor safety events in Application Insights
|
|
232
|
+
- Reference: Azure az-05 (Azure OpenAI)
|
|
233
|
+

## 💰 Cost Optimization Examples

### Tiered Guardrail Strategy
```python
from llm_guardrails import GuardrailPipeline, FilterLevel

class CostOptimizedGuardrails:
    def __init__(self):
        # Define tiered filtering strategies
        self.light_filters = GuardrailPipeline(
            filters=["regex_profanity", "keyword_blocklist"],
            level=FilterLevel.LIGHT
        )

        self.standard_filters = GuardrailPipeline(
            filters=["toxicity_classifier", "pii_detection"],
            level=FilterLevel.STANDARD
        )

        self.comprehensive_filters = GuardrailPipeline(
            filters=[
                "toxicity_classifier",
                "pii_detection",
                "bias_detector",
                "hallucination_checker",
                "compliance_validator"
            ],
            level=FilterLevel.COMPREHENSIVE
        )

    def select_filters(self, user_trust_score: float, content_risk: str):
        """Select appropriate filter level based on context."""
        if user_trust_score > 0.9 and content_risk == "low":
            return self.light_filters  # $0.0001 per request
        elif user_trust_score > 0.7 and content_risk in ["low", "medium"]:
            return self.standard_filters  # $0.001 per request
        else:
            return self.comprehensive_filters  # $0.005 per request

    def validate(self, prompt: str, user_context: dict):
        filters = self.select_filters(
            user_trust_score=user_context.get("trust_score", 0.5),
            content_risk=user_context.get("risk_level", "high")
        )
        return filters.validate_input(prompt)

# Usage
guardrails = CostOptimizedGuardrails()

# Low-risk user, low-risk content → cheap filtering
result = guardrails.validate(
    prompt="What's the weather today?",
    user_context={"trust_score": 0.95, "risk_level": "low"}
)

# High-risk content → comprehensive filtering
result = guardrails.validate(
    prompt="Provide medical advice for my condition",
    user_context={"trust_score": 0.5, "risk_level": "high"}
)
```

### Cached Guardrail Results
```python
from llm_guardrails import GuardrailPipeline
from semantic_cache import SemanticCache

class CachedGuardrailPipeline:
    def __init__(self):
        self.guardrails = GuardrailPipeline()
        self.semantic_cache = SemanticCache(
            similarity_threshold=0.95,
            ttl_seconds=3600
        )

    def validate_input(self, prompt: str, user_context: dict = None):
        # Check semantic cache for similar prompts
        cached_result = self.semantic_cache.get(prompt)
        if cached_result:
            print("✅ Cache hit - guardrail cost saved!")
            return cached_result

        # Run guardrails
        result = self.guardrails.validate_input(prompt, user_context)

        # Cache the result
        if result.safe:  # Only cache safe results
            self.semantic_cache.set(prompt, result)

        return result

# Track cost savings
guardrails = CachedGuardrailPipeline()

# Generate cost report
savings_report = guardrails.semantic_cache.get_savings_report()
print(f"Cache hit rate: {savings_report.hit_rate:.2%}")
print(f"Requests saved: {savings_report.hits}")
print(f"Cost savings: ${savings_report.cost_saved:.4f}")
```

### Batch Processing for Guardrails
```python
import asyncio
from typing import List

from llm_guardrails import BatchGuardrailPipeline

class BatchGuardrails:
    def __init__(self):
        self.pipeline = BatchGuardrailPipeline(
            batch_size=32,   # Process 32 items at once
            max_wait_ms=100  # Wait up to 100ms to fill a batch
        )

    async def validate_async(self, prompts: List[str]):
        """Validate multiple prompts efficiently."""
        # Batch processing reduces per-item cost by ~70%
        results = await self.pipeline.validate_batch(prompts)

        return [
            {
                "prompt": prompt,
                "safe": result.safe,
                "violations": result.violations
            }
            for prompt, result in zip(prompts, results)
        ]

# Usage for high-throughput applications
async def main():
    guardrails = BatchGuardrails()

    # Validate 100 prompts in batches of 32
    prompts = [f"User query {i}" for i in range(100)]
    results = await guardrails.validate_async(prompts)

asyncio.run(main())

# Cost comparison:
# Individual processing: 100 requests × $0.001 = $0.10
# Batch processing: 4 batches × $0.008 = $0.032 (68% savings)
```

## 🔒 Security Best Practices Examples

### Comprehensive PII Detection
```python
import logging
from datetime import datetime

from pii_detector import PIIDetector  # from sa-01
from data_anonymizer import DataAnonymizer

audit_logger = logging.getLogger("pii_audit")

class PIIGuardrail:
    def __init__(self):
        self.detector = PIIDetector()
        self.anonymizer = DataAnonymizer()

    def validate_and_sanitize(self, text: str, mode: str = "redact"):
        """Detect and handle PII in text."""
        # Detect PII
        findings = self.detector.analyze_text(text)

        if not findings:
            return {
                "safe": True,
                "sanitized_text": text,
                "pii_found": False
            }

        # Handle based on mode
        if mode == "redact":
            sanitized = self.anonymizer.redact_pii(text, findings)
        elif mode == "mask":
            sanitized = self.anonymizer.mask_pii(text, findings)
        elif mode == "block":
            return {
                "safe": False,
                "reason": "PII detected - request blocked",
                "pii_types": [f.entity_type for f in findings]
            }

        # Log for compliance
        self._log_pii_handling({
            "pii_types": [f.entity_type for f in findings],
            "action": mode,
            "timestamp": datetime.now()
        })

        return {
            "safe": True,
            "sanitized_text": sanitized,
            "pii_found": True,
            "pii_types": [f.entity_type for f in findings]
        }

    def _log_pii_handling(self, event: dict):
        # Minimal audit sink; route this to your SIEM or compliance store
        audit_logger.info(event)

# Integration with LLM pipeline
pii_guard = PIIGuardrail()

def safe_llm_call_with_pii_protection(prompt: str):
    # Check input
    input_result = pii_guard.validate_and_sanitize(prompt, mode="redact")

    if not input_result["safe"]:
        return {"blocked": True, "reason": input_result["reason"]}

    # Make LLM call with sanitized prompt
    response = llm_call(input_result["sanitized_text"])

    # Check output
    output_result = pii_guard.validate_and_sanitize(response, mode="mask")

    return {
        "response": output_result["sanitized_text"],
        "pii_found": input_result["pii_found"] or output_result["pii_found"]
    }
```

### Prompt Injection Defense
```python
import logging
import re

from prompt_injection_detector import PromptInjectionDetector  # from sa-08

security_logger = logging.getLogger("security")

class PromptInjectionGuardrail:
    def __init__(self):
        self.detector = PromptInjectionDetector()
        self.attack_patterns = [
            r"ignore previous instructions",
            r"disregard all prior",
            r"new instructions:",
            r"system:",
            r"<\|im_start\|>",
            # Add more patterns
        ]

    def validate(self, prompt: str):
        """Detect prompt injection attempts."""
        # Rule-based detection (fast)
        for pattern in self.attack_patterns:
            if re.search(pattern, prompt, re.IGNORECASE):
                return {
                    "safe": False,
                    "threat": "prompt_injection",
                    "confidence": 0.95,
                    "attack_type": "pattern_match",
                    "pattern_matched": pattern
                }

        # ML-based detection (comprehensive)
        ml_result = self.detector.analyze(prompt)

        if ml_result.injection_score > 0.7:
            return {
                "safe": False,
                "threat": "prompt_injection",
                "confidence": ml_result.injection_score,
                "attack_type": ml_result.attack_type
            }

        return {"safe": True, "confidence": 1 - ml_result.injection_score}

    def sanitize(self, prompt: str):
        """Sanitize potentially malicious prompts."""
        # Remove special tokens
        sanitized = re.sub(r'<\|.*?\|>', '', prompt)

        # Collapse double newlines often used to smuggle new instructions
        sanitized = sanitized.replace('\n\n', ' ')

        # Enforce length limits
        if len(sanitized) > 2000:
            sanitized = sanitized[:2000]

        return sanitized

# Usage
injection_guard = PromptInjectionGuardrail()

def protected_llm_call(user_prompt: str):
    # Validate input
    validation = injection_guard.validate(user_prompt)

    if not validation["safe"]:
        # Log attack attempt
        security_logger.warning(
            f"Prompt injection blocked: {validation['attack_type']} "
            f"(confidence: {validation['confidence']:.2%})"
        )
        return {"blocked": True, "reason": "Security violation detected"}

    # Sanitize and proceed
    safe_prompt = injection_guard.sanitize(user_prompt)
    return llm_call(safe_prompt)
```

### Hallucination Detection
```python
from typing import List

from hallucination_detector import HallucinationDetector

class HallucinationGuardrail:
    def __init__(self):
        self.detector = HallucinationDetector()

    def validate_output(self, output: str, context: dict, sources: List[str]):
        """Check if LLM output is grounded in the provided sources."""
        # Extract claims from output
        claims = self.detector.extract_claims(output)

        unverified_claims = []
        for claim in claims:
            # Check if the claim is supported by the sources
            verification = self.detector.verify_claim(
                claim=claim,
                sources=sources,
                context=context
            )

            if not verification.supported:
                unverified_claims.append({
                    "claim": claim,
                    "confidence": verification.confidence,
                    "reason": verification.reason
                })

        # Calculate hallucination score
        hallucination_score = len(unverified_claims) / len(claims) if claims else 0

        if hallucination_score > 0.3:  # More than 30% unverified
            return {
                "safe": False,
                "reason": "High hallucination risk",
                "score": hallucination_score,
                "unverified_claims": unverified_claims
            }

        return {
            "safe": True,
            "score": hallucination_score,
            "verified_claims": len(claims) - len(unverified_claims)
        }

# Integration with RAG pipeline
hallucination_guard = HallucinationGuardrail()

def rag_with_hallucination_check(query: str):
    # Retrieve context
    sources = rag_pipeline.retrieve(query, top_k=5)

    # Generate response
    response = llm_generate(query, sources)

    # Validate response
    validation = hallucination_guard.validate_output(
        output=response,
        context={"query": query},
        sources=sources
    )

    if not validation["safe"]:
        # Return a conservative response or request more sources
        return {
            "response": "I don't have enough reliable information to answer this.",
            "reason": validation["reason"]
        }

    return {"response": response, "safety_score": 1 - validation["score"]}
```

## 📊 Enhanced Metrics & Monitoring

| Metric Category | Metric | Target | Tool |
|-----------------|--------|--------|------|
| **Content Safety** | Toxic content block rate | 100% | Azure Content Safety |
| | False positive rate | <2% | Custom evaluator |
| | Filter accuracy | >0.98 | MLflow |
| | Response time (p95) | <200ms | Azure Monitor |
| **PII Protection** | PII detection recall | >0.99 | Custom evaluator |
| | PII detection precision | >0.95 | Custom evaluator |
| | Redaction accuracy | >0.98 | Audit logs |
| **Prompt Injection** | Injection detection rate | >0.95 | Security monitor |
| | False positive rate | <5% | Security logs |
| | Attack pattern coverage | >90% | Security audit |
| **Hallucination** | Hallucination detection rate | >0.85 | Custom evaluator |
| | Fact-check accuracy | >0.90 | MLflow |
| | Source grounding score | >0.85 | Custom metric |
| **Costs** | Cost per guardrail check | <$0.002 | FinOps dashboard |
| | Cache hit rate | >60% | App Insights |
| **Compliance** | GDPR compliance rate | 100% | Compliance tracker |
| | HIPAA violation prevention | 100% | Audit logs |

## 🚀 Deployment Pipeline

### CI/CD for Guardrail System
```yaml
# .github/workflows/guardrails-deployment.yml
name: Guardrails Deployment

on:
  push:
    paths:
      - 'guardrails/**'
    branches:
      - main

jobs:
  test-guardrails:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Unit test all guardrails
        run: pytest tests/test_guardrails.py -v

      - name: Test PII detection accuracy
        run: pytest tests/test_pii_detection.py --min-recall 0.99

      - name: Test prompt injection detection
        run: pytest tests/test_injection_detection.py --min-accuracy 0.95

      - name: Benchmark guardrail performance
        run: python scripts/benchmark_guardrails.py

      - name: Test false positive rates
        run: pytest tests/test_false_positives.py --max-fp-rate 0.02

  security-validation:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Validate security policies
        run: python scripts/validate_security_policies.py

      - name: Test adversarial examples
        run: pytest tests/test_adversarial.py

      - name: Compliance check
        run: python scripts/check_compliance.py --standards gdpr,hipaa

  deploy-guardrails:
    needs: [test-guardrails, security-validation]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build guardrail service
        run: docker build -t myregistry.azurecr.io/guardrails-service:${{ github.sha }} .

      - name: Push to Azure Container Registry
        run: |
          az acr login --name myregistry
          docker push myregistry.azurecr.io/guardrails-service:${{ github.sha }}

      - name: Deploy to AKS
        run: |
          kubectl set image deployment/guardrails-service \
            guardrails=myregistry.azurecr.io/guardrails-service:${{ github.sha }}

      - name: Run smoke tests
        run: python scripts/smoke_test_guardrails.py

      - name: Monitor guardrail metrics
        run: python scripts/monitor_guardrails.py --duration 1h --alert-on-regression
```

## 🔄 Integration Workflow

### End-to-End Guardrail Pipeline with All Roles
```
1. User Input Received
   ↓
2. Input Length & Format Validation
   ↓
3. Prompt Injection Detection (sa-08)
   ↓
4. PII Detection in Input (sa-01)
   ↓
5. Input Sanitization
   ↓
6. Cost-Optimized Filter Selection (fo-07)
   ↓
7. LLM Processing with Caching (ai-01)
   ↓
8. Output Content Safety Check (sa-08)
   ↓
9. Hallucination Detection (ai-04)
   ↓
10. Bias Detection (ds-01)
   ↓
11. PII Detection in Output (sa-01)
   ↓
12. Compliance Validation (sa-06)
   ↓
13. Output Sanitization
   ↓
14. Safety Metrics Logging (mo-04)
   ↓
15. Cost Attribution (fo-01)
   ↓
16. Guardrail Performance Monitoring (mo-04)
   ↓
17. Safe Response Delivery
```
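
A condensed sketch of how the guards defined earlier chain together for a subset of these steps; `llm_call` remains a placeholder for your model client, and the length limit is an example policy:

```python
# Guard objects come from the examples above.
tiered = CostOptimizedGuardrails()

def guarded_completion(prompt: str, user_context: dict):
    """Condensed orchestration of steps 2-8 and 11 of the pipeline above."""
    # Steps 2-3: cheap structural checks first, then injection detection
    if len(prompt) > 4000:
        return {"blocked": True, "reason": "Input too long"}
    if not injection_guard.validate(prompt)["safe"]:
        return {"blocked": True, "reason": "Prompt injection detected"}

    # Steps 4-5: PII detection and input sanitization
    input_check = pii_guard.validate_and_sanitize(prompt, mode="redact")
    safe_prompt = injection_guard.sanitize(input_check["sanitized_text"])

    # Step 6: risk-based filter selection, then the safety check itself
    if not tiered.validate(safe_prompt, user_context).safe:
        return {"blocked": True, "reason": "Content policy violation"}

    # Step 7: LLM call (the caching layer is omitted here for brevity)
    response = llm_call(safe_prompt)

    # Steps 8 and 11: output safety and PII checks before delivery
    output_check = pii_guard.validate_and_sanitize(response, mode="mask")
    return {"response": output_check["sanitized_text"]}
```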

## 🎯 Quick Wins

1. **Enable Azure Content Safety** - Instant toxic content filtering with a managed service
2. **Implement PII detection** - Prevent data leakage and compliance violations
3. **Add prompt injection defense** - Block jailbreaking and adversarial attacks
4. **Cache guardrail results** - 60%+ cost reduction on repeated checks
5. **Use tiered filtering** - Balance cost and safety based on risk level
6. **Set up safety monitoring** - Real-time alerts on guardrail failures
7. **Implement hallucination detection** - Improve output factuality for RAG systems
8. **Enable compliance validation** - Automated GDPR/HIPAA checks before deployment
|