@zimezone/z-command 1.1.0 → 1.1.1

Files changed (182)
  1. package/package.json +4 -1
  2. package/templates/agents/api-documenter.agent.md +161 -0
  3. package/templates/agents/architect-review.agent.md +146 -0
  4. package/templates/agents/arm-cortex-expert.agent.md +288 -0
  5. package/templates/agents/backend-architect.agent.md +309 -0
  6. package/templates/agents/backend-security-coder.agent.md +152 -0
  7. package/templates/agents/bash-pro.agent.md +285 -0
  8. package/templates/agents/c-pro.agent.md +35 -0
  9. package/templates/agents/c4-code.agent.md +320 -0
  10. package/templates/agents/c4-component.agent.md +227 -0
  11. package/templates/agents/c4-container.agent.md +248 -0
  12. package/templates/agents/c4-context.agent.md +235 -0
  13. package/templates/agents/conductor-validator.agent.md +245 -0
  14. package/templates/agents/csharp-pro.agent.md +38 -0
  15. package/templates/agents/customer-support.agent.md +148 -0
  16. package/templates/agents/database-admin.agent.md +142 -0
  17. package/templates/agents/database-architect.agent.md +238 -0
  18. package/templates/agents/database-optimizer.agent.md +144 -0
  19. package/templates/agents/debugger.agent.md +30 -0
  20. package/templates/agents/deployment-engineer.agent.md +0 -0
  21. package/templates/agents/devops-troubleshooter.agent.md +138 -0
  22. package/templates/agents/django-pro.agent.md +159 -0
  23. package/templates/agents/docs-architect.agent.md +77 -0
  24. package/templates/agents/dotnet-architect.agent.md +175 -0
  25. package/templates/agents/dx-optimizer.agent.md +63 -0
  26. package/templates/agents/elixir-pro.agent.md +38 -0
  27. package/templates/agents/error-detective.agent.md +32 -0
  28. package/templates/agents/event-sourcing-architect.agent.md +42 -0
  29. package/templates/agents/fastapi-pro.agent.md +171 -0
  30. package/templates/agents/firmware-analyst.agent.md +330 -0
  31. package/templates/agents/frontend-security-coder.agent.md +149 -0
  32. package/templates/agents/haskell-pro.agent.md +37 -0
  33. package/templates/agents/hr-pro.agent.md +105 -0
  34. package/templates/agents/incident-responder.agent.md +190 -0
  35. package/templates/agents/ios-developer.agent.md +198 -0
  36. package/templates/agents/java-pro.agent.md +156 -0
  37. package/templates/agents/javascript-pro.agent.md +35 -0
  38. package/templates/agents/julia-pro.agent.md +187 -0
  39. package/templates/agents/legal-advisor.agent.md +49 -0
  40. package/templates/agents/malware-analyst.agent.md +272 -0
  41. package/templates/agents/mermaid-expert.agent.md +39 -0
  42. package/templates/agents/minecraft-bukkit-pro.agent.md +104 -0
  43. package/templates/agents/mobile-security-coder.agent.md +163 -0
  44. package/templates/agents/monorepo-architect.agent.md +44 -0
  45. package/templates/agents/observability-engineer.agent.md +228 -0
  46. package/templates/agents/performance-engineer.agent.md +167 -0
  47. package/templates/agents/php-pro.agent.md +43 -0
  48. package/templates/agents/posix-shell-pro.agent.md +284 -0
  49. package/templates/agents/quant-analyst.agent.md +32 -0
  50. package/templates/agents/reference-builder.agent.md +167 -0
  51. package/templates/agents/reverse-engineer.agent.md +202 -0
  52. package/templates/agents/risk-manager.agent.md +41 -0
  53. package/templates/agents/ruby-pro.agent.md +35 -0
  54. package/templates/agents/rust-pro.agent.md +156 -0
  55. package/templates/agents/sales-automator.agent.md +35 -0
  56. package/templates/agents/scala-pro.agent.md +60 -0
  57. package/templates/agents/search-specialist.agent.md +59 -0
  58. package/templates/agents/security-auditor.agent.md +138 -0
  59. package/templates/agents/seo-authority-builder.agent.md +116 -0
  60. package/templates/agents/seo-cannibalization-detector.agent.md +103 -0
  61. package/templates/agents/seo-content-auditor.agent.md +63 -0
  62. package/templates/agents/seo-content-planner.agent.md +88 -0
  63. package/templates/agents/seo-content-refresher.agent.md +98 -0
  64. package/templates/agents/seo-content-writer.agent.md +76 -0
  65. package/templates/agents/seo-keyword-strategist.agent.md +75 -0
  66. package/templates/agents/seo-meta-optimizer.agent.md +72 -0
  67. package/templates/agents/seo-snippet-hunter.agent.md +94 -0
  68. package/templates/agents/seo-structure-architect.agent.md +88 -0
  69. package/templates/agents/service-mesh-expert.agent.md +41 -0
  70. package/templates/agents/sql-pro.agent.md +146 -0
  71. package/templates/agents/tdd-orchestrator.agent.md +183 -0
  72. package/templates/agents/temporal-python-pro.agent.md +349 -0
  73. package/templates/agents/terraform-specialist.agent.md +137 -0
  74. package/templates/agents/test-automator.agent.md +203 -0
  75. package/templates/agents/threat-modeling-expert.agent.md +44 -0
  76. package/templates/agents/tutorial-engineer.agent.md +118 -0
  77. package/templates/agents/ui-ux-designer.agent.md +188 -0
  78. package/templates/agents/ui-visual-validator.agent.md +192 -0
  79. package/templates/agents/vector-database-engineer.agent.md +43 -0
  80. package/templates/skills/angular-migration/SKILL.md +410 -0
  81. package/templates/skills/api-design-principles/SKILL.md +528 -0
  82. package/templates/skills/api-design-principles/assets/api-design-checklist.md +155 -0
  83. package/templates/skills/api-design-principles/assets/rest-api-template.py +182 -0
  84. package/templates/skills/api-design-principles/references/graphql-schema-design.md +583 -0
  85. package/templates/skills/api-design-principles/references/rest-best-practices.md +408 -0
  86. package/templates/skills/architecture-decision-records/SKILL.md +428 -0
  87. package/templates/skills/architecture-patterns/SKILL.md +494 -0
  88. package/templates/skills/async-python-patterns/SKILL.md +694 -0
  89. package/templates/skills/auth-implementation-patterns/SKILL.md +634 -0
  90. package/templates/skills/changelog-automation/SKILL.md +552 -0
  91. package/templates/skills/code-review-excellence/SKILL.md +520 -0
  92. package/templates/skills/competitive-landscape/SKILL.md +479 -0
  93. package/templates/skills/context-driven-development/SKILL.md +385 -0
  94. package/templates/skills/cost-optimization/SKILL.md +274 -0
  95. package/templates/skills/cqrs-implementation/SKILL.md +554 -0
  96. package/templates/skills/data-quality-frameworks/SKILL.md +587 -0
  97. package/templates/skills/data-storytelling/SKILL.md +453 -0
  98. package/templates/skills/database-migration/SKILL.md +424 -0
  99. package/templates/skills/dbt-transformation-patterns/SKILL.md +561 -0
  100. package/templates/skills/debugging-strategies/SKILL.md +527 -0
  101. package/templates/skills/defi-protocol-templates/SKILL.md +454 -0
  102. package/templates/skills/dependency-upgrade/SKILL.md +409 -0
  103. package/templates/skills/deployment-pipeline-design/SKILL.md +359 -0
  104. package/templates/skills/distributed-tracing/SKILL.md +438 -0
  105. package/templates/skills/dotnet-backend-patterns/SKILL.md +815 -0
  106. package/templates/skills/dotnet-backend-patterns/assets/repository-template.cs +523 -0
  107. package/templates/skills/dotnet-backend-patterns/assets/service-template.cs +336 -0
  108. package/templates/skills/dotnet-backend-patterns/references/dapper-patterns.md +544 -0
  109. package/templates/skills/dotnet-backend-patterns/references/ef-core-best-practices.md +355 -0
  110. package/templates/skills/e2e-testing-patterns/SKILL.md +547 -0
  111. package/templates/skills/employment-contract-templates/SKILL.md +507 -0
  112. package/templates/skills/error-handling-patterns/SKILL.md +636 -0
  113. package/templates/skills/event-store-design/SKILL.md +437 -0
  114. package/templates/skills/fastapi-templates/SKILL.md +567 -0
  115. package/templates/skills/git-advanced-workflows/SKILL.md +400 -0
  116. package/templates/skills/github-actions-templates/SKILL.md +333 -0
  117. package/templates/skills/go-concurrency-patterns/SKILL.md +655 -0
  118. package/templates/skills/grafana-dashboards/SKILL.md +369 -0
  119. package/templates/skills/helm-chart-scaffolding/SKILL.md +544 -0
  120. package/templates/skills/helm-chart-scaffolding/assets/Chart.yaml.template +42 -0
  121. package/templates/skills/helm-chart-scaffolding/assets/values.yaml.template +185 -0
  122. package/templates/skills/helm-chart-scaffolding/references/chart-structure.md +500 -0
  123. package/templates/skills/helm-chart-scaffolding/scripts/validate-chart.sh +244 -0
  124. package/templates/skills/javascript-testing-patterns/SKILL.md +1025 -0
  125. package/templates/skills/langchain-architecture/SKILL.md +338 -0
  126. package/templates/skills/llm-evaluation/SKILL.md +471 -0
  127. package/templates/skills/microservices-patterns/SKILL.md +595 -0
  128. package/templates/skills/modern-javascript-patterns/SKILL.md +911 -0
  129. package/templates/skills/monorepo-management/SKILL.md +622 -0
  130. package/templates/skills/nextjs-app-router-patterns/SKILL.md +544 -0
  131. package/templates/skills/nodejs-backend-patterns/SKILL.md +1020 -0
  132. package/templates/skills/nx-workspace-patterns/SKILL.md +452 -0
  133. package/templates/skills/openapi-spec-generation/SKILL.md +1028 -0
  134. package/templates/skills/paypal-integration/SKILL.md +467 -0
  135. package/templates/skills/pci-compliance/SKILL.md +466 -0
  136. package/templates/skills/postgresql/SKILL.md +204 -0
  137. package/templates/skills/projection-patterns/SKILL.md +490 -0
  138. package/templates/skills/prometheus-configuration/SKILL.md +392 -0
  139. package/templates/skills/prompt-engineering-patterns/SKILL.md +201 -0
  140. package/templates/skills/prompt-engineering-patterns/assets/few-shot-examples.json +106 -0
  141. package/templates/skills/prompt-engineering-patterns/assets/prompt-template-library.md +246 -0
  142. package/templates/skills/prompt-engineering-patterns/references/chain-of-thought.md +399 -0
  143. package/templates/skills/prompt-engineering-patterns/references/few-shot-learning.md +369 -0
  144. package/templates/skills/prompt-engineering-patterns/references/prompt-optimization.md +414 -0
  145. package/templates/skills/prompt-engineering-patterns/references/prompt-templates.md +470 -0
  146. package/templates/skills/prompt-engineering-patterns/references/system-prompts.md +189 -0
  147. package/templates/skills/prompt-engineering-patterns/scripts/optimize-prompt.py +279 -0
  148. package/templates/skills/python-packaging/SKILL.md +870 -0
  149. package/templates/skills/python-performance-optimization/SKILL.md +869 -0
  150. package/templates/skills/python-testing-patterns/SKILL.md +907 -0
  151. package/templates/skills/rag-implementation/SKILL.md +403 -0
  152. package/templates/skills/react-modernization/SKILL.md +513 -0
  153. package/templates/skills/react-native-architecture/SKILL.md +671 -0
  154. package/templates/skills/react-state-management/SKILL.md +429 -0
  155. package/templates/skills/risk-metrics-calculation/SKILL.md +555 -0
  156. package/templates/skills/rust-async-patterns/SKILL.md +517 -0
  157. package/templates/skills/secrets-management/SKILL.md +346 -0
  158. package/templates/skills/security-requirement-extraction/SKILL.md +677 -0
  159. package/templates/skills/shellcheck-configuration/SKILL.md +454 -0
  160. package/templates/skills/similarity-search-patterns/SKILL.md +558 -0
  161. package/templates/skills/slo-implementation/SKILL.md +329 -0
  162. package/templates/skills/sql-optimization-patterns/SKILL.md +493 -0
  163. package/templates/skills/stripe-integration/SKILL.md +442 -0
  164. package/templates/skills/tailwind-design-system/SKILL.md +666 -0
  165. package/templates/skills/temporal-python-testing/SKILL.md +158 -0
  166. package/templates/skills/temporal-python-testing/resources/integration-testing.md +455 -0
  167. package/templates/skills/temporal-python-testing/resources/local-setup.md +553 -0
  168. package/templates/skills/temporal-python-testing/resources/replay-testing.md +462 -0
  169. package/templates/skills/temporal-python-testing/resources/unit-testing.md +328 -0
  170. package/templates/skills/terraform-module-library/SKILL.md +249 -0
  171. package/templates/skills/terraform-module-library/references/aws-modules.md +63 -0
  172. package/templates/skills/threat-mitigation-mapping/SKILL.md +745 -0
  173. package/templates/skills/track-management/SKILL.md +593 -0
  174. package/templates/skills/typescript-advanced-types/SKILL.md +717 -0
  175. package/templates/skills/uv-package-manager/SKILL.md +831 -0
  176. package/templates/skills/vector-index-tuning/SKILL.md +521 -0
  177. package/templates/skills/wcag-audit-patterns/SKILL.md +555 -0
  178. package/templates/skills/workflow-orchestration-patterns/SKILL.md +316 -0
  179. package/templates/skills/workflow-patterns/SKILL.md +623 -0
  180. package/templates/agents/game-developer.agent.md +0 -57
  181. package/templates/agents/kubernetes-specialist.agent.md +0 -56
  182. package/templates/agents/market-researcher.agent.md +0 -47
@@ -0,0 +1,471 @@
---
name: llm-evaluation
description: Implement comprehensive evaluation strategies for LLM applications using automated metrics, human feedback, and benchmarking. Use when testing LLM performance, measuring AI application quality, or establishing evaluation frameworks.
---

# LLM Evaluation

Master comprehensive evaluation strategies for LLM applications, from automated metrics to human evaluation and A/B testing.

## When to Use This Skill

- Measuring LLM application performance systematically
- Comparing different models or prompts
- Detecting performance regressions before deployment
- Validating improvements from prompt changes
- Building confidence in production systems
- Establishing baselines and tracking progress over time
- Debugging unexpected model behavior

## Core Evaluation Types

### 1. Automated Metrics
Fast, repeatable, scalable evaluation using computed scores.

**Text Generation:**
- **BLEU**: N-gram overlap (translation)
- **ROUGE**: Recall-oriented (summarization)
- **METEOR**: Semantic similarity
- **BERTScore**: Embedding-based similarity
- **Perplexity**: Language model confidence

**Classification:**
- **Accuracy**: Percentage correct
- **Precision/Recall/F1**: Class-specific performance
- **Confusion Matrix**: Error patterns
- **AUC-ROC**: Ranking quality

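For classification-style evals (intent routing, judge labels, pass/fail grading), these metrics reduce to simple counts over the confusion matrix; a minimal dependency-free sketch, with `positive` naming the class of interest:

```python
from collections import Counter

def classification_report(y_true, y_pred, positive):
    """Accuracy plus precision/recall/F1 for one class of interest."""
    counts = Counter()
    for t, p in zip(y_true, y_pred):
        if p == positive and t == positive:
            counts["tp"] += 1      # predicted positive, actually positive
        elif p == positive:
            counts["fp"] += 1      # predicted positive, actually negative
        elif t == positive:
            counts["fn"] += 1      # missed a positive
    tp, fp, fn = counts["tp"], counts["fp"], counts["fn"]
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}
```

Libraries like scikit-learn provide the same numbers; the point of the sketch is that nothing about these metrics is LLM-specific.
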
**Retrieval (RAG):**
- **MRR**: Mean Reciprocal Rank
- **NDCG**: Normalized Discounted Cumulative Gain
- **Precision@K**: Relevant in top K
- **Recall@K**: Coverage in top K

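MRR and Precision@K are easy to compute once you have ranked document IDs per query and a relevance set per query; a dependency-free sketch (the `d1`/`d2` IDs below are purely illustrative):

```python
def mean_reciprocal_rank(ranked_ids_per_query, relevant_ids_per_query):
    """MRR: average of 1/rank of the first relevant document per query."""
    total = 0.0
    for ranked, relevant in zip(ranked_ids_per_query, relevant_ids_per_query):
        for rank, doc_id in enumerate(ranked, start=1):
            if doc_id in relevant:
                total += 1.0 / rank
                break  # only the first relevant hit counts
    return total / len(ranked_ids_per_query)

def precision_at_k(ranked_ids, relevant_ids, k):
    """Fraction of the top-k results that are relevant."""
    return sum(doc_id in relevant_ids for doc_id in ranked_ids[:k]) / k
```

For example, two queries whose first relevant hits land at ranks 2 and 1 give an MRR of (1/2 + 1/1) / 2 = 0.75.
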
### 2. Human Evaluation
Manual assessment for quality aspects difficult to automate.

**Dimensions:**
- **Accuracy**: Factual correctness
- **Coherence**: Logical flow
- **Relevance**: Answers the question
- **Fluency**: Natural language quality
- **Safety**: No harmful content
- **Helpfulness**: Useful to the user

### 3. LLM-as-Judge
Use stronger LLMs to evaluate weaker model outputs.

**Approaches:**
- **Pointwise**: Score individual responses
- **Pairwise**: Compare two responses
- **Reference-based**: Compare to gold standard
- **Reference-free**: Judge without ground truth

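Pairwise judges are known to be sensitive to the order responses are presented in, so a common mitigation is to run each comparison twice with A and B swapped and keep only consistent verdicts. A sketch of just the aggregation step (the judge calls themselves happen elsewhere; the function name is illustrative):

```python
def aggregate_swapped_verdicts(verdict_original, verdict_swapped):
    """Combine two pairwise verdicts, the second from a run with A and B swapped.

    verdict_original: "A", "B", or "tie" with responses in original order.
    verdict_swapped:  "A", "B", or "tie" from the swapped run, so its "A"
                      actually refers to the original B (and vice versa).
    Returns the agreed winner, or "tie" when the judge is inconsistent.
    """
    unswap = {"A": "B", "B": "A", "tie": "tie"}
    second = unswap[verdict_swapped]  # map the swapped verdict back to original labels
    return verdict_original if verdict_original == second else "tie"
```
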
## Quick Start

```python
# Illustrative harness API: EvaluationSuite/Metric sketch the shape of an
# evaluation suite; substitute the evaluation library you actually use.
from llm_eval import EvaluationSuite, Metric

# Define evaluation suite
suite = EvaluationSuite([
    Metric.accuracy(),
    Metric.bleu(),
    Metric.bertscore(),
    Metric.custom(name="groundedness", fn=check_groundedness)
])

# Prepare test cases
test_cases = [
    {
        "input": "What is the capital of France?",
        "expected": "Paris",
        "context": "France is a country in Europe. Paris is its capital."
    },
    # ... more test cases
]

# Run evaluation
results = suite.evaluate(
    model=your_model,
    test_cases=test_cases
)

print(f"Overall Accuracy: {results.metrics['accuracy']}")
print(f"BLEU Score: {results.metrics['bleu']}")
```

## Automated Metrics Implementation

### BLEU Score
```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def calculate_bleu(reference, hypothesis):
    """Calculate BLEU score between reference and hypothesis."""
    # Smoothing avoids zero scores when a higher-order n-gram has no match
    smoothie = SmoothingFunction().method4

    return sentence_bleu(
        [reference.split()],
        hypothesis.split(),
        smoothing_function=smoothie
    )

# Usage
bleu = calculate_bleu(
    reference="The cat sat on the mat",
    hypothesis="A cat is sitting on the mat"
)
```

### ROUGE Score
```python
from rouge_score import rouge_scorer

def calculate_rouge(reference, hypothesis):
    """Calculate ROUGE-1, ROUGE-2, and ROUGE-L F-measures."""
    scorer = rouge_scorer.RougeScorer(['rouge1', 'rouge2', 'rougeL'], use_stemmer=True)
    scores = scorer.score(reference, hypothesis)

    return {
        'rouge1': scores['rouge1'].fmeasure,
        'rouge2': scores['rouge2'].fmeasure,
        'rougeL': scores['rougeL'].fmeasure
    }
```

### BERTScore
```python
from bert_score import score

def calculate_bertscore(references, hypotheses):
    """Calculate BERTScore with a pre-trained encoder."""
    P, R, F1 = score(
        hypotheses,
        references,
        lang='en',
        model_type='microsoft/deberta-xlarge-mnli'
    )

    return {
        'precision': P.mean().item(),
        'recall': R.mean().item(),
        'f1': F1.mean().item()
    }
```

### Custom Metrics
```python
from functools import lru_cache

@lru_cache(maxsize=1)
def _nli_pipeline():
    """Load the NLI model once and reuse it across calls."""
    from transformers import pipeline
    return pipeline("text-classification", model="microsoft/deberta-large-mnli")

def calculate_groundedness(response, context):
    """Check if response is grounded in the provided context."""
    # Use an NLI model to check entailment
    result = _nli_pipeline()(f"{context} [SEP] {response}")[0]

    # Return confidence that the response is entailed by the context
    return result['score'] if result['label'] == 'ENTAILMENT' else 0.0

def calculate_toxicity(text):
    """Measure toxicity in generated text."""
    from detoxify import Detoxify

    results = Detoxify('original').predict(text)
    return max(results.values())  # Return the highest toxicity score

def calculate_factuality(claim, knowledge_base):
    """Verify factual claims against a knowledge base."""
    # Implementation depends on your knowledge base
    # Could use retrieval + NLI, or a fact-checking API
    pass
```

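It is also worth keeping a cheap lexical metric alongside the model-based ones, since it runs instantly and catches gross regressions. A token-overlap F1 in the style of SQuAD's answer scoring (pure Python; the whitespace-and-lowercase normalization here is deliberately minimal):

```python
from collections import Counter

def token_f1(prediction, reference):
    """Token-overlap F1 between a predicted answer and a reference answer."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return float(pred_tokens == ref_tokens)  # both empty counts as a match
    overlap = Counter(pred_tokens) & Counter(ref_tokens)  # multiset intersection
    num_same = sum(overlap.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, `token_f1("the capital is paris", "paris")` scores 0.4: recall is perfect but precision pays for the three extra tokens.
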
## LLM-as-Judge Patterns

### Single Output Evaluation
```python
import json

from openai import OpenAI

client = OpenAI()

def llm_judge_quality(response, question):
    """Use a strong judge model (here GPT-5) to rate response quality."""
    prompt = f"""Rate the following response on a scale of 1-10 for:
1. Accuracy (factually correct)
2. Helpfulness (answers the question)
3. Clarity (well-written and understandable)

Question: {question}
Response: {response}

Provide ratings in JSON format:
{{
    "accuracy": <1-10>,
    "helpfulness": <1-10>,
    "clarity": <1-10>,
    "reasoning": "<brief explanation>"
}}
"""

    result = client.chat.completions.create(
        model="gpt-5",
        messages=[{"role": "user", "content": prompt}],
        temperature=0
    )

    return json.loads(result.choices[0].message.content)
```

### Pairwise Comparison
```python
def compare_responses(question, response_a, response_b):
    """Compare two responses using an LLM judge."""
    prompt = f"""Compare these two responses to the question and determine which is better.

Question: {question}

Response A: {response_a}

Response B: {response_b}

Which response is better and why? Consider accuracy, helpfulness, and clarity.

Answer with JSON:
{{
    "winner": "A" or "B" or "tie",
    "reasoning": "<explanation>",
    "confidence": <1-10>
}}
"""

    result = client.chat.completions.create(
        model="gpt-5",
        messages=[{"role": "user", "content": prompt}],
        temperature=0
    )

    return json.loads(result.choices[0].message.content)
```

## Human Evaluation Frameworks

### Annotation Guidelines
```python
class AnnotationTask:
    """Structure for a human annotation task."""

    def __init__(self, response, question, context=None):
        self.response = response
        self.question = question
        self.context = context

    def get_annotation_form(self):
        return {
            "question": self.question,
            "context": self.context,
            "response": self.response,
            "ratings": {
                "accuracy": {
                    "scale": "1-5",
                    "description": "Is the response factually correct?"
                },
                "relevance": {
                    "scale": "1-5",
                    "description": "Does it answer the question?"
                },
                "coherence": {
                    "scale": "1-5",
                    "description": "Is it logically consistent?"
                }
            },
            "issues": {
                "factual_error": False,
                "hallucination": False,
                "off_topic": False,
                "unsafe_content": False
            },
            "feedback": ""
        }
```

### Inter-Rater Agreement
```python
from sklearn.metrics import cohen_kappa_score

def calculate_agreement(rater1_scores, rater2_scores):
    """Calculate inter-rater agreement (Cohen's kappa)."""
    kappa = cohen_kappa_score(rater1_scores, rater2_scores)

    # Landis & Koch-style interpretation bands
    if kappa < 0:
        interpretation = "Poor"
    elif kappa < 0.2:
        interpretation = "Slight"
    elif kappa < 0.4:
        interpretation = "Fair"
    elif kappa < 0.6:
        interpretation = "Moderate"
    elif kappa < 0.8:
        interpretation = "Substantial"
    else:
        interpretation = "Almost Perfect"

    return {
        "kappa": kappa,
        "interpretation": interpretation
    }
```

## A/B Testing

### Statistical Testing Framework
```python
from scipy import stats
import numpy as np

class ABTest:
    def __init__(self, variant_a_name="A", variant_b_name="B"):
        self.variant_a = {"name": variant_a_name, "scores": []}
        self.variant_b = {"name": variant_b_name, "scores": []}

    def add_result(self, variant, score):
        """Add an evaluation result for a variant."""
        if variant == "A":
            self.variant_a["scores"].append(score)
        else:
            self.variant_b["scores"].append(score)

    def analyze(self, alpha=0.05):
        """Perform statistical analysis."""
        a_scores = self.variant_a["scores"]
        b_scores = self.variant_b["scores"]

        # T-test
        t_stat, p_value = stats.ttest_ind(a_scores, b_scores)

        # Effect size (Cohen's d, using the pooled sample standard deviation)
        pooled_std = np.sqrt((np.std(a_scores, ddof=1)**2 + np.std(b_scores, ddof=1)**2) / 2)
        cohens_d = (np.mean(b_scores) - np.mean(a_scores)) / pooled_std

        return {
            "variant_a_mean": np.mean(a_scores),
            "variant_b_mean": np.mean(b_scores),
            "difference": np.mean(b_scores) - np.mean(a_scores),
            "relative_improvement": (np.mean(b_scores) - np.mean(a_scores)) / np.mean(a_scores),
            "p_value": p_value,
            "statistically_significant": p_value < alpha,
            "cohens_d": cohens_d,
            "effect_size": self.interpret_cohens_d(cohens_d),
            "winner": "B" if np.mean(b_scores) > np.mean(a_scores) else "A"
        }

    @staticmethod
    def interpret_cohens_d(d):
        """Interpret Cohen's d effect size."""
        abs_d = abs(d)
        if abs_d < 0.2:
            return "negligible"
        elif abs_d < 0.5:
            return "small"
        elif abs_d < 0.8:
            return "medium"
        else:
            return "large"
```

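As a sanity check on the math, the core of an `analyze`-style comparison can be reproduced directly with SciPy. The per-example quality scores below are synthetic, purely for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical per-example quality scores for two prompt variants
a_scores = np.array([0.70, 0.72, 0.68, 0.71, 0.69, 0.73, 0.70, 0.71])
b_scores = np.array([0.78, 0.80, 0.76, 0.79, 0.77, 0.81, 0.78, 0.79])

# Two-sample t-test on the means
t_stat, p_value = stats.ttest_ind(a_scores, b_scores)

# Cohen's d with the pooled sample standard deviation
pooled_std = np.sqrt((a_scores.std(ddof=1) ** 2 + b_scores.std(ddof=1) ** 2) / 2)
cohens_d = (b_scores.mean() - a_scores.mean()) / pooled_std

print(f"p={p_value:.4f}, d={cohens_d:.2f}")  # variant B wins decisively here
```

With a mean gap of 0.08 and per-variant spreads around 0.016, the effect size is large and the p-value is far below any usual alpha; real eval score distributions are rarely this clean.
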
## Regression Testing

### Regression Detection
```python
class RegressionDetector:
    def __init__(self, baseline_results, threshold=0.05):
        self.baseline = baseline_results
        self.threshold = threshold

    def check_for_regression(self, new_results):
        """Detect if new results show a regression."""
        regressions = []

        for metric in self.baseline.keys():
            baseline_score = self.baseline[metric]
            new_score = new_results.get(metric)

            # Skip missing metrics and zero baselines (relative change undefined)
            if new_score is None or baseline_score == 0:
                continue

            # Calculate relative change
            relative_change = (new_score - baseline_score) / baseline_score

            # Flag if significant decrease
            if relative_change < -self.threshold:
                regressions.append({
                    "metric": metric,
                    "baseline": baseline_score,
                    "current": new_score,
                    "change": relative_change
                })

        return {
            "has_regression": len(regressions) > 0,
            "regressions": regressions
        }
```

## Benchmarking

### Running Benchmarks
```python
import numpy as np

class BenchmarkRunner:
    def __init__(self, benchmark_dataset):
        self.dataset = benchmark_dataset

    def run_benchmark(self, model, metrics):
        """Run the model on the benchmark and calculate metrics."""
        results = {metric.name: [] for metric in metrics}

        for example in self.dataset:
            # Generate prediction
            prediction = model.predict(example["input"])

            # Calculate each metric
            for metric in metrics:
                score = metric.calculate(
                    prediction=prediction,
                    reference=example["reference"],
                    context=example.get("context")
                )
                results[metric.name].append(score)

        # Aggregate results
        return {
            metric: {
                "mean": np.mean(scores),
                "std": np.std(scores),
                "min": min(scores),
                "max": max(scores)
            }
            for metric, scores in results.items()
        }
```

## Resources

- **references/metrics.md**: Comprehensive metric guide
- **references/human-evaluation.md**: Annotation best practices
- **references/benchmarking.md**: Standard benchmarks
- **references/a-b-testing.md**: Statistical testing guide
- **references/regression-testing.md**: CI/CD integration
- **assets/evaluation-framework.py**: Complete evaluation harness
- **assets/benchmark-dataset.jsonl**: Example datasets
- **scripts/evaluate-model.py**: Automated evaluation runner

## Best Practices

1. **Multiple Metrics**: Use diverse metrics for a comprehensive view
2. **Representative Data**: Test on real-world, diverse examples
3. **Baselines**: Always compare against baseline performance
4. **Statistical Rigor**: Use proper statistical tests for comparisons
5. **Continuous Evaluation**: Integrate into the CI/CD pipeline
6. **Human Validation**: Combine automated metrics with human judgment
7. **Error Analysis**: Investigate failures to understand weaknesses
8. **Version Control**: Track evaluation results over time

## Common Pitfalls

- **Single Metric Obsession**: Optimizing for one metric at the expense of others
- **Small Sample Size**: Drawing conclusions from too few examples
- **Data Contamination**: Testing on training data
- **Ignoring Variance**: Not accounting for statistical uncertainty
- **Metric Mismatch**: Using metrics not aligned with business goals