@modular-prompt/experiment 0.3.0 → 0.3.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,412 +1,75 @@
  # @modular-prompt/experiment
 
- Experiment framework for comparing and evaluating modular prompt modules.
+ Framework for comparing and evaluating prompt modules.
 
- ## Overview
-
- This framework provides tools to compare and evaluate different prompt module variations under identical conditions. It integrates with the `@modular-prompt/core` system to test multiple prompt variations and evaluate their output quality.
-
- ### Use Cases
-
- - **Prompt Engineering**: Validate the effectiveness of new prompt structures
- - **Module Separation**: Verify that modularized prompts produce equivalent outputs
- - **Quality Evaluation**: Assess output stability and consistency through repeated executions
- - **Multi-Model Testing**: Test across different LLM providers (MLX, VertexAI, GoogleGenAI, etc.)
-
- ## Features
-
- - ✅ **Dynamic Module Loading**: Load prompt modules from external files or inline definitions
- - ✅ **Flexible Evaluators**: Support both code-based and AI-based evaluation
- - ✅ **Statistical Analysis**: Analyze success rates, execution times, and output consistency
- - ✅ **Prompt Diff Detection**: Automatically detect differences between module outputs
- - ✅ **Driver Caching**: Reuse drivers for improved memory efficiency
- - ✅ **Detailed Logging**: Comprehensive logging of all executions
-
- ## Installation
+ ## Installation
 
  ```bash
- pnpm add @modular-prompt/experiment
+ npm install @modular-prompt/experiment
  ```
 
- ## Quick Start
+ ## Overview
 
- ### 1. Create Configuration File
+ Compares and evaluates multiple prompt modules under identical conditions. Experiments are defined in YAML and run from the CLI.
 
- You can use either YAML or TypeScript format.
+ - **Prompt Comparison**: Quantitatively compare the effectiveness of different prompt structures
+ - **Multi-Model Testing**: Compare behavior across different LLM providers
+ - **Quality Evaluation**: Assess stability and consistency through repeated runs
+ - **Flexible Evaluators**: Support both code-based and AI-based evaluation
 
- #### Option A: YAML Configuration (Recommended for static configurations)
+ ## Quick Start
 
- Create `examples/experiment.yaml`:
+ ### 1. Create a configuration file
 
  ```yaml
+ # experiment.yaml
  models:
-   gemini-fast:
-     provider: vertexai
-     model: gemini-2.0-flash-exp
-     capabilities: ["tools", "fast"]
-     enabled: true
+   gpt4o:
+     provider: openai
+     model: gpt-4o
 
  drivers:
-   vertexai:
-     projectId: your-project-id
-     location: us-central1
-     # Paths are resolved relative to this config file
-     # Can use ~/ for home directory or absolute paths
-     credentialsPath: ./credentials.json
+   openai:
+     apiKey: ${OPENAI_API_KEY}
 
  modules:
    - name: my-module
      path: ./my-module.ts
-     description: My custom prompt module
 
  testCases:
-   - name: Basic Test
-     description: Test basic functionality
-     input: # Structured context object (passed to module.compile)
-       query: user question
-       context: additional information
-     models: # Optional: specify which models to test (uses all enabled if not specified)
-       - gemini-fast
+   - name: Basic test
+     input:
+       query: "Explain TypeScript"
 
- evaluators:
-   # Built-in evaluators (name only)
-   - name: structured-output-presence
-   - name: llm-requirement-fulfillment
-   # Or external evaluator (with path)
-   - name: custom-validator
-     path: ./evaluators/custom-validator.ts
-   # Or inline prompt evaluator
-   - name: quality-check
-     prompt:
-       objective:
-         - Evaluate output quality
-       instructions:
-         - Check clarity and accuracy
-
- evaluation:
-   enabled: true
-   model: gemini-fast # Reference by model name
+ evaluators: []
  ```
 
- #### Option B: TypeScript Configuration (For dynamic configurations)
-
- Create `examples/experiment.ts`:
-
- ```typescript
- export default {
-   models: {
-     'gemini-fast': {
-       provider: 'vertexai',
-       model: 'gemini-2.0-flash-exp',
-       capabilities: ['tools', 'fast'],
-       enabled: true,
-     },
-   },
-   drivers: {
-     vertexai: {
-       projectId: 'your-project-id',
-       location: 'us-central1',
-       credentialsPath: './credentials.json',
-     },
-   },
-   modules: [
-     {
-       name: 'my-module',
-       path: './my-module.ts',
-       description: 'My custom prompt module',
-     },
-   ],
-   testCases: [
-     {
-       name: 'Basic Test',
-       description: 'Test basic functionality',
-       input: { // Structured context object
-         query: 'user question',
-         options: { temperature: 0.7 },
-       },
-       models: ['gemini-fast'], // Optional
-     },
-   ],
-   evaluators: [
-     // Built-in evaluators (name only)
-     { name: 'structured-output-presence' },
-     { name: 'llm-requirement-fulfillment' },
-     // Or external evaluator (with path)
-     {
-       name: 'custom-validator',
-       path: './evaluators/custom-validator.ts',
-     },
-   ],
-   evaluation: {
-     enabled: true,
-     model: 'gemini-fast', // Reference by model name
-   },
- };
- ```
-
- **TypeScript Support**: TypeScript configuration files are automatically transpiled using [jiti](https://github.com/unjs/jiti). You can use TypeScript syntax directly without pre-compilation. Type annotations are stripped automatically, and the file is executed as JavaScript.
-
- **Important**: All file paths in the configuration (modules, evaluators, credentials) are resolved relative to the config file location.
-
- ### 2. Run Experiment
-
- ```bash
- # Validate configuration and display execution plan (recommended first step)
- npx modular-experiment examples/experiment.yaml --dry-run
-
- # Run with YAML config
- npx modular-experiment examples/experiment.yaml
-
- # Run with TypeScript config
- npx modular-experiment examples/experiment.ts
-
- # Run specific module
- npx modular-experiment examples/experiment.yaml --modules my-module
-
- # Run with evaluation
- npx modular-experiment examples/experiment.yaml --evaluate
-
- # Run multiple times for statistics
- npx modular-experiment examples/experiment.yaml --repeat 10
-
- # Run with detailed logging to JSONL file
- npx modular-experiment examples/experiment.yaml --log-file experiment.jsonl
-
- # Run with verbose output (show internal operations)
- npx modular-experiment examples/experiment.yaml --verbose
-
- # Combine options
- npx modular-experiment examples/experiment.yaml --evaluate --log-file experiment.jsonl --verbose
- ```
-
- ## Configuration
-
- ### Module Definition
-
- Modules can be defined inline or loaded from external files:
-
- ```typescript
- // External file
- export const modules: ModuleReference[] = [
-   {
-     name: 'my-module',
-     path: './modules/my-module.ts',
-     description: 'Description',
-   },
- ];
- ```
-
- A module file should export a default object with:
+ ### 2. Create a module file
 
  ```typescript
+ // my-module.ts
  import { compile } from '@modular-prompt/core';
- import { myPromptModule } from './prompts.js';
 
  export default {
    name: 'My Module',
-   description: 'Module description',
    compile: (context: any) => compile(myPromptModule, context),
  };
  ```
 
- ### Evaluator Definition
-
- Two types of evaluators are supported:
-
- #### 1. Code Evaluator
-
- Programmatic validation (e.g., JSON structure validation):
-
- ```typescript
- import type { CodeEvaluator, EvaluationContext, EvaluationResult } from '@modular-prompt/experiment';
-
- export default {
-   name: 'JSON Validator',
-   description: 'Validates JSON structure in output',
-
-   async evaluate(context: EvaluationContext): Promise<EvaluationResult> {
-     // Validation logic
-     return {
-       evaluator: 'json-validator',
-       moduleName: context.moduleName,
-       score: 10,
-       reasoning: 'Valid JSON structure',
-     };
-   },
- } satisfies CodeEvaluator;
- ```
-
- #### 2. Prompt Evaluator
-
- AI-based evaluation using LLM:
-
- ```typescript
- import type { PromptEvaluator, EvaluationContext } from '@modular-prompt/experiment';
- import type { PromptModule } from '@modular-prompt/core';
-
- const evaluationModule: PromptModule<EvaluationContext> = {
-   createContext: (): EvaluationContext => ({
-     moduleName: '',
-     prompt: '',
-     runs: [],
-   }),
-
-   objective: [
-     '- Assess output quality',
-   ],
-
-   instructions: [
-     '- Evaluate clarity and accuracy',
-   ],
- };
-
- export default {
-   name: 'Quality Evaluator',
-   description: 'Evaluates output quality',
-   module: evaluationModule,
- } satisfies PromptEvaluator;
- ```
-
- All prompt evaluators are automatically merged with the base evaluation module.
-
- ## Built-in Evaluators
-
- The framework includes built-in evaluators that can be referenced by name only (no path required):
-
- ### structured-output-presence
-
- - **Type**: Code Evaluator
- - **What it measures**: Checks if `structuredOutput` exists and is a valid object
- - **Evaluation logic**:
-   - Verifies presence of `structuredOutput` in query result
-   - Confirms it's a non-null object type
- - **Score**: `(validCount / totalRuns) * 10`
- - **Use case**: Verify that the model returns structured JSON output (essential for structured output workflows)
- - **Usage**:
-   ```yaml
-   evaluators:
-     - name: "structured-output-presence"
-   ```
-
- ### llm-requirement-fulfillment
-
- - **Type**: Prompt Evaluator (uses LLM for evaluation)
- - **What it measures**: Uses LLM to comprehensively evaluate whether output meets functional requirements
- - **Evaluation criteria**:
-   1. **Requirement Fulfillment**: Does it satisfy the intent described in the prompt?
-   2. **Parameter Correctness**: Are all required parameters present and correct?
-   3. **Parameter Completeness**: Are optional parameters appropriately used or omitted?
-   4. **Logical Consistency**: Is the output logically consistent with the facts?
- - **Score**: 0-10 overall score with detailed sub-scores for each criterion
- - **Use case**: Comprehensive quality assessment of output (requires evaluation model to be configured)
- - **Usage**:
-   ```yaml
-   evaluators:
-     - name: "llm-requirement-fulfillment"
-
-   evaluation:
-     enabled: true
-     model: "gemini-fast" # Model used for evaluation
-   ```
-
- **Note**: `llm-requirement-fulfillment` requires an evaluation model to be configured in the `evaluation` section.
-
- ## Architecture
-
- ```
- ┌─────────────────────────────────────┐
- │ run-comparison.ts (CLI Entry Point) │
- └─────────────────────────────────────┘
-                   │
-         ┌─────────┼─────────┐
-         ▼         ▼         ▼
-    ┌────────┐ ┌────────┐ ┌────────┐
-    │ Config │ │ Runner │ │Reporter│
-    │ Loader │ │        │ │        │
-    └────────┘ └────────┘ └────────┘
-         │         │
-         ▼         ▼
-    ┌────────┐ ┌────────┐
-    │Dynamic │ │Driver  │
-    │Loader  │ │Manager │
-    └────────┘ └────────┘
- ```
-
- ### Components
-
- | Component | Responsibility |
- |-----------|----------------|
- | `config/loader.ts` | Load YAML configuration |
- | `config/dynamic-loader.ts` | Dynamic module/evaluator loading |
- | `runner/experiment.ts` | Orchestrate experiment execution |
- | `runner/evaluator.ts` | Execute evaluations |
- | `runner/driver-manager.ts` | Cache and manage AI drivers |
- | `reporter/statistics.ts` | Generate statistical reports |
- | `base-evaluation-module.ts` | Base evaluation prompt module |
- | `evaluators/index.ts` | Built-in evaluator registry |
-
- ## Examples
-
- See `examples/experiment.yaml` for a complete configuration template with:
- - Model definitions (MLX, Vertex AI, Google GenAI)
- - Driver configurations with credential paths
- - Evaluation settings
- - Empty sections for modules, test cases, and evaluators (ready for your content)
-
- ## API
-
- ### Programmatic Usage
-
- ```typescript
- import {
-   loadExperimentConfig,
-   loadModules,
-   loadEvaluators,
-   ExperimentRunner,
-   DriverManager,
- } from '@modular-prompt/experiment';
-
- const { serverConfig, aiService } = loadExperimentConfig('config.yaml');
- const modules = await loadModules(moduleRefs, basePath);
- const evaluators = await loadEvaluators(evaluatorRefs, basePath);
-
- const driverManager = new DriverManager();
- const runner = new ExperimentRunner(
-   aiService,
-   driverManager,
-   modules,
-   testCases,
-   models,
-   repeatCount,
-   evaluators,
-   evaluatorModel
- );
-
- const results = await runner.run();
- await driverManager.cleanup();
- ```
-
- ## CLI Options
+ ### 3. Run
 
+ ```bash
+ npx modular-experiment experiment.yaml --dry-run   # check the plan
+ npx modular-experiment experiment.yaml             # run
+ npx modular-experiment experiment.yaml --evaluate  # run with evaluation
+ npx modular-experiment experiment.yaml --repeat 10 # repeat runs
  ```
- Usage: modular-experiment <config> [options]
 
- Arguments:
-   <config> Config file path (YAML or TypeScript)
+ For configuration details, evaluator authoring, and the programmatic API, see `skills/experiment/SKILL.md`.
 
- Options:
-   --test-case <name>     Test case name filter
-   --model <provider>     Model provider filter
-   --modules <names>      Comma-separated module names (default: all)
-   --repeat <count>       Number of repetitions (default: 1)
-   --evaluate             Enable evaluation phase
-   --evaluators <names>   Comma-separated evaluator names (default: all)
-   --dry-run              Display execution plan without running the experiment
-   --log-file <path>      Log file path for JSONL output (detailed logs)
-   --verbose              Enable verbose output (show detailed internal operations)
- ```
+ ## Skills (for Claude Code)
 
- **Note**: All paths specified in the config file are resolved relative to the config file's directory.
+ This package includes `skills/experiment/SKILL.md`. It can be used as a Claude Code skill to guide experiment framework usage.
 
  ## License
 
package/examples/experiment.yaml CHANGED
@@ -12,6 +12,23 @@ models:
      provider: "mlx"
      capabilities: ["local", "tools"]
      priority: 20
+   lfm2.5-jp:
+     model: LiquidAI/LFM2.5-1.2B-JP-MLX-8bit
+     provider: "mlx"
+     capabilities: ["local", "fast", "japanese"]
+     priority: 20
+   lfm2.5-instruct:
+     model: LiquidAI/LFM2.5-1.2B-Instruct-MLX-8bit
+     provider: "mlx"
+     capabilities: ["local", "fast", "japanese"]
+     priority: 20
+     disabled: true
+   lfm2.5-thinking:
+     model: LiquidAI/LFM2.5-1.2B-Thinking-MLX-8bit
+     provider: "mlx"
+     capabilities: ["local", "fast", "japanese"]
+     priority: 20
+     disabled: true
 
  drivers:
    mlx: {}
@@ -55,17 +72,32 @@ testCases:
      tools: *tools_def
      toolChoice: auto
 
- # --- qwen3-4b ---
- - name: "[qwen3] Weather tool call"
+ # # --- qwen3-4b ---
+ # - name: "[qwen3] Weather tool call"
+ #   description: "Expects the get_weather tool to be called"
+ #   models: ["qwen3-4b"]
+ #   input:
+ #     question: "What's the weather in Tokyo?"
+ #   queryOptions: *tool_weather
+
+ # - name: "[qwen3] Question that needs no tool"
+ #   description: "Expects a plain-text answer without any tool call"
+ #   models: ["qwen3-4b"]
+ #   input:
+ #     question: "What is 1 + 1?"
+ #   queryOptions: *tool_math
+
+ # --- lfm2.5 jp ---
+ - name: "[lfm2.5] Weather tool call"
    description: "Expects the get_weather tool to be called"
- models: ["qwen3-4b"]
+ models: ["lfm2.5-instruct"]
    input:
      question: "What's the weather in Tokyo?"
    queryOptions: *tool_weather
 
- - name: "[qwen3] Question that needs no tool"
+ - name: "[lfm2.5] Question that needs no tool"
    description: "Expects a plain-text answer without any tool call"
- models: ["qwen3-4b"]
+ models: ["lfm2.5-instruct"]
    input:
      question: "What is 1 + 1?"
    queryOptions: *tool_math
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "@modular-prompt/experiment",
3
- "version": "0.3.0",
3
+ "version": "0.3.1",
4
4
  "description": "Experiment framework for comparing and evaluating prompt modules",
5
5
  "type": "module",
6
6
  "main": "./dist/index.js",
@@ -17,6 +17,7 @@
17
17
  "files": [
18
18
  "dist",
19
19
  "examples",
20
+ "skills",
20
21
  "README.md"
21
22
  ],
22
23
  "dependencies": {
@@ -24,9 +25,9 @@
24
25
  "jiti": "^2.4.2",
25
26
  "yaml": "^2.3.4",
26
27
  "zod": "^3.22.4",
27
- "@modular-prompt/driver": "0.8.0",
28
- "@modular-prompt/core": "0.1.12",
29
- "@modular-prompt/utils": "0.2.3"
28
+ "@modular-prompt/core": "0.1.13",
29
+ "@modular-prompt/driver": "0.8.1",
30
+ "@modular-prompt/utils": "0.2.4"
30
31
  },
31
32
  "devDependencies": {
32
33
  "@eslint/js": "^9.34.0",
@@ -64,7 +65,8 @@
64
65
  "test": "vitest",
65
66
  "test:ui": "vitest --ui",
66
67
  "test:run": "vitest run",
67
- "clean": "rm -rf dist",
68
+ "copy-skills": "mkdir -p skills/experiment && cp ../../skills/experiment/SKILL.md skills/experiment/SKILL.md",
69
+ "clean": "rm -rf dist skills",
68
70
  "lint": "eslint src",
69
71
  "typecheck": "tsc --noEmit"
70
72
  }
package/skills/experiment/SKILL.md ADDED
@@ -0,0 +1,292 @@
+ ---
+ name: experiment
+ description: Usage guide for the modular-prompt experiment framework (@modular-prompt/experiment). Reference for configuring and running prompt module comparison/evaluation experiments and for defining evaluators.
+ ---
+
+ # Experiment Framework Usage Guide
+
+ ## What the experiment framework is
+
+ `@modular-prompt/experiment` is a framework for comparing and evaluating multiple prompt modules under identical conditions. Experiments are defined in YAML and can be run from the CLI or programmatically.
+
+ ### Use cases
+
+ - **Prompt comparison**: Compare and validate the effectiveness of different prompt structures
+ - **Module separation checks**: Confirm that modularized prompts produce equivalent output
+ - **Quality evaluation**: Assess output stability and consistency through repeated runs
+ - **Multi-model testing**: Compare behavior across different LLM providers
+
+ ## CLI
+
+ ```bash
+ # Validate the config and show the execution plan (start here)
+ npx modular-experiment config.yaml --dry-run
+
+ # Run the experiment
+ npx modular-experiment config.yaml
+
+ # Run with evaluation
+ npx modular-experiment config.yaml --evaluate
+
+ # Run multiple times (for statistics)
+ npx modular-experiment config.yaml --repeat 10
+
+ # Only a specific module / test case
+ npx modular-experiment config.yaml --modules my-module --test-case "Basic Test"
+
+ # Detailed logging
+ npx modular-experiment config.yaml --log-file experiment.jsonl --verbose
+ ```
+
+ ### CLI options
+
+ | Option | Description |
+ |--------|-------------|
+ | `<config>` | Config file path (YAML or TypeScript) |
+ | `--dry-run` | Show the execution plan only |
+ | `--evaluate` | Enable the evaluation phase |
+ | `--repeat <count>` | Number of repetitions (default: 1) |
+ | `--modules <names>` | Comma-separated module name filter |
+ | `--test-case <name>` | Test case name filter |
+ | `--model <provider>` | Model provider filter |
+ | `--evaluators <names>` | Comma-separated evaluator name filter |
+ | `--log-file <path>` | JSONL log file path |
+ | `--verbose` | Show detailed internal operations |
+
+ ## Configuration file (YAML)
+
+ ```yaml
+ # Model definitions
+ models:
+   gpt4o:
+     provider: openai
+     model: gpt-4o
+     capabilities: ["streaming", "tools", "structured"]
+     enabled: true
+   gemini:
+     provider: vertexai
+     model: gemini-2.0-flash-001
+     capabilities: ["tools", "fast"]
+     enabled: true
+
+ # Driver credentials
+ drivers:
+   openai:
+     apiKey: ${OPENAI_API_KEY} # environment variable
+   vertexai:
+     projectId: my-gcp-project
+     location: us-central1
+
+ # Default options
+ defaultOptions:
+   temperature: 0.7
+   maxTokens: 2048
+
+ # Modules under test
+ modules:
+   - name: baseline
+     path: ./modules/baseline.ts # relative to this config file
+     description: Baseline prompt
+   - name: optimized
+     path: ./modules/optimized.ts
+     description: Optimized prompt
+
+ # Test cases
+ testCases:
+   - name: Basic test
+     description: Basic behavior check
+     input: # context passed to module.compile
+       query: "Explain TypeScript type inference"
+     models: [gpt4o] # optional: all enabled models when omitted
+     queryOptions: # optional
+       temperature: 0.5
+
+   - name: Tool call test
+     input:
+       query: "Look up the weather in Tokyo"
+     queryOptions:
+       tools:
+         - name: get_weather
+           description: Get the weather
+           parameters:
+             type: object
+             properties:
+               city: { type: string }
+             required: [city]
+
+ # Evaluators
+ evaluators:
+   - name: structured-output-presence # built-in
+   - name: llm-requirement-fulfillment # built-in
+   - name: custom-eval # external file
+     path: ./evaluators/custom-eval.ts
+
+ # Evaluation settings
+ evaluation:
+   enabled: true
+   model: gpt4o # model used for evaluation
+ ```
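The `${OPENAI_API_KEY}` value in the driver section above implies that environment variables are interpolated when the config is loaded. As a minimal illustrative sketch of how such expansion could work (the `expandEnvVars` helper is hypothetical, not part of the package):

```typescript
// Hypothetical sketch: expand ${VAR} placeholders in config string values.
// Not the package's actual implementation.
function expandEnvVars(
  value: string,
  env: Record<string, string | undefined> = process.env,
): string {
  return value.replace(/\$\{([A-Z0-9_]+)\}/gi, (match: string, name: string) => {
    const resolved = env[name];
    // Leave unknown placeholders untouched so misconfiguration stays visible.
    return resolved !== undefined ? resolved : match;
  });
}
```

With this scheme, `apiKey: ${OPENAI_API_KEY}` would be replaced by the environment value at load time, and an unset variable would remain as a literal placeholder rather than silently becoming an empty string.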
+
+ ### Path resolution
+
+ Paths in the config file (modules, evaluators, etc.) are resolved relative to the config file's directory. `~/` for the home directory and absolute paths are also supported.
+
+ ## Module definition
+
+ A module file under test:
+
+ ```typescript
+ import { compile } from '@modular-prompt/core';
+ import { myPromptModule } from './prompts.js';
+
+ export default {
+   name: 'My Module',
+   description: 'Prompt module under test',
+   compile: (context: any) => compile(myPromptModule, context),
+ };
+ ```
+
+ The `compile` function receives the test case's `input` as its context and returns a CompiledPrompt.
+
+ ## Evaluators
+
+ ### Built-in evaluators
+
+ **structured-output-presence** - code evaluator
+ - Verifies that `structuredOutput` exists and is valid
+ - Score: `(validCount / totalRuns) * 10`
+
+ **llm-requirement-fulfillment** - prompt evaluator
+ - An LLM comprehensively evaluates how well the output fulfills the requirements
+ - Criteria: requirement fulfillment, parameter correctness, parameter completeness, logical consistency
+ - Requires an evaluation model to be configured (`evaluation.model`)
+
+ ### Custom evaluator (code)
+
+ ```typescript
+ import type { CodeEvaluator, EvaluationContext, EvaluationResult } from '@modular-prompt/experiment';
+
+ export default {
+   name: 'json-validator',
+   description: 'Validates JSON structure',
+
+   async evaluate(context: EvaluationContext): Promise<EvaluationResult> {
+     const allValid = context.runs.every(run =>
+       run.queryResult.structuredOutput != null
+     );
+     return {
+       evaluator: 'json-validator',
+       moduleName: context.moduleName,
+       score: allValid ? 10 : 0,
+       reasoning: allValid ? 'JSON output present in every run' : 'JSON output missing',
+     };
+   },
+ } satisfies CodeEvaluator;
+ ```
+
+ ### Custom evaluator (prompt)
+
+ To have an LLM perform the evaluation:
+
+ ```typescript
+ import type { PromptEvaluator, EvaluationContext } from '@modular-prompt/experiment';
+ import type { PromptModule } from '@modular-prompt/core';
+
+ const evaluationModule: PromptModule<EvaluationContext> = {
+   createContext: () => ({ moduleName: '', prompt: '', runs: [] }),
+   objective: ['Rate output quality from 0 to 10'],
+   instructions: [
+     '- Judge on clarity, accuracy, and completeness',
+     (ctx) => `Module under evaluation: ${ctx.moduleName}`,
+     (ctx) => ctx.runs.map((run, i) =>
+       `Run ${i + 1}: ${run.queryResult.content.slice(0, 500)}`
+     ),
+   ],
+ };
+
+ export default {
+   name: 'quality-evaluator',
+   description: 'Evaluates output quality',
+   module: evaluationModule, // merged automatically with baseEvaluationModule
+ } satisfies PromptEvaluator;
+ ```
+
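To make the code-evaluator contract shown above concrete, here is a self-contained sketch that calls such an evaluator directly with hand-built run data. The local interfaces are simplified stand-ins for the package's real exported types, and `demo` is purely illustrative:

```typescript
// Simplified stand-ins for the package types, for illustration only.
interface QueryResult { content: string; structuredOutput?: unknown; }
interface EvaluationContext { moduleName: string; prompt: string; runs: { queryResult: QueryResult }[]; }
interface EvaluationResult { evaluator: string; moduleName: string; score?: number; reasoning?: string; }

const jsonValidator = {
  name: 'json-validator',
  async evaluate(context: EvaluationContext): Promise<EvaluationResult> {
    // Same scoring rule as the evaluator above: all runs must carry structuredOutput.
    const allValid = context.runs.every(run => run.queryResult.structuredOutput != null);
    return {
      evaluator: 'json-validator',
      moduleName: context.moduleName,
      score: allValid ? 10 : 0,
      reasoning: allValid ? 'JSON output present in every run' : 'JSON output missing',
    };
  },
};

// The runner invokes evaluate() once per module with all collected runs.
async function demo(): Promise<EvaluationResult> {
  return jsonValidator.evaluate({
    moduleName: 'baseline',
    prompt: '(compiled prompt)',
    runs: [
      { queryResult: { content: '{}', structuredOutput: { ok: true } } },
      { queryResult: { content: 'plain text' } }, // no structured output
    ],
  });
}
```

Because the second run lacks `structuredOutput`, this evaluator scores the module 0 rather than averaging, which makes missing structure easy to spot in the report.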
+ ## Key types
+
+ ### TestCase
+
+ ```typescript
+ interface TestCase {
+   name: string;
+   description?: string;
+   input: any; // context passed to module.compile
+   models?: string[]; // all enabled models when omitted
+   queryOptions?: Partial<QueryOptions>;
+ }
+ ```
+
+ ### EvaluationContext
+
+ ```typescript
+ interface EvaluationContext {
+   moduleName: string;
+   prompt: string; // compiled prompt (stringified)
+   runs: Array<{
+     queryResult: QueryResult;
+   }>;
+ }
+ ```
+
+ ### EvaluationResult
+
+ ```typescript
+ interface EvaluationResult {
+   evaluator: string;
+   moduleName: string;
+   score?: number; // 0-10
+   reasoning?: string;
+   details?: Record<string, any>;
+   error?: string;
+ }
+ ```
+
+ ## Programmatic usage
+
+ ```typescript
+ import {
+   loadExperimentConfig,
+   loadModules,
+   loadEvaluators,
+   ExperimentRunner,
+   DriverManager,
+ } from '@modular-prompt/experiment';
+
+ const { serverConfig, aiService, configDir } = loadExperimentConfig('config.yaml');
+ const modules = await loadModules(serverConfig.modules, configDir);
+ const evaluators = await loadEvaluators(serverConfig.evaluators, configDir);
+
+ const driverManager = new DriverManager();
+ const runner = new ExperimentRunner(
+   aiService,
+   driverManager,
+   modules,
+   serverConfig.testCases,
+   serverConfig.models,
+   5, // repeat count
+   evaluators,
+   evaluatorModel
+ );
+
+ const results = await runner.run();
+ await driverManager.cleanup();
+ ```
+
+ ## Experiment flow
+
+ ```
+ Load config → load modules and evaluators → for each test case:
+   compile all modules → diff the prompts → run on each model (with repeats)
+   → evaluation phase (optional) → generate statistics report → cleanup
+ ```
+
+ DriverManager caches drivers keyed by model name and reuses a driver while the same model is in use. When execution switches to a different model, the previous driver can be closed. This design keeps memory usage down for local LLMs (e.g., MLX).