peerbench 0.0.1 → 0.0.2-alpha.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (58)
  1. package/README.md +308 -2
  2. package/dist/abstract-Dec9Sc5O.d.ts +12 -0
  3. package/dist/benchmarks/index.d.ts +1698 -0
  4. package/dist/benchmarks/index.js +915 -0
  5. package/dist/benchmarks/index.js.map +1 -0
  6. package/dist/catalogs/index.d.ts +75 -0
  7. package/dist/catalogs/index.js +88 -0
  8. package/dist/catalogs/index.js.map +1 -0
  9. package/dist/chunk-22HU24QF.js +8 -0
  10. package/dist/chunk-22HU24QF.js.map +1 -0
  11. package/dist/chunk-232PY7K3.js +50 -0
  12. package/dist/chunk-232PY7K3.js.map +1 -0
  13. package/dist/chunk-7TREBPSJ.js +26 -0
  14. package/dist/chunk-7TREBPSJ.js.map +1 -0
  15. package/dist/chunk-DUBKY73H.js +128 -0
  16. package/dist/chunk-DUBKY73H.js.map +1 -0
  17. package/dist/chunk-GVF4YZF3.js +15 -0
  18. package/dist/chunk-GVF4YZF3.js.map +1 -0
  19. package/dist/chunk-HJH3SW3L.js +103 -0
  20. package/dist/chunk-HJH3SW3L.js.map +1 -0
  21. package/dist/chunk-IUN2IUCS.js +58 -0
  22. package/dist/chunk-IUN2IUCS.js.map +1 -0
  23. package/dist/chunk-PZ5AY32C.js +10 -0
  24. package/dist/chunk-PZ5AY32C.js.map +1 -0
  25. package/dist/chunk-VBOM2YEG.js +47 -0
  26. package/dist/chunk-VBOM2YEG.js.map +1 -0
  27. package/dist/chunk-ZJWSK4VO.js +11 -0
  28. package/dist/chunk-ZJWSK4VO.js.map +1 -0
  29. package/dist/data-BmN5WjZ4.d.ts +57 -0
  30. package/dist/generic-array-DLHWSvf1.d.ts +22 -0
  31. package/dist/index-WiPjF2AL.d.ts +15 -0
  32. package/dist/index.d.ts +38 -3845
  33. package/dist/index.js +40 -3557
  34. package/dist/index.js.map +1 -1
  35. package/dist/llm-DNj_tp2T.d.ts +22 -0
  36. package/dist/llm-judge-DIG1f1Az.d.ts +67 -0
  37. package/dist/provider-BDjGp2y-.d.ts +10 -0
  38. package/dist/providers/index.d.ts +72 -0
  39. package/dist/providers/index.js +263 -0
  40. package/dist/providers/index.js.map +1 -0
  41. package/dist/rate-limiter-CSmVIRsM.d.ts +60 -0
  42. package/dist/schemas/extensions/index.d.ts +14 -0
  43. package/dist/schemas/extensions/index.js +13 -0
  44. package/dist/schemas/extensions/index.js.map +1 -0
  45. package/dist/schemas/index.d.ts +233 -0
  46. package/dist/schemas/index.js +27 -0
  47. package/dist/schemas/index.js.map +1 -0
  48. package/dist/schemas/llm/index.d.ts +98 -0
  49. package/dist/schemas/llm/index.js +37 -0
  50. package/dist/schemas/llm/index.js.map +1 -0
  51. package/dist/scorers/index.d.ts +63 -0
  52. package/dist/scorers/index.js +494 -0
  53. package/dist/scorers/index.js.map +1 -0
  54. package/dist/simple-system-prompt-CzPYuvo0.d.ts +49 -0
  55. package/dist/system-prompt--0FdPWqK.d.ts +58 -0
  56. package/dist/utilities-BrRH32rD.d.ts +30 -0
  57. package/package.json +39 -21
  58. package/LICENSE +0 -21
package/README.md CHANGED
@@ -1,3 +1,309 @@
- # peerBench SDK
+ # `peerbench` SDK
 
- A TypeScript SDK for building AI evaluation benchmarks and data processing pipelines.
+ This package is the shared “domain core” for _building benchmarks_ in a standardized, portable way. It gives you a consistent set of _persistable entities_ (schemas + types), and a consistent set of _runtime contracts_ (loaders, runners, scorers, providers) so the same benchmark can run in a CLI, a web app, a worker, or anything else.
+
+ > _Runtime_ refers to the codebase (a CLI, a webapp, a background service etc.) that uses the SDK.
+
+ If you’re implementing a new benchmark, the SDK is the part that keeps it portable instead of glued to one runtime. If you’re integrating Peerbench into a runtime, the SDK is the part you don’t want to rewrite in every repo.
+
+ > This package does not support CommonJS
+
+ ## What is a benchmark?
+
+ A benchmark is a structured way to ask: “How well does a system perform on a set of tasks, under a set of rules?”
+
+ If you look at widely-used benchmarks, the pattern is always the same even when the tasks are different:
+
+ - In MMLU-Pro, each item is a question (often multiple choice) and the score is about correctness across categories.
+ - In BIG-bench style task suites, you have many different task types and you want a consistent way to run and score them.
+ - In HELM-style evaluations, you care about not only “did it answer correctly”, but also how you ran it (prompting setup, constraints, metadata) and how you report results.
+
+ Those benchmarks differ in details, but they all boil down to the same building blocks: a dataset of test cases, a way to run a system on each test case, and a way to score the output. The Peerbench SDK is designed so these patterns can be represented with the same portable shape.
+
+ ## The mental model
+
+ Now that we agree on what a benchmark is, we can talk about how Peerbench represents it.
+
+ Peerbench is deliberately boring here. It doesn’t try to invent a new “benchmark framework”. It gives you a small set of building blocks that you can compose. If you understand these pieces, you can read any benchmark implementation and know where to look.
+
+ ### Entities (the things you store)
+
+ When you run an evaluation, you end up with data that you want to store, query, re-score, and share. Peerbench standardizes that output by modeling it as a small set of entities.
+
+ This SDK assumes four core entities:
+
+ - `BenchmarkSpec`: optional benchmark-level configuration (think: “applies to the whole dataset/run”).
+ - `TestCase`: a single input/task.
+ - `Response`: the model output for a specific test case (`testCaseId` points to `TestCase.id`).
+ - `Score`: an evaluation result for a specific response (`responseId` points to `Response.id`).
+
+ Everything else in the SDK exists to create these entities in a predictable way.
+
+ Two fields show up everywhere:
+
+ - `kind` tells you _what type_ of entity something is. It is a stable, descriptive string you pick.
+ - `schemaVersion` tells you _which version_ of that entity shape you’re looking at.
+
+ This is why Peerbench leans on [Zod](https://zod.dev) schemas: it keeps the persisted data contract explicit and runtime-validated.
+
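To make the links concrete, here is a minimal sketch of how those entities relate. Only `id`, `kind`, `schemaVersion`, `testCaseId`, and `responseId` come from the description above; the remaining fields and the response/score `kind` strings are illustrative.

```ts
// One test case, the response a model produced for it, and a score for that response.
const testCase = {
  id: "tc-1",
  kind: "example.ts.echo",
  schemaVersion: 1,
  // ...benchmark-specific fields (prompt, expected output, etc.)
};

const response = {
  id: "resp-1",
  kind: "example.ts.echo.response", // illustrative kind string
  schemaVersion: 1,
  testCaseId: testCase.id, // links the Response to its TestCase
  // ...model output fields
};

const score = {
  id: "score-1",
  kind: "example.ts.echo.score", // illustrative kind string
  schemaVersion: 1,
  responseId: response.id, // links the Score to its Response
  value: 1, // scorers produce a numeric result
};
```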
+ ### Loader (how raw data becomes test cases)
+
+ In real projects, test cases live in many places: JSON files, JSONL streams, a database, Parquet, an API, etc.
+
+ A loader is the piece that reads that raw data and returns `TestCase[]` (and optionally existing `Response[]` / `Score[]`). The important point is not the file format. The important point is that the loader is where your “raw input → Peerbench entities” mapping lives.
+
+ ### Provider (how you talk to a model)
+
+ A provider is the runtime bridge to a model endpoint.
+
+ Runners do not talk to models directly. They call a provider abstraction (today that’s `AbstractLLMProvider` for message-based LLM communication). That gives you a clean seam:
+
+ - benchmark code doesn’t care where the model lives
+ - runtimes can swap providers without rewriting benchmark code
+
+ If you already have your own service in front of the model, you can still model it as a provider. The example in `packages/sdk-0.2/src/providers/example/restapi.ts` shows this pattern.
+
+ ### Runner (how you execute one test case)
+
+ A runner is the execution part of a benchmark. A runner function takes whatever inputs it needs, calls a provider, and produces a `Response`. It may also produce a `Score` (directly, or via a scorer).
+
+ Runners are intended to be “per test case” because it keeps the benchmark logic small and easy to compose. Running a whole dataset is orchestration, and orchestration is where runtimes differ (parallelism, retries, persistence, budgets, progress UI).
+
+ There is no restriction that a benchmark must have exactly one runner. You can export multiple runner functions (different modes, different prompts, different providers, different scoring strategies). The runtime just needs to pick the runner it wants to use.
+
+ One practical convention you will see in the examples is `runConfig`. It’s runner-specific, and it’s usually kept as a simple JSON-serializable object so you can store it alongside your run and reproduce it later. This is a best practice, not a hard restriction: if something doesn’t belong in `runConfig`, you can pass it as a normal parameter next to it.
+
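As a sketch of that convention (the type and field names here are illustrative, not an SDK contract), a runner-specific `runConfig` might look like:

```ts
// Illustrative runner-specific config: kept JSON-serializable so it can be
// stored next to the run and used to reproduce it later.
type EchoRunConfig = {
  model: string;
  temperature?: number;
};

const runConfig: EchoRunConfig = { model: "example-model" };
// Things that are not serializable (clients, callbacks, secrets) are passed
// as separate parameters instead of being stuffed into runConfig.
```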
+ ### Scorer (how you judge a response)
+
+ A scorer produces a numeric result. Some scorers are deterministic (same input → same output). Some scorers are non-deterministic (for example "LLM as a judge").
+
+ A scorer takes what it needs. Sometimes it’s “expected + actual strings”. Sometimes it’s “a list of required fields + a JSON output”. The runner decides what to pass into the scorer, because the runner is the piece that knows how the benchmark is structured.
+
+ If your benchmark can be scored in multiple ways, a runner can accept multiple scorer implementations and choose between them based on `scorer.kind`. The examples in `packages/sdk-0.2/src/benchmarks/example/` show what that looks like in code.
+
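As a sketch, a small deterministic scorer could look like the following. The `AbstractScorer` / `BaseScorerResult` shapes are taken from the type declarations further down in this diff; the `peerbench/scorers` import path, the `kind` string, and the parameter shape are assumptions for illustration.

```ts
import { AbstractScorer, type BaseScorerResult } from "peerbench/scorers";

// Hypothetical exact-match scorer: 1 if the strings match, 0 otherwise.
class ExactMatchScorer extends AbstractScorer {
  readonly kind = "example.scorer.exactMatch";

  async score(params: { expected: string; actual: string }): Promise<BaseScorerResult | null> {
    const value = params.expected.trim() === params.actual.trim() ? 1 : 0;
    return { value, explanation: value === 1 ? "exact match" : "mismatch" };
  }
}
```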
+ ## What the SDK does vs what the runtime does
+
+ It’s easy to accidentally push “too much responsibility” to the SDK and end up with a framework you can’t escape. It’s also easy to push “too much responsibility” to the runtime and end up with copy-pasted benchmark logic.
+
+ This SDK tries to draw a clean line:
+
+ The SDK is responsible for:
+
+ - defining and validating entity shapes (Zod schemas are the source of truth)
+ - providing base contracts and reusable building blocks (schemas + loaders + runners + scorers)
+ - defining provider/scorer contracts so you can swap backends without rewriting benchmarks
+
+ The runtime is responsible for:
+
+ - orchestration across many test cases (parallelism, retries, persistence, resuming, progress UI)
+ - deciding how/where entities are stored (DB schema, file layout, caching)
+ - secrets and private content (API keys, redacted prompts, access control)
+ - version migration strategies when `schemaVersion` changes
+
+ If you keep that boundary, benchmarks stay portable and runtimes stay free to evolve.
+
+ ## If you’re implementing a benchmark
+
+ The easiest way to think about “implementing a benchmark” is: you are implementing a small domain module that can be imported by multiple runtimes. That means your job is mostly about making your benchmark _self-contained and explicit_.
+
+ In practice, the benchmark implementer is responsible for:
+
+ - choosing stable `kind` strings (namespaced, descriptive) and bumping `schemaVersion` on breaking changes
+ - defining the schemas that are safe to store and share (and keeping secrets out of them)
+ - deciding how raw datasets map into `TestCase` entities (loader)
+ - deciding how a test case is executed (runner) and how it becomes a `Response`
+ - deciding how scoring works (inline in runner, a separate scorer, or multiple scorers)
+
+ Once those are in place, runtimes can focus on orchestration and product concerns without rewriting the benchmark logic.
+
+ Peerbench does not assume your new benchmarks will be part of the SDK itself. The normal expectation is that your benchmark code lives in your runtime (or in its own package), and it uses `peerbench` as a dependency for schemas, base types, and contracts.
+
+ Benchmarks can implement everything themselves, but they can also reuse the SDK’s predefined building blocks. When possible, it is recommended to stick with SDK base types (e.g. `AbstractLLMProvider`) and implementations, because that increases compatibility with other tooling that speaks “Peerbench entities”.
+
+ ## A benchmark, step by step
+
+ A “benchmark” in this SDK is not a magical object. It is a small folder that exports a few well-known pieces. The simplest complete benchmark usually includes:
+
+ 1. schemas (test case / response / score)
+ 2. a loader (how test cases are read from disk/DB/etc.)
+ 3. a runner (how a single test case is executed)
+ 4. one or more scorers (optional)
+
+ You can see a compact, end-to-end reference in:
+
+ - `packages/sdk-0.2/src/benchmarks/example/basic/`
+
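In code, that usually shows up as an `index.ts` that re-exports those pieces. A hypothetical layout (all names below are illustrative, not SDK exports):

```ts
// mybench/index.ts (hypothetical benchmark module consumed by runtimes)
export { MyTestCaseSchemaV1, MyResponseSchemaV1, MyScoreSchemaV1 } from "./schemas"; // 1) schemas
export { MyJSONLoader } from "./loader"; // 2) loader
export { runTestCase } from "./runner"; // 3) runner
export { MyExactMatchScorer } from "./scorer"; // 4) scorer(s)
```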
+ ### 1) Schemas: the source of truth
+
+ Schemas are the core of a benchmark. They are the entities that hold the data.
+
+ In `packages/sdk-0.2/src/benchmarks/example/basic/test-cases/echo.v1.ts` you can see the pattern:
+
+ - define a test case schema (`kind` + `schemaVersion` + benchmark fields)
+ - define a response schema for that test case
+ - define a score schema for that response
+
+ The hierarchy starts from test case → response → score, and we keep the relationship by storing IDs (`testCaseId`, `responseId`). That relationship is “real data”, so the runtime is usually the one that persists it and queries it.
+
+ Here is what “defining a test case schema” looks like in practice (trimmed to the idea):
+
+ ```ts
+ import { z } from "zod";
+ import { BaseTestCaseSchemaV1, defineTestCaseSchema } from "peerbench/schemas";
+
+ export const MyTestCaseSchemaV1 = defineTestCaseSchema({
+   baseSchema: BaseTestCaseSchemaV1,
+   kind: "mybench.ts.someTask",
+   schemaVersion: 1,
+   fields: {
+     prompt: z.string(),
+   },
+ });
+ ```
+
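Assuming the object returned by `defineTestCaseSchema` exposes the same `.new(...)` helper used by the example schemas later in this README (an assumption, since the snippet above is trimmed), constructing a validated entity from it might look like:

```ts
// Runtime-validated construction of a test case entity (sketch).
const testCase = MyTestCaseSchemaV1.new({
  id: "tc-1",
  prompt: "Say hello",
});
// kind and schemaVersion come from the schema definition; invalid payloads fail Zod validation
```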
+ ### 2) Loader: how test cases become entities
+
+ A loader reads external data and returns in-memory entities:
+
+ ```ts
+ type LoaderResult<TTestCase> = {
+   testCases: TTestCase[];
+   responses: [];
+   scores: [];
+ };
+ ```
+
+ In the basic example (`packages/sdk-0.2/src/benchmarks/example/basic/loader.ts`) the loader reads a JSON array and maps it into `TestCase` entities.
+
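A minimal hand-rolled loader in the same spirit might look like this. The `loadData({ content })` shape mirrors the usage section below; the schema import and the per-record `.new(...)` call are assumptions, and the SDK's actual base loader contract may differ.

```ts
import { MyTestCaseSchemaV1 } from "./test-cases"; // the schema from step 1

// Hypothetical JSON loader: decode bytes, parse a JSON array, and validate
// each record into a TestCase entity.
class MyJSONLoader {
  async loadData(params: { content: Uint8Array }) {
    const raw = JSON.parse(new TextDecoder().decode(params.content)) as any[];
    const testCases = raw.map((record) => MyTestCaseSchemaV1.new(record));
    return { testCases, responses: [], scores: [] };
  }
}
```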
+ ### 3) Provider: how runners talk to models
+
+ Runners communicate with models through a provider implementation. That’s how the same benchmark can run against different backends without rewriting the benchmark.
+
+ There are also example providers meant to be read as reference implementations:
+
+ - `packages/sdk-0.2/src/providers/example/echo.ts` (no network calls; returns deterministic content)
+ - `packages/sdk-0.2/src/providers/example/restapi.ts` (calls your own REST “agent service”)
+
+ If you already have a service in front of your model, the REST API provider example shows the pattern: accept the SDK’s `messages + model` input, translate it to an HTTP request, and translate the HTTP response back into a single string. Nothing else is required.
+
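A rough sketch of that translation step, written as a plain class for brevity (in a real benchmark you would extend `AbstractLLMProvider`): the endpoint URL, the response field, and the timestamp representation are assumptions; only the `forward({ model, messages })` call and the `data` / `startedAt` / `completedAt` result fields are taken from the runner example below.

```ts
// Hypothetical provider fronting your own REST "agent service".
class AgentServiceProvider {
  readonly kind = "mybench.provider.agentService";

  async forward(params: {
    model: string;
    messages: { role: string; content: string }[];
  }) {
    const startedAt = Date.now(); // timestamp representation is up to your schema
    const res = await fetch("https://agent.example.com/v1/chat", {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify(params),
    });
    const body = await res.json();
    return {
      data: String(body.output), // collapse the HTTP response into a single string
      startedAt,
      completedAt: Date.now(),
    };
  }
}
```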
+ ### 4) Runner: run one test case
+
+ A runner function typically executes one test case and returns `{ response, score? }`.
+
+ This is intentional. Running many test cases is orchestration, and orchestration is where runtimes differ the most (parallelism, retries, persistence, resuming, UI, cost limits). The runner is the small, portable unit.
+
+ In the basic example runner (`packages/sdk-0.2/src/benchmarks/example/basic/runner.ts`) you can see the responsibilities:
+
+ - format a test case into provider-friendly input (for chat models, `messages[]`)
+ - call `provider.forward(...)`
+ - map provider output into a `Response` entity
+ - if a scorer is provided, turn scorer output into a `Score` entity
+
+ Here is the idea in a minimal form:
+
+ ```ts
+ const providerResponse = await provider.forward({ model, messages });
+
+ const response = ResponseSchemaV1.new({
+   id: "runtime-generates-id",
+   testCaseId: testCase.id,
+   data: providerResponse.data,
+   startedAt: providerResponse.startedAt,
+   completedAt: providerResponse.completedAt,
+   modelSlug: model,
+   provider: provider.kind,
+ });
+ ```
+
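Continuing that sketch with the optional scoring step: the scorer's `value` / `explanation` fields come from `BaseScorerResult` (declared later in this diff), while `ScoreSchemaV1` and its fields other than `responseId` are illustrative, mirroring the trimmed `ResponseSchemaV1` above.

```ts
// Optional: turn scorer output into a Score entity linked to the Response.
const scorerResult = await scorer.score({
  expected: testCase.expectedOutput,
  actual: providerResponse.data,
});

const score = scorerResult
  ? ScoreSchemaV1.new({
      id: "runtime-generates-id",
      responseId: response.id,
      value: scorerResult.value,
      explanation: scorerResult.explanation,
    })
  : undefined;
```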
+ ### 5) Scorers: optional, but powerful
+
+ Some benchmarks are easy to score deterministically (string match, regex extraction, set coverage). Some benchmarks need semantic judgment. Some benchmarks want both.
+
+ That’s why scorers are separate objects and why runners can accept more than one scorer implementation.
+
+ The examples show:
+
+ - a deterministic scorer (`packages/sdk-0.2/src/benchmarks/example/basic/scorer.ts`)
+ - a non-deterministic scorer (`packages/sdk-0.2/src/scorers/llm-judge.ts`)
+ - a runner that can switch based on `scorer.kind` (`packages/sdk-0.2/src/benchmarks/example/basic/runner.ts`)
+
+ ## Usage: run a single test case end-to-end
+
+ First, pick a benchmark and a provider:
+
+ ```ts
+ import { example } from "peerbench/benchmarks";
+ import { ExampleEchoLLMProvider } from "peerbench/providers";
+
+ const provider = new ExampleEchoLLMProvider();
+ ```
+
+ Then build a test case entity and run it:
+
+ ```ts
+ const testCase = example.ExampleEchoTestCaseSchemaV1.new({
+   id: "tc-1",
+   instruction: "Repeat the input exactly",
+   input: "hello",
+   expectedOutput: "hello",
+ });
+
+ const scorer = new example.ExampleExactMatchScorer();
+
+ const { response, score } = await example.runTestCase({
+   testCase,
+   provider,
+   scorer,
+   runConfig: { model: "example-model" },
+ });
+ ```
+
+ If you want to load test cases instead of constructing them manually, use the loader:
+
+ ```ts
+ const loader = new example.ExampleJSONDataLoader();
+ const { testCases } = await loader.loadData({
+   content: new TextEncoder().encode(
+     JSON.stringify([
+       {
+         id: "tc-1",
+         kind: "example.ts.echo",
+         schemaVersion: 1,
+         instruction: "Repeat the input exactly",
+         input: "hello",
+         expectedOutput: "hello",
+       },
+     ])
+   ),
+ });
+ ```
+
+ ## Usage: what the runtime adds (orchestration)
+
+ Once you have `runTestCase(...)`, the runtime’s job is mostly about repetition and persistence.
+
+ For example, a very small orchestrator might do:
+
+ ```ts
+ for (const testCase of testCases) {
+   const result = await example.runTestCase({ testCase, provider, runConfig });
+   // store `result.response` and `result.score` somewhere durable
+   // decide how to handle errors, retries, progress, and budgets
+ }
+ ```
+
+ That loop is where your product decisions live. The SDK is intentionally not opinionated about it.
+
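For instance, a runtime that wants bounded parallelism and basic error capture might wrap the same call like this (purely illustrative; none of it is imposed by the SDK):

```ts
// Run test cases in small batches and keep failures alongside successes.
const batchSize = 4;
for (let i = 0; i < testCases.length; i += batchSize) {
  const batch = testCases.slice(i, i + batchSize);
  const settled = await Promise.allSettled(
    batch.map((testCase) => example.runTestCase({ testCase, provider, runConfig }))
  );
  for (const outcome of settled) {
    if (outcome.status === "fulfilled") {
      // persist outcome.value.response and outcome.value.score
    } else {
      // record outcome.reason, schedule a retry, or stop the run
    }
  }
}
```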
+ ## More examples to read
+
+ The `example` benchmark is split into folders that each teach one idea:
+
+ - `packages/sdk-0.2/src/benchmarks/example/basic/`: the simplest complete example
+ - `packages/sdk-0.2/src/benchmarks/example/multi-kind/`: one runner, multiple test case kinds
+ - `packages/sdk-0.2/src/benchmarks/example/multi-scorer/`: one runner, multiple scorer implementations
+
+ ## Design notes
+
+ - Schemas are runtime-validated (Zod) so “type-only drift” doesn’t silently corrupt stored data.
+ - Runners are per-test-case so they stay small and portable; runtimes keep orchestration control.
+ - Kinds are namespaced strings (e.g. `example.ts.echo`) to avoid collisions across benchmarks.
package/dist/abstract-Dec9Sc5O.d.ts ADDED
@@ -0,0 +1,12 @@
+ declare abstract class AbstractScorer {
+   abstract readonly kind: string;
+   abstract score(params: any): Promise<BaseScorerResult | null>;
+ }
+ type BaseScorerResult = {
+   value: number;
+   explanation?: string;
+   metadata?: Record<string, unknown>;
+   [key: string]: unknown;
+ };
+
+ export { AbstractScorer as A, type BaseScorerResult as B };