pearmut 0.2.8__py3-none-any.whl → 0.2.10__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: pearmut
- Version: 0.2.8
+ Version: 0.2.10
  Summary: A tool for evaluation of model outputs, primarily MT.
  Author-email: Vilém Zouhar <vilem.zouhar@gmail.com>
  License: MIT
@@ -47,9 +47,13 @@ Dynamic: license-file
  - [Hosting Assets](#hosting-assets)
  - [Campaign Management](#campaign-management)
  - [CLI Commands](#cli-commands)
+ - [Terminology](#terminology)
  - [Development](#development)
  - [Citation](#citation)
 
+
+ **Error Span**: A highlighted segment of text marked as containing an error, with optional severity (`minor`, `major`, `neutral`) and MQM category labels.
+
  ## Quick Start
 
  Install and run locally without cloning:
@@ -278,12 +282,54 @@ Management link (shown when adding campaigns or running server) provides:
  - Task progress reset (data preserved)
  - Download progress and annotations
 
- <img width="800" alt="Management dashboard" src="https://github.com/user-attachments/assets/800a1741-5f41-47ac-9d5d-5cbf6abfc0e6" />
+ <img width="1000" alt="Management dashboard" src="https://github.com/user-attachments/assets/8953252c-d7b1-428c-a974-5bc7501457c7" />
 
  Completion tokens are shown at the end of annotation for verification (download the correct tokens from the dashboard). An incorrect token may be shown if quality control fails.
 
  <img width="500" alt="Token on completion" src="https://github.com/user-attachments/assets/40eb904c-f47a-4011-aa63-9a4f1c501549" />
 
+ ### Model Results Display
+
+ Add `&results` to the dashboard URL to show model rankings (requires a valid token).
+ Items need a `model` field (pointwise) or a `models` field (listwise), and `protocol_score` must be enabled so that the `score` can be used for the ranking:
+ ```python
+ {"doc_id": "1", "model": "CommandA", "src": "...", "tgt": "..."}
+ {"doc_id": "2", "models": ["CommandA", "Claude"], "src": "...", "tgt": ["...", "..."]}
+ ```
+ See an example in [Campaign Management](#campaign-management).
+
+
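The ranking described above (using each item's annotated `score` per model) can be sketched as a simple average. This is an illustrative sketch only; the flat record shape and the `rank_models` helper are assumptions, not pearmut's actual export format or API:

```python
from collections import defaultdict

def rank_models(annotations):
    """Average the annotated score per model and sort, best first.

    `annotations` is a list of dicts with "model" and "score" keys;
    this record shape is assumed for illustration.
    """
    scores = defaultdict(list)
    for record in annotations:
        scores[record["model"]].append(record["score"])
    means = {model: sum(s) / len(s) for model, s in scores.items()}
    return sorted(means.items(), key=lambda kv: kv[1], reverse=True)

ranking = rank_models([
    {"model": "CommandA", "score": 90},
    {"model": "CommandA", "score": 70},
    {"model": "Claude", "score": 85},
])
# CommandA averages (90 + 70) / 2 = 80, Claude averages 85,
# so Claude ranks first.
```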
+ ## Terminology
+
+ - **Campaign**: An annotation project that contains configuration, data, and user assignments. Each campaign has a unique identifier and is defined in a JSON file.
+ - **Campaign File**: A JSON file that defines the campaign configuration, including the campaign ID, assignment type, protocol settings, and annotation data.
+ - **Campaign ID**: A unique identifier for a campaign (e.g., `"wmt25_#_en-cs_CZ"`), used to reference and manage specific campaigns.
+ - **Task**: A unit of work assigned to a user. In task-based assignment, each task consists of a predefined set of items for a specific user.
+ - **Item**: A single annotation unit within a task. For translation evaluation, an item typically represents a document (source text and target translation). Items can contain text, images, audio, or video.
+ - **Document**: A collection of one or more segments (sentence pairs or text units) that are evaluated together as a single item.
+ - **User** / **Annotator**: A person who performs annotations in a campaign. Each user is identified by a unique user ID and accesses the campaign through a unique URL.
+ - **Attention Check**: A validation item with known correct answers, used to ensure annotator quality. It can be:
+   - **Loud**: Shows a warning message and forces a retry on failure
+   - **Silent**: Logs failures without notifying the user (for quality control analysis)
+ - **Token**: A completion code shown to users when they finish their annotations. Tokens verify completion and whether the user passed quality control checks:
+   - **Pass Token** (`token_pass`): Shown when the user meets validation thresholds
+   - **Fail Token** (`token_fail`): Shown when the user fails to meet validation requirements
+ - **Tutorial**: An instructional validation item that teaches users how to annotate. Includes `allow_skip: true` to let users skip it if they have seen it before.
+ - **Validation**: Quality control rules attached to items that check whether annotations match expected criteria (score ranges, error span locations, etc.). Used for tutorials and attention checks.
+ - **Model**: The system or model that generated the output being evaluated (e.g., `"GPT-4"`, `"Claude"`). Used for tracking and ranking model performance.
+ - **Dashboard**: The management interface that shows campaign progress, annotator statistics, and access links, and allows downloading annotations. Accessed via a special management URL with token authentication.
+ - **Protocol**: The annotation scheme defining what data is collected:
+   - **Score**: Numeric quality rating (0-100)
+   - **Error Spans**: Text highlights marking errors
+   - **Error Categories**: MQM taxonomy labels for errors
+ - **Template**: The annotation interface type:
+   - **Pointwise**: Evaluate one output at a time
+   - **Listwise**: Compare multiple outputs simultaneously
+ - **Assignment**: The method for distributing items to users:
+   - **Task-based**: Each user has predefined items
+   - **Single-stream**: Users draw from a shared pool with random assignment
+   - **Dynamic**: Work in progress
+
  ## Development
 
  Server responds to data-only requests from frontend (no template coupling). Frontend served from pre-built `static/` on install.
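Read together, the terminology above outlines the shape of a campaign file. A minimal hypothetical sketch (only the quoted identifiers come from the README: the campaign ID format, `protocol_score`, `token_pass`/`token_fail`, and the item fields; the surrounding key names and nesting are assumptions):

```json
{
  "campaign": "wmt25_#_en-cs_CZ",
  "assignment": "task-based",
  "protocol_score": true,
  "token_pass": "...",
  "token_fail": "...",
  "tasks": [
    [
      {"doc_id": "1", "model": "CommandA", "src": "...", "tgt": "..."}
    ]
  ]
}
```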
@@ -295,7 +341,7 @@ cd pearmut
  npm install web/ --prefix web/
  npm run build --prefix web/
  # optionally keep running indefinitely to auto-rebuild
- npm watch build --prefix web/
+ npm run watch --prefix web/
 
  # Install as editable
  pip3 install -e .
@@ -323,7 +369,7 @@ If you use this work in your paper, please cite as following.
  author={Vilém Zouhar},
  title={Pearmut: Platform for Evaluating and Reviewing of Multilingual Tasks},
  url={https://github.com/zouharvi/pearmut/},
- year={2025},
+ year={2026},
  }
  ```
 
@@ -0,0 +1,19 @@
+ pearmut/app.py,sha256=fDec9vOX-ExgnlctoHlI19btxmPlOoSvFURiAHzGMmw,10437
+ pearmut/assignment.py,sha256=GvulwsPEguA_rNZB58bDKYy1wVZX9j4vnmbrKH4m0Mo,10963
+ pearmut/cli.py,sha256=oaViD9vlvFW3WIs2Ncm5DSUWRqObDZ1iTcAt1HAdaYg,17686
+ pearmut/utils.py,sha256=TWcbdTehg4CNwCpc5FuEOszpQM464LY0IQHHE_Sq1Zg,5293
+ pearmut/static/dashboard.bundle.js,sha256=IWoc8wETOX53ibPvcdEcnNUsJtugAJhn6NGYr9KB7OQ,100268
+ pearmut/static/dashboard.html,sha256=w1xNgLakDMxzp9iDp18SOoKHO10kB7ldvzuuwsC0zxk,2694
+ pearmut/static/index.html,sha256=SC5M-NSTnJh1UNHCC5VOP0TKkmhNn6MHlY6L4GDacpA,849
+ pearmut/static/listwise.bundle.js,sha256=vX5lMUxxjll7wT1NsZjZEIiruKuSkrCY6Vt6YIKeJ58,109635
+ pearmut/static/listwise.html,sha256=4A0a_GMVIjJmqT3lhJMT9huqvwgvrRfztt0KA0lJxKI,5308
+ pearmut/static/pointwise.bundle.js,sha256=ylaXUnTp_v2h8LeQthqR22nlIoKEF-ryoyqQFRoC-nk,108974
+ pearmut/static/pointwise.html,sha256=2NZYyjpznXP2b4GMeDcrjRYI5hZ45l7QgI-RQjkRUqs,5024
+ pearmut/static/assets/favicon.svg,sha256=gVPxdBlyfyJVkiMfh8WLaiSyH4lpwmKZs8UiOeX8YW4,7347
+ pearmut/static/assets/style.css,sha256=BrPnXTDr8hQ0M8T-EJlExddChzIFotlerBYMx2B8GDk,4136
+ pearmut-0.2.10.dist-info/licenses/LICENSE,sha256=GtR6RcTdRn-P23h5pKFuWSLZrLPD0ytHAwSOBt7aLpI,1071
+ pearmut-0.2.10.dist-info/METADATA,sha256=FLuvt03haoPPaDBBSzh9dKYFCBGVkP8JJ5udkOOc3yA,15676
+ pearmut-0.2.10.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
+ pearmut-0.2.10.dist-info/entry_points.txt,sha256=eEA9LVWsS3neQbMvL_nMvEw8I0oFudw8nQa1iqxOiWM,45
+ pearmut-0.2.10.dist-info/top_level.txt,sha256=CdgtUM-SKQDt6o5g0QreO-_7XTBP9_wnHMS1P-Rl5Go,8
+ pearmut-0.2.10.dist-info/RECORD,,
@@ -1,19 +0,0 @@
- pearmut/app.py,sha256=QrAAMQI8L922AD6biirItlwDB6gT90e1HTBH_txkPyM,8298
- pearmut/assignment.py,sha256=GvulwsPEguA_rNZB58bDKYy1wVZX9j4vnmbrKH4m0Mo,10963
- pearmut/cli.py,sha256=gZRmniwOHHtWS1rInUU1_zuuq3nybkW3dlMjJtyvlBM,17620
- pearmut/utils.py,sha256=TWcbdTehg4CNwCpc5FuEOszpQM464LY0IQHHE_Sq1Zg,5293
- pearmut/static/dashboard.bundle.js,sha256=9eHVyQd65pu1AJUyipKu8fWyOw5x1Jmf0KvbtAWuZm4,98636
- pearmut/static/dashboard.html,sha256=fN-B0jyeezMZP4qisGA7lmQem-FqvfDP1i5ziErQK2M,2120
- pearmut/static/index.html,sha256=SC5M-NSTnJh1UNHCC5VOP0TKkmhNn6MHlY6L4GDacpA,849
- pearmut/static/listwise.bundle.js,sha256=dR-CQ9r8OBhz71Usv9Lfg1RT9OMndiKZEvqPMBuDUho,109029
- pearmut/static/listwise.html,sha256=4A0a_GMVIjJmqT3lhJMT9huqvwgvrRfztt0KA0lJxKI,5308
- pearmut/static/pointwise.bundle.js,sha256=gX6bfzPqOK0xMTqcZtbrLM9TbICVbiicD5c4b2bw-AM,111274
- pearmut/static/pointwise.html,sha256=2NZYyjpznXP2b4GMeDcrjRYI5hZ45l7QgI-RQjkRUqs,5024
- pearmut/static/assets/favicon.svg,sha256=gVPxdBlyfyJVkiMfh8WLaiSyH4lpwmKZs8UiOeX8YW4,7347
- pearmut/static/assets/style.css,sha256=BrPnXTDr8hQ0M8T-EJlExddChzIFotlerBYMx2B8GDk,4136
- pearmut-0.2.8.dist-info/licenses/LICENSE,sha256=GtR6RcTdRn-P23h5pKFuWSLZrLPD0ytHAwSOBt7aLpI,1071
- pearmut-0.2.8.dist-info/METADATA,sha256=qrMERkMaAMsYmGjJm0Dny2tG6AWw3m3YVg5b7aGRVJ4,11934
- pearmut-0.2.8.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
- pearmut-0.2.8.dist-info/entry_points.txt,sha256=eEA9LVWsS3neQbMvL_nMvEw8I0oFudw8nQa1iqxOiWM,45
- pearmut-0.2.8.dist-info/top_level.txt,sha256=CdgtUM-SKQDt6o5g0QreO-_7XTBP9_wnHMS1P-Rl5Go,8
- pearmut-0.2.8.dist-info/RECORD,,