evalscope 0.5.2__py3-none-any.whl → 0.5.4__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release.


This version of evalscope might be problematic.

Files changed (32)
  1. evalscope/backend/opencompass/backend_manager.py +2 -0
  2. evalscope/backend/opencompass/tasks/eval_datasets.py +1 -0
  3. evalscope/backend/vlm_eval_kit/backend_manager.py +12 -7
  4. evalscope/backend/vlm_eval_kit/custom_dataset.py +47 -0
  5. evalscope/benchmarks/benchmark.py +1 -1
  6. evalscope/config.py +1 -0
  7. evalscope/evaluator/evaluator.py +3 -3
  8. evalscope/models/api/__init__.py +3 -0
  9. evalscope/models/api/openai_api.py +228 -0
  10. evalscope/models/model_adapter.py +6 -0
  11. evalscope/perf/http_client.py +5 -5
  12. evalscope/run_arena.py +5 -3
  13. evalscope/summarizer.py +10 -4
  14. evalscope/third_party/longbench_write/__init__.py +3 -0
  15. evalscope/third_party/longbench_write/eval.py +284 -0
  16. evalscope/third_party/longbench_write/infer.py +217 -0
  17. evalscope/third_party/longbench_write/longbench_write.py +88 -0
  18. evalscope/third_party/longbench_write/resources/__init__.py +1 -0
  19. evalscope/third_party/longbench_write/resources/judge.txt +31 -0
  20. evalscope/third_party/longbench_write/resources/longbench_write.jsonl +120 -0
  21. evalscope/third_party/longbench_write/resources/longbench_write_en.jsonl +60 -0
  22. evalscope/third_party/longbench_write/resources/longwrite_ruler.jsonl +48 -0
  23. evalscope/third_party/longbench_write/tools/__init__.py +1 -0
  24. evalscope/third_party/longbench_write/tools/data_etl.py +155 -0
  25. evalscope/third_party/longbench_write/utils.py +37 -0
  26. evalscope/version.py +2 -2
  27. evalscope-0.5.4.dist-info/METADATA +399 -0
  28. {evalscope-0.5.2.dist-info → evalscope-0.5.4.dist-info}/RECORD +31 -16
  29. evalscope-0.5.2.dist-info/METADATA +0 -578
  30. {evalscope-0.5.2.dist-info → evalscope-0.5.4.dist-info}/WHEEL +0 -0
  31. {evalscope-0.5.2.dist-info → evalscope-0.5.4.dist-info}/entry_points.txt +0 -0
  32. {evalscope-0.5.2.dist-info → evalscope-0.5.4.dist-info}/top_level.txt +0 -0
evalscope-0.5.2.dist-info/METADATA
@@ -1,578 +0,0 @@
- Metadata-Version: 2.1
- Name: evalscope
- Version: 0.5.2
- Summary: EvalScope: Lightweight LLMs Evaluation Framework
- Home-page: https://github.com/modelscope/evalscope
- Author: ModelScope team
- Author-email: contact@modelscope.cn
- Keywords: python,llm,evaluation
- Classifier: Development Status :: 4 - Beta
- Classifier: License :: OSI Approved :: Apache Software License
- Classifier: Operating System :: OS Independent
- Classifier: Programming Language :: Python :: 3
- Classifier: Programming Language :: Python :: 3.8
- Classifier: Programming Language :: Python :: 3.9
- Classifier: Programming Language :: Python :: 3.10
- Requires-Python: >=3.8
- Description-Content-Type: text/markdown
- Requires-Dist: torch
- Requires-Dist: absl-py
- Requires-Dist: accelerate
- Requires-Dist: cachetools
- Requires-Dist: editdistance
- Requires-Dist: jsonlines
- Requires-Dist: matplotlib
- Requires-Dist: modelscope[framework]
- Requires-Dist: nltk
- Requires-Dist: openai
- Requires-Dist: pandas
- Requires-Dist: plotly
- Requires-Dist: pyarrow
- Requires-Dist: pympler
- Requires-Dist: pyyaml
- Requires-Dist: regex
- Requires-Dist: requests
- Requires-Dist: requests-toolbelt
- Requires-Dist: rouge-score
- Requires-Dist: sacrebleu
- Requires-Dist: scikit-learn
- Requires-Dist: seaborn
- Requires-Dist: sentencepiece
- Requires-Dist: simple-ddl-parser
- Requires-Dist: tabulate
- Requires-Dist: tiktoken
- Requires-Dist: tqdm
- Requires-Dist: transformers (<4.43,>=4.33)
- Requires-Dist: transformers-stream-generator
- Requires-Dist: jieba
- Requires-Dist: rouge-chinese
- Provides-Extra: all
- Requires-Dist: torch ; extra == 'all'
- Requires-Dist: absl-py ; extra == 'all'
- Requires-Dist: accelerate ; extra == 'all'
- Requires-Dist: cachetools ; extra == 'all'
- Requires-Dist: editdistance ; extra == 'all'
- Requires-Dist: jsonlines ; extra == 'all'
- Requires-Dist: matplotlib ; extra == 'all'
- Requires-Dist: modelscope[framework] ; extra == 'all'
- Requires-Dist: nltk ; extra == 'all'
- Requires-Dist: openai ; extra == 'all'
- Requires-Dist: pandas ; extra == 'all'
- Requires-Dist: plotly ; extra == 'all'
- Requires-Dist: pyarrow ; extra == 'all'
- Requires-Dist: pympler ; extra == 'all'
- Requires-Dist: pyyaml ; extra == 'all'
- Requires-Dist: regex ; extra == 'all'
- Requires-Dist: requests ; extra == 'all'
- Requires-Dist: requests-toolbelt ; extra == 'all'
- Requires-Dist: rouge-score ; extra == 'all'
- Requires-Dist: sacrebleu ; extra == 'all'
- Requires-Dist: scikit-learn ; extra == 'all'
- Requires-Dist: seaborn ; extra == 'all'
- Requires-Dist: sentencepiece ; extra == 'all'
- Requires-Dist: simple-ddl-parser ; extra == 'all'
- Requires-Dist: tabulate ; extra == 'all'
- Requires-Dist: tiktoken ; extra == 'all'
- Requires-Dist: tqdm ; extra == 'all'
- Requires-Dist: transformers (<4.43,>=4.33) ; extra == 'all'
- Requires-Dist: transformers-stream-generator ; extra == 'all'
- Requires-Dist: jieba ; extra == 'all'
- Requires-Dist: rouge-chinese ; extra == 'all'
- Requires-Dist: ms-opencompass (>=0.0.5) ; extra == 'all'
- Requires-Dist: ms-vlmeval (>=0.0.5) ; extra == 'all'
- Provides-Extra: inner
- Requires-Dist: absl-py ; extra == 'inner'
- Requires-Dist: accelerate ; extra == 'inner'
- Requires-Dist: alibaba-itag-sdk ; extra == 'inner'
- Requires-Dist: dashscope ; extra == 'inner'
- Requires-Dist: editdistance ; extra == 'inner'
- Requires-Dist: jsonlines ; extra == 'inner'
- Requires-Dist: nltk ; extra == 'inner'
- Requires-Dist: openai ; extra == 'inner'
- Requires-Dist: pandas (==1.5.3) ; extra == 'inner'
- Requires-Dist: plotly ; extra == 'inner'
- Requires-Dist: pyarrow ; extra == 'inner'
- Requires-Dist: pyodps ; extra == 'inner'
- Requires-Dist: pyyaml ; extra == 'inner'
- Requires-Dist: regex ; extra == 'inner'
- Requires-Dist: requests (==2.28.1) ; extra == 'inner'
- Requires-Dist: requests-toolbelt (==0.10.1) ; extra == 'inner'
- Requires-Dist: rouge-score ; extra == 'inner'
- Requires-Dist: sacrebleu ; extra == 'inner'
- Requires-Dist: scikit-learn ; extra == 'inner'
- Requires-Dist: seaborn ; extra == 'inner'
- Requires-Dist: simple-ddl-parser ; extra == 'inner'
- Requires-Dist: streamlit ; extra == 'inner'
- Requires-Dist: tqdm ; extra == 'inner'
- Requires-Dist: transformers (<4.43,>=4.33) ; extra == 'inner'
- Requires-Dist: transformers-stream-generator ; extra == 'inner'
- Provides-Extra: opencompass
- Requires-Dist: ms-opencompass (>=0.0.5) ; extra == 'opencompass'
- Provides-Extra: vlmeval
- Requires-Dist: ms-vlmeval (>=0.0.5) ; extra == 'vlmeval'
-
- English | [简体中文](README_zh.md)
-
- <p align="center">
- <a href="https://pypi.org/project/evalscope"><img alt="PyPI - Downloads" src="https://img.shields.io/pypi/dm/evalscope">
- </a>
- <a href="https://github.com/modelscope/evalscope/pulls"><img src="https://img.shields.io/badge/PR-welcome-55EB99.svg"></a>
- </p>
-
- ## 📖 Table of Contents
- - [Introduction](#introduction)
- - [News](#News)
- - [Installation](#installation)
- - [Quick Start](#quick-start)
- - [Dataset List](#datasets-list)
- - [Leaderboard](#leaderboard)
- - [Experiments and Results](#Experiments-and-Results)
- - [Model Serving Performance Evaluation](#Model-Serving-Performance-Evaluation)
-
- ## 📝 Introduction
-
- Large Language Model (LLM) evaluation has become a critical process for assessing and improving LLMs. To better support the evaluation of large models, we propose the EvalScope framework, which includes the following components and features:
-
- - Pre-configured common benchmark datasets, including: MMLU, CMMLU, C-Eval, GSM8K, ARC, HellaSwag, TruthfulQA, MATH, HumanEval, etc.
- - Implementation of common evaluation metrics
- - Unified model integration, compatible with the generate and chat interfaces of multiple model series
- - Automatic evaluation (evaluator):
-   - Automatic evaluation for objective questions
-   - Implementation of complex task evaluation using expert models
- - Evaluation report generation
- - Arena mode
- - Visualization tools
- - Model Inference Performance Evaluation [Tutorial](evalscope/perf/README.md)
- - Support for OpenCompass as an evaluation backend, featuring advanced encapsulation and task simplification to easily submit tasks to OpenCompass for evaluation.
- - Support for VLMEvalKit as an evaluation backend, initiating VLMEvalKit's multimodal evaluation tasks through EvalScope and supporting various multimodal models and datasets.
- - Full pipeline support: Seamlessly integrate with SWIFT to easily train and deploy model services, initiate evaluation tasks, view evaluation reports, and achieve an end-to-end large model development process.
-
-
- **Features**
- - Lightweight, minimizing unnecessary abstractions and configurations
- - Easy to customize
-   - New datasets can be integrated by simply implementing a single class
-   - Models can be hosted on [ModelScope](https://modelscope.cn), and evaluations can be initiated with just a model id
- - Supports deployment of locally hosted models
- - Visualization of evaluation reports
- - Rich evaluation metrics
- - Model-based automatic evaluation process, supporting multiple evaluation modes
-   - Single mode: Expert models score individual models
-   - Pairwise-baseline mode: Comparison with baseline models
-   - Pairwise (all) mode: Pairwise comparison of all models
-
- ## 🎉 News
- - **[2024.07.31]** Breaking change: The SDK name has been changed from `llmuses` to `evalscope`; please update the SDK name in your code.
- - **[2024.07.26]** Supports **VLMEvalKit** as a third-party evaluation framework, initiating multimodal model evaluation tasks. [User Guide](#vlmevalkit-evaluation-backend) 🔥🔥🔥
- - **[2024.06.29]** Supports **OpenCompass** as a third-party evaluation framework. We have provided a high-level wrapper, supporting installation via pip and simplifying the evaluation task configuration. [User Guide](#opencompass-evaluation-backend) 🔥🔥🔥
- - **[2024.06.13]** EvalScope has been updated to version 0.3.x, which supports the ModelScope SWIFT framework for LLM evaluation. 🚀🚀🚀
- - **[2024.06.13]** We now support ToolBench as a third-party evaluation backend for Agents evaluation. 🚀🚀🚀
-
-
-
- ## 🛠️ Installation
- ### Install with pip
- 1. Create a conda environment [Optional]
- ```shell
- conda create -n evalscope python=3.10
- conda activate evalscope
- ```
-
- 2. Install EvalScope
- ```shell
- pip install evalscope               # Installation with Native backend (by default)
-
- pip install evalscope[opencompass]  # Installation with OpenCompass backend
- pip install evalscope[vlmeval]      # Installation with VLMEvalKit backend
- pip install evalscope[all]          # Installation with all backends (Native, OpenCompass, VLMEvalKit)
- ```
-
- DEPRECATION WARNING: For 0.4.3 or older versions, please use the following command to install:
- ```shell
- pip install "llmuses<=0.4.3"
-
- # Usage:
- from llmuses.run import run_task
- ...
-
- ```
-
-
- ### Install from source code
- 1. Download the source code
- ```shell
- git clone https://github.com/modelscope/evalscope.git
- ```
-
- 2. Install dependencies
- ```shell
- cd evalscope/
- pip install -e .
- ```
-
-
- ## 🚀 Quick Start
-
- ### Simple Evaluation
- Command line with pip installation:
- ```shell
- python -m evalscope.run --model ZhipuAI/chatglm3-6b --template-type chatglm3 --datasets arc --limit 100
- ```
- Command line with source code:
- ```shell
- python evalscope/run.py --model ZhipuAI/chatglm3-6b --template-type chatglm3 --datasets mmlu ceval --limit 10
- ```
- Parameters:
- - --model: ModelScope model id, model link: [ZhipuAI/chatglm3-6b](https://modelscope.cn/models/ZhipuAI/chatglm3-6b/summary)
-
- ### Evaluation with Model Arguments
- ```shell
- python evalscope/run.py --model ZhipuAI/chatglm3-6b --template-type chatglm3 --model-args revision=v1.0.2,precision=torch.float16,device_map=auto --datasets mmlu ceval --use-cache true --limit 10
- ```
- ```shell
- python evalscope/run.py --model qwen/Qwen-1_8B --generation-config do_sample=false,temperature=0.0 --datasets ceval --dataset-args '{"ceval": {"few_shot_num": 0, "few_shot_random": false}}' --limit 10
- ```
- Parameters:
- - --model-args: model parameters (revision, precision, device_map), in the format key=value,key=value
- - --datasets: dataset list, separated by spaces
- - --use-cache: `true` or `false`, whether to use the cache, default is `false`
- - --dataset-args: evaluation settings in JSON format; the key is the dataset name and the value is the arguments for that dataset
-   - --few_shot_num: number of few-shot examples
-   - --few_shot_random: whether to sample few-shot data randomly, default is `true`
-   - --local_path: local dataset path
- - --limit: maximum number of samples to evaluate for each sub-dataset
- - --template-type: model template type, see [Template Type List](https://github.com/modelscope/swift/blob/main/docs/source_en/LLM/Supported-models-datasets.md)
-
- Note: you can use the following code to check the template type list of the model:
- ```python
- from evalscope.models.template import TemplateType
- print(TemplateType.get_template_name_list())
- ```
-
- ### Evaluation Backend
- EvalScope supports using third-party evaluation frameworks to initiate evaluation tasks, which we call Evaluation Backends. Currently supported Evaluation Backends include:
- - **Native**: EvalScope's own **default evaluation framework**, supporting various evaluation modes including single model evaluation, arena mode, and baseline model comparison mode.
- - [OpenCompass](https://github.com/open-compass/opencompass): Initiate OpenCompass evaluation tasks through EvalScope. Lightweight, easy to customize, supports seamless integration with the LLM fine-tuning framework [ModelScope Swift](https://github.com/modelscope/swift).
- - [VLMEvalKit](https://github.com/open-compass/VLMEvalKit): Initiate VLMEvalKit multimodal evaluation tasks through EvalScope. Supports various multimodal models and datasets, and offers seamless integration with the LLM fine-tuning framework [ModelScope Swift](https://github.com/modelscope/swift).
- - **ThirdParty**: Third-party evaluation tasks, e.g. [ToolBench](evalscope/thirdparty/toolbench/README.md); you can contribute your own evaluation task to EvalScope as a third-party backend.
-
- #### OpenCompass Evaluation Backend
-
- To facilitate the use of the OpenCompass evaluation backend, we have customized the OpenCompass source code and named it `ms-opencompass`. This version includes optimizations for evaluation task configuration and execution based on the original version, and it supports installation via PyPI. This allows users to initiate lightweight OpenCompass evaluation tasks through EvalScope. Additionally, we have initially opened up API-based evaluation tasks in the OpenAI API format. You can deploy model services using [ModelScope Swift](https://github.com/modelscope/swift), where [swift deploy](https://swift.readthedocs.io/en/latest/LLM/VLLM-inference-acceleration-and-deployment.html) supports using vLLM to launch model inference services.
-
-
- ##### Installation
- ```shell
- # Install with extra option
- pip install evalscope[opencompass]
- ```
-
- ##### Data Preparation
- Available datasets from the OpenCompass backend:
- ```text
- 'obqa', 'AX_b', 'siqa', 'nq', 'mbpp', 'winogrande', 'mmlu', 'BoolQ', 'cluewsc', 'ocnli', 'lambada', 'CMRC', 'ceval', 'csl', 'cmnli', 'bbh', 'ReCoRD', 'math', 'humaneval', 'eprstmt', 'WSC', 'storycloze', 'MultiRC', 'RTE', 'chid', 'gsm8k', 'AX_g', 'bustm', 'afqmc', 'piqa', 'lcsts', 'strategyqa', 'Xsum', 'agieval', 'ocnli_fc', 'C3', 'tnews', 'race', 'triviaqa', 'CB', 'WiC', 'hellaswag', 'summedits', 'GaokaoBench', 'ARC_e', 'COPA', 'ARC_c', 'DRCD'
- ```
- Refer to [OpenCompass datasets](https://hub.opencompass.org.cn/home) for details.
-
- You can use the following code to list all available datasets:
- ```python
- from evalscope.backend.opencompass import OpenCompassBackendManager
- print(f'** All datasets from OpenCompass backend: {OpenCompassBackendManager.list_datasets()}')
- ```
-
- Dataset download:
- - Option 1: Download from ModelScope
- ```shell
- git clone https://www.modelscope.cn/datasets/swift/evalscope_resource.git
- ```
-
- - Option 2: Download from OpenCompass GitHub
- ```shell
- wget https://github.com/open-compass/opencompass/releases/download/0.2.2.rc1/OpenCompassData-complete-20240207.zip
- ```
-
- Unzip the file and place the `data` directory in the current working directory.
-
-
- ##### Model Serving
- We use ModelScope Swift to deploy model services, see: [ModelScope Swift](https://swift.readthedocs.io/en/latest/LLM/VLLM-inference-acceleration-and-deployment.html)
- ```shell
- # Install ms-swift
- pip install ms-swift
-
- # Deploy the model
- CUDA_VISIBLE_DEVICES=0 swift deploy --model_type llama3-8b-instruct --port 8000
- ```
-
-
- ##### Model Evaluation
-
- Refer to the example [example_eval_swift_openai_api](examples/example_eval_swift_openai_api.py) to configure and execute the evaluation task:
- ```shell
- python examples/example_eval_swift_openai_api.py
- ```
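- For reference, the shape of such a task configuration, driven through `run_task`, might look roughly like the sketch below (the `eval_backend`/`eval_config` keys and the model fields are illustrative assumptions; the example file above remains the authoritative version):
- ```python
- from evalscope.run import run_task
-
- # Illustrative OpenCompass-backend task config; field names are assumptions,
- # see examples/example_eval_swift_openai_api.py for the real configuration.
- task_cfg = {
-     'eval_backend': 'OpenCompass',
-     'eval_config': {
-         'datasets': ['gsm8k', 'ARC_c'],
-         'models': [
-             {
-                 # The OpenAI-compatible endpoint started above by `swift deploy`
-                 'path': 'llama3-8b-instruct',
-                 'openai_api_base': 'http://127.0.0.1:8000/v1/chat/completions',
-                 'batch_size': 8,
-             },
-         ],
-         'limit': 10,
-     },
- }
-
- run_task(task_cfg=task_cfg)
- ```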
-
- #### VLMEvalKit Evaluation Backend
-
- To facilitate the use of the VLMEvalKit evaluation backend, we have customized the VLMEvalKit source code and named it `ms-vlmeval`. This version encapsulates the configuration and execution of evaluation tasks based on the original version and supports installation via PyPI, allowing users to initiate lightweight VLMEvalKit evaluation tasks through EvalScope. Additionally, we support API-based evaluation tasks in the OpenAI API format. You can deploy multimodal model services using ModelScope [swift](https://github.com/modelscope/swift).
-
- ##### Installation
- ```shell
- # Install with additional options
- pip install evalscope[vlmeval]
- ```
-
- ##### Data Preparation
- Currently supported datasets include:
- ```text
- 'COCO_VAL', 'MME', 'HallusionBench', 'POPE', 'MMBench_DEV_EN', 'MMBench_TEST_EN', 'MMBench_DEV_CN', 'MMBench_TEST_CN', 'MMBench', 'MMBench_CN', 'MMBench_DEV_EN_V11', 'MMBench_TEST_EN_V11', 'MMBench_DEV_CN_V11', 'MMBench_TEST_CN_V11', 'MMBench_V11', 'MMBench_CN_V11', 'SEEDBench_IMG', 'SEEDBench2', 'SEEDBench2_Plus', 'ScienceQA_VAL', 'ScienceQA_TEST', 'MMT-Bench_ALL_MI', 'MMT-Bench_ALL', 'MMT-Bench_VAL_MI', 'MMT-Bench_VAL', 'AesBench_VAL', 'AesBench_TEST', 'CCBench', 'AI2D_TEST', 'MMStar', 'RealWorldQA', 'MLLMGuard_DS', 'BLINK', 'OCRVQA_TEST', 'OCRVQA_TESTCORE', 'TextVQA_VAL', 'DocVQA_VAL', 'DocVQA_TEST', 'InfoVQA_VAL', 'InfoVQA_TEST', 'ChartQA_VAL', 'ChartQA_TEST', 'MathVision', 'MathVision_MINI', 'MMMU_DEV_VAL', 'MMMU_TEST', 'OCRBench', 'MathVista_MINI', 'LLaVABench', 'MMVet', 'MTVQA_TEST', 'MMLongBench_DOC', 'VCR_EN_EASY_500', 'VCR_EN_EASY_100', 'VCR_EN_EASY_ALL', 'VCR_EN_HARD_500', 'VCR_EN_HARD_100', 'VCR_EN_HARD_ALL', 'VCR_ZH_EASY_500', 'VCR_ZH_EASY_100', 'VCR_ZH_EASY_ALL', 'VCR_ZH_HARD_500', 'VCR_ZH_HARD_100', 'VCR_ZH_HARD_ALL', 'MMBench-Video', 'Video-MME', 'MMBench_DEV_EN', 'MMBench_TEST_EN', 'MMBench_DEV_CN', 'MMBench_TEST_CN', 'MMBench', 'MMBench_CN', 'MMBench_DEV_EN_V11', 'MMBench_TEST_EN_V11', 'MMBench_DEV_CN_V11', 'MMBench_TEST_CN_V11', 'MMBench_V11', 'MMBench_CN_V11', 'SEEDBench_IMG', 'SEEDBench2', 'SEEDBench2_Plus', 'ScienceQA_VAL', 'ScienceQA_TEST', 'MMT-Bench_ALL_MI', 'MMT-Bench_ALL', 'MMT-Bench_VAL_MI', 'MMT-Bench_VAL', 'AesBench_VAL', 'AesBench_TEST', 'CCBench', 'AI2D_TEST', 'MMStar', 'RealWorldQA', 'MLLMGuard_DS', 'BLINK'
- ```
- For detailed information about the datasets, please refer to [VLMEvalKit Supported Multimodal Evaluation Sets](https://github.com/open-compass/VLMEvalKit/tree/main#-datasets-models-and-evaluation-results).
-
- You can use the following code to view the list of supported models:
- ```python
- from evalscope.backend.vlm_eval_kit import VLMEvalKitBackendManager
- print(f'** All models from VLMEvalKit backend: {VLMEvalKitBackendManager.list_supported_models().keys()}')
- ```
- If a dataset file does not exist locally when the dataset is loaded, it will be automatically downloaded to the `~/LMUData/` directory.
-
-
- ##### Model Evaluation
- There are two ways to evaluate the model:
-
- ###### 1. ModelScope Swift Deployment for Model Evaluation
- **Model Deployment**
- Deploy the model service using ModelScope Swift. For detailed instructions, refer to the [ModelScope Swift MLLM Deployment Guide](https://swift.readthedocs.io/en/latest/Multi-Modal/mutlimodal-deployment.html).
- ```shell
- # Install ms-swift
- pip install ms-swift
- # Deploy the qwen-vl-chat multi-modal model service
- CUDA_VISIBLE_DEVICES=0 swift deploy --model_type qwen-vl-chat --model_id_or_path models/Qwen-VL-Chat
- ```
- **Model Evaluation**
- Refer to the example file [example_eval_vlm_swift](examples/example_eval_vlm_swift.py) to configure the evaluation task.
- Execute the evaluation task:
- ```shell
- python examples/example_eval_vlm_swift.py
- ```
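- As a rough sketch, analogous to the OpenCompass example above, the task might point the VLMEvalKit backend at the deployed qwen-vl-chat service (the keys and the model entry below are illustrative assumptions; the example file is the actual configuration):
- ```python
- from evalscope.run import run_task
-
- # Illustrative VLMEvalKit-backend task config; field names are assumptions,
- # see examples/example_eval_vlm_swift.py for the real configuration.
- task_cfg = {
-     'eval_backend': 'VLMEvalKit',
-     'eval_config': {
-         'data': ['SEEDBench_IMG'],
-         'model': [
-             {
-                 'type': 'qwen-vl-chat',
-                 'name': 'CustomAPIModel',
-                 'api_base': 'http://localhost:8000/v1/chat/completions',
-             },
-         ],
-         'limit': 20,
-     },
- }
-
- run_task(task_cfg=task_cfg)
- ```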
-
- ###### 2. Local Model Inference Evaluation
- **Model Inference Evaluation**
- Skip the model service deployment and perform inference directly on the local machine. Refer to the example file [example_eval_vlm_local](examples/example_eval_vlm_local.py) to configure the evaluation task.
- Execute the evaluation task:
- ```shell
- python examples/example_eval_vlm_local.py
- ```
-
-
- ##### (Optional) Deploy Judge Model
- Deploy the local language model as a judge/extractor using ModelScope swift. For details, refer to: [ModelScope Swift LLM Deployment Guide](https://swift.readthedocs.io/en/latest/LLM/VLLM-inference-acceleration-and-deployment.html). If no judge model is deployed, exact matching will be used.
-
- ```shell
- # Deploy qwen2-7b as a judge
- CUDA_VISIBLE_DEVICES=1 swift deploy --model_type qwen2-7b-instruct --model_id_or_path models/Qwen2-7B-Instruct --port 8866
- ```
-
- You **must configure the following environment variables for the judge model to be correctly invoked**:
- ```
- OPENAI_API_KEY=EMPTY
- OPENAI_API_BASE=http://127.0.0.1:8866/v1/chat/completions  # api_base for the judge model
- LOCAL_LLM=qwen2-7b-instruct  # model_id for the judge model
- ```
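- If you launch the evaluation from Python rather than from a shell, one way to provide these variables (values taken from the deployment above) is to set them in-process before the task starts, for example:
- ```python
- import os
-
- # Judge-model settings, matching the `swift deploy` command above.
- os.environ['OPENAI_API_KEY'] = 'EMPTY'
- os.environ['OPENAI_API_BASE'] = 'http://127.0.0.1:8866/v1/chat/completions'
- os.environ['LOCAL_LLM'] = 'qwen2-7b-instruct'
- ```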
-
- ##### Model Evaluation
- Refer to the example file [example_eval_vlm_swift](examples/example_eval_vlm_swift.py) to configure the evaluation task.
-
- Execute the evaluation task:
-
- ```shell
- python examples/example_eval_vlm_swift.py
- ```
-
-
- ### Local Dataset
- You can use a local dataset to evaluate the model without an internet connection.
- #### 1. Download and unzip the dataset
- ```shell
- # set path to /path/to/workdir
- wget https://modelscope.oss-cn-beijing.aliyuncs.com/open_data/benchmark/data.zip
- unzip data.zip
- ```
-
-
- #### 2. Use the local dataset to evaluate the model
- ```shell
- python evalscope/run.py --model ZhipuAI/chatglm3-6b --template-type chatglm3 --datasets arc --dataset-hub Local --dataset-args '{"arc": {"local_path": "/path/to/workdir/data/arc"}}' --limit 10
-
- # Parameters:
- # --dataset-hub: dataset source: `ModelScope`, `Local`, `HuggingFace` (TO-DO); defaults to `ModelScope`
- # --dataset-args: JSON format; the key is the dataset name and the value is the arguments for that dataset
- ```
-
- #### 3. (Optional) Use local mode to submit the evaluation task
-
- ```shell
- # 1. Prepare the local model folder; the folder structure follows chatglm3-6b, see: https://modelscope.cn/models/ZhipuAI/chatglm3-6b/files
- # For example, download the model folder to the local path /path/to/ZhipuAI/chatglm3-6b
-
- # 2. Execute the offline evaluation task
- python evalscope/run.py --model /path/to/ZhipuAI/chatglm3-6b --template-type chatglm3 --datasets arc --dataset-hub Local --dataset-args '{"arc": {"local_path": "/path/to/workdir/data/arc"}}' --limit 10
- ```
-
-
- ### Use the run_task function
-
- #### 1. Configuration
- ```python
- import torch
- from evalscope.constants import DEFAULT_ROOT_CACHE_DIR
-
- # Example configuration
- your_task_cfg = {
-     'model_args': {'revision': None, 'precision': torch.float16, 'device_map': 'auto'},
-     'generation_config': {'do_sample': False, 'repetition_penalty': 1.0, 'max_new_tokens': 512},
-     'dataset_args': {},
-     'dry_run': False,
-     'model': 'ZhipuAI/chatglm3-6b',
-     'template_type': 'chatglm3',
-     'datasets': ['arc', 'hellaswag'],
-     'work_dir': DEFAULT_ROOT_CACHE_DIR,
-     'outputs': DEFAULT_ROOT_CACHE_DIR,
-     'mem_cache': False,
-     'dataset_hub': 'ModelScope',
-     'dataset_dir': DEFAULT_ROOT_CACHE_DIR,
-     'stage': 'all',
-     'limit': 10,
-     'debug': False
- }
- ```
-
- #### 2. Execute the task
- ```python
- from evalscope.run import run_task
-
- run_task(task_cfg=your_task_cfg)
- ```
-
-
- ### Arena Mode
- Arena mode allows multiple candidate models to be evaluated through pairwise battles; you can choose either the AI Enhanced Auto-Reviewer (AAR) automatic evaluation process or manual evaluation to obtain the evaluation report. The process is as follows:
- #### 1. Environment preparation
- ```text
- a. Data preparation: the question data format follows evalscope/registry/data/question.jsonl
- b. If you want to use the automatic evaluation process (AAR), you need to configure the relevant environment variables. Taking the GPT-4 based auto-reviewer process as an example, configure the following environment variable:
- > export OPENAI_API_KEY=YOUR_OPENAI_API_KEY
- ```
-
- #### 2. Configuration files
- ```text
- Refer to: evalscope/registry/config/cfg_arena.yaml
- Parameters:
-   questions_file: question data path
-   answers_gen: candidate model prediction generation; supports multiple models, each of which can be toggled via the enable parameter
-   reviews_gen: evaluation result generation; currently defaults to GPT-4 as the auto-reviewer, and this step can be toggled via the enable parameter
-   elo_rating: ELO rating algorithm; this step can be toggled via the enable parameter and requires review_file to exist
- ```
-
- #### 3. Execute the script
- ```shell
- # Usage:
- cd evalscope
-
- # Dry-run mode
- python evalscope/run_arena.py -c registry/config/cfg_arena.yaml --dry-run
-
- # Execute the script
- python evalscope/run_arena.py -c registry/config/cfg_arena.yaml
- ```
-
- #### 4. Visualization
-
- ```shell
- # Usage:
- streamlit run viz.py -- --review-file evalscope/registry/data/qa_browser/battle.jsonl --category-file evalscope/registry/data/qa_browser/category_mapping.yaml
- ```
-
-
- ### Single Model Evaluation Mode
-
- In this mode, we only score the output of a single model, without pairwise comparison.
- #### 1. Configuration file
- ```text
- Refer to: evalscope/registry/config/cfg_single.yaml
- Parameters:
-   questions_file: question data path
-   answers_gen: candidate model prediction generation; supports multiple models, each of which can be toggled via the enable parameter
-   reviews_gen: evaluation result generation; currently defaults to GPT-4 as the auto-reviewer, and this step can be toggled via the enable parameter
-   rating_gen: rating algorithm; this step can be toggled via the enable parameter and requires review_file to exist
- ```
- #### 2. Execute the script
- ```shell
- # Example:
- python evalscope/run_arena.py -c registry/config/cfg_single.yaml
- ```
-
- ### Baseline Model Comparison Mode
-
- In this mode, we select a baseline model and score the other models by comparing them with it. This mode makes it easy to add new models to the Leaderboard (you only need to run the scoring for the new model against the baseline model).
-
- #### 1. Configuration file
- ```text
- Refer to: evalscope/registry/config/cfg_pairwise_baseline.yaml
- Parameters:
-   questions_file: question data path
-   answers_gen: candidate model prediction generation; supports multiple models, each of which can be toggled via the enable parameter
-   reviews_gen: evaluation result generation; currently defaults to GPT-4 as the auto-reviewer, and this step can be toggled via the enable parameter
-   rating_gen: rating algorithm; this step can be toggled via the enable parameter and requires review_file to exist
- ```
- #### 2. Execute the script
- ```shell
- # Example:
- python evalscope/run_arena.py -c registry/config/cfg_pairwise_baseline.yaml
- ```
-
-
- ## Datasets list
-
- | DatasetName        | Link                                                                                    | Status            | Note |
- |--------------------|-----------------------------------------------------------------------------------------|-------------------|------|
- | `mmlu`             | [mmlu](https://modelscope.cn/datasets/modelscope/mmlu/summary)                          | Active            |      |
- | `ceval`            | [ceval](https://modelscope.cn/datasets/modelscope/ceval-exam/summary)                   | Active            |      |
- | `gsm8k`            | [gsm8k](https://modelscope.cn/datasets/modelscope/gsm8k/summary)                        | Active            |      |
- | `arc`              | [arc](https://modelscope.cn/datasets/modelscope/ai2_arc/summary)                        | Active            |      |
- | `hellaswag`        | [hellaswag](https://modelscope.cn/datasets/modelscope/hellaswag/summary)                | Active            |      |
- | `truthful_qa`      | [truthful_qa](https://modelscope.cn/datasets/modelscope/truthful_qa/summary)            | Active            |      |
- | `competition_math` | [competition_math](https://modelscope.cn/datasets/modelscope/competition_math/summary) | Active            |      |
- | `humaneval`        | [humaneval](https://modelscope.cn/datasets/modelscope/humaneval/summary)                | Active            |      |
- | `bbh`              | [bbh](https://modelscope.cn/datasets/modelscope/bbh/summary)                            | Active            |      |
- | `race`             | [race](https://modelscope.cn/datasets/modelscope/race/summary)                          | Active            |      |
- | `trivia_qa`        | [trivia_qa](https://modelscope.cn/datasets/modelscope/trivia_qa/summary)                | To be integrated  |      |
-
-
- ## Leaderboard
- The LLM Leaderboard aims to provide an objective and comprehensive evaluation standard and platform to help researchers and developers understand and compare the performance of models on various tasks on ModelScope.
-
- [Leaderboard](https://modelscope.cn/leaderboard/58/ranking?type=free)
-
-
-
- ## Experiments and Results
- [Experiments](./resources/experiments.md)
-
- ## Model Serving Performance Evaluation
- [Perf](evalscope/perf/README.md)
-
- ## TO-DO List
- - ✅ Agents evaluation
- - [ ] vLLM
- - [ ] Distributed evaluating
- - ✅ Multi-modal evaluation
- - [ ] Benchmarks
-   - [ ] GAIA
-   - [ ] GPQA
-   - ✅ MBPP
- - [ ] Auto-reviewer
-   - [ ] Qwen-max
-