evalscope 0.5.2__tar.gz → 0.5.4__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release.


This version of evalscope might be problematic.

Files changed (189)
  1. evalscope-0.5.4/PKG-INFO +400 -0
  2. evalscope-0.5.4/README.md +284 -0
  3. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/backend/opencompass/backend_manager.py +2 -0
  4. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/backend/opencompass/tasks/eval_datasets.py +1 -0
  5. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/backend/vlm_eval_kit/backend_manager.py +12 -7
  6. evalscope-0.5.4/evalscope/backend/vlm_eval_kit/custom_dataset.py +47 -0
  7. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/benchmark.py +1 -1
  8. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/config.py +1 -0
  9. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/evaluator/evaluator.py +3 -3
  10. evalscope-0.5.4/evalscope/models/api/__init__.py +3 -0
  11. evalscope-0.5.4/evalscope/models/api/openai_api.py +228 -0
  12. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/models/model_adapter.py +6 -0
  13. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/perf/http_client.py +5 -5
  14. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/run_arena.py +5 -3
  15. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/summarizer.py +10 -4
  16. evalscope-0.5.4/evalscope/third_party/longbench_write/__init__.py +3 -0
  17. evalscope-0.5.4/evalscope/third_party/longbench_write/eval.py +284 -0
  18. evalscope-0.5.4/evalscope/third_party/longbench_write/infer.py +217 -0
  19. evalscope-0.5.4/evalscope/third_party/longbench_write/longbench_write.py +88 -0
  20. evalscope-0.5.4/evalscope/third_party/longbench_write/resources/judge.txt +31 -0
  21. evalscope-0.5.4/evalscope/third_party/longbench_write/resources/longbench_write.jsonl +120 -0
  22. evalscope-0.5.4/evalscope/third_party/longbench_write/resources/longbench_write_en.jsonl +60 -0
  23. evalscope-0.5.4/evalscope/third_party/longbench_write/resources/longwrite_ruler.jsonl +48 -0
  24. evalscope-0.5.4/evalscope/third_party/longbench_write/tools/data_etl.py +155 -0
  25. evalscope-0.5.4/evalscope/third_party/longbench_write/utils.py +37 -0
  26. evalscope-0.5.4/evalscope/third_party/toolbench_static/llm/__init__.py +1 -0
  27. evalscope-0.5.4/evalscope/tools/__init__.py +1 -0
  28. evalscope-0.5.4/evalscope/version.py +4 -0
  29. evalscope-0.5.4/evalscope.egg-info/PKG-INFO +400 -0
  30. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope.egg-info/SOURCES.txt +15 -0
  31. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope.egg-info/requires.txt +6 -4
  32. evalscope-0.5.2/PKG-INFO +0 -579
  33. evalscope-0.5.2/README.md +0 -465
  34. evalscope-0.5.2/evalscope/version.py +0 -4
  35. evalscope-0.5.2/evalscope.egg-info/PKG-INFO +0 -579
  36. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/__init__.py +0 -0
  37. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/backend/__init__.py +0 -0
  38. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/backend/base.py +0 -0
  39. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/backend/opencompass/__init__.py +0 -0
  40. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/backend/opencompass/api_meta_template.py +0 -0
  41. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/backend/opencompass/tasks/__init__.py +0 -0
  42. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/backend/opencompass/tasks/eval_api.py +0 -0
  43. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/backend/vlm_eval_kit/__init__.py +0 -0
  44. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/__init__.py +0 -0
  45. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/arc/__init__.py +0 -0
  46. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/arc/ai2_arc.py +0 -0
  47. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/arc/arc_adapter.py +0 -0
  48. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/__init__.py +0 -0
  49. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/bbh_adapter.py +0 -0
  50. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/cot_prompts/boolean_expressions.txt +0 -0
  51. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/cot_prompts/causal_judgement.txt +0 -0
  52. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/cot_prompts/date_understanding.txt +0 -0
  53. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/cot_prompts/disambiguation_qa.txt +0 -0
  54. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/cot_prompts/dyck_languages.txt +0 -0
  55. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/cot_prompts/formal_fallacies.txt +0 -0
  56. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/cot_prompts/geometric_shapes.txt +0 -0
  57. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/cot_prompts/hyperbaton.txt +0 -0
  58. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/cot_prompts/logical_deduction_five_objects.txt +0 -0
  59. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/cot_prompts/logical_deduction_seven_objects.txt +0 -0
  60. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/cot_prompts/logical_deduction_three_objects.txt +0 -0
  61. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/cot_prompts/movie_recommendation.txt +0 -0
  62. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/cot_prompts/multistep_arithmetic_two.txt +0 -0
  63. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/cot_prompts/navigate.txt +0 -0
  64. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/cot_prompts/object_counting.txt +0 -0
  65. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/cot_prompts/penguins_in_a_table.txt +0 -0
  66. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/cot_prompts/reasoning_about_colored_objects.txt +0 -0
  67. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/cot_prompts/ruin_names.txt +0 -0
  68. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/cot_prompts/salient_translation_error_detection.txt +0 -0
  69. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/cot_prompts/snarks.txt +0 -0
  70. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/cot_prompts/sports_understanding.txt +0 -0
  71. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/cot_prompts/temporal_sequences.txt +0 -0
  72. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/cot_prompts/tracking_shuffled_objects_five_objects.txt +0 -0
  73. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/cot_prompts/tracking_shuffled_objects_seven_objects.txt +0 -0
  74. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/cot_prompts/tracking_shuffled_objects_three_objects.txt +0 -0
  75. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/cot_prompts/web_of_lies.txt +0 -0
  76. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/bbh/cot_prompts/word_sorting.txt +0 -0
  77. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/ceval/__init__.py +0 -0
  78. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/ceval/ceval_adapter.py +0 -0
  79. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/ceval/ceval_exam.py +0 -0
  80. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/cmmlu/__init__.py +0 -0
  81. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/cmmlu/cmmlu.py +0 -0
  82. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/cmmlu/cmmlu_adapter.py +0 -0
  83. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/competition_math/__init__.py +0 -0
  84. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/competition_math/competition_math.py +0 -0
  85. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/competition_math/competition_math_adapter.py +0 -0
  86. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/data_adapter.py +0 -0
  87. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/general_qa/__init__.py +0 -0
  88. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/general_qa/general_qa_adapter.py +0 -0
  89. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/gsm8k/__init__.py +0 -0
  90. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/gsm8k/gsm8k.py +0 -0
  91. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/gsm8k/gsm8k_adapter.py +0 -0
  92. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/hellaswag/__init__.py +0 -0
  93. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/hellaswag/hellaswag.py +0 -0
  94. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/hellaswag/hellaswag_adapter.py +0 -0
  95. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/humaneval/__init__.py +0 -0
  96. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/humaneval/humaneval.py +0 -0
  97. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/humaneval/humaneval_adapter.py +0 -0
  98. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/mmlu/__init__.py +0 -0
  99. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/mmlu/mmlu.py +0 -0
  100. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/mmlu/mmlu_adapter.py +0 -0
  101. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/race/__init__.py +0 -0
  102. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/race/race.py +0 -0
  103. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/race/race_adapter.py +0 -0
  104. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/trivia_qa/__init__.py +0 -0
  105. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/trivia_qa/trivia_qa.py +0 -0
  106. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/trivia_qa/trivia_qa_adapter.py +0 -0
  107. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/truthful_qa/__init__.py +0 -0
  108. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/truthful_qa/truthful_qa.py +0 -0
  109. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/benchmarks/truthful_qa/truthful_qa_adapter.py +0 -0
  110. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/cache.py +0 -0
  111. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/cli/__init__.py +0 -0
  112. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/cli/base.py +0 -0
  113. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/cli/cli.py +0 -0
  114. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/cli/start_perf.py +0 -0
  115. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/cli/start_server.py +0 -0
  116. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/constants.py +0 -0
  117. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/evaluator/__init__.py +0 -0
  118. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/evaluator/rating_eval.py +0 -0
  119. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/evaluator/reviewer/__init__.py +0 -0
  120. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/evaluator/reviewer/auto_reviewer.py +0 -0
  121. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/metrics/__init__.py +0 -0
  122. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/metrics/bundled_rouge_score/__init__.py +0 -0
  123. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/metrics/bundled_rouge_score/rouge_scorer.py +0 -0
  124. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/metrics/code_metric.py +0 -0
  125. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/metrics/math_accuracy.py +0 -0
  126. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/metrics/metrics.py +0 -0
  127. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/metrics/rouge_metric.py +0 -0
  128. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/models/__init__.py +0 -0
  129. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/models/custom/__init__.py +0 -0
  130. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/models/custom/custom_model.py +0 -0
  131. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/models/dummy_chat_model.py +0 -0
  132. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/models/model.py +0 -0
  133. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/models/openai_model.py +0 -0
  134. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/models/template.py +0 -0
  135. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/perf/__init__.py +0 -0
  136. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/perf/_logging.py +0 -0
  137. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/perf/api_plugin_base.py +0 -0
  138. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/perf/custom_api.py +0 -0
  139. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/perf/dashscope_api.py +0 -0
  140. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/perf/dataset_plugin_base.py +0 -0
  141. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/perf/datasets/__init__.py +0 -0
  142. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/perf/datasets/line_by_line.py +0 -0
  143. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/perf/datasets/longalpaca_12k.py +0 -0
  144. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/perf/datasets/openqa.py +0 -0
  145. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/perf/how_to_analysis_result.py +0 -0
  146. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/perf/openai_api.py +0 -0
  147. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/perf/plugin_registry.py +0 -0
  148. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/perf/query_parameters.py +0 -0
  149. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/perf/server_sent_event.py +0 -0
  150. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/preprocess/__init__.py +0 -0
  151. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/preprocess/tokenizers/__init__.py +0 -0
  152. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/preprocess/tokenizers/gpt2_tokenizer.py +0 -0
  153. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/registry/__init__.py +0 -0
  154. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/registry/tasks/arc.yaml +0 -0
  155. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/registry/tasks/bbh.yaml +0 -0
  156. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/registry/tasks/bbh_mini.yaml +0 -0
  157. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/registry/tasks/ceval.yaml +0 -0
  158. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/registry/tasks/ceval_mini.yaml +0 -0
  159. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/registry/tasks/cmmlu.yaml +0 -0
  160. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/registry/tasks/eval_qwen-7b-chat_v100.yaml +0 -0
  161. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/registry/tasks/general_qa.yaml +0 -0
  162. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/registry/tasks/gsm8k.yaml +0 -0
  163. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/registry/tasks/mmlu.yaml +0 -0
  164. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/registry/tasks/mmlu_mini.yaml +0 -0
  165. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/run.py +0 -0
  166. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/run_ms.py +0 -0
  167. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/third_party/__init__.py +0 -0
  168. {evalscope-0.5.2/evalscope/third_party/toolbench_static/llm → evalscope-0.5.4/evalscope/third_party/longbench_write/resources}/__init__.py +0 -0
  169. {evalscope-0.5.2/evalscope → evalscope-0.5.4/evalscope/third_party/longbench_write}/tools/__init__.py +0 -0
  170. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/third_party/toolbench_static/__init__.py +0 -0
  171. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/third_party/toolbench_static/eval.py +0 -0
  172. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/third_party/toolbench_static/infer.py +0 -0
  173. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/third_party/toolbench_static/llm/swift_infer.py +0 -0
  174. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/third_party/toolbench_static/toolbench_static.py +0 -0
  175. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/tools/combine_reports.py +0 -0
  176. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/tools/gen_mmlu_subject_mapping.py +0 -0
  177. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/tools/rewrite_eval_results.py +0 -0
  178. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/utils/__init__.py +0 -0
  179. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/utils/arena_utils.py +0 -0
  180. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/utils/completion_parsers.py +0 -0
  181. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/utils/logger.py +0 -0
  182. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/utils/task_cfg_parser.py +0 -0
  183. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/utils/task_utils.py +0 -0
  184. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope/utils/utils.py +0 -0
  185. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope.egg-info/dependency_links.txt +0 -0
  186. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope.egg-info/entry_points.txt +0 -0
  187. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope.egg-info/not-zip-safe +0 -0
  188. {evalscope-0.5.2 → evalscope-0.5.4}/evalscope.egg-info/top_level.txt +0 -0
  189. {evalscope-0.5.2 → evalscope-0.5.4}/setup.cfg +0 -0
@@ -0,0 +1,400 @@
+ Metadata-Version: 2.1
+ Name: evalscope
+ Version: 0.5.4
+ Summary: EvalScope: Lightweight LLMs Evaluation Framework
+ Home-page: https://github.com/modelscope/evalscope
+ Author: ModelScope team
+ Author-email: contact@modelscope.cn
+ Keywords: python,llm,evaluation
+ Classifier: Development Status :: 4 - Beta
+ Classifier: License :: OSI Approved :: Apache Software License
+ Classifier: Operating System :: OS Independent
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.8
+ Classifier: Programming Language :: Python :: 3.9
+ Classifier: Programming Language :: Python :: 3.10
+ Requires-Python: >=3.8
+ Description-Content-Type: text/markdown
+ Requires-Dist: torch
+ Requires-Dist: absl-py
+ Requires-Dist: accelerate
+ Requires-Dist: cachetools
+ Requires-Dist: datasets<3.0.0,>=2.18.0
+ Requires-Dist: editdistance
+ Requires-Dist: jsonlines
+ Requires-Dist: matplotlib
+ Requires-Dist: modelscope[framework]
+ Requires-Dist: nltk
+ Requires-Dist: openai
+ Requires-Dist: pandas
+ Requires-Dist: plotly
+ Requires-Dist: pyarrow
+ Requires-Dist: pympler
+ Requires-Dist: pyyaml
+ Requires-Dist: regex
+ Requires-Dist: requests
+ Requires-Dist: requests-toolbelt
+ Requires-Dist: rouge-score
+ Requires-Dist: sacrebleu
+ Requires-Dist: scikit-learn
+ Requires-Dist: seaborn
+ Requires-Dist: sentencepiece
+ Requires-Dist: simple-ddl-parser
+ Requires-Dist: tabulate
+ Requires-Dist: tiktoken
+ Requires-Dist: tqdm
+ Requires-Dist: transformers>=4.33
+ Requires-Dist: transformers_stream_generator
+ Requires-Dist: jieba
+ Requires-Dist: rouge-chinese
+ Provides-Extra: opencompass
+ Requires-Dist: ms-opencompass>=0.1.0; extra == "opencompass"
+ Provides-Extra: vlmeval
+ Requires-Dist: ms-vlmeval>=0.0.5; extra == "vlmeval"
+ Provides-Extra: inner
+ Requires-Dist: absl-py; extra == "inner"
+ Requires-Dist: accelerate; extra == "inner"
+ Requires-Dist: alibaba_itag_sdk; extra == "inner"
+ Requires-Dist: dashscope; extra == "inner"
+ Requires-Dist: editdistance; extra == "inner"
+ Requires-Dist: jsonlines; extra == "inner"
+ Requires-Dist: jsonlines; extra == "inner"
+ Requires-Dist: nltk; extra == "inner"
+ Requires-Dist: openai; extra == "inner"
+ Requires-Dist: pandas==1.5.3; extra == "inner"
+ Requires-Dist: plotly; extra == "inner"
+ Requires-Dist: pyarrow; extra == "inner"
+ Requires-Dist: pyodps; extra == "inner"
+ Requires-Dist: pyyaml; extra == "inner"
+ Requires-Dist: regex; extra == "inner"
+ Requires-Dist: requests==2.28.1; extra == "inner"
+ Requires-Dist: requests-toolbelt==0.10.1; extra == "inner"
+ Requires-Dist: rouge-score; extra == "inner"
+ Requires-Dist: sacrebleu; extra == "inner"
+ Requires-Dist: scikit-learn; extra == "inner"
+ Requires-Dist: seaborn; extra == "inner"
+ Requires-Dist: simple-ddl-parser; extra == "inner"
+ Requires-Dist: streamlit; extra == "inner"
+ Requires-Dist: tqdm; extra == "inner"
+ Requires-Dist: transformers<4.43,>=4.33; extra == "inner"
+ Requires-Dist: transformers_stream_generator; extra == "inner"
+ Provides-Extra: all
+ Requires-Dist: torch; extra == "all"
+ Requires-Dist: absl-py; extra == "all"
+ Requires-Dist: accelerate; extra == "all"
+ Requires-Dist: cachetools; extra == "all"
+ Requires-Dist: datasets<3.0.0,>=2.18.0; extra == "all"
+ Requires-Dist: editdistance; extra == "all"
+ Requires-Dist: jsonlines; extra == "all"
+ Requires-Dist: matplotlib; extra == "all"
+ Requires-Dist: modelscope[framework]; extra == "all"
+ Requires-Dist: nltk; extra == "all"
+ Requires-Dist: openai; extra == "all"
+ Requires-Dist: pandas; extra == "all"
+ Requires-Dist: plotly; extra == "all"
+ Requires-Dist: pyarrow; extra == "all"
+ Requires-Dist: pympler; extra == "all"
+ Requires-Dist: pyyaml; extra == "all"
+ Requires-Dist: regex; extra == "all"
+ Requires-Dist: requests; extra == "all"
+ Requires-Dist: requests-toolbelt; extra == "all"
+ Requires-Dist: rouge-score; extra == "all"
+ Requires-Dist: sacrebleu; extra == "all"
+ Requires-Dist: scikit-learn; extra == "all"
+ Requires-Dist: seaborn; extra == "all"
+ Requires-Dist: sentencepiece; extra == "all"
+ Requires-Dist: simple-ddl-parser; extra == "all"
+ Requires-Dist: tabulate; extra == "all"
+ Requires-Dist: tiktoken; extra == "all"
+ Requires-Dist: tqdm; extra == "all"
+ Requires-Dist: transformers>=4.33; extra == "all"
+ Requires-Dist: transformers_stream_generator; extra == "all"
+ Requires-Dist: jieba; extra == "all"
+ Requires-Dist: rouge-chinese; extra == "all"
+ Requires-Dist: ms-opencompass>=0.1.0; extra == "all"
+ Requires-Dist: ms-vlmeval>=0.0.5; extra == "all"
+
+ English | [简体中文](README_zh.md)
+
+
+ ![](docs/en/_static/images/evalscope_logo.png)
+
+ <p align="center">
+ <a href="https://badge.fury.io/py/evalscope"><img src="https://badge.fury.io/py/evalscope.svg" alt="PyPI version" height="18"></a>
+ <a href="https://pypi.org/project/evalscope"><img alt="PyPI - Downloads" src="https://static.pepy.tech/badge/evalscope">
+ </a>
+ <a href='https://evalscope.readthedocs.io/en/latest/?badge=latest'>
+ <img src='https://readthedocs.org/projects/evalscope-en/badge/?version=latest' alt='Documentation Status' />
+ </a>
+ <br>
+ <a href="https://evalscope.readthedocs.io/en/latest/"><span style="font-size: 16px;">📖 Documentation</span></a> &nbsp; | &nbsp;<a href="https://evalscope.readthedocs.io/zh-cn/latest/"><span style="font-size: 16px;"> 📖 中文文档</span></a>
+ </p>
+
+
+ ## 📋 Table of Contents
+ - [Introduction](#introduction)
+ - [News](#news)
+ - [Installation](#installation)
+ - [Quick Start](#quick-start)
+ - [Evaluation Backend](#evaluation-backend)
+ - [Custom Dataset Evaluation](#custom-dataset-evaluation)
+ - [Offline Evaluation](#offline-evaluation)
+ - [Arena Mode](#arena-mode)
+ - [Model Serving Performance Evaluation](#model-serving-performance-evaluation)
+ - [Leaderboard](#leaderboard)
+
+ ## 📝 Introduction
+
+ Evaluation of large models (including large language models and multi-modal large language models) has become a critical step in assessing and improving them. To better support this process, we propose the EvalScope framework.
+
+ ### Framework Features
+ - **Benchmark Datasets**: Preloaded with several commonly used benchmarks, including MMLU, CMMLU, C-Eval, GSM8K, ARC, HellaSwag, TruthfulQA, MATH, HumanEval, etc.
+ - **Evaluation Metrics**: Implements various commonly used evaluation metrics.
+ - **Model Access**: A unified model access mechanism that is compatible with the Generate and Chat interfaces of multiple model families.
+ - **Automated Evaluation**: Includes automatic evaluation of objective questions and complex task evaluation using expert models.
+ - **Evaluation Reports**: Automatically generates evaluation reports.
+ - **Arena Mode**: Used for head-to-head comparison and objective evaluation of models, supporting several evaluation modes, including:
+   - **Single mode**: Scoring a single model.
+   - **Pairwise-baseline mode**: Comparing against a baseline model.
+   - **Pairwise (all) mode**: Pairwise comparison among all models.
+ - **Visualization Tools**: Provides intuitive displays of evaluation results.
+ - **Model Performance Evaluation**: Offers a performance testing tool for model inference services and detailed statistics; see the [Model Performance Evaluation Documentation](https://evalscope.readthedocs.io/en/latest/user_guides/stress_test.html).
+ - **OpenCompass Integration**: Supports OpenCompass as the evaluation backend, providing advanced encapsulation and task simplification, allowing for easier task submission for evaluation.
+ - **VLMEvalKit Integration**: Supports VLMEvalKit as the evaluation backend, facilitating the initiation of multi-modal evaluation tasks, supporting various multi-modal models and datasets.
+ - **Full-Link Support**: Through seamless integration with the [ms-swift](https://github.com/modelscope/ms-swift) training framework, provides a one-stop workflow for model training, deployment, evaluation, and report viewing, enhancing development efficiency.
+
+
+ <details><summary>Overall Architecture</summary>
+
+ <p align="center">
+ <img src="docs/en/_static/images/evalscope_framework.png" width="70%">
+ <br>Fig 1. EvalScope Framework.
+ </p>
+
+ The architecture includes the following modules:
+ 1. **Model Adapter**: Converts the outputs of specific models into the format required by the framework, supporting both API-based models and locally run models.
+ 2. **Data Adapter**: Converts and processes input data to meet various evaluation needs and formats.
+ 3. **Evaluation Backend**:
+    - **Native**: EvalScope's own **default evaluation framework**, supporting various evaluation modes, including single model evaluation, arena mode, baseline model comparison mode, etc.
+    - **OpenCompass**: Supports [OpenCompass](https://github.com/open-compass/opencompass) as the evaluation backend, providing advanced encapsulation and task simplification, allowing you to submit tasks for evaluation more easily.
+    - **VLMEvalKit**: Supports [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) as the evaluation backend, enabling easy initiation of multi-modal evaluation tasks, supporting various multi-modal models and datasets.
+    - **ThirdParty**: Other third-party evaluation tasks, such as ToolBench.
+ 4. **Performance Evaluator**: Measures model inference service performance, including performance testing, stress testing, performance report generation, and visualization.
+ 5. **Evaluation Report**: The final generated evaluation report summarizes the model's performance and can be used for decision-making and further model optimization.
+ 6. **Visualization**: Visualizations help users intuitively understand evaluation results, facilitating analysis and comparison of different models.
+ </details>
+
+
+ ## 🎉 News
+ - 🔥 **[2024.09.18]** Our documentation now includes a blog module featuring technical research and discussions related to evaluation. We invite you to [📖 read it](https://evalscope.readthedocs.io/en/refact_readme/blog/index.html).
+ - 🔥 **[2024.09.12]** Added support for LongWriter evaluation, covering 10,000+ word generation. You can use the [LongBench-Write](evalscope/third_party/longbench_write/README.md) benchmark to measure both long-output quality and output length.
+ - 🔥 **[2024.08.30]** Support for custom dataset evaluations, including text datasets and multimodal image-text datasets.
+ - 🔥 **[2024.08.20]** Updated the official documentation, including getting started guides, best practices, and FAQs. Feel free to [📖 read it here](https://evalscope.readthedocs.io/en/latest/)!
+ - 🔥 **[2024.08.09]** Simplified the installation process, allowing pypi installation of vlmeval dependencies; optimized the multimodal model evaluation experience, achieving up to 10x acceleration based on the OpenAI API evaluation chain.
+ - 🔥 **[2024.07.31]** Important change: the package name `llmuses` has been changed to `evalscope`. Please update your code accordingly.
+ - 🔥 **[2024.07.26]** Support for **VLMEvalKit** as a third-party evaluation framework to initiate multimodal model evaluation tasks.
+ - 🔥 **[2024.06.29]** Support for **OpenCompass** as a third-party evaluation framework, encapsulated at a higher level, supporting pip installation and simplified evaluation task configuration.
+ - 🔥 **[2024.06.13]** EvalScope seamlessly integrates with the fine-tuning framework SWIFT, providing full-chain support from LLM training to evaluation.
+ - 🔥 **[2024.06.13]** Integrated the Agent evaluation dataset ToolBench.
+
+
+
+ ## 🛠️ Installation
+ ### Method 1: Install Using pip
+ We recommend using conda to manage your environment and installing dependencies with pip:
+
+ 1. Create a conda environment (optional)
+ ```shell
+ # It is recommended to use Python 3.10
+ conda create -n evalscope python=3.10
+ # Activate the conda environment
+ conda activate evalscope
+ ```
+
+ 2. Install dependencies using pip
+ ```shell
+ pip install evalscope # Install Native backend (default)
+ # Additional options
+ pip install evalscope[opencompass] # Install OpenCompass backend
+ pip install evalscope[vlmeval] # Install VLMEvalKit backend
+ pip install evalscope[all] # Install all backends (Native, OpenCompass, VLMEvalKit)
+ ```
+
+ > [!WARNING]
+ > Since the project has been renamed to `evalscope`, for versions `v0.4.3` or earlier you can install using the following command:
+ > ```shell
+ > pip install 'llmuses<=0.4.3'
+ > ```
+ > To import relevant dependencies using `llmuses`:
+ > ``` python
+ > from llmuses import ...
+ > ```
+
+ ### Method 2: Install from Source
+ 1. Download the source code
+ ```shell
+ git clone https://github.com/modelscope/evalscope.git
+ ```
+
+ 2. Install dependencies
+ ```shell
+ cd evalscope/
+ pip install -e . # Install Native backend
+ # Additional options
+ pip install -e '.[opencompass]' # Install OpenCompass backend
+ pip install -e '.[vlmeval]' # Install VLMEvalKit backend
+ pip install -e '.[all]' # Install all backends (Native, OpenCompass, VLMEvalKit)
+ ```
+
+
+ ## 🚀 Quick Start
+
+ ### 1. Simple Evaluation
+ To evaluate a model using default settings on specified datasets, follow the process below:
+
+ #### Install using pip
+ You can execute this command from any directory:
+ ```bash
+ python -m evalscope.run \
+ --model qwen/Qwen2-0.5B-Instruct \
+ --template-type qwen \
+ --datasets arc
+ ```
+
+ #### Install from source
+ Execute this command in the `evalscope` directory:
+ ```bash
+ python evalscope/run.py \
+ --model qwen/Qwen2-0.5B-Instruct \
+ --template-type qwen \
+ --datasets arc
+ ```
+
+ If prompted with `Do you wish to run the custom code? [y/N]`, please type `y`.
+
+
+ #### Basic Parameter Descriptions
+ - `--model`: Specifies the `model_id` of the model on [ModelScope](https://modelscope.cn/), allowing automatic download. For example, see the [Qwen2-0.5B-Instruct model link](https://modelscope.cn/models/qwen/Qwen2-0.5B-Instruct/summary); you can also use a local path, such as `/path/to/model`.
+ - `--template-type`: Specifies the template type corresponding to the model. Refer to the `Default Template` field in the [template table](https://swift.readthedocs.io/en/latest/Instruction/Supported-models-datasets.html#llm) when filling in this field.
+ - `--datasets`: The dataset name(s); multiple datasets can be specified, separated by spaces, and they will be downloaded automatically. Refer to the [supported datasets list](https://evalscope.readthedocs.io/en/latest/get_started/supported_dataset.html) for available options.
+
+ ### 2. Parameterized Evaluation
+ If you wish to conduct a more customized evaluation, such as modifying model parameters or dataset parameters, you can use the following commands:
+
+ **Example 1:**
+ ```shell
+ python evalscope/run.py \
+ --model qwen/Qwen2-0.5B-Instruct \
+ --template-type qwen \
+ --model-args revision=master,precision=torch.float16,device_map=auto \
+ --datasets gsm8k ceval \
+ --use-cache true \
+ --limit 10
+ ```
+
+ **Example 2:**
+ ```shell
+ python evalscope/run.py \
+ --model qwen/Qwen2-0.5B-Instruct \
+ --template-type qwen \
+ --generation-config do_sample=false,temperature=0.0 \
+ --datasets ceval \
+ --dataset-args '{"ceval": {"few_shot_num": 0, "few_shot_random": false}}' \
+ --limit 10
+ ```
+
+ #### Parameter Descriptions
+ In addition to the three [basic parameters](#basic-parameter-descriptions), the other parameters are as follows (an equivalent task-dictionary sketch follows this list):
+ - `--model-args`: Model loading parameters, separated by commas, in `key=value` format.
+ - `--generation-config`: Generation parameters, separated by commas, in `key=value` format.
+   - `do_sample`: Whether to use sampling; default is `false`.
+   - `max_new_tokens`: Maximum generation length; default is 1024.
+   - `temperature`: Sampling temperature.
+   - `top_p`: Nucleus (top-p) sampling threshold.
+   - `top_k`: Top-k sampling cutoff.
+ - `--use-cache`: Whether to use the local cache; default is `false`. If set to `true`, previously evaluated model and dataset combinations are not evaluated again but are read directly from the local cache.
+ - `--dataset-args`: Evaluation dataset configuration parameters, provided in JSON format, where the key is the dataset name and the value is the parameters; note that these must correspond one-to-one with the values in `--datasets`.
+   - `few_shot_num`: Number of few-shot examples.
+   - `few_shot_random`: Whether to randomly sample few-shot data; if not specified, defaults to `true`.
+ - `--limit`: Maximum number of evaluation samples per dataset; if not specified, all samples are evaluated, which is useful for quick validation.
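
For reference, the zero-shot C-Eval settings from Example 2 map directly onto the task dictionary consumed by `run_task` (see section 3 below). This is a minimal sketch; the field names mirror the example configuration shown in that section.

```python
# Sketch: Example 2 expressed as a run_task configuration dictionary.
# Field names follow the task dictionary shown in section 3 below.
your_task_cfg = {
    'model': 'qwen/Qwen2-0.5B-Instruct',
    'template_type': 'qwen',
    'generation_config': {'do_sample': False, 'temperature': 0.0},
    'datasets': ['ceval'],
    'dataset_args': {'ceval': {'few_shot_num': 0, 'few_shot_random': False}},
    'limit': 10,
}
```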
+
+ ### 3. Use the run_task Function to Submit an Evaluation Task
+ Submitting an evaluation task with the `run_task` function takes the same parameters as the command line; you pass a dictionary that includes the following fields:
+
+ #### 1. Configure the Task Dictionary Parameters
+ ```python
+ import torch
+ from evalscope.constants import DEFAULT_ROOT_CACHE_DIR
+
+ # Example
+ your_task_cfg = {
+     'model_args': {'revision': None, 'precision': torch.float16, 'device_map': 'auto'},
+     'generation_config': {'do_sample': False, 'repetition_penalty': 1.0, 'max_new_tokens': 512},
+     'dataset_args': {},
+     'dry_run': False,
+     'model': 'qwen/Qwen2-0.5B-Instruct',
+     'template_type': 'qwen',
+     'datasets': ['arc', 'hellaswag'],
+     'work_dir': DEFAULT_ROOT_CACHE_DIR,
+     'outputs': DEFAULT_ROOT_CACHE_DIR,
+     'mem_cache': False,
+     'dataset_hub': 'ModelScope',
+     'dataset_dir': DEFAULT_ROOT_CACHE_DIR,
+     'limit': 10,
+     'debug': False
+ }
+ ```
+ Here, `DEFAULT_ROOT_CACHE_DIR` is set to `'~/.cache/evalscope'`.
+
+ #### 2. Execute Task with run_task
+ ```python
+ from evalscope.run import run_task
+ run_task(task_cfg=your_task_cfg)
+ ```
+
+
+ ## Evaluation Backend
+ EvalScope supports initiating evaluation tasks through third-party evaluation frameworks, which we call Evaluation Backends. Currently supported evaluation backends include the following (a sketch of submitting a backend task follows the list):
+ - **Native**: EvalScope's own **default evaluation framework**, supporting various evaluation modes including single model evaluation, arena mode, and baseline model comparison mode.
+ - [OpenCompass](https://github.com/open-compass/opencompass): Initiate OpenCompass evaluation tasks through EvalScope. Lightweight, easy to customize, and supports seamless integration with the LLM fine-tuning framework ms-swift. [📖 User Guide](https://evalscope.readthedocs.io/en/latest/user_guides/opencompass_backend.html)
+ - [VLMEvalKit](https://github.com/open-compass/VLMEvalKit): Initiate VLMEvalKit multimodal evaluation tasks through EvalScope. Supports various multimodal models and datasets, and offers seamless integration with the LLM fine-tuning framework ms-swift. [📖 User Guide](https://evalscope.readthedocs.io/en/latest/user_guides/vlmevalkit_backend.html)
+ - **ThirdParty**: Third-party evaluation tasks, e.g. [ToolBench](https://evalscope.readthedocs.io/en/latest/third_party/toolbench.html); you can also contribute your own evaluation task to EvalScope as a third-party backend.
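
As a rough illustration, a non-native backend task is submitted through the same `run_task` entry point with a backend-specific configuration. The `eval_backend` and `eval_config` field names below are taken from the backend user guides linked above and should be treated as an assumption here rather than a verified API; consult the guides for the exact schema.

```python
from evalscope.run import run_task

# Hedged sketch: dispatching a task to the OpenCompass backend.
# 'eval_backend' and 'eval_config' are assumed field names; the concrete
# contents of 'eval_config' (datasets, model endpoints, work dir, ...)
# are defined in the OpenCompass backend user guide linked above.
task_cfg = {
    'eval_backend': 'OpenCompass',
    'eval_config': {
        # backend-specific settings go here, as documented in the user guide
    },
}
run_task(task_cfg=task_cfg)
```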
+
+ ## Custom Dataset Evaluation
+ EvalScope supports custom dataset evaluation. For details, please refer to the Custom Dataset Evaluation [📖 User Guide](https://evalscope.readthedocs.io/en/latest/advanced_guides/custom_dataset.html).
+
+ ## Offline Evaluation
+ You can use a local dataset to evaluate the model without an internet connection.
+
+ Refer to: Offline Evaluation [📖 User Guide](https://evalscope.readthedocs.io/en/latest/user_guides/offline_evaluation.html)
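
A minimal sketch of what an offline configuration can look like, reusing the `dataset_hub` and `dataset_dir` fields from the task dictionary above; the `'Local'` hub value and the local paths are assumptions for illustration, so see the Offline Evaluation user guide for the exact settings.

```python
# Sketch of an offline run: a local model path plus a local copy of the
# benchmark data. 'dataset_hub' and 'dataset_dir' appear in the task
# dictionary shown earlier; the 'Local' hub value is an assumption here.
your_task_cfg = {
    'model': '/path/to/local/model',
    'template_type': 'qwen',
    'datasets': ['arc'],
    'dataset_hub': 'Local',           # assumed value; default is 'ModelScope'
    'dataset_dir': '/path/to/local/datasets',
}
```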
+
+
+ ## Arena Mode
+ Arena mode evaluates multiple candidate models through pairwise battles; you can use the AI Enhanced Auto-Reviewer (AAR) automatic evaluation process or manual evaluation to obtain the evaluation report.
+
+ Refer to: Arena Mode [📖 User Guide](https://evalscope.readthedocs.io/en/latest/user_guides/arena.html)
+
+ ## Model Serving Performance Evaluation
+ A stress-testing tool focused on large language model serving, which can be customized to support various dataset formats and API protocols.
+
+ Refer to: Model Serving Performance Evaluation [📖 User Guide](https://evalscope.readthedocs.io/en/latest/user_guides/stress_test.html)
+
+
+ ## Leaderboard
+ The LLM Leaderboard aims to provide an objective and comprehensive evaluation standard and platform that helps researchers and developers understand and compare the performance of models on various tasks on ModelScope.
+
+ Refer to: [Leaderboard](https://modelscope.cn/leaderboard/58/ranking?type=free)
+
+
+ ## TO-DO List
+ - [x] Agents evaluation
+ - [x] vLLM
+ - [ ] Distributed evaluation
+ - [x] Multi-modal evaluation
+ - [ ] Benchmarks
+   - [ ] GAIA
+   - [ ] GPQA
+   - [x] MBPP
+ - [ ] Auto-reviewer
+   - [ ] Qwen-max
+