crfm-helm 0.5.6__py3-none-any.whl → 0.5.10__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Potentially problematic release: this version of crfm-helm has been flagged as potentially problematic.
- {crfm_helm-0.5.6.dist-info → crfm_helm-0.5.10.dist-info}/METADATA +72 -130
- {crfm_helm-0.5.6.dist-info → crfm_helm-0.5.10.dist-info}/RECORD +372 -305
- helm/benchmark/adaptation/adapter_spec.py +10 -0
- helm/benchmark/adaptation/adapters/multimodal/multiple_choice_joint_multimodal_adapter.py +11 -3
- helm/benchmark/adaptation/adapters/multiple_choice_joint_adapter.py +11 -8
- helm/benchmark/annotation/aci_bench_annotator.py +11 -22
- helm/benchmark/annotation/air_bench_annotator.py +1 -1
- helm/benchmark/annotation/alrage_annotator.py +90 -0
- helm/benchmark/annotation/chw_care_plan_annotator.py +10 -21
- helm/benchmark/annotation/dischargeme_annotator.py +11 -22
- helm/benchmark/annotation/live_qa_annotator.py +1 -1
- helm/benchmark/annotation/med_dialog_annotator.py +11 -22
- helm/benchmark/annotation/medalign_annotator.py +11 -22
- helm/benchmark/annotation/medi_qa_annotator.py +11 -22
- helm/benchmark/annotation/medication_qa_annotator.py +11 -22
- helm/benchmark/annotation/mental_health_annotator.py +11 -22
- helm/benchmark/annotation/mimic_bhc_annotator.py +11 -22
- helm/benchmark/annotation/mimic_rrs_annotator.py +11 -22
- helm/benchmark/annotation/model_as_judge.py +23 -18
- helm/benchmark/annotation/mtsamples_procedures_annotator.py +11 -22
- helm/benchmark/annotation/mtsamples_replicate_annotator.py +11 -22
- helm/benchmark/annotation/starr_patient_instructions_annotator.py +11 -22
- helm/benchmark/metrics/air_bench_metrics.py +3157 -1
- helm/benchmark/metrics/alrage_metric.py +35 -0
- helm/benchmark/metrics/basic_metrics.py +267 -2
- helm/benchmark/metrics/bbq_metrics.py +12 -0
- helm/benchmark/metrics/classification_metrics.py +19 -1
- helm/benchmark/metrics/codeinsights_code_efficiency_metrics.py +186 -0
- helm/benchmark/metrics/codeinsights_code_evaluation_metrics.py +477 -0
- helm/benchmark/metrics/codeinsights_correct_code_metrics.py +366 -0
- helm/benchmark/metrics/codeinsights_edge_case_metrics.py +92 -0
- helm/benchmark/metrics/codeinsights_metric_specs.py +51 -0
- helm/benchmark/metrics/comet_metric.py +1 -1
- helm/benchmark/metrics/conv_fin_qa_calc_metrics.py +12 -1
- helm/benchmark/metrics/copyright_metrics.py +1 -1
- helm/benchmark/metrics/decodingtrust_stereotype_bias_metrics.py +1 -1
- helm/benchmark/metrics/dry_run_metrics.py +30 -1
- helm/benchmark/metrics/efficiency_metrics.py +74 -0
- helm/benchmark/metrics/ehr_sql_metrics.py +57 -1
- helm/benchmark/metrics/evaluate_reference_metrics.py +312 -1
- helm/benchmark/metrics/gpqa_chain_of_thought_metric.py +13 -1
- helm/benchmark/metrics/helpdesk_call_summarization_metrics.py +13 -1
- helm/benchmark/metrics/ifeval_metrics.py +13 -1
- helm/benchmark/metrics/image_generation/clip_score_metrics.py +13 -2
- helm/benchmark/metrics/image_generation/fractal_dimension/fractal_dimension_util.py +1 -1
- helm/benchmark/metrics/instruction_following_critique_metrics.py +41 -1
- helm/benchmark/metrics/kpi_edgar_metrics.py +21 -0
- helm/benchmark/metrics/language_modeling_metrics.py +13 -1
- helm/benchmark/metrics/live_qa_metrics.py +13 -1
- helm/benchmark/metrics/llm_jury_metrics.py +13 -1
- helm/benchmark/metrics/lmkt_metric_specs.py +12 -0
- helm/benchmark/metrics/lmkt_metrics.py +47 -0
- helm/benchmark/metrics/medcalc_bench_metrics.py +14 -1
- helm/benchmark/metrics/medec_metrics.py +25 -2
- helm/benchmark/metrics/melt_toxicity_metric.py +1 -1
- helm/benchmark/metrics/metric.py +25 -0
- helm/benchmark/metrics/mimiciv_billing_code_metrics.py +32 -1
- helm/benchmark/metrics/omni_math_metrics.py +13 -1
- helm/benchmark/metrics/safety_metrics.py +13 -1
- helm/benchmark/metrics/seahelm_metrics.py +14 -1
- helm/benchmark/metrics/summac/model_summac.py +3 -3
- helm/benchmark/metrics/summarization_metrics.py +129 -1
- helm/benchmark/metrics/toxicity_metrics.py +31 -1
- helm/benchmark/metrics/ultra_suite_asr_classification_metrics.py +52 -0
- helm/benchmark/metrics/wildbench_metrics.py +21 -1
- helm/benchmark/model_deployment_registry.py +11 -19
- helm/benchmark/presentation/create_plots.py +11 -2
- helm/benchmark/presentation/run_display.py +13 -3
- helm/benchmark/presentation/run_entry.py +2 -2
- helm/benchmark/presentation/schema.py +10 -22
- helm/benchmark/presentation/summarize.py +189 -14
- helm/benchmark/presentation/taxonomy_info.py +20 -0
- helm/benchmark/presentation/test_create_plots.py +4 -1
- helm/benchmark/run.py +15 -4
- helm/benchmark/run_expander.py +4 -0
- helm/benchmark/run_specs/arabic_run_specs.py +197 -0
- helm/benchmark/run_specs/bluex_run_specs.py +40 -0
- helm/benchmark/run_specs/classic_run_specs.py +2 -55
- helm/benchmark/run_specs/codeinsights_run_specs.py +192 -0
- helm/benchmark/run_specs/healthqa_br_run_specs.py +40 -0
- helm/benchmark/run_specs/heim_run_specs.py +3 -1
- helm/benchmark/run_specs/lmkt_run_specs.py +144 -0
- helm/benchmark/run_specs/long_context_run_specs.py +48 -1
- helm/benchmark/run_specs/medhelm/__init__.py +0 -0
- helm/benchmark/run_specs/medhelm/benchmark_config.py +219 -0
- helm/benchmark/run_specs/medhelm_run_specs.py +363 -53
- helm/benchmark/run_specs/multilingual_run_specs.py +50 -0
- helm/benchmark/run_specs/speech_disorder_audio_run_specs.py +11 -13
- helm/benchmark/runner.py +7 -0
- helm/benchmark/scenarios/aci_bench_scenario.py +23 -0
- helm/benchmark/scenarios/air_bench_scenario.py +21 -0
- helm/benchmark/scenarios/alghafa_scenario.py +126 -0
- helm/benchmark/scenarios/alrage_scenario.py +54 -0
- helm/benchmark/scenarios/anthropic_hh_rlhf_scenario.py +23 -1
- helm/benchmark/scenarios/anthropic_red_team_scenario.py +12 -1
- helm/benchmark/scenarios/arabic_exams_scenario.py +114 -0
- helm/benchmark/scenarios/arabic_mmlu_scenario.py +82 -0
- helm/benchmark/scenarios/aratrust_scenario.py +95 -0
- helm/benchmark/scenarios/audio_language/casual_conversations2_scenario.py +1 -1
- helm/benchmark/scenarios/audio_language/mustard_scenario.py +1 -1
- helm/benchmark/scenarios/audio_language/ultra_suite_asr_classification_scenario.py +74 -0
- helm/benchmark/scenarios/audio_language/ultra_suite_asr_transcription_scenario.py +70 -0
- helm/benchmark/scenarios/audio_language/ultra_suite_classification_scenario.py +22 -53
- helm/benchmark/scenarios/audio_language/ultra_suite_disorder_breakdown_scenario.py +21 -21
- helm/benchmark/scenarios/audio_language/ultra_suite_disorder_symptoms_scenario.py +21 -52
- helm/benchmark/scenarios/babi_qa_scenario.py +15 -0
- helm/benchmark/scenarios/banking77_scenario.py +21 -0
- helm/benchmark/scenarios/bbq_scenario.py +15 -0
- helm/benchmark/scenarios/best_chatgpt_prompts.yaml +473 -0
- helm/benchmark/scenarios/bird_sql_scenario.py +18 -0
- helm/benchmark/scenarios/bluex_scenario.py +70 -0
- helm/benchmark/scenarios/bold_scenario.py +15 -0
- helm/benchmark/scenarios/boolq_scenario.py +20 -0
- helm/benchmark/scenarios/chw_care_plan_scenario.py +23 -0
- helm/benchmark/scenarios/civil_comments_scenario.py +13 -0
- helm/benchmark/scenarios/clear_scenario.py +23 -0
- helm/benchmark/scenarios/cleva_scenario.py +480 -1
- helm/benchmark/scenarios/code_scenario.py +28 -0
- helm/benchmark/scenarios/codeinsights_code_efficiency_scenario.py +197 -0
- helm/benchmark/scenarios/codeinsights_correct_code_scenario.py +78 -0
- helm/benchmark/scenarios/codeinsights_edge_case_scenario.py +192 -0
- helm/benchmark/scenarios/codeinsights_student_coding_scenario.py +162 -0
- helm/benchmark/scenarios/codeinsights_student_mistake_scenario.py +188 -0
- helm/benchmark/scenarios/commonsense_scenario.py +32 -0
- helm/benchmark/scenarios/compositional_instructions.yaml +70 -0
- helm/benchmark/scenarios/conv_fin_qa_calc_scenario.py +21 -0
- helm/benchmark/scenarios/copyright_scenario.py +35 -1
- helm/benchmark/scenarios/cti_to_mitre_scenario.py +21 -0
- helm/benchmark/scenarios/czech_bank_qa_scenario.py +18 -0
- helm/benchmark/scenarios/decodingtrust_adv_demonstration_scenario.py +22 -1
- helm/benchmark/scenarios/decodingtrust_adv_robustness_scenario.py +23 -1
- helm/benchmark/scenarios/decodingtrust_fairness_scenario.py +22 -1
- helm/benchmark/scenarios/decodingtrust_machine_ethics_scenario.py +21 -1
- helm/benchmark/scenarios/decodingtrust_ood_robustness_scenario.py +13 -0
- helm/benchmark/scenarios/decodingtrust_privacy_scenario.py +13 -1
- helm/benchmark/scenarios/decodingtrust_stereotype_bias_scenario.py +13 -1
- helm/benchmark/scenarios/decodingtrust_toxicity_prompts_scenario.py +13 -1
- helm/benchmark/scenarios/dischargeme_scenario.py +24 -0
- helm/benchmark/scenarios/disinformation_scenario.py +22 -0
- helm/benchmark/scenarios/dyck_language_scenario.py +15 -0
- helm/benchmark/scenarios/ehrshot_scenario.py +22 -0
- helm/benchmark/scenarios/enem_challenge_scenario.py +19 -0
- helm/benchmark/scenarios/entity_data_imputation_scenario.py +14 -0
- helm/benchmark/scenarios/entity_matching_scenario.py +14 -0
- helm/benchmark/scenarios/exams_multilingual_scenario.py +115 -0
- helm/benchmark/scenarios/fin_qa_scenario.py +20 -0
- helm/benchmark/scenarios/financebench_scenario.py +21 -0
- helm/benchmark/scenarios/financial_phrasebank_scenario.py +21 -0
- helm/benchmark/scenarios/gold_commodity_news_scenario.py +21 -0
- helm/benchmark/scenarios/gpqa_scenario.py +18 -0
- helm/benchmark/scenarios/grammar_scenario.py +20 -1
- helm/benchmark/scenarios/gsm_scenario.py +21 -0
- helm/benchmark/scenarios/harm_bench_gcg_transfer_scenario.py +12 -1
- helm/benchmark/scenarios/harm_bench_scenario.py +12 -1
- helm/benchmark/scenarios/headqa_scenario.py +22 -0
- helm/benchmark/scenarios/healthqa_br_scenario.py +80 -0
- helm/benchmark/scenarios/helpdesk_call_summarization_scenario.py +13 -0
- helm/benchmark/scenarios/ice_scenario.py +21 -1
- helm/benchmark/scenarios/ifeval_scenario.py +18 -0
- helm/benchmark/scenarios/imdb_scenario.py +15 -0
- helm/benchmark/scenarios/infinite_bench_en_mc_scenario.py +111 -0
- helm/benchmark/scenarios/infinite_bench_en_qa_scenario.py +1 -1
- helm/benchmark/scenarios/infinite_bench_en_sum_scenario.py +19 -0
- helm/benchmark/scenarios/koala_scenario.py +21 -1
- helm/benchmark/scenarios/kpi_edgar_scenario.py +21 -0
- helm/benchmark/scenarios/legal_contract_summarization_scenario.py +20 -0
- helm/benchmark/scenarios/legal_summarization_scenario.py +50 -0
- helm/benchmark/scenarios/legal_support_scenario.py +13 -0
- helm/benchmark/scenarios/legalbench_scenario.py +19 -0
- helm/benchmark/scenarios/lex_glue_scenario.py +11 -0
- helm/benchmark/scenarios/lextreme_scenario.py +11 -0
- helm/benchmark/scenarios/lmkt_scenarios.py +288 -0
- helm/benchmark/scenarios/lsat_qa_scenario.py +14 -0
- helm/benchmark/scenarios/madinah_qa_scenario.py +73 -0
- helm/benchmark/scenarios/math_scenario.py +54 -20
- helm/benchmark/scenarios/mbzuai_human_translated_arabic_mmlu.py +68 -0
- helm/benchmark/scenarios/med_dialog_scenario.py +32 -1
- helm/benchmark/scenarios/med_mcqa_scenario.py +14 -0
- helm/benchmark/scenarios/med_qa_scenario.py +20 -0
- helm/benchmark/scenarios/medalign_scenario.py +23 -0
- helm/benchmark/scenarios/medalign_scenario_helper.py +19 -125
- helm/benchmark/scenarios/medbullets_scenario.py +22 -0
- helm/benchmark/scenarios/medcalc_bench_scenario.py +22 -0
- helm/benchmark/scenarios/medec_scenario.py +23 -0
- helm/benchmark/scenarios/medhallu_scenario.py +23 -0
- helm/benchmark/scenarios/medhelm/__init__.py +0 -0
- helm/benchmark/scenarios/medhelm/judges.yaml +14 -0
- helm/benchmark/scenarios/medhelm_configurable_scenario.py +101 -0
- helm/benchmark/scenarios/medi_qa_scenario.py +24 -1
- helm/benchmark/scenarios/medication_qa_scenario.py +31 -1
- helm/benchmark/scenarios/melt_scenarios.py +2 -2
- helm/benchmark/scenarios/mental_health_scenario.py +23 -0
- helm/benchmark/scenarios/mimic_bhc_scenario.py +25 -1
- helm/benchmark/scenarios/mimic_rrs_scenario.py +23 -0
- helm/benchmark/scenarios/mimiciv_billing_code_scenario.py +22 -0
- helm/benchmark/scenarios/mmlu_pro_scenario.py +18 -0
- helm/benchmark/scenarios/mmlu_scenario.py +21 -0
- helm/benchmark/scenarios/mmmlu_scenario.py +85 -0
- helm/benchmark/scenarios/msmarco_scenario.py +30 -0
- helm/benchmark/scenarios/mtsamples_procedures_scenario.py +22 -0
- helm/benchmark/scenarios/mtsamples_replicate_scenario.py +22 -0
- helm/benchmark/scenarios/n2c2_ct_matching_scenario.py +20 -0
- helm/benchmark/scenarios/narrativeqa_scenario.py +19 -0
- helm/benchmark/scenarios/natural_qa_scenario.py +32 -0
- helm/benchmark/scenarios/omni_math_scenario.py +18 -0
- helm/benchmark/scenarios/open_assistant_scenario.py +22 -0
- helm/benchmark/scenarios/openai_mrcr_scenario.py +15 -0
- helm/benchmark/scenarios/pubmed_qa_scenario.py +22 -0
- helm/benchmark/scenarios/quac_scenario.py +14 -0
- helm/benchmark/scenarios/race_based_med_scenario.py +23 -0
- helm/benchmark/scenarios/raft_scenario.py +15 -0
- helm/benchmark/scenarios/real_toxicity_prompts_scenario.py +14 -1
- helm/benchmark/scenarios/ruler_qa_scenarios.py +40 -0
- helm/benchmark/scenarios/scenario.py +31 -0
- helm/benchmark/scenarios/seahelm_scenario.py +350 -2
- helm/benchmark/scenarios/self_instruct_scenario.py +29 -1
- helm/benchmark/scenarios/shc_bmt_scenario.py +22 -0
- helm/benchmark/scenarios/shc_cdi_scenario.py +20 -0
- helm/benchmark/scenarios/shc_conf_scenario.py +23 -0
- helm/benchmark/scenarios/shc_ent_scenario.py +21 -0
- helm/benchmark/scenarios/shc_gip_scenario.py +20 -0
- helm/benchmark/scenarios/shc_privacy_scenario.py +22 -0
- helm/benchmark/scenarios/shc_proxy_scenario.py +23 -1
- helm/benchmark/scenarios/shc_ptbm_scenario.py +23 -0
- helm/benchmark/scenarios/shc_sequoia_scenario.py +21 -0
- helm/benchmark/scenarios/simple_safety_tests_scenario.py +12 -1
- helm/benchmark/scenarios/situation_prompts.yaml +49 -0
- helm/benchmark/scenarios/spider_scenario.py +18 -0
- helm/benchmark/scenarios/starr_patient_instructions_scenario.py +22 -0
- helm/benchmark/scenarios/summarization_scenario.py +37 -0
- helm/benchmark/scenarios/synthetic_efficiency_scenario.py +22 -1
- helm/benchmark/scenarios/synthetic_reasoning_natural_scenario.py +13 -0
- helm/benchmark/scenarios/test_alghafa_scenario.py +29 -0
- helm/benchmark/scenarios/test_alrage_scenario.py +23 -0
- helm/benchmark/scenarios/test_arabic_exams_scenario.py +21 -0
- helm/benchmark/scenarios/test_aratrust_scenario.py +21 -0
- helm/benchmark/scenarios/test_bluex_scenario.py +59 -0
- helm/benchmark/scenarios/test_exams_multilingual_scenario.py +29 -0
- helm/benchmark/scenarios/test_healtha_br_scenario.py +57 -0
- helm/benchmark/scenarios/thai_exam_scenario.py +95 -0
- helm/benchmark/scenarios/the_pile_scenario.py +13 -1
- helm/benchmark/scenarios/truthful_qa_scenario.py +14 -0
- helm/benchmark/scenarios/twitter_aae_scenario.py +20 -1
- helm/benchmark/scenarios/vicuna_scenario.py +21 -1
- helm/benchmark/scenarios/wikifact_scenario.py +20 -0
- helm/benchmark/scenarios/wildbench_scenario.py +18 -0
- helm/benchmark/scenarios/wmt_14_scenario.py +19 -0
- helm/benchmark/slurm_jobs.py +1 -2
- helm/benchmark/slurm_runner.py +8 -1
- helm/benchmark/static/schema_arabic.yaml +271 -0
- helm/benchmark/static/schema_classic.yaml +0 -17
- helm/benchmark/static/schema_long_context.yaml +17 -18
- helm/benchmark/static/schema_medhelm.yaml +36 -0
- helm/benchmark/static/schema_slp.yaml +219 -0
- helm/benchmark/static_build/assets/audio-table-Dn5NMMeJ.png +0 -0
- helm/benchmark/static_build/assets/index-oIeiQW2g.css +1 -0
- helm/benchmark/static_build/assets/index-qOFpOyHb.js +10 -0
- helm/benchmark/static_build/assets/react-BteFIppM.js +85 -0
- helm/benchmark/static_build/assets/recharts-DxuQtTOs.js +97 -0
- helm/benchmark/static_build/assets/tremor-DR4fE7ko.js +10 -0
- helm/benchmark/static_build/index.html +5 -6
- helm/benchmark/window_services/image_generation/clip_window_service.py +1 -3
- helm/clients/ai21_client.py +2 -0
- helm/clients/aleph_alpha_client.py +2 -0
- helm/clients/anthropic_client.py +7 -1
- helm/clients/audio_language/diva_llama_client.py +2 -0
- helm/clients/audio_language/llama_omni/arguments.py +61 -0
- helm/clients/audio_language/llama_omni/constants.py +9 -0
- helm/clients/audio_language/llama_omni/conversation.py +213 -0
- helm/clients/audio_language/llama_omni/model/__init__.py +0 -0
- helm/clients/audio_language/llama_omni/model/builder.py +88 -0
- helm/clients/audio_language/llama_omni/model/language_model/omni_speech2s_llama.py +190 -0
- helm/clients/audio_language/llama_omni/model/language_model/omni_speech_llama.py +118 -0
- helm/clients/audio_language/llama_omni/model/omni_speech_arch.py +249 -0
- helm/clients/audio_language/llama_omni/model/speech_encoder/builder.py +9 -0
- helm/clients/audio_language/llama_omni/model/speech_encoder/speech_encoder.py +27 -0
- helm/clients/audio_language/llama_omni/model/speech_generator/builder.py +9 -0
- helm/clients/audio_language/llama_omni/model/speech_generator/generation.py +622 -0
- helm/clients/audio_language/llama_omni/model/speech_generator/speech_generator.py +104 -0
- helm/clients/audio_language/llama_omni/model/speech_projector/builder.py +9 -0
- helm/clients/audio_language/llama_omni/model/speech_projector/speech_projector.py +27 -0
- helm/clients/audio_language/llama_omni/preprocess.py +295 -0
- helm/clients/audio_language/llama_omni/utils.py +202 -0
- helm/clients/audio_language/llama_omni_client.py +2 -1
- helm/clients/audio_language/qwen2_5_omni_client.py +21 -8
- helm/clients/audio_language/qwen2_audiolm_client.py +2 -1
- helm/clients/audio_language/qwen_audiolm_client.py +2 -1
- helm/clients/audio_language/qwen_omni/configuration_qwen2_5_omni.py +519 -0
- helm/clients/audio_language/qwen_omni/modeling_qwen2_5_omni.py +4308 -0
- helm/clients/audio_language/qwen_omni/processing_qwen2_5_omni.py +270 -0
- helm/clients/audio_language/qwen_omni/qwen2_5_omni_utils/__init__.py +0 -0
- helm/clients/audio_language/qwen_omni/qwen2_5_omni_utils/v2_5/__init__.py +8 -0
- helm/clients/audio_language/qwen_omni/qwen2_5_omni_utils/v2_5/audio_process.py +56 -0
- helm/clients/audio_language/qwen_omni/qwen2_5_omni_utils/v2_5/vision_process.py +380 -0
- helm/clients/bedrock_client.py +63 -6
- helm/clients/cohere_client.py +3 -0
- helm/clients/dspy_client.py +135 -0
- helm/clients/google_client.py +2 -0
- helm/clients/http_model_client.py +2 -0
- helm/clients/huggingface_client.py +4 -3
- helm/clients/ibm_client.py +3 -1
- helm/clients/image_generation/adobe_vision_client.py +2 -0
- helm/clients/image_generation/aleph_alpha_image_generation_client.py +2 -0
- helm/clients/image_generation/cogview2/sr_pipeline/dsr_model.py +1 -1
- helm/clients/image_generation/cogview2_client.py +2 -1
- helm/clients/image_generation/dalle2_client.py +2 -0
- helm/clients/image_generation/dalle_mini_client.py +2 -1
- helm/clients/image_generation/deep_floyd_client.py +2 -0
- helm/clients/image_generation/huggingface_diffusers_client.py +2 -1
- helm/clients/image_generation/lexica_client.py +2 -0
- helm/clients/image_generation/mindalle/models/stage1/layers.py +2 -2
- helm/clients/image_generation/mindalle_client.py +2 -1
- helm/clients/image_generation/together_image_generation_client.py +2 -0
- helm/clients/megatron_client.py +2 -0
- helm/clients/mistral_client.py +2 -0
- helm/clients/moderation_api_client.py +2 -0
- helm/clients/openai_client.py +38 -21
- helm/clients/openai_responses_client.py +34 -8
- helm/clients/openrouter_client.py +31 -0
- helm/clients/palmyra_client.py +2 -1
- helm/clients/reka_client.py +2 -1
- helm/clients/stanfordhealthcare_azure_openai_client.py +2 -2
- helm/clients/stanfordhealthcare_http_model_client.py +2 -0
- helm/clients/test_huggingface_client.py +3 -3
- helm/clients/test_openrouter_client.py +69 -0
- helm/clients/together_client.py +52 -13
- helm/clients/vertexai_client.py +23 -11
- helm/clients/vision_language/huggingface_vision2seq_client.py +2 -1
- helm/clients/vision_language/huggingface_vlm_client.py +2 -0
- helm/clients/vision_language/idefics_client.py +2 -1
- helm/clients/vision_language/open_flamingo_client.py +2 -1
- helm/clients/vision_language/paligemma_client.py +2 -1
- helm/clients/vision_language/palmyra_vision_client.py +2 -0
- helm/clients/vision_language/qwen2_vlm_client.py +2 -1
- helm/clients/vision_language/qwen_vlm_client.py +2 -1
- helm/clients/vllm_client.py +43 -7
- helm/clients/vllm_granite_thinking_client.py +56 -0
- helm/clients/writer_client.py +5 -2
- helm/common/critique_request.py +0 -1
- helm/common/hierarchical_logger.py +103 -34
- helm/common/object_spec.py +23 -8
- helm/common/optional_dependencies.py +1 -1
- helm/common/test_general.py +4 -0
- helm/common/test_logging.py +94 -0
- helm/config/model_deployments.yaml +1001 -187
- helm/config/model_metadata.yaml +602 -18
- helm/config/tokenizer_configs.yaml +202 -5
- helm/proxy/cli.py +1 -1
- helm/proxy/example_queries.py +8 -8
- helm/proxy/retry.py +5 -0
- helm/proxy/server.py +2 -1
- helm/proxy/static/index.css +4 -0
- helm/proxy/static/index.js +7 -1
- helm/tokenizers/auto_tokenizer.py +2 -2
- helm/tokenizers/grok_tokenizer.py +2 -0
- helm/benchmark/metrics/aci_bench_metrics.py +0 -14
- helm/benchmark/metrics/chw_care_plan_metrics.py +0 -14
- helm/benchmark/metrics/dischargeme_metrics.py +0 -14
- helm/benchmark/metrics/med_dialog_metrics.py +0 -14
- helm/benchmark/metrics/medalign_metrics.py +0 -14
- helm/benchmark/metrics/medi_qa_metrics.py +0 -14
- helm/benchmark/metrics/medication_qa_metrics.py +0 -14
- helm/benchmark/metrics/mental_health_metrics.py +0 -14
- helm/benchmark/metrics/mimic_bhc_metrics.py +0 -14
- helm/benchmark/metrics/mimic_rrs_metrics.py +0 -14
- helm/benchmark/metrics/mtsamples_procedures_metrics.py +0 -14
- helm/benchmark/metrics/mtsamples_replicate_metrics.py +0 -14
- helm/benchmark/metrics/numeracy_metrics.py +0 -72
- helm/benchmark/metrics/starr_patient_instructions_metrics.py +0 -14
- helm/benchmark/metrics/test_numeracy_metrics.py +0 -95
- helm/benchmark/scenarios/audio_language/ultra_suite_asr_classification.py +0 -103
- helm/benchmark/scenarios/numeracy_scenario.py +0 -794
- helm/benchmark/static_build/assets/index-94295e78.js +0 -10
- helm/benchmark/static_build/assets/index-b9779128.css +0 -1
- helm/benchmark/static_build/assets/react-f82877fd.js +0 -85
- helm/benchmark/static_build/assets/recharts-4037aff0.js +0 -97
- helm/benchmark/static_build/assets/tremor-38a10867.js +0 -10
- {crfm_helm-0.5.6.dist-info → crfm_helm-0.5.10.dist-info}/WHEEL +0 -0
- {crfm_helm-0.5.6.dist-info → crfm_helm-0.5.10.dist-info}/entry_points.txt +0 -0
- {crfm_helm-0.5.6.dist-info → crfm_helm-0.5.10.dist-info}/licenses/LICENSE +0 -0
- {crfm_helm-0.5.6.dist-info → crfm_helm-0.5.10.dist-info}/top_level.txt +0 -0
- /helm/benchmark/static_build/assets/{air-overview-d2e6c49f.png → air-overview-DpBbyagA.png} +0 -0
- /helm/benchmark/static_build/assets/{crfm-logo-74391ab8.png → crfm-logo-Du4T1uWZ.png} +0 -0
- /helm/benchmark/static_build/assets/{heim-logo-3e5e3aa4.png → heim-logo-BJtQlEbV.png} +0 -0
- /helm/benchmark/static_build/assets/{helm-logo-simple-2ed5400b.png → helm-logo-simple-DzOhNN41.png} +0 -0
- /helm/benchmark/static_build/assets/{helm-safety-2907a7b6.png → helm-safety-COfndXuS.png} +0 -0
- /helm/benchmark/static_build/assets/{helmhero-28e90f4d.png → helmhero-D9TvmJsp.png} +0 -0
- /helm/benchmark/static_build/assets/{medhelm-overview-eac29843.png → medhelm-overview-CND0EIsy.png} +0 -0
- /helm/benchmark/static_build/assets/{medhelm-v1-overview-3ddfcd65.png → medhelm-v1-overview-Cu2tphBB.png} +0 -0
- /helm/benchmark/static_build/assets/{overview-74aea3d8.png → overview-BwypNWnk.png} +0 -0
- /helm/benchmark/static_build/assets/{process-flow-bd2eba96.png → process-flow-DWDJC733.png} +0 -0
- /helm/benchmark/static_build/assets/{vhelm-aspects-1437d673.png → vhelm-aspects-NiDQofvP.png} +0 -0
- /helm/benchmark/static_build/assets/{vhelm-framework-a1ca3f3f.png → vhelm-framework-NxJE4fdA.png} +0 -0
- /helm/benchmark/static_build/assets/{vhelm-model-8afb7616.png → vhelm-model-ypCL5Yvq.png} +0 -0
|
@@ -1,19 +1,15 @@
|
|
|
1
1
|
Metadata-Version: 2.4
|
|
2
2
|
Name: crfm-helm
|
|
3
|
-
Version: 0.5.
|
|
3
|
+
Version: 0.5.10
|
|
4
4
|
Summary: Benchmark for language models
|
|
5
|
-
|
|
6
|
-
Author: Stanford CRFM
|
|
7
|
-
Author-email: contact-crfm@stanford.edu
|
|
5
|
+
Author-email: Stanford CRFM <contact-crfm@stanford.edu>
|
|
8
6
|
License: Apache License 2.0
|
|
9
|
-
|
|
7
|
+
Project-URL: Homepage, https://github.com/stanford-crfm/helm
|
|
8
|
+
Keywords: language,models,benchmarking
|
|
10
9
|
Classifier: Programming Language :: Python :: 3
|
|
11
10
|
Classifier: Programming Language :: Python :: 3 :: Only
|
|
12
|
-
Classifier: Programming Language :: Python :: 3.9
|
|
13
|
-
Classifier: Programming Language :: Python :: 3.10
|
|
14
|
-
Classifier: Programming Language :: Python :: 3.11
|
|
15
11
|
Classifier: License :: OSI Approved :: Apache Software License
|
|
16
|
-
Requires-Python: >=3.
|
|
12
|
+
Requires-Python: >=3.10
|
|
17
13
|
Description-Content-Type: text/markdown
|
|
18
14
|
License-File: LICENSE
|
|
19
15
|
Requires-Dist: cattrs~=22.2
|
|
@@ -30,7 +26,7 @@ Requires-Dist: tqdm~=4.64
|
|
|
30
26
|
Requires-Dist: zstandard~=0.18.0
|
|
31
27
|
Requires-Dist: sqlitedict<3.0,>=2.1.0
|
|
32
28
|
Requires-Dist: bottle~=0.12.23
|
|
33
|
-
Requires-Dist: datasets~=
|
|
29
|
+
Requires-Dist: datasets~=3.1
|
|
34
30
|
Requires-Dist: pyarrow>=11.0.0
|
|
35
31
|
Requires-Dist: pyarrow-hotfix~=0.6
|
|
36
32
|
Requires-Dist: nltk!=3.9.0,~=3.7
|
|
@@ -38,24 +34,31 @@ Requires-Dist: rouge-score~=0.1.2
|
|
|
38
34
|
Requires-Dist: scipy>=1.10
|
|
39
35
|
Requires-Dist: uncertainty-calibration~=0.1.4
|
|
40
36
|
Requires-Dist: scikit-learn>=1.1
|
|
41
|
-
Requires-Dist: transformers
|
|
37
|
+
Requires-Dist: transformers<4.53.0,~=4.40
|
|
42
38
|
Requires-Dist: torch<3.0.0,>=1.13.1
|
|
43
39
|
Requires-Dist: torchvision<3.0.0,>=0.14.1
|
|
44
40
|
Provides-Extra: proxy-server
|
|
45
41
|
Requires-Dist: gunicorn>=20.1; extra == "proxy-server"
|
|
46
42
|
Provides-Extra: human-evaluation
|
|
47
|
-
Requires-Dist: scaleapi~=2.13
|
|
48
|
-
Requires-Dist: surge-api~=1.1
|
|
43
|
+
Requires-Dist: scaleapi~=2.13; extra == "human-evaluation"
|
|
44
|
+
Requires-Dist: surge-api~=1.1; extra == "human-evaluation"
|
|
45
|
+
Provides-Extra: dspy
|
|
46
|
+
Requires-Dist: dspy~=3.0.3; extra == "dspy"
|
|
47
|
+
Requires-Dist: fastapi~=0.118.0; extra == "dspy"
|
|
48
|
+
Requires-Dist: apscheduler~=3.11.0; extra == "dspy"
|
|
49
|
+
Requires-Dist: cryptography~=43.0.3; extra == "dspy"
|
|
50
|
+
Requires-Dist: python-multipart~=0.0.20; extra == "dspy"
|
|
51
|
+
Requires-Dist: email-validator~=2.3.0; extra == "dspy"
|
|
52
|
+
Requires-Dist: fastapi_sso~=0.18.0; extra == "dspy"
|
|
49
53
|
Provides-Extra: scenarios
|
|
50
54
|
Requires-Dist: gdown~=5.1; extra == "scenarios"
|
|
51
|
-
Requires-Dist:
|
|
52
|
-
Requires-Dist: xlrd~=2.0.1; extra == "scenarios"
|
|
55
|
+
Requires-Dist: xlrd~=2.0; extra == "scenarios"
|
|
53
56
|
Provides-Extra: metrics
|
|
54
57
|
Requires-Dist: google-api-python-client~=2.64; extra == "metrics"
|
|
55
58
|
Requires-Dist: numba~=0.56; extra == "metrics"
|
|
56
|
-
Requires-Dist: sacrebleu~=2.2
|
|
57
|
-
Requires-Dist: langdetect~=1.0
|
|
58
|
-
Requires-Dist: immutabledict~=4.2
|
|
59
|
+
Requires-Dist: sacrebleu~=2.2; extra == "metrics"
|
|
60
|
+
Requires-Dist: langdetect~=1.0; extra == "metrics"
|
|
61
|
+
Requires-Dist: immutabledict~=4.2; extra == "metrics"
|
|
59
62
|
Requires-Dist: gradio_client~=1.3; extra == "metrics"
|
|
60
63
|
Provides-Extra: ranking
|
|
61
64
|
Requires-Dist: pytrec_eval==0.5; extra == "ranking"
|
|
@@ -63,7 +66,7 @@ Provides-Extra: summarization
|
|
|
63
66
|
Requires-Dist: summ-eval~=0.892; extra == "summarization"
|
|
64
67
|
Requires-Dist: bert-score~=0.3; extra == "summarization"
|
|
65
68
|
Provides-Extra: plots
|
|
66
|
-
Requires-Dist: colorcet~=3.0
|
|
69
|
+
Requires-Dist: colorcet~=3.0; extra == "plots"
|
|
67
70
|
Requires-Dist: matplotlib>=3.6.0; extra == "plots"
|
|
68
71
|
Requires-Dist: seaborn>=0.11.0; extra == "plots"
|
|
69
72
|
Provides-Extra: decodingtrust
|
|
@@ -86,22 +89,22 @@ Requires-Dist: evaluate~=0.4.1; extra == "unitxt"
|
|
|
86
89
|
Provides-Extra: seahelm
|
|
87
90
|
Requires-Dist: pythainlp==5.0.0; extra == "seahelm"
|
|
88
91
|
Requires-Dist: pyonmttok==1.37.0; extra == "seahelm"
|
|
89
|
-
Requires-Dist: sacrebleu~=2.2
|
|
92
|
+
Requires-Dist: sacrebleu~=2.2; extra == "seahelm"
|
|
90
93
|
Requires-Dist: python-crfsuite~=0.9.11; extra == "seahelm"
|
|
91
94
|
Provides-Extra: accelerate
|
|
92
95
|
Requires-Dist: accelerate~=0.25; extra == "accelerate"
|
|
93
96
|
Provides-Extra: aleph-alpha
|
|
94
|
-
Requires-Dist: aleph-alpha-client~=2.14
|
|
97
|
+
Requires-Dist: aleph-alpha-client~=2.14; extra == "aleph-alpha"
|
|
95
98
|
Requires-Dist: tokenizers>=0.13.3; extra == "aleph-alpha"
|
|
96
99
|
Provides-Extra: allenai
|
|
97
100
|
Requires-Dist: ai2-olmo~=0.2; extra == "allenai"
|
|
98
101
|
Provides-Extra: amazon
|
|
99
|
-
Requires-Dist: boto3~=1.34
|
|
100
|
-
Requires-Dist: awscli~=1.33
|
|
101
|
-
Requires-Dist: botocore~=1.34
|
|
102
|
+
Requires-Dist: boto3~=1.34; extra == "amazon"
|
|
103
|
+
Requires-Dist: awscli~=1.33; extra == "amazon"
|
|
104
|
+
Requires-Dist: botocore~=1.34; extra == "amazon"
|
|
102
105
|
Provides-Extra: anthropic
|
|
103
106
|
Requires-Dist: anthropic~=0.48; extra == "anthropic"
|
|
104
|
-
Requires-Dist: websocket-client~=1.3
|
|
107
|
+
Requires-Dist: websocket-client~=1.3; extra == "anthropic"
|
|
105
108
|
Requires-Dist: httpx<0.28.0; extra == "anthropic"
|
|
106
109
|
Provides-Extra: cohere
|
|
107
110
|
Requires-Dist: cohere~=5.3; extra == "cohere"
|
|
@@ -136,7 +139,7 @@ Requires-Dist: crfm-helm[yandex]; extra == "models"
|
|
|
136
139
|
Requires-Dist: crfm-helm[writer]; extra == "models"
|
|
137
140
|
Requires-Dist: crfm-helm[ibm-enterprise-scenarios]; extra == "models"
|
|
138
141
|
Provides-Extra: reka
|
|
139
|
-
Requires-Dist: reka-api~=2.0
|
|
142
|
+
Requires-Dist: reka-api~=2.0; extra == "reka"
|
|
140
143
|
Provides-Extra: vlm
|
|
141
144
|
Requires-Dist: crfm-helm[openai]; extra == "vlm"
|
|
142
145
|
Requires-Dist: einops~=0.7.0; extra == "vlm"
|
|
@@ -150,63 +153,65 @@ Requires-Dist: crfm-helm[reka]; extra == "vlm"
 Requires-Dist: crfm-helm[images]; extra == "vlm"
 Requires-Dist: crfm-helm[image2struct]; extra == "vlm"
 Requires-Dist: pycocoevalcap~=1.2; extra == "vlm"
-Requires-Dist: transformers~=4.45
+Requires-Dist: transformers~=4.45; extra == "vlm"
 Requires-Dist: qwen-vl-utils~=0.0.8; extra == "vlm"
 Provides-Extra: ibm-enterprise-scenarios
 Requires-Dist: openpyxl~=3.1; extra == "ibm-enterprise-scenarios"
 Provides-Extra: ibm
-Requires-Dist: ibm-watsonx-ai~=1.2
+Requires-Dist: ibm-watsonx-ai~=1.2; extra == "ibm"
 Provides-Extra: image2struct
 Requires-Dist: crfm-helm[images]; extra == "image2struct"
 Requires-Dist: latex~=0.7.0; extra == "image2struct"
-Requires-Dist: pdf2image~=1.16
-Requires-Dist: selenium~=4.17
+Requires-Dist: pdf2image~=1.16; extra == "image2struct"
+Requires-Dist: selenium~=4.17; extra == "image2struct"
 Requires-Dist: html2text~=2024.2.26; extra == "image2struct"
-Requires-Dist: opencv-python
+Requires-Dist: opencv-python-headless<=4.11.0.86,>=4.7.0.68; extra == "image2struct"
 Requires-Dist: lpips~=0.1.4; extra == "image2struct"
-Requires-Dist: imagehash~=4.3
+Requires-Dist: imagehash~=4.3; extra == "image2struct"
 Provides-Extra: heim
 Requires-Dist: gdown~=5.1; extra == "heim"
-Requires-Dist: diffusers~=0.
+Requires-Dist: diffusers~=0.34.0; extra == "heim"
 Requires-Dist: icetk~=0.0.4; extra == "heim"
-Requires-Dist: jax~=0.
-Requires-Dist:
+Requires-Dist: jax~=0.6.2; python_version >= "3.10" and extra == "heim"
+Requires-Dist: jax~=0.4.30; python_version < "3.10" and extra == "heim"
+Requires-Dist: jaxlib~=0.6.2; python_version >= "3.10" and extra == "heim"
+Requires-Dist: jaxlib~=0.4.30; python_version < "3.10" and extra == "heim"
 Requires-Dist: crfm-helm[openai]; extra == "heim"
 Requires-Dist: einops~=0.7.0; extra == "heim"
-Requires-Dist: omegaconf~=2.3
-Requires-Dist: pytorch-lightning~=2.0
-Requires-Dist: flax~=0.
-Requires-Dist:
-Requires-Dist:
+Requires-Dist: omegaconf~=2.3; extra == "heim"
+Requires-Dist: pytorch-lightning~=2.0; extra == "heim"
+Requires-Dist: flax~=0.10.7; python_version >= "3.10" and extra == "heim"
+Requires-Dist: flax~=0.8.5; python_version < "3.10" and extra == "heim"
+Requires-Dist: ftfy~=6.1; extra == "heim"
+Requires-Dist: Unidecode~=1.3; extra == "heim"
 Requires-Dist: wandb~=0.16; extra == "heim"
-Requires-Dist: google-cloud-translate~=3.11
-Requires-Dist: autokeras~=1.0
-Requires-Dist: clip-anytorch~=2.5
+Requires-Dist: google-cloud-translate~=3.11; extra == "heim"
+Requires-Dist: autokeras~=1.0; extra == "heim"
+Requires-Dist: clip-anytorch~=2.5; extra == "heim"
 Requires-Dist: google-cloud-storage~=2.9; extra == "heim"
 Requires-Dist: lpips~=0.1.4; extra == "heim"
-Requires-Dist: multilingual-clip~=1.0
-Requires-Dist: NudeNet~=2.0
-Requires-Dist:
+Requires-Dist: multilingual-clip~=1.0; extra == "heim"
+Requires-Dist: NudeNet~=2.0; extra == "heim"
+Requires-Dist: numpy<2,>=1.26; extra == "heim"
+Requires-Dist: opencv-python<4.8.2.0,>=4.7.0.68; python_version >= "3.10" and extra == "heim"
+Requires-Dist: opencv-python-headless<=4.11.0.86,>=4.7.0.68; python_version < "3.10" and extra == "heim"
 Requires-Dist: pytorch-fid~=0.3.0; extra == "heim"
 Requires-Dist: tensorflow~=2.11; extra == "heim"
 Requires-Dist: timm~=0.6.12; extra == "heim"
 Requires-Dist: torch-fidelity~=0.3.0; extra == "heim"
 Requires-Dist: torchmetrics~=0.11.1; extra == "heim"
-Requires-Dist: scikit-image
+Requires-Dist: scikit-image==0.*,>=0.22; extra == "heim"
 Requires-Dist: crfm-helm[images]; extra == "heim"
 Provides-Extra: medhelm
-Requires-Dist:
+Requires-Dist: accelerate~=0.25; extra == "medhelm"
 Requires-Dist: crfm-helm[openai]; extra == "medhelm"
-Requires-Dist: crfm-helm[summarization]; extra == "medhelm"
 Requires-Dist: crfm-helm[yandex]; extra == "medhelm"
+Requires-Dist: crfm-helm[scenarios]; extra == "medhelm"
 Requires-Dist: bert_score~=0.3.13; extra == "medhelm"
-Requires-Dist:
-Requires-Dist: langchain-community~=0.3.8; extra == "medhelm"
-Requires-Dist: lxml~=5.3.0; extra == "medhelm"
+Requires-Dist: lxml~=5.3; extra == "medhelm"
 Requires-Dist: openpyxl~=3.1; extra == "medhelm"
-Requires-Dist: python-docx~=1.1
-Requires-Dist:
-Requires-Dist: torchvision~=0.17.2; extra == "medhelm"
+Requires-Dist: python-docx~=1.1; extra == "medhelm"
+Requires-Dist: transformers<4.50,~=4.45; extra == "medhelm"
 Provides-Extra: audiolm
 Requires-Dist: crfm-helm[openai]; extra == "audiolm"
 Requires-Dist: crfm-helm[google]; extra == "audiolm"
@@ -216,16 +221,21 @@ Requires-Dist: soundfile~=0.12; extra == "audiolm"
 Requires-Dist: librosa~=0.10; extra == "audiolm"
 Requires-Dist: einops~=0.7.0; extra == "audiolm"
 Requires-Dist: openai-whisper==20240930; extra == "audiolm"
-Requires-Dist: transformers~=4.48
+Requires-Dist: transformers~=4.48; extra == "audiolm"
 Requires-Dist: transformers_stream_generator~=0.0.4; extra == "audiolm"
-Requires-Dist: av~=14.3
+Requires-Dist: av~=14.3; extra == "audiolm"
 Requires-Dist: scipy~=1.10; extra == "audiolm"
 Requires-Dist: torchvision<3.0.0,>=0.14.1; extra == "audiolm"
-Requires-Dist: flash-attn~=2.7
+Requires-Dist: flash-attn~=2.7; extra == "audiolm"
 Requires-Dist: pycocoevalcap~=1.2; extra == "audiolm"
 Requires-Dist: jiwer~=3.0; extra == "audiolm"
 Requires-Dist: rapidfuzz~=3.10; extra == "audiolm"
 Requires-Dist: jieba~=0.42.1; extra == "audiolm"
+Provides-Extra: codeinsights
+Requires-Dist: clang~=20.1; extra == "codeinsights"
+Requires-Dist: Levenshtein~=0.27; extra == "codeinsights"
+Provides-Extra: lmkt
+Requires-Dist: sentence_transformers~=4.1; extra == "lmkt"
 Provides-Extra: all
 Requires-Dist: crfm-helm[proxy-server]; extra == "all"
 Requires-Dist: crfm-helm[human-evaluation]; extra == "all"
@@ -240,8 +250,11 @@ Requires-Dist: crfm-helm[models]; extra == "all"
 Requires-Dist: crfm-helm[mongo]; extra == "all"
 Requires-Dist: crfm-helm[heim]; extra == "all"
 Requires-Dist: crfm-helm[vlm]; extra == "all"
+Requires-Dist: crfm-helm[codeinsights]; extra == "all"
+Requires-Dist: crfm-helm[lmkt]; extra == "all"
 Provides-Extra: dev
 Requires-Dist: pytest~=7.2.0; extra == "dev"
+Requires-Dist: xdoctest~=1.2.0; extra == "dev"
 Requires-Dist: pre-commit~=2.20.0; extra == "dev"
 Requires-Dist: black==24.3.0; extra == "dev"
 Requires-Dist: mypy==1.16.0; extra == "dev"
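The recurring pattern in the metadata hunks above is appending `extra == "..."` environment markers to requirements that previously applied unconditionally, so that optional dependencies are only pulled in when their extra is requested. A minimal stdlib-only sketch of how such a marker gates a requirement (illustrative only; real installers evaluate the full PEP 508 marker grammar, e.g. via the `packaging` library, and this simplified parser only handles the plain `extra == "name"` form):

```python
import re

def requirement_applies(requirement: str, active_extras: set) -> bool:
    """Return True if a METADATA Requires-Dist line applies for the
    requested extras. Only the simple `; extra == "name"` marker form
    is modeled; anything else is treated as unconditional."""
    parts = requirement.split(";", 1)
    if len(parts) == 1:
        return True  # no marker: installed unconditionally
    marker = parts[1].strip()
    match = re.fullmatch(r'extra\s*==\s*"([^"]+)"', marker)
    if match:
        return match.group(1) in active_extras
    return True  # python_version etc. not modeled in this sketch

print(requirement_applies('reka-api~=2.0', set()))                      # True
print(requirement_applies('reka-api~=2.0; extra == "reka"', set()))     # False
print(requirement_applies('reka-api~=2.0; extra == "reka"', {"reka"}))  # True
```

This is why the change matters: before it, a plain `pip install crfm-helm` pulled in dependencies such as `reka-api` unconditionally; with the marker, they are only installed for the matching extra, e.g. `pip install "crfm-helm[reka]"`.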
@@ -334,6 +347,7 @@ The HELM framework was used in the following papers for evaluating models.
 - **The Mighty ToRR: A Benchmark for Table Reasoning and Robustness** - [paper](https://arxiv.org/abs/2502.19412), [leaderboard](https://crfm.stanford.edu/helm/torr/latest/)
 - **Reliable and Efficient Amortized Model-based Evaluation** - [paper](https://arxiv.org/abs/2503.13335), [documentation](https://crfm-helm.readthedocs.io/en/latest/reeval/)
 - **MedHELM** - paper in progress, [leaderboard](https://crfm.stanford.edu/helm/medhelm/latest/), [documentation](https://crfm-helm.readthedocs.io/en/latest/reeval/)
+- **Holistic Evaluation of Audio-Language Models** - [paper](https://arxiv.org/abs/2508.21376), [leaderboard](https://crfm.stanford.edu/helm/audio/latest/)
 
 The HELM framework can be used to reproduce the published model evaluation results from these papers. To get started, refer to the documentation links above for the corresponding paper, or the [main Reproducing Leaderboards documentation](https://crfm-helm.readthedocs.io/en/latest/reproducing_leaderboards/).
 
@@ -353,75 +367,3 @@ url={https://openreview.net/forum?id=iO4LZibEqW},
 note={Featured Certification, Expert Certification}
 }
 ```
-
-# Tutorial
-
-This tutorial explains how to use the HELM command-line tools to run benchmarks, aggregate statistics, and visualize results.
-
-We will run two runs using the `mmlu` scenario on the `openai/gpt2` model. The `mmlu` scenario implements the **Massive Multitask Language Understanding (MMLU)** benchmark from [this paper](https://arxiv.org/pdf/2009.03300.pdf) and consists of a Question Answering (QA) task over a dataset with questions from 57 subjects such as elementary mathematics, US history, computer science, and law. Note that GPT-2 performs poorly on MMLU, so this is just a proof of concept. The first run uses questions about anatomy; the second, questions about philosophy.
-
-## Using `helm-run`
-
-`helm-run` is a command-line tool for running benchmarks.
-
-To run this benchmark using the HELM command-line tools, we need to specify **run entries** that describe the desired runs. For this example, the run entries are `mmlu:subject=anatomy,model=openai/gpt2` (for anatomy) and `mmlu:subject=philosophy,model=openai/gpt2` (for philosophy).
-
-We will now use `helm-run` to execute the runs. Run this command:
-
-```sh
-helm-run --run-entries mmlu:subject=anatomy,model=openai/gpt2 mmlu:subject=philosophy,model=openai/gpt2 --suite my-suite --max-eval-instances 10
-```
-
-The arguments mean the following:
-
-- `--run-entries` specifies the run entries for the desired runs.
-- `--suite` specifies a subdirectory under the output directory in which all the output will be placed.
-- `--max-eval-instances` limits evaluation to at most *N* instances (i.e. items) from the benchmark, using a randomly shuffled order of instances.
-
-`helm-run` creates an environment directory and an output directory by default.
-
-- The environment directory is `prod_env/` by default and can be set using `--local-path`. Credentials for making API calls should be added to a `credentials.conf` file in this directory.
-- The output directory is `benchmark_output/` by default and can be set using `--output-path`.
-
-After running this command, navigate to the `benchmark_output/runs/my-suite/` directory. It should contain two sub-directories named `mmlu:subject=anatomy,model=openai_gpt2` and `mmlu:subject=philosophy,model=openai_gpt2`. Note that the names of these sub-directories are based on the run entries we used earlier, but with `/` replaced by `_`.
-
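The `/`-to-`_` mapping described above can be sketched in one line (illustrative only; HELM's actual path sanitization may handle additional characters):

```python
def run_entry_to_dir_name(run_entry: str) -> str:
    """Map a run entry to its output sub-directory name by replacing
    '/' with '_', as the tutorial describes. Illustrative sketch only."""
    return run_entry.replace("/", "_")

print(run_entry_to_dir_name("mmlu:subject=anatomy,model=openai/gpt2"))
# mmlu:subject=anatomy,model=openai_gpt2
```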
-Each output sub-directory will contain several JSON files that were generated during the corresponding run:
-
-- `run_spec.json` contains the `RunSpec`, which specifies the scenario, adapter, and metrics for the run.
-- `scenario.json` contains a serialized `Scenario`, which describes the scenario for the run and specifies the instances (i.e. inputs) used.
-- `scenario_state.json` contains a serialized `ScenarioState`, which contains every request to and response from the model.
-- `per_instance_stats.json` contains a serialized list of `PerInstanceStats`, which contains the statistics produced by the metrics for each instance (i.e. input).
-- `stats.json` contains a serialized list of the statistics produced by the metrics, aggregated across all instances (i.e. inputs).
-
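The split between `per_instance_stats.json` and `stats.json` is simply per-item versus aggregated values. A toy sketch of that aggregation step, using made-up field names rather than HELM's actual serialized schema:

```python
import json
from collections import defaultdict
from statistics import mean

# Hypothetical per-instance records; the real PerInstanceStats fields differ.
per_instance = [
    {"instance_id": "id1", "stats": [{"name": "exact_match", "value": 1.0}]},
    {"instance_id": "id2", "stats": [{"name": "exact_match", "value": 0.0}]},
]

def aggregate(records):
    """Collapse per-instance stats into one mean/count entry per metric."""
    by_metric = defaultdict(list)
    for record in records:
        for stat in record["stats"]:
            by_metric[stat["name"]].append(stat["value"])
    return [{"name": n, "mean": mean(v), "count": len(v)} for n, v in by_metric.items()]

print(json.dumps(aggregate(per_instance)))
# [{"name": "exact_match", "mean": 0.5, "count": 2}]
```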
-## Using `helm-summarize`
-
-`helm-summarize` reads the output files of `helm-run` and computes aggregate statistics across runs. Run the following:
-
-```sh
-helm-summarize --suite my-suite
-```
-
-This reads the pre-existing files in `benchmark_output/runs/my-suite/` that were written by `helm-run` previously, and writes the following new files back to `benchmark_output/runs/my-suite/`:
-
-- `summary.json` contains a serialized `ExecutiveSummary` with a date and suite name.
-- `run_specs.json` contains the run entries for all the runs.
-- `runs.json` contains a serialized list of `Run`, which contains the run path, run spec, adapter spec, and statistics for each run.
-- `groups.json` contains a serialized list of `Table`, each containing information about groups in a group category.
-- `groups_metadata.json` contains a list of all the groups along with a human-readable description and a taxonomy.
-
-Additionally, for each group and group-relevant metric, it will output a pair of files: `benchmark_output/runs/my-suite/groups/latex/<group_name>_<metric_name>.tex` and `benchmark_output/runs/my-suite/groups/json/<group_name>_<metric_name>.json`. These files contain the statistics for that metric from each run within the group.
-
-## Using `helm-server`
-
-Finally, the `helm-server` command launches a web server to visualize the output files of `helm-run` and `helm-summarize`. Run:
-
-```sh
-helm-server --suite my-suite
-```
-
-Open a browser and go to http://localhost:8000/ to view the visualization. You should see a view similar to the [live website for the paper](https://crfm.stanford.edu/helm/classic/latest/), but with the data from your benchmark runs. The website has the following sections, accessible from the top menu bar:
-
-- **Leaderboards** contains the leaderboards with aggregate metrics.
-- **Models** contains a list of models and their descriptions.
-- **Scenarios** contains a list of scenarios and their descriptions.
-- **Predictions** contains a searchable list of runs.