Releases: EleutherAI/lm-evaluation-harness
v0.4.9.1
lm-eval v0.4.9.1 Release Notes
This v0.4.9.1 release is a quick patch to bring in some new tasks and fixes. Looking ahead, we're gearing up for some bigger updates to tackle common community pain points. We'll do our best to keep things from breaking, but we anticipate a few changes might not be fully backward-compatible. We're excited to share more soon!
Enhanced Reasoning Model Handling
- Better support for reasoning models with a `think_end_token` argument to strip intermediate reasoning from outputs for the `hf`, `vllm`, and `sglang` model backends. A related `enable_thinking` argument was also added for specific models that support it (e.g., Qwen); see the example below.
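As a rough illustration of how these might be used (a sketch only: the model, token string, and task below are placeholders, and the exact argument spellings should be checked against the model backend docs):

```bash
# Hypothetical invocation; think_end_token / enable_thinking are passed via --model_args.
lm_eval --model vllm \
  --model_args "pretrained=Qwen/Qwen3-8B,think_end_token=</think>,enable_thinking=True" \
  --tasks gsm8k \
  --batch_size auto
```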
New Benchmarks & Tasks
- EgyMMLU and EgyHellaSwag by @houdaipha in #3063
- MultiBLiMP benchmark by @jmichaelov in #3155
- LIBRA benchmark for long-context evaluation by @karimovaSvetlana in #2943
- Multilingual Truthfulqa in Spanish, Basque and Galician by @BlancaCalvo in #3062
Fixes & Improvements
Tasks & Benchmarks:
- Aligned HumanEval results for Llama-3.1-70B-Instruct with official scores by @userljz, @baberabb, @idantene. (#3201, #3092, #3102)
- Fixed incorrect dataset paths for GLUE and medical benchmarks by @Avelina9X and @idantene. (#3159, #3151)
- Removed redundant "Let's think step by step" text from `bbh_cot_fewshot` prompts by @philipdoldo. (#3140)
- Increased `max_gen_toks` to 2048 for HRM8K math benchmarks by @shing100. (#3124)
Backend & Stability:
- Reduce CLI loading time from 2.2s to 0.05s by @stakodiak. (#3099)
- Fixed a process hang caused by `mp.Pool` in `bootstrap_stderr` and introduced a `DISABLE_MULTIPROC` envar by @ankitgola005 and @neel04. (#3135, #3106)
- add image hashing and `LMEVAL_HASHMM` envar by @artemorloff in #2973
- TaskManager: `include-path` precedence handling to prioritize custom dir over default by @parkhs21 in #3068
Housekeeping:
- Pinned `datasets < 4.0.0` temporarily to maintain compatibility with `trust_remote_code` by @baberabb. (#3172)
- Removed models from Neural Magic and other unneeded files by @baberabb. (#3112, #3113, #3108)
What's Changed
- llama3 task: update README.md by @annafontanaa in #3074
- Fix Anthropic API compatibility issues in chat completions by @NourFahmy in #3054
- Ensure backwards compatibility in `fewshot_context` by using kwargs by @kiersten-stokes in #3079
- [vllm] remove system message if `TemplateError` for chat_template by @baberabb in #3076
- feat / fix: Properly make use of `subfolder` from HF models by @younesbelkada in #3072
- [HF] fix quantization config by @baberabb in #3039
- FixBug: Align the Humaneval with official results for Llama-3.1-70B-Instruct by @userljz in #3092
- Truthfulqa multi harness by @BlancaCalvo in #3062
- Fix: Reduce CLI loading time from 2.2s to 0.05s by @stakodiak in #3099
- Humaneval - fix regression by @baberabb in #3102
- Bugfix/hf tokenizer gguf override by @ankush13r in #3098
- [FIX] Initial code to disable multi-proc for stderr by @neel04 in #3106
- fix deps; update hooks by @baberabb in #3107
- delete unneeded files by @baberabb in #3108
- Fixed #3005: Processes both formats of model_args: string and dictionary by @DebjyotiRay in #3097
- add image hashing and `LMEVAL_HASHMM` envar by @artemorloff in #2973
- removal of Neural Magic models by @baberabb in #3112
- Neuralmagic by @baberabb in #3113
- check pil dep when hashing images by @baberabb in #3114
- warning for "chat" pretrained; disable buggy evalita configs by @baberabb in #3127
- fix: remove warning by @baberabb in #3128
- Adding EgyMMLU and EgyHellaSwag by @houdaipha in #3063
- Added mixed_precision_dtype argument to HFLM to enable autocasting by @Avelina9X in #3138
- Fix for hang due to mp.Pool in bootstrap_stderr by @ankitgola005 in #3135
- Fix errors when using vllm with LoRA by @Jacky-MYQ in #3132
- truncate thinking tags in generations by @baberabb in #3145
- `bbh_cot_fewshot`: Removed repeated "Let's think step by step." text from bbh cot prompts by @philipdoldo in #3140
- Fix medical benchmarks import by @idantene in #3151
- fix request hanging when request api by @mmmans in #3090
- Custom request headers | trust_remote_code param fix by @RawthiL in #3069
- Bugfix: update path for GLUE by @Avelina9X in #3159
- Add the MultiBLiMP benchmark by @jmichaelov in #3155
- multiblimp - readme by @baberabb in #3162
- [tests] Added missing fixture in test_unitxt_tasks.py by @Avelina9X in #3163
- Fix: extended to max_gen_toks 2048 for HRM8K math benchmarks by @shing100 in #3124
- feat: Add LIBRA benchmark for long-context evaluation by @karimovaSvetlana in #2943
- Added `chat_template_args` to vllm by @Avelina9X in #3164
- Pin datasets < 4.0.0 by @baberabb in #3172
- Remove "device" from vllm_causallms.py by @mgoin in #3176
- remove trust-remote-code in configs; fix escape sequences by @baberabb in #3180
- Fix vllm test issue that call pop() from None by @weireweire in #3182
- [hotfix] vllm: pop `device` from kwargs by @baberabb in #3181
- Update vLLM compatibility by @DarkLight1337 in #3024
- Fix `mmlu_continuation` subgroup names to fit Readme and other variants by @lamalunderscore in #3137
- Fix humaneval_instruct by @idantene in #3201
- Update README.md for mlqa by @newme616 in #3117
- improve include-path precedence handling by @parkhs21 in #3068
- Bump version to 0.4.9.1 by @baberabb in #3208
New Contributors
- @NourFahmy made their first contribution in #3054
- @userljz made their first contribution in #3092
- @BlancaCalvo made their first contribution in #3062
- @stakodiak made their first contribution in #3099
- @ankush13r made their first contribution in #3098
- @neel04 made their first contribution in https://...
v0.4.9
lm-eval v0.4.9 Release Notes
Key Improvements
- Enhanced Backend Support:
  - SGLang Generate API by @baberabb in #2997
  - vLLM enhancements: Added support for `enable_thinking` argument (#2947) and data parallel for V1 (#3011) by @anmarques and @baberabb
  - Chat template improvements: Extended vLLM chat template support (#2902) and fixed HF chat template resolution (#2992) by @anmarques and @fxmarty-amd
- Multimodal Capabilities:
  - Audio modality support for Qwen2 Audio models by @artemorloff in #2689
  - Image processing improvements: Added resize images support (#2958) and enabled multimodal API usage (#2981) by @artemorloff and @baberabb
  - ChartQA multimodal task implementation by @baberabb in #2544
- Performance & Reliability:
  - Quantization support added via `quantization_config` by @jerryzh168 in #2842
  - Memory optimization: Use `yaml.CLoader` for faster YAML loading by @giuliolovisotto in #2777
  - Bug fixes: Resolved MMLU generative metric aggregation (#2761) and context length handling issues (#2972)
New Benchmarks & Tasks
Code Evaluation
- HumanEval Instruct - Instruction-following code generation benchmark by @baberabb in #2650
- MBPP Instruct - Instruction-based Python programming evaluation by @baberabb in #2995
Language Modeling
- C4 Dataset Support - Added perplexity evaluation on C4 web crawl dataset by @Zephyr271828 in #2889
Long Context Benchmarks
Mathematical & Reasoning
- GSM8K Platinum - Enhanced mathematical reasoning benchmark by @Qubitium in #2771
- MastermindEval - Logic reasoning evaluation by @whoisjones in #2788
- JSONSchemaBench - Structured output evaluation by @Saibo-creator in #2865
Llama Reference Implementations
- Llama Reference Implementations - Added task variants for Multilingual MMLU, MMLU CoT, GSM8K, and ARC Challenge based on Llama evaluation standards by @anmarques in #2797, #2826, #2829
Multilingual Expansion
Asian Languages:
- Korean MMLU (KMMLU) multiple-choice task by @Aprilistic in #2849
- MMLU-ProX extended evaluation by @heli-qi in #2811
- KBL 2025 Dataset - Updated Korean benchmark evaluation by @abzb1 in #3000
European Languages:
African Languages:
- AfroBench - Multi-African language evaluation by @JessicaOjo in #2825
- Darija tasks - Moroccan dialect benchmarks (DarijaMMLU, DarijaHellaSwag, Darija_Bench) by @hadi-abdine in #2521
Arabic Languages:
- Arab Culture task for cultural understanding by @bodasadallah in #3006
Domain-Specific Benchmarks
- CareQA - Healthcare evaluation benchmark by @PabloAgustin in #2714
- ACPBench & ACPBench Hard - Automated code generation evaluation by @harshakokel in #2807, #2980
- INCLUDE tasks - Inclusivity evaluation suite by @agromanou in #2769
- Cocoteros VA dataset by @sgs97ua in #2787
Social & Bias Evaluation
- Various social bias tasks for fairness assessment by @oskarvanderwal in #1185
Technical Enhancements
- Fine-grained evaluation: Added `--examples` argument for efficient multi-prompt evaluation by @felipemaiapolo and @mirianfsilva in #2520
- Improved tokenization: Better handling of `add_bos_token` initialization by @baberabb in #2781
- Memory management: Enhanced softmax computations with `softmax_dtype` argument for `HFLM` by @Avelina9X in #2921
Critical Bug Fixes
- Collating Queries Fix - Resolved error with different continuation lengths that was causing evaluation failures by @ameyagodbole in #2987
- Mutual Information Metric - Fixed acc_mutual_info calculation bug that affected metric accuracy by @baberabb in #3035
Breaking Changes & Important Updates
- MMLU dataset migration: Switched to `cais/mmlu` dataset source by @baberabb in #2918
- Default parameter updates: Increased `max_gen_toks` to 2048 and `max_length` to 8192 for MMLU Pro tests by @dazipe in #2824
- Temperature defaults: Set default temperature to 0.0 for vLLM and SGLang backends by @baberabb in #2819
We extend our heartfelt thanks to all contributors who made this release possible, including 43 first-time contributors who brought fresh perspectives and valuable improvements to the evaluation harness.
What's Changed
- fix mmlu (generative) metric aggregation by @wangcho2k in #2761
- Bugfix by @baberabb in #2762
- fix verbosity typo by @baberabb in #2765
- docs: Fix typos in README.md by @ruivieira in #2778
- initialize tokenizer with `add_bos_token` by @baberabb in #2781
- improvement: Use yaml.CLoader to load yaml files when available. by @giuliolovisotto in #2777
- Consistency Fix: Filter new leaderboard_math_hard dataset to "Level 5" only by @perlitz in #2773
- Fix for mc2 calculation by @kdymkiewicz in #2768
- New healthcare benchmark: careqa by @PabloAgustin in #2714
- Capture gen_kwargs from CLI in squad_completion by @ksurya in #2727
- humaneval instruct by @baberabb in #2650
- Update evaluator.py by @zhuzeyuan in #2786
- change piqa dataset path (uses parquet rather than dataset script) by @baberabb in #2790
- use verify_certificate flag in batch requests by @daniel-salib in #2785
- add audio modality (qwen2 audio only) by @artemorloff in #2689
- Add various social bias tasks by @oskarvanderwal in #1185
- update pre-commit by @baberabb in #2799
- Update Legacy OpenLLM leaderboard to use "train" split for ARC fewshot by @Avelina9X in #2802
- Add INCLUDE tasks by @agromanou in #2769
- Add support for token-based auth for watsonx models by @kiersten-stokes in #2796
- add version by @baberabb in #2808
- Add cocoteros_va dataset by @sgs97ua in #2787
- Add MastermindEval by @whoisjones in #2788
- Add loncxt tasks by @baberabb in #2629
- [hf-multimodal] pass kwargs to self.processor by @baberabb in #2667
- [MM] Chartqa by @baberabb in #2544
- Allow writing config to wandb by @ksurya in #2736
- [change] group -> tag on afrimgsm, afrimmlu, afrixnli dataset by @jd730 in #2813
- Clean up README and pyproject.toml by @kiersten-stokes in #2814
- Llama3 mmlu correction by @anmarques in #2797
- Add Markdown linter by @kiersten-stokes in #2818
- Configure the pad tokens for Qwen when using vLLM by @zhangruoxu in #2810
- fix typo in humaneval by @baberabb in #2820
- default temp=0.0 for vllm and sglang by @baberabb in #2819
- Fixes to mmlu_pro_llama by @anmarques in #2816
- Add MMLU-ProX task by @heli-qi in #2811
- Quick fix for mmlu_pro_llama by @anmarques in #2827
- Fix: tj-actions/changed-files is compromised by @Tautorn in #2828
- Multilingual MMLU for Llama instruct models by @anmarques in #2826
- bbh - changed dataset to parquet version by @baberabb in #2845
- Fix typo in longbench metrics by @djwackey in #2854
- Add kmmlu multiple-choice(accuracy) task #2848 by @Aprilistic in #2849
- Adding ACPBench task by @harshakokel in #2807
- add Darija (Moroccan dialects) tasks including darijammlu, darijahellaswag and darija_bench by @hadi-abdine in #2521
- Increase default max_gen_toks to 2048 and max_...
v0.4.8
lm-eval v0.4.8 Release Notes
Key Improvements
- New Backend Support:
  - Added SGLang as a new evaluation backend! by @Monstertail
  - Enabled model steering with vector support via `sparsify` or `sae_lens` by @luciaquirke and @AMindToThink
- Breaking Change: Python 3.8 support has been dropped as it reached end of life. Please upgrade to Python 3.9 or newer.
- Added support for `gen_prefix` in task configs, allowing you to append text after the `<|assistant|>` token (or at the end of non-chat prompts) -- particularly effective for evaluating instruct models; see the sketch below.
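For illustration only, a task YAML using the new key might look like the following sketch; everything except `gen_prefix` is a placeholder rather than a real task in the harness:

```yaml
# Hypothetical task config illustrating gen_prefix (all other values are placeholders).
task: my_instruct_math_task
dataset_path: my_org/my_dataset
output_type: generate_until
doc_to_text: "Question: {{question}}\nAnswer:"
doc_to_target: "{{answer}}"
# Appended after the assistant turn begins (chat) or at the end of the prompt (non-chat).
gen_prefix: "The final answer is"
```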
New Benchmarks & Tasks
Code Evaluation
- HumanEval by @hjlee1371 in #1992
- MBPP by @hjlee1371 in #2247
- HumanEval+ and MBPP+ by @bzantium in #2734
Multilingual Expansion
- Global Coverage:
- Global MMLU (Lite version by @shivalika-singh in #2567, Full version by @bzantium in #2636)
- MLQA multilingual question answering by @KahnSvaer in #2622
- Asian Languages:
- European Languages:
- Middle Eastern Languages:
- Arabic MMLU by @bodasadallah in #2541
- AraDICE task by @firojalam in #2507
Ethics & Reasoning
- Moral Stories by @upunaprosk in #2653
- Histoires Morales by @upunaprosk in #2662
Others
- MMLU Pro Plus by @asgsaeid in #2366
- GroundCocoa by @HarshKohli in #2724
We extend our thanks to all contributors who made this release possible and to our users for your continued support and feedback.
Thanks, the LM Eval Harness team (@baberabb and @lintangsutawika)
What's Changed
- drop python 3.8 support by @baberabb in #2575
- Add Global MMLU Lite by @shivalika-singh in #2567
- add warning for truncation by @baberabb in #2585
- Wandb step handling bugfix and feature by @sjmielke in #2580
- AraDICE task config file by @firojalam in #2507
- fix extra_match low if batch_size > 1 by @sywangyi in #2595
- fix model tests by @baberabb in #2604
- update scrolls by @baberabb in #2602
- some minor logging nits by @baberabb in #2609
- Fix gguf loading via Transformers by @CL-ModelCloud in #2596
- Fix Zeno visualizer on tasks like GSM8k by @pasky in #2599
- Fix the format of mgsm zh and ja. by @timturing in #2587
- Add HumanEval by @hjlee1371 in #1992
- Add MBPP by @hjlee1371 in #2247
- Add MLQA by @KahnSvaer in #2622
- assistant prefill by @baberabb in #2615
- fix gen_prefix by @baberabb in #2630
- update pre-commit by @baberabb in #2632
- add hrm8k benchmark for both Korean and English by @bzantium in #2627
- New arabicmmlu by @bodasadallah in #2541
- Add `global_mmlu` full version by @bzantium in #2636
- Update KorMedMCQA: ver 2.0 by @GyoukChu in #2540
- fix tmlu tmlu_taiwan_specific_tasks tag by @nike00811 in #2420
- fixed mmlu generative response extraction by @RawthiL in #2503
- revise mbpp prompt by @bzantium in #2645
- aggregate by group (total and categories) by @bzantium in #2643
- Fix max_tokens handling in vllm_vlms.py by @jkaniecki in #2637
- separate category for `global_mmlu` by @bzantium in #2652
- Add Moral Stories by @upunaprosk in #2653
- add TransformerLens example by @nickypro in #2651
- fix multiple input chat template by @baberabb in #2576
- Add Aggregation for Kobest Benchmark by @tryumanshow in #2446
- update pre-commit by @baberabb in #2660
- remove `group` from bigbench task configs by @baberabb in #2663
- Add Histoires Morales task by @upunaprosk in #2662
- MMLU Pro Plus by @asgsaeid in #2366
- fix early return for multiple dict in task process_results by @baberabb in #2673
- Turkish mmlu Config Update by @ArdaYueksel in #2678
- Fix typos by @omahs in #2679
- remove cuda device assertion by @baberabb in #2680
- Adding the Evalita-LLM benchmark by @m-resta in #2681
- Delete lm_eval/tasks/evalita_llm/single_prompt.zip by @baberabb in #2687
- Update unitxt task.py to bring in line with recent repo changes by @kiersten-stokes in #2684
- change ensure_ascii to False for JsonChatStr by @artemorloff in #2691
- Set defaults for BLiMP scores by @jmichaelov in #2692
- Update remaining references to `assistant_prefill` in docs to `gen_prefix` by @kiersten-stokes in #2683
- Update README.md by @upunaprosk in #2694
- fix `construct_requests` kwargs in python tasks by @baberabb in #2700
- arithmetic: set target delimiter to empty string by @baberabb in #2701
- fix vllm by @baberabb in #2708
- add math_verify to some tasks by @baberabb in #2686
- Logging by @lintangsutawika in #2203
- Replace missing `lighteval/MATH-Hard` dataset with `DigitalLearningGmbH/MATH-lighteval` by @f4str in #2719
- remove unused import by @baberabb in #2728
- README updates: Added IberoBench citation info in corresponding READMEs by @naiarapm in #2729
- add o3-mini support by @HelloJocelynLu in #2697
- add Basque translation of ARC and PAWS to BasqueBench by @naiarapm in #2732
- Add cocoteros_es task in spanish_bench by @sgs97ua in #2721
- Fix the import source for eval_logger by @kailashbuki in #2735
- add humaneval+ and mbpp+ by @bzantium in #2734
- Support SGLang as Potential Backend for Evaluation by @Monstertail in #2703
- fix log condition on main by @baberabb in #2737
- fix vllm data parallel by @baberabb in #2746
- [Readme change for SGLang] fix error in readme and add OOM solutions for sglang by @Monstertail in #2738
- Groundcocoa by @HarshKohli in #2724
- fix doc: generate_until only outputs the generated text! by @baberabb in #2755
- Enable steering HF models by @luciaquirke in #2749
- Add test for a simple Unitxt task by @kiersten-stokes in #2742
- add debug log by @baberabb in #2757
- increment version to 0.4.8 by @baberabb in #2760
New Contributors...
v0.4.7
lm-eval v0.4.7 Release Notes
This release includes several bug fixes, minor improvements to model handling, and task additions.
⚠️ Python 3.8 End of Support Notice
Python 3.8 support will be dropped in future releases as it has reached its end of life. Users are encouraged to upgrade to Python 3.9 or newer.
Backwards Incompatibilities
Chat Template Delimiter Handling (in v0.4.6)
An important modification has been made to how delimiters are handled when applying chat templates in request construction, particularly affecting multiple-choice tasks. This change ensures better compatibility with chat models by respecting their native formatting conventions.
📝 For detailed documentation, please refer to docs/chat-template-readme.md
New Benchmarks & Tasks
- Basque Integration: Added Basque translation of PIQA (piqa_eu) to BasqueBench by @naiarapm in #2531
- SCORE Tasks: Added new subtask for non-greedy robustness evaluation by @rimashahbazyan in #2558
As well as several slight fixes or changes to existing tasks (as noted via the incrementing of versions).
Thanks, the LM Eval Harness team (@baberabb and @lintangsutawika)
What's Changed
- Score tasks by @rimashahbazyan in #2452
- Filters bugfix; add `metrics` and `filter` to logged sample by @baberabb in #2517
- skip casting if predict_only by @baberabb in #2524
- make utility function to handle `until` by @baberabb in #2518
- Update Unitxt task to use locally installed unitxt and not download Unitxt code from Huggingface by @yoavkatz in #2514
- add Basque translation of PIQA (piqa_eu) to BasqueBench by @naiarapm in #2531
- avoid timeout errors with high concurrency in api_model by @dtrawins in #2307
- Update README.md by @baberabb in #2534
- better doc_to_test testing by @baberabb in #2535
- Support pipeline parallel with OpenVINO models by @sstrehlk in #2349
- Super little tiny fix doc by @fzyzcjy in #2546
- [API] left truncate for generate_until by @baberabb in #2554
- Update Lightning import by @maanug-nv in #2549
- add optimum-intel ipex model by @yao-matrix in #2566
- add warning to readme by @baberabb in #2568
- Adding new subtask to SCORE tasks: non greedy robustness by @rimashahbazyan in #2558
- batch `loglikelihood_rolling` across requests by @baberabb in #2559
- fix `DeprecationWarning: invalid escape sequence '\s'` for whitespace filter by @baberabb in #2560
- increment version to 4.6.7 by @baberabb in #2574
New Contributors
- @rimashahbazyan made their first contribution in #2452
- @naiarapm made their first contribution in #2531
- @dtrawins made their first contribution in #2307
- @sstrehlk made their first contribution in #2349
- @fzyzcjy made their first contribution in #2546
- @maanug-nv made their first contribution in #2549
- @yao-matrix made their first contribution in #2566
Full Changelog: v0.4.6...v0.4.7
v0.4.6
lm-eval v0.4.6 Release Notes
This release brings important changes to chat template handling, expands our task library with new multilingual and multimodal benchmarks, and includes various bug fixes.
Backwards Incompatibilities
Chat Template Delimiter Handling
An important modification has been made to how delimiters are handled when applying chat templates in request construction, particularly affecting multiple-choice tasks. This change ensures better compatibility with chat models by respecting their native formatting conventions.
📝 For detailed documentation, please refer to docs/chat-template-readme.md
New Benchmarks & Tasks
Multilingual Expansion
- Spanish Bench: Enhanced benchmark with additional tasks by @zxcvuser in #2390
- Japanese Leaderboard: New comprehensive Japanese language benchmark by @sitfoxfly in #2439
New Task Collections
- Multimodal Unitext: Added support for multimodal tasks available in unitext by @elronbandel in #2364
- Metabench: New benchmark contributed by @kozzy97 in #2357
As well as several slight fixes or changes to existing tasks (as noted via the incrementing of versions).
Thanks, the LM Eval Harness team (@baberabb and @lintangsutawika)
What's Changed
- Add Unitxt Multimodality Support by @elronbandel in #2364
- Add new tasks to spanish_bench and fix duplicates by @zxcvuser in #2390
- fix typo bug for minerva_math by @renjie-ranger in #2404
- Fix: Turkish MMLU Regex Pattern by @ArdaYueksel in #2393
- fix storycloze datanames by @t1101675 in #2409
- Update NoticIA prompt by @ikergarcia1996 in #2421
- [Fix] Replace generic exception classes with a more specific ones by @LSinev in #1989
- Support for IBM watsonx_llm by @Medokins in #2397
- Fix package extras for watsonx support by @kiersten-stokes in #2426
- Fix lora requests when dp with vllm by @ckgresla in #2433
- Add xquad task by @zxcvuser in #2435
- Add verify_certificate argument to local-completion by @sjmonson in #2440
- Add GPTQModel support for evaluating GPTQ models by @Qubitium in #2217
- Add missing task links by @Sypherd in #2449
- Update CODEOWNERS by @haileyschoelkopf in #2453
- Add real process_docs example by @Sypherd in #2456
- Modify label errors in catcola and paws-x by @zxcvuser in #2434
- Add Japanese Leaderboard by @sitfoxfly in #2439
- Typos: Fix 'loglikelihood' misspellings in api_models.py by @RobGeada in #2459
- use global `multi_choice_filter` for mmlu_flan by @baberabb in #2461
- typo by @baberabb in #2465
- pass device_map other than auto for parallelize by @baberabb in #2457
- OpenAI ChatCompletions: switch `max_tokens` by @baberabb in #2443
- Ifeval: Download `punkt_tab` on rank 0 by @baberabb in #2267
- Fix chat template; fix leaderboard math by @baberabb in #2475
- change warning to debug by @baberabb in #2481
- Updated wandb logger to use `new_printer()` instead of `get_printer(...)` by @alex-titterton in #2484
- IBM watsonx_llm fixes & refactor by @Medokins in #2464
- Fix revision parameter to vllm get_tokenizer by @OyvindTafjord in #2492
- update pre-commit hooks and git actions by @baberabb in #2497
- kbl-v0.1.1 by @whwang299 in #2493
- Add mamba hf to `mamba_ssm` by @baberabb in #2496
- remove duplicate `arc_ca` tag by @baberabb in #2499
- Add metabench task to LM Evaluation Harness by @kozzy97 in #2357
- Nits by @baberabb in #2500
- [API models] parse tokenizer_backend=None properly by @baberabb in #2509
New Contributors
- @renjie-ranger made their first contribution in #2404
- @t1101675 made their first contribution in #2409
- @Medokins made their first contribution in #2397
- @kiersten-stokes made their first contribution in #2426
- @ckgresla made their first contribution in #2433
- @sjmonson made their first contribution in #2440
- @Qubitium made their first contribution in #2217
- @Sypherd made their first contribution in #2449
- @sitfoxfly made their first contribution in #2439
- @RobGeada made their first contribution in #2459
- @alex-titterton made their first contribution in #2484
- @OyvindTafjord made their first contribution in #2492
- @whwang299 made their first contribution in #2493
- @kozzy97 made their first contribution in #2357
Full Changelog: v0.4.5...v0.4.6
v0.4.5
lm-eval v0.4.5 Release Notes
New Additions
Prototype Support for Vision Language Models (VLMs)
We're excited to introduce prototype support for Vision Language Models (VLMs) in this release, using model types hf-multimodal and vllm-vlm. This allows for evaluation of models that can process text and image inputs and produce text outputs. Currently we have added support for the MMMU (mmmu_val) task and we welcome contributions and feedback from the community!
New VLM-Specific Arguments
VLM models can be configured with several new arguments within --model_args to support their specific requirements:
- `max_images` (int): Set the maximum number of images for each prompt.
- `interleave` (bool): Determines the positioning of image inputs. When `True` (default), images are interleaved with the text. When `False`, all images are placed at the front of the text. This is model dependent.
hf-multimodal specific args:
- `image_token_id` (int) or `image_string` (str): Specifies a custom token or string for image placeholders. For example, Llava models expect an `"<image>"` string to indicate the location of images in the input, while Qwen2-VL models expect an `"<|image_pad|>"` sentinel string instead. This will be inferred based on model configuration files whenever possible, but we recommend confirming that an override is needed when testing a new model family.
- `convert_img_format` (bool): Whether to convert the images to RGB format.
Example usage:
- `lm_eval --model hf-multimodal --model_args pretrained=llava-hf/llava-1.5-7b-hf,attn_implementation=flash_attention_2,max_images=1,interleave=True,image_string=<image> --tasks mmmu_val --apply_chat_template`
- `lm_eval --model vllm-vlm --model_args pretrained=llava-hf/llava-1.5-7b-hf,max_images=1,interleave=True --tasks mmmu_val --apply_chat_template`
Important considerations
- Chat Template: Most VLMs require the `--apply_chat_template` flag to ensure proper input formatting according to the model's expected chat template.
- Some VLM models are limited to processing a single image per prompt. For these models, always set `max_images=1`. Additionally, certain models expect image placeholders to be non-interleaved with the text, requiring `interleave=False`.
- Performance and Compatibility: When working with VLMs, be mindful of potential memory constraints and processing times, especially when handling multiple images or complex tasks.
Tested VLM Models
We have currently most notably tested the implementation with the following models:
- llava-hf/llava-1.5-7b-hf
- llava-hf/llava-v1.6-mistral-7b-hf
- Qwen/Qwen2-VL-2B-Instruct
- HuggingFaceM4/idefics2 (requires the latest `transformers` from source)
New Tasks
Several new tasks have been contributed to the library for this version!
New tasks as of v0.4.5 include:
- Open Arabic LLM Leaderboard tasks, contributed by @shahrzads @Malikeh97 in #2232
- MMMU (validation set), by @haileyschoelkopf @baberabb @lintangsutawika in #2243
- TurkishMMLU by @ArdaYueksel in #2283
- PortugueseBench, SpanishBench, GalicianBench, BasqueBench, and CatalanBench aggregate multilingual tasks in #2153 #2154 #2155 #2156 #2157 by @zxcvuser and others
As well as several slight fixes or changes to existing tasks (as noted via the incrementing of versions).
Backwards Incompatibilities
Finalizing group versus tag split
We've now fully deprecated the use of `group` keys directly within a task's configuration file. The appropriate key to use for these cases is now solely `tag`. See the v0.4.4 patch notes for more info on migration, if you have a set of task YAMLs maintained outside the Eval Harness repository.
Handling of Causal vs. Seq2seq backend in HFLM
In HFLM, logic specific to handling inputs for Seq2seq (encoder-decoder models like T5) versus Causal (decoder-only autoregressive models, and the vast majority of current LMs) models previously hinged on a check for self.AUTO_MODEL_CLASS == transformers.AutoModelForCausalLM. Some users may want to use causal model behavior, but set self.AUTO_MODEL_CLASS to a different factory class, such as transformers.AutoModelForVision2Seq.
As a result, those users who subclass HFLM but do not call HFLM.__init__() may now also need to set the self.backend attribute to either "causal" or "seq2seq" during initialization themselves.
While this should not affect a large majority of users, for those who subclass HFLM in potentially advanced ways, see #2353 for the full set of changes.
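As a rough sketch of what that can look like (illustrative only, not code from the repository; the class name and constructor here are made up):

```python
from lm_eval.models.huggingface import HFLM


class MyVision2SeqLM(HFLM):
    """Hypothetical subclass that manages its own initialization."""

    def __init__(self, pretrained: str, **kwargs) -> None:
        # Deliberately not calling HFLM.__init__() here (custom setup assumed),
        # so the backend must be set explicitly for causal-style input handling:
        self.backend = "causal"  # or "seq2seq" for encoder-decoder models
        # ... custom model / tokenizer loading would go here ...
```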
Future Plans
We intend to further expand our multimodal support to a wider set of vision-language tasks, as well as a broader set of model types, and are actively seeking user feedback!
Thanks, the LM Eval Harness team (@baberabb @haileyschoelkopf @lintangsutawika)
What's Changed
- Add Open Arabic LLM Leaderboard Benchmarks (Full and Light Version) by @Malikeh97 in #2232
- Multimodal prototyping by @lintangsutawika in #2243
- Update README.md by @SYusupov in #2297
- remove comma by @baberabb in #2315
- Update neuron backend by @dacorvo in #2314
- Fixed dummy model by @Am1n3e in #2339
- Add a note for missing dependencies by @eldarkurtic in #2336
- squad v2: load metric with `evaluate` by @baberabb in #2351
- fix writeout script by @baberabb in #2350
- Treat tags in python tasks the same as yaml tasks by @giuliolovisotto in #2288
- change group to tags in task `eus_exams` task configs by @baberabb in #2320
- change glianorex to test split by @baberabb in #2332
- mmlu-pro: add newlines to task descriptions (not leaderboard) by @baberabb in #2334
- Added TurkishMMLU to LM Evaluation Harness by @ArdaYueksel in #2283
- add mmlu readme by @baberabb in #2282
- openai: better error messages; fix greedy matching by @baberabb in #2327
- fix some bugs of mmlu by @eyuansu62 in #2299
- Add new benchmark: Portuguese bench by @zxcvuser in #2156
- Fix missing key in custom task loading. by @giuliolovisotto in #2304
- Add new benchmark: Spanish bench by @zxcvuser in #2157
- Add new benchmark: Galician bench by @zxcvuser in #2155
- Add new benchmark: Basque bench by @zxcvuser in #2153
- Add new benchmark: Catalan bench by @zxcvuser in #2154
- fix tests by @baberabb in #2380
- Hotfix! by @baberabb in #2383
- Solution for CSAT-QA tasks evaluation by @KyujinHan in #2385
- LingOly - Fixing scoring bugs for smaller models by @am-bean in #2376
- Fix float limit override by @cjluo-omniml in #2325
- [API] tokenizer: add trust-remote-code by @baberabb in #2372
- HF: switch conditional checks to `self.backend` from `AUTO_MODEL_CLASS` by @baberabb in #2353
- max_images are passed on to vllms `limit_mm_per_prompt` by @baberabb in #2387
- Fix Llava-1.5-hf ; Update to version 0.4.5 by @haileyschoelkopf in #2388
- Bump version to v0.4.5 by @haileyschoelkopf in #2389
New Contributors
- @Malikeh97 made their first contribution in #2232
- @SYusupov made their first contribution in #2297
- @dacorvo made their first contribution in #2314
- @eldarkurtic made their first contribution in #2336
- @giuliolovisotto made their first contribution in #2288
- @ArdaYueksel made their first contribution in #2283
- @zxcvuser made their first contribution in #2156
- @KyujinHan made their first contribution in #2385
- @cjluo-omniml made their first contribution in #2325
Full Changelog: https://github.com/Eleu...
v0.4.4
lm-eval v0.4.4 Release Notes
New Additions
- This release includes the Open LLM Leaderboard 2 official task implementations! These can be run by using `--tasks leaderboard`. Thank you to the HF team (@clefourrier, @NathanHB, @KonradSzafer, @lozovskaya) for contributing these -- you can read more about their Open LLM Leaderboard 2 release here.
- API support is overhauled! Now: support for concurrent requests, chat templates, tokenization, batching and improved customization. This makes API support both more generalizable to new providers and should dramatically speed up API model inference.
  - The url can be specified by passing the `base_url` to `--model_args`, for example, `base_url=http://localhost:8000/v1/completions`; concurrent requests are controlled with the `num_concurrent` argument; tokenization is controlled with `tokenized_requests`.
  - Other arguments (such as top_p, top_k, etc.) can be passed to the API using `--gen_kwargs` as usual.
  - Note: Instruct-tuned models, not just base models, can be used with `local-completions` using `--apply_chat_template` (either with or without `tokenized_requests`).
    - They can also be used with `local-chat-completions` (e.g. with an OpenAI Chat API endpoint), but only the former supports loglikelihood tasks (e.g. multiple-choice). This is because ChatCompletion-style APIs generally do not provide access to logits on prompt/input tokens, preventing easy measurement of multi-token continuations' log probabilities.
lm_eval --model local-completions --model_args model=meta-llama/Meta-Llama-3.1-8B-Instruct,num_concurrent=10,tokenized_requests=True,tokenizer_backend=huggingface,max_length=4096 --apply_chat_template --batch_size 1 --tasks mmlu
- example with chat API:
lm_eval --model local-chat-completions --model_args model=meta-llama/Meta-Llama-3.1-8B-Instruct,num_concurrent=10 --apply_chat_template --tasks gsm8k
- We recommend evaluating Llama-3.1-405B models via serving them with vllm then running under
local-completions!
- The url can be specified by passing the
-
- We've reworked the Task Grouping system to make it clearer when and when not to report an aggregated average score across multiple subtasks. See the Backwards Incompatibilities section below for more information on changes and migration instructions.
- A combination of data-parallel and model-parallel (using HF's `device_map` functionality for "naive" pipeline parallel) inference using `--model hf` is now supported, thank you to @NathanHB and team!
Other new additions include a number of miscellaneous bugfixes and much more. Thank you to all contributors who helped out on this release!
New Tasks
A number of new tasks have been contributed to the library.
As a further discoverability improvement, lm_eval --tasks list now shows all tasks, tags, and groups in a prettier format, along with (if applicable) where to find the associated config file for a task or group! Thank you to @anthony-dipofi for working on this.
New tasks as of v0.4.4 include:
- Open LLM Leaderboard 2 tasks--see above!
- Inverse Scaling tasks, contributed by @h-albert-lee in #1589
- Unitxt tasks reworked by @elronbandel in #1933
- MMLU-SR, contributed by @SkySuperCat in #2032
- IrokoBench, contributed by @JessicaOjo @IsraelAbebe in #2042
- MedConceptQA, contributed by @Ofir408 in #2010
- MMLU Pro, contributed by @ysjprojects in #1961
- GSM-Plus, contributed by @ysjprojects in #2103
- Lingoly, contributed by @am-bean in #2198
- GSM8k and Asdiv settings matching the Llama 3.1 evaluation settings, contributed by @Cameron7195 in #2215 #2236
- TMLU, contributed by @adamlin120 in #2093
- Mela, contributed by @Geralt-Targaryen in #1970
Backwards Incompatibilities
tags versus groups, and how to migrate
Previously, we supported the ability to group a set of tasks together, generally for two purposes: 1) to have an easy-to-call shortcut for a set of tasks one might want to frequently run simultaneously, and 2) to allow for "parent" tasks like mmlu to aggregate and report a unified score across a set of component "subtasks".
There were two ways to add a task to a given group name: 1) to provide (a list of) values to the group field in a given subtask's config file:
# this is a *task* yaml file.
group: group_name1
task: my_task1
# rest of task config goes here...

or 2) to define a "group config file" and specify a group along with its constituent subtasks:
# this is a group's yaml file
group: group_name1
task:
- subtask_name1
- subtask_name2
# ...

These would both have the same effect of reporting an averaged metric for `group_name1` when calling `lm_eval --tasks group_name1`. However, in use-case 1) (simply registering a shorthand for a list of tasks one is interested in), reporting an aggregate score can be undesirable or ill-defined.
We've now separated out these two use-cases ("shorthand" groupings and hierarchical subtask collections) into a tag and group property separately!
To register a shorthand (now called a tag), simply change the `group` field name within your task's config to be `tag` (`group_alias` keys will no longer be supported in task configs):
# this is a *task* yaml file.
tag: tag_name1
task: my_task1
# rest of task config goes here...

Group config files may remain as is if aggregation is not desired. To opt-in to reporting aggregated scores across a group's subtasks, add the following to your group config file:
# this is a group's yaml file
group: group_name1
task:
- subtask_name1
- subtask_name2
# ...
### New! Needed to turn on aggregation ###
aggregate_metric_list:
- metric: acc # placeholder. Note that all subtasks in this group must report an `acc` metric key
- weight_by_size: True # whether one wishes to report *micro* or *macro*-averaged scores across subtasks. Defaults to `True`.
Please see our documentation here for more information. We apologize for any headaches this migration may create--however, we believe separating out these two functionalities will make it less likely for users to encounter confusion or errors related to mistaken undesired aggregation.
Future Plans
We're planning to make more planning documents public and standardize on (likely) 1 new PyPI release per month! Stay tuned.
Thanks, the LM Eval Harness team (@haileyschoelkopf @lintangsutawika @baberabb)
What's Changed
- fix wandb logger module import in example by @ToluClassics in #2041
- Fix strip whitespace filter by @NathanHB in #2048
- Gemma-2 also needs default `add_bos_token=True` by @haileyschoelkopf in #2049
- Update `trust_remote_code` for Hellaswag by @haileyschoelkopf in #2029
- Adds Open LLM Leaderboard Tasks by @NathanHB in #2047
- #1442 inverse scaling tasks implementation by @h-albert-lee in #1589
- Fix TypeError in samplers.py by converting int to str by @uni2237 in #2074
- Group agg rework by @lintangsutawika in #1741
- Fix printout tests (N/A expected for stderrs) by @haileyschoelkopf in #2080
- Easier unitxt tasks loading and removal of unitxt library dependency by @elronbandel in #1933
- Allow gating EvaluationTracker HF Hub results; customizability by @NathanHB in #2051
- Minor doc fix: leaderboard README.md missing mmlu-pro group and task by @pankajarm in #2075
- Revert missing utf-8 encoding for logged sample files (#2027) by @haileyschoelkopf in #2082
- Update utils.py by @lintangsutawika in #2085
- batch_size may be str if 'auto' is specified by @meg-huggingface in #2084
- Prettify lm_eval --tasks list by @anthony-dipofi in #1929
- Suppress noisy RougeScorer logs in `truthfulqa_gen` by @haileyschoelkopf in #2090
- Update default.yaml by @waneon in #2092
- Add new dataset MMLU-SR tasks by @SkySuperCat in #2032
- Irokobench: Benchmark Dataset for African languages by @JessicaOjo in #2042
- docs: remove trailing sentence from contribution doc by @nathan-weinberg in #2098
- Added MedConceptsQA Benchmark by @Ofir408 in #2010
- Also force BOS for `"recurrent_gemma"` and other Gemma model types by @haileyschoelkopf in #2105
- formatting by @lintangsutawika in #2104
- docs: align local test command to match CI by @nathan-weinberg in https://gith...
v0.4.3
lm-eval v0.4.3 Release Notes
We're releasing a new version of LM Eval Harness for PyPI users at long last. We intend to release new PyPI versions more frequently in the future.
New Additions
The big new feature is the often-requested Chat Templating, contributed by @KonradSzafer @clefourrier @NathanHB and also worked on by a number of other awesome contributors!
You can now run using a chat template with `--apply_chat_template` and a system prompt of your choosing using `--system_instruction "my sysprompt here"`. The `--fewshot_as_multiturn` flag can control whether each few-shot example in context is a new conversational turn or not.
This feature is currently only supported for model types hf and vllm but we intend to gather feedback on improvements and also extend this to other relevant models such as APIs.
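For example, a run combining these flags might look like the following (the model and task here are placeholders; adjust to your setup):

```bash
# Hypothetical invocation using the new chat-templating flags.
lm_eval --model hf \
  --model_args pretrained=meta-llama/Meta-Llama-3-8B-Instruct \
  --tasks gsm8k \
  --num_fewshot 5 \
  --apply_chat_template \
  --fewshot_as_multiturn \
  --system_instruction "You are a careful math tutor." \
  --batch_size 8
```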
There's a lot more to check out, including:
-
Logging results to the HF Hub if desired using `--hf_hub_log_args`, by @KonradSzafer and team!
-
NeMo model support by @sergiopperez !
-
Anthropic Chat API support by @tryuman !
-
DeepSparse and SparseML model types by @mgoin !
-
Handling of delta-weights in HF models, by @KonradSzafer !
-
LoRA support for VLLM, by @bcicc !
-
Fixes to PEFT modules which add new tokens to the embedding layers, by @mapmeld !
-
Fixes to handling of BOS tokens in multiple-choice loglikelihood settings, by @djstrong !
-
The use of custom `Sampler` subclasses in tasks, by @LSinev !
The ability to specify "hardcoded" few-shot examples more cleanly, by @clefourrier !
-
Support for Ascend NPUs (
--device npu) by @statelesshz, @zhabuye, @jiaqiw09 and others! -
Logging of
higher_is_betterin results tables for clearer understanding of eval metrics by @zafstojano ! -
extra info logged about models, including info about tokenizers, chat templating, and more, by @artemorloff @djstrong and others!
-
Miscellaneous bug fixes! And many more great contributions we weren't able to list here.
New Tasks
We had a number of new tasks contributed. A listing of subfolders and a brief description of the tasks contained in them can now be found at lm_eval/tasks/README.md. Hopefully this will be a useful step to help users to locate the definitions of relevant tasks more easily, by first visiting this page and then locating the appropriate README.md within a given lm_eval/tasks subfolder, for further info on each task contained within a given folder. Thank you to @anthonydipofi @Harryalways317 @nairbv @sepiatone and others for working on this and giving feedback!
Without further ado, the tasks:
- ACLUE, a benchmark for Ancient Chinese understanding, by @haonan-li
- BasqueGlue and EusExams, two Basque-language tasks by @juletx
- TMMLU+, an evaluation for Traditional Chinese, contributed by @ZoneTwelve
- XNLIeu, a Basque version of XNLI, by @juletx
- Pile-10K, a perplexity eval taken from a subset of the Pile's validation set, contributed by @mukobi
- FDA, SWDE, and Squad-Completion zero-shot tasks by @simran-arora and team
- Added back the `hendrycks_math` task, the MATH task using the prompt and answer parsing from the original Hendrycks et al. MATH paper rather than Minerva's prompt and parsing
- COPAL-ID, a natively-Indonesian commonsense benchmark, contributed by @Erland366
- tinyBenchmarks variants of the Open LLM Leaderboard 1 tasks, by @LucWeber and team!
- Glianorex, a benchmark for testing performance on fictional medical questions, by @maximegmd
- New FLD (formal logic) task variants by @MorishT
- Improved translations of Lambada Multilingual tasks, added by @zafstojano
- NoticIA, a Spanish summarization dataset by @ikergarcia1996
- The Paloma perplexity benchmark, added by @zafstojano
- We've removed the AMMLU dataset due to concerns about auto-translation quality.
- Added the localized, not translated, ArabicMMLU dataset, contributed by @Yazeed7 !
- BertaQA, a Basque cultural knowledge benchmark, by @juletx
- New machine-translated ARC-C datasets by @jonabur !
- CommonsenseQA, in a prompt format following Llama, by @murphybrendan
- ...
Backwards Incompatibilities
The save format for logged results has now changed.
output files will now be written to `{output_path}/{sanitized_model_name}/results_YYYY-MM-DDTHH-MM-SS.xxxxx.json` if `--output_path` is set, and `{output_path}/{sanitized_model_name}/samples_{task_name}_YYYY-MM-DDTHH-MM-SS.xxxxx.jsonl` for each task's samples if `--log_samples` is set.
e.g. outputs/gpt2/results_2024-06-28T00-00-00.00001.json and outputs/gpt2/samples_lambada_openai_2024-06-28T00-00-00.00001.jsonl.
See #1926 for utilities which may help to work with these new filenames.
Future Plans
In general, we'll be doing our best to keep up with the strong interest and large number of contributions we've seen coming in!
-
The official Open LLM Leaderboard 2 tasks will be landing soon in the Eval Harness main branch and subsequently in `v0.4.4` on PyPI!
-
The fact that groups of tasks by default attempt to report an aggregated score across constituent subtasks has been a sharp edge. We are finishing up some internal reworking to distinguish between groups of tasks that do report aggregate scores (think `mmlu`) versus tags which simply are a convenient shortcut to call a bunch of tasks one might want to run at once (think the `pythia` grouping, which merely represents a collection of tasks one might want to gather results on all at once but where averaging doesn't make sense).
We'd also like to improve the API model support in the Eval Harness from its current state.
-
More to come!
Thank you to everyone who's contributed to or used the library!
Thanks, @haileyschoelkopf @lintangsutawika
What's Changed
- use BOS token in loglikelihood by @djstrong in #1588
- Revert "Patch for Seq2Seq Model predictions" by @haileyschoelkopf in #1601
- fix gen_kwargs arg reading by @artemorloff in #1607
- fix until arg processing by @artemorloff in #1608
- Fixes to Loglikelihood prefix token / VLLM by @haileyschoelkopf in #1611
- Add ACLUE task by @haonan-li in #1614
- OpenAI Completions -- fix passing of unexpected 'until' arg by @haileyschoelkopf in #1612
- add logging of model args by @baberabb in #1619
- Add vLLM FAQs to README (#1625) by @haileyschoelkopf in #1633
- peft Version Assertion by @LameloBally in #1635
- Seq2seq fix by @lintangsutawika in #1604
- Integration of NeMo models into LM Evaluation Harness library by @sergiopperez in #1598
- Fix conditional import for Nemo LM class by @haileyschoelkopf in #1641
- Fix SuperGlue's ReCoRD task following regression in v0.4 refactoring by @orsharir in #1647
- Add Latxa paper evaluation tasks for Basque by @juletx in #1654
- Fix CLI --batch_size arg for openai-completions/local-completions by @mgoin in #1656
- Patch QQP prompt (#1648 ) by @haileyschoelkopf in #1661
- TMMLU+ implementation by @ZoneTwelve in #1394
- Anthropic Chat API by @tryumanshow in #1594
- correction bug #1664 by @nicho2 in #1670
- Signpost potential bugs / unsupported ops in MPS backend by @haileyschoelkopf in #1680
- Add delta weights model loading by @KonradSzafer in #1712
- Add `neuralmagic` models for `sparseml` and `deepsparse` by @mgoin in #1674
- Improvements to run NVIDIA NeMo models on LM Evaluation Harness by @sergiopperez in #1699
- Adding retries and rate limit to toxicity tasks by @sator-labs in #1620
- reference `--tasks list` in README by @nairbv in #1726
- Add XNLIeu: a dataset for cross-lingual NLI in Basque by @juletx in #1694
- Fix Parameter Propagation for Tasks that have `include` by @lintangsutawika in #1749
- Support individual scrolls datasets by @giorgossideris in #1740
- Add filter registry decorator by @lozhn in #1750
- remove duplicated `num_fewshot: 0` by @chujiezheng in #1769
- Pile 10k new task by @mukobi in #1758
- Fix m_arc choices by @jordane95 in #1760
- upload new tasks by @simran-arora in https://github.com/EleutherAI/lm-eva...
v0.4.2
lm-eval v0.4.2 Release Notes
We are releasing a new minor version of lm-eval for PyPI users! We've been very happy to see continued usage of the lm-evaluation-harness, including as a standard testbench to propel new architecture design (https://arxiv.org/abs/2402.18668), to ease new benchmark creation (https://arxiv.org/abs/2402.11548, https://arxiv.org/abs/2402.00786, https://arxiv.org/abs/2403.01469), enabling controlled experimentation on LLM evaluation (https://arxiv.org/abs/2402.01781), and more!
New Additions
- Request Caching by @inf3rnus - speedups on startup via caching the construction of documents/requests’ contexts
- Weights and Biases logging by @ayulockin - evals can now be logged to both WandB and Zeno!
- New Tasks
- KMMLU, a localized - not (auto) translated! - dataset for testing Korean knowledge by @h-albert-lee @guijinSON
- GPQA by @uanu2002
- French Bench by @ManuelFay
- EQ-Bench by @pbevan1 and @sqrkl
- HAERAE-Bench, readded by @h-albert-lee
- Updates to answer parsing on many generative tasks (GSM8k, MGSM, BBH zeroshot) by @thinknbtfly!
- Okapi (translated) Open LLM Leaderboard tasks by @uanu2002 and @giux78
- Arabic MMLU and aEXAMS by @khalil-Hennara
- And more!
- Re-introduction of `TemplateLM` base class for lower-code new LM class implementations by @anjor
- Run the library with metrics/scoring stage skipped via `--predict_only` by @baberabb
- Many more miscellaneous improvements by a lot of great contributors!
Backwards Incompatibilities
There were a few breaking changes to lm-eval's general API or logic we'd like to highlight:
TaskManager API
previously, users had to call lm_eval.tasks.initialize_tasks() to register the library's default tasks, or lm_eval.tasks.include_path() to include a custom directory of task YAML configs.
Old usage:
import lm_eval
lm_eval.tasks.initialize_tasks()
# or:
lm_eval.tasks.include_path("/path/to/my/custom/tasks")
lm_eval.simple_evaluate(model=lm, tasks=["arc_easy"])
New intended usage:
import lm_eval
# optional--only need to instantiate separately if you want to pass custom path!
task_manager = lm_eval.tasks.TaskManager() # pass include_path="/path/to/my/custom/tasks" if desired
lm_eval.simple_evaluate(model=lm, tasks=["arc_easy"], task_manager=task_manager)
get_task_dict() now also optionally takes a TaskManager object, when wanting to load custom tasks.
This should allow for much faster library startup times due to lazily loading requested tasks or groups.
Updated Stderr Aggregation
Previous versions of the library reported erroneously large stderr scores for groups of tasks such as MMLU.
We've since updated the formula to correctly aggregate Standard Error scores for groups of tasks reporting accuracies aggregated via their mean across the dataset -- see #1390 #1427 for more information.
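As a sketch of the general statistical idea only (not necessarily the exact expression implemented; see the linked PRs for that), if subtask $i$ has $n_i$ documents and reported standard error $s_i$, the standard error of the size-weighted mean of independent subtask accuracies is

$$\mathrm{SE}_{\text{group}} = \sqrt{\sum_i w_i^2\, s_i^2}, \qquad w_i = \frac{n_i}{\sum_j n_j}.$$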
As always, please feel free to give us feedback or request new features! We're grateful for the community's support.
What's Changed
- Add support for RWKV models with World tokenizer by @PicoCreator in #1374
- add bypass metric by @baberabb in #1156
- Expand docs, update CITATION.bib by @haileyschoelkopf in #1227
- Hf: minor edge cases by @baberabb in #1380
- Enable override of printed `n-shot` in table by @haileyschoelkopf in #1379
- Faster Task and Group Loading, Allow Recursive Groups by @lintangsutawika in #1321
- Fix for #1383 by @pminervini in #1384
- fix on --task list by @lintangsutawika in #1387
- Support for Inf2 optimum class [WIP] by @michaelfeil in #1364
- Update README.md by @mycoalchen in #1398
- Fix confusing `write_out.py` instructions in README by @haileyschoelkopf in #1371
- Use Pooled rather than Combined Variance for calculating stderr of task groupings by @haileyschoelkopf in #1390
- adding hf_transfer by @michaelfeil in #1400
- `batch_size` with `auto` defaults to 1 if `No executable batch size found` is raised by @pminervini in #1405
- Fix printing bug in #1390 by @haileyschoelkopf in #1414
- Fixes #1416 by @pminervini in #1418
- Fix watchdog timeout by @JeevanBhoot in #1404
- Evaluate by @baberabb in #1385
- Add multilingual ARC task by @uanu2002 in #1419
- Add multilingual TruthfulQA task by @uanu2002 in #1420
- [m_mmul] added multilingual evaluation from alexandrainst/m_mmlu by @giux78 in #1358
- Added seeds to `evaluator.simple_evaluate` signature by @Am1n3e in #1412
- Fix: task weighting by subtask size ; update Pooled Stderr formula slightly by @haileyschoelkopf in #1427
- Refactor utilities into a separate model utils file. by @baberabb in #1429
- Nit fix: Updated OpenBookQA Readme by @adavidho in #1430
- improve hf_transfer activation by @michaelfeil in #1438
- Correct typo in task name in ARC documentation by @larekrow in #1443
- update bbh, gsm8k, mmlu parsing logic and prompts (Orca2 bbh_cot_zeroshot 0% -> 42%) by @thnkinbtfly in #1356
- Add a new task HaeRae-Bench by @h-albert-lee in #1445
- Group reqs by context by @baberabb in #1425
- Add a new task GPQA (the part without CoT) by @uanu2002 in #1434
- Added KMMLU evaluation method and changed ReadMe by @h-albert-lee in #1447
- Add TemplateLM boilerplate LM class by @anjor in #1279
- Log which subtasks were called with which groups by @haileyschoelkopf in #1456
- PR fixing the issue #1391 (wrong contexts in the mgsm task) by @leocnj in #1440
- feat: Add Weights and Biases support by @ayulockin in #1339
- Fixed generation args issue affection OpenAI completion model by @Am1n3e in #1458
- update parsing logic of mgsm following gsm8k (mgsm en 0 -> 50%) by @thnkinbtfly in #1462
- Adding documentation for Weights and Biases CLI interface by @veekaybee in #1466
- Add environment and transformers version logging in results dump by @LSinev in #1464
- Apply code autoformatting with Ruff to tasks/*.py an *init.py by @LSinev in #1469
- Setting trust_remote_code to `True` for HuggingFace datasets compatibility by @veekaybee in #1467
- add arabic mmlu by @khalil-Hennara in #1402
- Add Gemma support (Add flag to control BOS token usage) by @haileyschoelkopf in #1465
- Revert "Setting trust_remote_code to
Truefor HuggingFace datasets compatibility" by @haileyschoelkopf in #1474 - Create a means for caching task registration and request building. Ad… by @inf3rnus in #1372
- Cont metrics by @lintangsutawika in #1475
- Refactor `evaluater.evaluate` by @baberabb in #1441
- add multilingual mmlu eval by @jordane95 in #1484
- Update TruthfulQA val split name by @haileyschoelkopf in #1488
- Fix AttributeError in huggingface.py When 'model_type' is Missing by @richwardle in #1489
- Fix duplicated kwargs in some model init by @lchu-ibm in #1495
- Add multilingual truthfulqa targets by @jordane95 in #1499
- Always include EOS token as stop sequence by @haileyschoelkopf in...
v0.4.1
Release Notes
This PR release contains all changes so far since the release of v0.4.0 , and is partially a test of our release automation, provided by @anjor .
At a high level, some of the changes include:
- Data-parallel inference using vLLM (contributed by @baberabb )
- A major fix to Huggingface model generation--previously, in v0.4.0, due to a bug with stop sequence handling, generations were sometimes cut off too early.
- Miscellaneous documentation updates
- A number of new tasks, and bugfixes to old tasks!
- The support for OpenAI-like API models using `local-completions` or `local-chat-completions` (Thanks to @veekaybee @mgoin @anjor and others on this)!
- Integration with tools for visualization of results, such as with Zeno, and WandB coming soon!
More frequent (minor) version releases may be done in the future, to make it easier for PyPI users!
We're very pleased by the uptick in interest in LM Evaluation Harness recently, and we hope to continue to improve the library as time goes on. We're grateful to everyone who's contributed, and are excited by how many new contributors this version brings! If you have feedback for us, or would like to help out developing the library, please let us know.
In the next version release, we hope to include
- Chat Templating + System Prompt support, for locally-run models
- Improved Answer Extraction for many generative tasks, making them more easily run zero-shot and less dependent on model output formatting
- General speedups and QoL fixes to the non-inference portions of LM-Evaluation-Harness, including drastically reduced startup times / faster non-inference processing steps especially when num_fewshot is large!
- A new `TaskManager` object and the deprecation of `lm_eval.tasks.initialize_tasks()`, for achieving the easier registration of many tasks and configuration of new groups of tasks
What's Changed
- Announce v0.4.0 in README by @haileyschoelkopf in #1061
- remove commented planned samplers in `lm_eval/api/samplers.py` by @haileyschoelkopf in #1062
- Confirming links in docs work (WIP) by @haileyschoelkopf in #1065
- Set actual version to v0.4.0 by @haileyschoelkopf in #1064
- Updating docs hyperlinks by @haileyschoelkopf in #1066
- Fiddling with READMEs, Reenable CI tests on `main` by @haileyschoelkopf in #1063
- Update _cot_fewshot_template_yaml by @lintangsutawika in #1074
- Patch scrolls by @lintangsutawika in #1077
- Update template of qqp dataset by @shiweijiezero in #1097
- Change the sub-task name from sst to sst2 in glue by @shiweijiezero in #1099
- Add kmmlu evaluation to tasks by @h-albert-lee in #1089
- Fix stderr by @lintangsutawika in #1106
- Simplified `evaluator.py` by @lintangsutawika in #1104
- [Refactor] vllm data parallel by @baberabb in #1035
- Unpack group in `write_out` by @baberabb in #1113
- Revert "Simplified `evaluator.py`" by @lintangsutawika in #1116
- `qqp`, `mnli_mismatch`: remove unlabled test sets by @baberabb in #1114
- fix: bug of BBH_cot_fewshot by @Momo-Tori in #1118
- Bump BBH version by @haileyschoelkopf in #1120
- Refactor `hf` modeling code by @haileyschoelkopf in #1096
- Additional process for doc_to_choice by @lintangsutawika in #1093
- doc_to_decontamination_query can use function by @lintangsutawika in #1082
- Fix vllm `batch_size` type by @xTayEx in #1128
- fix: passing max_length to vllm engine args by @NanoCode012 in #1124
- Fix Loading Local Dataset by @lintangsutawika in #1127
- place model onto `mps` by @baberabb in #1133
- Add benchmark FLD by @MorishT in #1122
- fix typo in README.md by @lennijusten in #1136
- add correct openai api key to README.md by @lennijusten in #1138
- Update Linter CI Job by @haileyschoelkopf in #1130
- add utils.clear_torch_cache() to model_comparator by @baberabb in #1142
- Enabling OpenAI completions via gooseai by @veekaybee in #1141
- vllm clean up tqdm by @baberabb in #1144
- openai nits by @baberabb in #1139
- Add IFEval / Instruction-Following Eval by @wiskojo in #1087
- set `--gen_kwargs` arg to None by @baberabb in #1145
- Add shorthand flags by @baberabb in #1149
- fld bugfix by @baberabb in #1150
- Remove GooseAI docs and change no-commit-to-branch precommit hook by @veekaybee in #1154
- Add docs on adding a multiple choice metric by @polm-stability in #1147
- Simplify evaluator by @lintangsutawika in #1126
- Generalize Qwen tokenizer fix by @haileyschoelkopf in #1146
- self.device in huggingface.py line 210 treated as torch.device but might be a string by @pminervini in #1172
- Fix Column Naming and Dataset Naming Conventions in K-MMLU Evaluation by @seungduk-yanolja in #1171
- feat: add option to upload results to Zeno by @Sparkier in #990
- Switch Linting to `ruff` by @baberabb in #1166
- Error in --num_fewshot option for K-MMLU Evaluation Harness by @guijinSON in #1178
- Implementing local OpenAI API-style chat completions on any given inference server by @veekaybee in #1174
- Update README.md by @anjor in #1184
- Update README.md by @anjor in #1183
- Add tokenizer backend by @anjor in #1186
- Correctly Print Task Versioning by @haileyschoelkopf in #1173
- update Zeno example and reference in README by @Sparkier in #1190
- Remove tokenizer for openai chat completions by @anjor in #1191
- Update README.md by @anjor in #1181
- disable `mypy` by @baberabb in #1193
- Generic decorator for handling rate limit errors by @zachschillaci27 in #1109
- Refer in README to main branch by @BramVanroy in #1200
- Hardcode 0-shot for fewshot Minerva Math tasks by @haileyschoelkopf in #1189
- Upstream Mamba Support (`mamba_ssm`) by @haileyschoelkopf in #1110
- Update cuda handling by @anjor in #1180
- Fix documentation in API table by @haileyschoelkopf in #1203
- Consolidate batching by @baberabb in #1197
- Add remove_whitespace to FLD benchmark by @MorishT in #1206
- Fix the argument order in `utils.divide` doc by @xTayEx in #1208
- [Fix #1211 ] pin vllm at < 0.2.6 by @haileyschoelkopf in #1212
- fix unbounded local variable by @onnoo in #1218
- nits + fix siqa by @baberabb in #1216
- add length of strings and answer options to Zeno met...