We now support VLMs when using the transformers backend 🥳
What's Changed
New Features 🎉
- Added support for quantization in the vLLM backend by @SulRash in #690
- Adds multimodal support and MMMU-Pro by @NathanHB in #675
- Allow for model kwargs when loading transformers from pretrained by @NathanHB in #754
- Adds template for custom path saving results by @NathanHB in #755
- Nanotron, Multilingual tasks update + misc by @hynky1999 in #756
- Async vLLM by @clefourrier in #693
New Tasks
- Adds More Generative tasks by @hynky1999 in #694
- Added Flores by @clefourrier in #717
Task and Metrics changes 🛠️
- Add LiveCodeBench v6 by @Cppowboy in #712
- Add MCQ support to Yourbench evaluation by @alozowski in #734
Other Changes
- Bump ruff version by @NathanHB in #774
- Fix revision arg for vLLM tokenizer by @lewtun in #721
- Update README.md by @clefourrier in #733
- Fix litellm by @NathanHB in #736
New Contributors
- @Cppowboy made their first contribution in #712
- @SulRash made their first contribution in #690
- @Abelgurung made their first contribution in #743
Full Changelog: v0.9.2...v0.10.0