SOTAVerified

Multi-task Language Understanding

The test covers 57 tasks, including elementary mathematics, US history, computer science, law, and more. https://arxiv.org/pdf/2009.03300.pdf

Papers

Showing 41–50 of 57 papers

| Title | Status | Hype |
| --- | --- | --- |
| IndicMMLU-Pro: Benchmarking Indic Large Language Models on Multi-Task Language Understanding | | 0 |
| Llama 3 Meets MoE: Efficient Upcycling | Code | 0 |
| GPT-4o as the Gold Standard: A Scalable and General Purpose Approach to Filter Language Model Pretraining Data | | 0 |
| Reasoning Beyond Bias: A Study on Counterfactual Prompting and Chain of Thought Reasoning | | 0 |
| Claude 3.5 Sonnet Model Card Addendum | | 0 |
| MMLU-SR: A Benchmark for Stress-Testing Reasoning Capability of Large Language Models | | 0 |
| Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM | Code | 0 |
| The Claude 3 Model Family: Opus, Sonnet, Haiku | | 0 |
| The Falcon Series of Open Language Models | | 0 |
| Orca 2: Teaching Small Language Models How to Reason | | 0 |

No leaderboard results yet.