SOTAVerified

Multi-task Language Understanding

The MMLU benchmark covers 57 tasks, including elementary mathematics, US history, computer science, law, and more. https://arxiv.org/pdf/2009.03300.pdf

Papers

Showing 41–50 of 57 papers

Title · Hype

Transcending Scaling Laws with 0.1% Extra Compute · Hype: 0
Model Card and Evaluations for Claude Models · Hype: 0
Orca 2: Teaching Small Language Models How to Reason · Hype: 0
Reasoning Beyond Bias: A Study on Counterfactual Prompting and Chain of Thought Reasoning · Hype: 0
Let's Do a Thought Experiment: Using Counterfactuals to Improve Moral Reasoning · Hype: 0
IndicMMLU-Pro: Benchmarking Indic Large Language Models on Multi-Task Language Understanding · Hype: 0
GPT-4o as the Gold Standard: A Scalable and General Purpose Approach to Filter Language Model Pretraining Data · Hype: 0
Effectiveness of Zero-shot-CoT in Japanese Prompts · Hype: 0
MMLU-SR: A Benchmark for Stress-Testing Reasoning Capability of Large Language Models · Hype: 0
The Claude 3 Model Family: Opus, Sonnet, Haiku · Hype: 0
Page 5 of 6

No leaderboard results yet.