
Multi-task Language Understanding

The test covers 57 tasks, including elementary mathematics, US history, computer science, law, and more. https://arxiv.org/pdf/2009.03300.pdf

Papers

Showing 31–40 of 57 papers

Title | Status | Hype
Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models Aligned with Human Cognitive Principles | Code | 1
MiLe Loss: a New Loss for Mitigating the Bias of Learning Difficulties in Generative Language Models | Code | 1
Language Models are Unsupervised Multitask Learners | Code | 1
Large Language Models Only Pass Primary School Exams in Indonesia: A Comprehensive Test on IndoMMLU | Code | 1
Merging Models with Fisher-Weighted Averaging | Code | 1
RoBERTa: A Robustly Optimized BERT Pretraining Approach | Code | 1
TUMLU: A Unified and Native Language Understanding Benchmark for Turkic Languages | Code | 1
UnifiedQA: Crossing Format Boundaries With a Single QA System | Code | 1
Claude 3.5 Sonnet Model Card Addendum | — | 0
Measuring Hong Kong Massive Multi-Task Language Understanding | — | 0

No leaderboard results yet.