HellaSwag

Papers

Showing 31–39 of 39 papers

| Title | Status | Hype |
| --- | --- | --- |
| Training Compute-Optimal Large Language Models | Code | 6 |
| When Chosen Wisely, More Data Is What You Need: A Universal Sample-Efficient Strategy For Data Augmentation | Code | 1 |
| Scaling Language Models: Methods, Analysis & Insights from Training Gopher | Code | 2 |
| When Chosen Wisely, More Data Is What You Need: A Universal Sample-Efficient Strategy For Data Augmentation | | 0 |
| Comparing Test Sets with Item Response Theory | | 0 |
| UNICORN on RAINBOW: A Universal Commonsense Reasoning Model on a New Multitask Benchmark | Code | 1 |
| English Intermediate-Task Training Improves Zero-Shot Cross-Lingual Transfer Too | | 0 |
| Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning | | 0 |
| HellaSwag: Can a Machine Really Finish Your Sentence? | Code | 0 |
Page 4 of 4

No leaderboard results yet.