SOTAVerified

Winogrande

Papers

Showing 1–10 of 26 papers

Title | Status | Hype
LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning | Code | 9
LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding | Code | 3
ST-MoE: Designing Stable and Transferable Sparse Expert Models | Code | 3
Scaling Language Models: Methods, Analysis & Insights from Training Gopher | Code | 2
Bridging the Gap: Enhancing LLM Performance for Low-Resource African Languages with New Benchmarks, Fine-Tuning, and Cultural Adjustments | Code | 1
UNICORN on RAINBOW: A Universal Commonsense Reasoning Model on a New Multitask Benchmark | Code | 1
Generative Data Augmentation for Commonsense Reasoning | Code | 1
WinoGrande: An Adversarial Winograd Schema Challenge at Scale | Code | 1
Elastic Weight Consolidation for Full-Parameter Continual Pre-Training of Gemma2 | — | 0
More is Less: The Pitfalls of Multi-Model Synthetic Preference Data in DPO Safety Alignment | — | 0
Page 1 of 3

No leaderboard results yet.