SOTAVerified

MMLU

Papers

Showing 151–175 of 340 papers

Title | Status | Hype
Unveiling the Secret Recipe: A Guide For Supervised Fine-Tuning Small LLMs | — | 0
Bridging the Gap: Enhancing LLM Performance for Low-Resource African Languages with New Benchmarks, Fine-Tuning, and Cultural Adjustments | Code | 1
Nanoscaling Floating-Point (NxFP): NanoMantissa, Adaptive Microexponents, and Code Recycling for Direct-Cast Compression of Large Language Models | — | 0
Llama 3 Meets MoE: Efficient Upcycling | — | 0
LLM Distillation for Efficient Few-Shot Multiple Choice Question Answering | — | 0
HadaCore: Tensor Core Accelerated Hadamard Transform Kernel | Code | 3
Global MMLU: Understanding and Addressing Cultural and Linguistic Biases in Multilingual Evaluation | — | 0
Nemotron-CC: Transforming Common Crawl into a Refined Long-Horizon Pretraining Dataset | — | 0
The Vulnerability of Language Model Benchmarks: Do They Accurately Reflect True LLM Performance? | — | 0
Noise Injection Reveals Hidden Capabilities of Sandbagging Language Models | Code | 0
Improving Physics Reasoning in Large Language Models Using Mixture of Refinement Agents | — | 0
Simple and Provable Scaling Laws for the Test-Time Compute of Large Language Models | — | 0
Mixture of Cache-Conditional Experts for Efficient Mobile Device Inference | — | 0
Predicting Emergent Capabilities by Finetuning | — | 0
Learning from "Silly" Questions Improves Large Language Models, But Only Slightly | — | 0
GenBFA: An Evolutionary Optimization Approach to Bit-Flip Attacks on LLMs | — | 0
Real-time Adapting Routing (RAR): Improving Efficiency Through Continuous Learning in Software Powered by Layered Foundation Models | — | 0
Reasoning Robustness of LLMs to Adversarial Typographical Errors | — | 0
Watson: A Cognitive Observability Framework for the Reasoning of LLM-Powered Agents | — | 0
TODO: Enhancing LLM Alignment with Ternary Preferences | Code | 0
Project MPG: towards a generalized performance benchmark for LLM capabilities | — | 0
Shopping MMLU: A Massive Multi-Task Online Shopping Benchmark for Large Language Models | Code | 1
Read-ME: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design | Code | 1
LOGO -- Long cOntext aliGnment via efficient preference Optimization | Code | 1
Adaptive Dense Reward: Understanding the Gap Between Action and Reward Space in Alignment | — | 0
Page 7 of 14

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | go ahead, make my data | Final_score | 61.72 | — | Unverified
2 | #GreedyCow | Final_score | 61.63 | — | Unverified
3 | Don't Ask Us y | Final_score | 61.4 | — | Unverified
4 | Data_and_Confused | Final_score | 60.96 | — | Unverified
5 | raaka | Final_score | 60.91 | — | Unverified
6 | Waffles | Final_score | 60.91 | — | Unverified
7 | Team Procrustination | Final_score | 60.64 | — | Unverified
8 | Axiom Consulting Partners | Final_score | 60.63 | — | Unverified
9 | Lets_Be_Fair | Final_score | 60.23 | — | Unverified
10 | gooners | Final_score | 60.22 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Orange-mini | 0-shot MRR | 99.19 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | HybridBeam+ | SI-SDRi | 13.3 | — | Unverified