SOTAVerified

GSM8K

Papers

Showing 101–125 of 439 papers

Title | Status | Hype
GReaTer: Gradients over Reasoning Makes Smaller Language Models Strong Prompt Optimizers | Code | 1
Lexico: Extreme KV Cache Compression via Sparse Coding over Universal Dictionaries | Code | 1
Critical Tokens Matter: Token-Level Contrastive Estimation Enhances LLM's Reasoning Capability | Code | 1
What Do Learning Dynamics Reveal About Generalization in LLM Reasoning? | Code | 1
UTMath: Math Evaluation with Unit Test via Reasoning-to-Coding Thoughts | Code | 1
LoRA Done RITE: Robust Invariant Transformation Equilibration for LoRA Optimization | Code | 1
Math Neurosurgery: Isolating Language Models' Math Reasoning Abilities Using Only Forward Passes | Code | 1
Coevolving with the Other You: Fine-Tuning LLM with Sequential Cooperative Multi-Agent Reinforcement Learning | Code | 1
GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models | Code | 1
Neural-Symbolic Collaborative Distillation: Advancing Small Language Models for Complex Reasoning Tasks | Code | 1
Improving LLM Reasoning with Multi-Agent Tree-of-Thought Validator Agent | Code | 1
SORSA: Singular Values and Orthonormal Regularized Singular Vectors Adaptation of Large Language Models | Code | 1
Mathfish: Evaluating Language Model Math Reasoning via Grounding in Educational Curricula | Code | 1
Learning Goal-Conditioned Representations for Language Reward Models | Code | 1
DotaMath: Decomposition of Thought with Code Assistance and Self-correction for Mathematical Reasoning | Code | 1
Step-Controlled DPO: Leveraging Stepwise Error for Enhanced Mathematical Reasoning | Code | 1
LLM Critics Help Catch Bugs in Mathematics: Towards a Better Mathematical Verifier with Natural Language Feedback | Code | 1
Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models Aligned with Human Cognitive Principles | Code | 1
DELLA-Merging: Reducing Interference in Model Merging through Magnitude-Based Sampling | Code | 1
ZipCache: Accurate and Efficient KV Cache Quantization with Salient Token Identification | Code | 1
Unchosen Experts Can Contribute Too: Unleashing MoE Models' Power by Self-Contrast | Code | 1
Multiple-Choice Questions are Efficient and Robust LLM Evaluators | Code | 1
Markovian Transformers for Informative Language Modeling | Code | 1
Achieving >97% on GSM8K: Deeply Understanding the Problems Makes LLMs Better Solvers for Math Word Problems | Code | 1
Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing | Code | 1
Page 5 of 18

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xolver | Accuracy | 98.1 | | Unverified
2 | Orange-mini | 0-shot MRR | 98 | | Unverified