SOTAVerified

GSM8K

Papers

Showing 101–125 of 439 papers

Title | Status | Hype
MyGO Multiplex CoT: A Method for Self-Reflection in Large Language Models via Double Chain of Thought Thinking | Code | 1
GReaTer: Gradients over Reasoning Makes Smaller Language Models Strong Prompt Optimizers | Code | 1
Neural-Symbolic Collaborative Distillation: Advancing Small Language Models for Complex Reasoning Tasks | Code | 1
MR-GSM8K: A Meta-Reasoning Benchmark for Large Language Model Evaluation | Code | 1
Achieving >97% on GSM8K: Deeply Understanding the Problems Makes LLMs Better Solvers for Math Word Problems | Code | 1
Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs | Code | 1
Self-Training Elicits Concise Reasoning in Large Language Models | Code | 1
AskIt: Unified Programming Interface for Programming with Large Language Models | Code | 1
DotaMath: Decomposition of Thought with Code Assistance and Self-correction for Mathematical Reasoning | Code | 1
Entropy-Regularized Process Reward Model | Code | 1
Don't Trust: Verify -- Grounding LLM Quantitative Reasoning with Autoformalization | Code | 1
Math Neurosurgery: Isolating Language Models' Math Reasoning Abilities Using Only Forward Passes | Code | 1
Markovian Transformers for Informative Language Modeling | Code | 1
Improving LLM Reasoning with Multi-Agent Tree-of-Thought Validator Agent | Code | 1
Masked Thought: Simply Masking Partial Reasoning Steps Can Improve Mathematical Reasoning Learning of Language Models | Code | 1
Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations | Code | 1
LoRA Done RITE: Robust Invariant Transformation Equilibration for LoRA Optimization | Code | 1
Lexico: Extreme KV Cache Compression via Sparse Coding over Universal Dictionaries | Code | 1
Distillation Contrastive Decoding: Improving LLMs Reasoning with Contrastive Decoding and Distillation | Code | 1
GRACE: Discriminator-Guided Chain-of-Thought Reasoning | Code | 1
Breaking Language Barriers in Multilingual Mathematical Reasoning: Insights and Observations | Code | 1
Learning Goal-Conditioned Representations for Language Reward Models | Code | 1
Multiple-Choice Questions are Efficient and Robust LLM Evaluators | Code | 1
Boosted Prompt Ensembles for Large Language Models | Code | 1
Large (Vision) Language Models are Unsupervised In-Context Learners | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xolver | Accuracy | 98.1 | – | Unverified
2 | Orange-mini | 0-shot MRR | 98 | – | Unverified