SOTAVerified

GSM8K

Papers

Showing 101–150 of 439 papers

| Title | Status | Hype |
| --- | --- | --- |
| Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement | Code | 1 |
| Efficient Reasoning for LLMs through Speculative Chain-of-Thought | Code | 1 |
| SafeMERGE: Preserving Safety Alignment in Fine-Tuned Large Language Models via Selective Layer-Wise Model Merging | Code | 1 |
| Large Language Models are Contrastive Reasoners | Code | 1 |
| MR-GSM8K: A Meta-Reasoning Benchmark for Large Language Model Evaluation | Code | 1 |
| Achieving >97% on GSM8K: Deeply Understanding the Problems Makes LLMs Better Solvers for Math Word Problems | Code | 1 |
| Segment Policy Optimization: Effective Segment-Level Credit Assignment in RL for Large Language Models | Code | 1 |
| Entropy-Based Adaptive Weighting for Self-Training | Code | 1 |
| Unchosen Experts Can Contribute Too: Unleashing MoE Models' Power by Self-Contrast | Code | 1 |
| AskIt: Unified Programming Interface for Programming with Large Language Models | Code | 1 |
| DotaMath: Decomposition of Thought with Code Assistance and Self-correction for Mathematical Reasoning | Code | 1 |
| Don't Trust: Verify -- Grounding LLM Quantitative Reasoning with Autoformalization | Code | 1 |
| PromptCoT: Synthesizing Olympiad-level Problems for Mathematical Reasoning in Large Language Models | Code | 1 |
| Matrix Information Theory for Self-Supervised Learning | Code | 1 |
| Over-Reasoning and Redundant Calculation of Large Language Models | Code | 1 |
| Solving Math Word Problems by Combining Language Models With Symbolic Solvers | Code | 1 |
| Step-Controlled DPO: Leveraging Stepwise Error for Enhanced Mathematical Reasoning | Code | 1 |
| OVM, Outcome-supervised Value Models for Planning in Mathematical Reasoning | Code | 1 |
| Distillation Contrastive Decoding: Improving LLMs Reasoning with Contrastive Decoding and Distillation | Code | 1 |
| GRACE: Discriminator-Guided Chain-of-Thought Reasoning | Code | 1 |
| Breaking Language Barriers in Multilingual Mathematical Reasoning: Insights and Observations | Code | 1 |
| Keeping LLMs Aligned After Fine-tuning: The Crucial Role of Prompt Templates | Code | 1 |
| Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs | Code | 1 |
| Boosted Prompt Ensembles for Large Language Models | Code | 1 |
| Topology of Reasoning: Understanding Large Reasoning Models through Reasoning Graph Properties | Code | 1 |
| Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing | Code | 1 |
| MyGO Multiplex CoT: A Method for Self-Reflection in Large Language Models via Double Chain of Thought Thinking | Code | 1 |
| Design of Chain-of-Thought in Math Problem Solving | Code | 1 |
| DELLA-Merging: Reducing Interference in Model Merging through Magnitude-Based Sampling | Code | 1 |
| Data Whisperer: Efficient Data Selection for Task-Specific LLM Fine-Tuning via Few-Shot In-Context Learning | Code | 1 |
| Multiple-Choice Questions are Efficient and Robust LLM Evaluators | Code | 1 |
| Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations | Code | 1 |
| Data Contamination Quiz: A Tool to Detect and Estimate Contamination in Large Language Models | Code | 1 |
| Math Neurosurgery: Isolating Language Models' Math Reasoning Abilities Using Only Forward Passes | Code | 1 |
| NeMo-Inspector: A Visualization Tool for LLM Generation Analysis | Code | 1 |
| Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models Aligned with Human Cognitive Principles | Code | 1 |
| GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models | Code | 1 |
| Markovian Transformers for Informative Language Modeling | Code | 1 |
| GReaTer: Gradients over Reasoning Makes Smaller Language Models Strong Prompt Optimizers | Code | 1 |
| Critical Tokens Matter: Token-Level Contrastive Estimation Enhances LLM's Reasoning Capability | Code | 1 |
| LoRA Done RITE: Robust Invariant Transformation Equilibration for LoRA Optimization | Code | 1 |
| Masked Thought: Simply Masking Partial Reasoning Steps Can Improve Mathematical Reasoning Learning of Language Models | Code | 1 |
| Lexico: Extreme KV Cache Compression via Sparse Coding over Universal Dictionaries | Code | 1 |
| Learning Goal-Conditioned Representations for Language Reward Models | Code | 1 |
| Large (Vision) Language Models are Unsupervised In-Context Learners | Code | 1 |
| Learning From Mistakes Makes LLM Better Reasoner | Code | 1 |
| FINEREASON: Evaluating and Improving LLMs' Deliberate Reasoning through Reflective Puzzle Solving | Code | 1 |
| Improving LLM Reasoning with Multi-Agent Tree-of-Thought Validator Agent | Code | 1 |
| IRanker: Towards Ranking Foundation Model | Code | 1 |
| Learning Math Reasoning from Self-Sampled Correct and Partially-Correct Solutions | Code | 1 |
Page 3 of 9

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Xolver | Accuracy | 98.1 | | Unverified |
| 2 | Orange-mini | 0-shot MRR | 98 | | Unverified |