SOTAVerified

GSM8K

Papers

Showing 76–100 of 439 papers

| Title | Status | Hype |
| --- | --- | --- |
| CoT-Valve: Length-Compressible Chain-of-Thought Tuning | Code | 2 |
| Language Models are Hidden Reasoners: Unlocking Latent Reasoning Capabilities via Self-Rewarding | Code | 2 |
| LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters | Code | 2 |
| Physics of Language Models: Part 2.1, Grade-School Math and the Hidden Reasoning Process | Code | 2 |
| Preference Optimization for Reasoning with Pseudo Feedback | Code | 2 |
| Seek in the Dark: Reasoning via Test-Time Instance-Level Policy Gradient in Latent Space | Code | 2 |
| Neural-Symbolic Collaborative Distillation: Advancing Small Language Models for Complex Reasoning Tasks | Code | 1 |
| NeMo-Inspector: A Visualization Tool for LLM Generation Analysis | Code | 1 |
| Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs | Code | 1 |
| OVM, Outcome-supervised Value Models for Planning in Mathematical Reasoning | Code | 1 |
| Automatic Model Selection with Large Language Models for Reasoning | Code | 1 |
| MyGO Multiplex CoT: A Method for Self-Reflection in Large Language Models via Double Chain of Thought Thinking | Code | 1 |
| CommVQ: Commutative Vector Quantization for KV Cache Compression | Code | 1 |
| Over-Reasoning and Redundant Calculation of Large Language Models | Code | 1 |
| Coevolving with the Other You: Fine-Tuning LLM with Sequential Cooperative Multi-Agent Reinforcement Learning | Code | 1 |
| Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations | Code | 1 |
| Masked Thought: Simply Masking Partial Reasoning Steps Can Improve Mathematical Reasoning Learning of Language Models | Code | 1 |
| MR-GSM8K: A Meta-Reasoning Benchmark for Large Language Model Evaluation | Code | 1 |
| Achieving >97% on GSM8K: Deeply Understanding the Problems Makes LLMs Better Solvers for Math Word Problems | Code | 1 |
| Markovian Transformers for Informative Language Modeling | Code | 1 |
| Math Neurosurgery: Isolating Language Models' Math Reasoning Abilities Using Only Forward Passes | Code | 1 |
| Multiple-Choice Questions are Efficient and Robust LLM Evaluators | Code | 1 |
| AskIt: Unified Programming Interface for Programming with Large Language Models | Code | 1 |
| DotaMath: Decomposition of Thought with Code Assistance and Self-correction for Mathematical Reasoning | Code | 1 |
| Don't Trust: Verify -- Grounding LLM Quantitative Reasoning with Autoformalization | Code | 1 |
Page 4 of 18

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Xolver | Accuracy | 98.1 | — | Unverified |
| 2 | Orange-mini | 0-shot MRR | 98 | — | Unverified |