SOTAVerified

Mathematical Reasoning

Papers

Showing 451–500 of 805 papers

Title | Status | Hype
Beyond Gold Standards: Epistemic Ensemble of LLM Judges for Formal Mathematical Reasoning |  | 0
Beyond Lines and Circles: Unveiling the Geometric Reasoning Gap in Large Language Models |  | 0
Beyond the First Error: Process Reward Models for Reflective Mathematical Reasoning |  | 0
BitNet b1.58 2B4T Technical Report |  | 0
Fewer is More: Boosting LLM Reasoning with Reinforced Context Pruning |  | 0
Boosting Lossless Speculative Decoding via Feature Sampling and Partial Alignment Distillation |  | 0
Bottlenecked Transformers: Periodic KV Cache Abstraction for Generalised Reasoning |  | 0
Brains vs. Bytes: Evaluating LLM Proficiency in Olympiad Mathematics |  | 0
Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models |  | 0
Building Math Agents with Multi-Turn Iterative Preference Learning |  | 0
Can Language Models Rival Mathematics Students? Evaluating Mathematical Reasoning through Textual Manipulation and Human Experiments |  | 0
Can Large Language Models Explain Themselves? A Study of LLM-Generated Self-Explanations |  | 0
Can Large Language Models Invent Algorithms to Improve Themselves? |  | 0
Can LLMs understand Math? -- Exploring the Pitfalls in Mathematical Reasoning |  | 0
Can Pruning Improve Reasoning? Revisiting Long-CoT Compression with Capability in Mind for Better Reasoning |  | 0
Can Theoretical Physics Research Benefit from Language Agents? |  | 0
Causal Head Gating: A Framework for Interpreting Roles of Attention Heads in Transformers |  | 0
Causal Inference with Large Language Model: A Survey |  | 0
CDW-CoT: Clustered Distance-Weighted Chain-of-Thoughts Reasoning |  | 0
Chain-of-Reasoning: Towards Unified Mathematical Reasoning in Large Language Models via a Multi-Paradigm Perspective |  | 0
CHAMP: A Competition-level Dataset for Fine-Grained Analyses of LLMs' Mathematical Reasoning Capabilities |  | 0
Channel Merging: Preserving Specialization for Merged Experts |  | 0
CLEAR: Contrasting Textual Feedback with Experts and Amateurs for Reasoning |  | 0
Coarse-to-Fine Process Reward Modeling for Enhanced Mathematical Reasoning |  | 0
CodeGemma: Open Code Models Based on Gemma |  | 0
CodePMP: Scalable Preference Model Pretraining for Large Language Model Reasoning |  | 0
Composing Ensembles of Pre-trained Models via Iterative Consensus |  | 0
Concept Distillation from Strong to Weak Models via Hypotheses-to-Theories Prompting |  | 0
Conjectures, Tests and Proofs: An Overview of Theory Exploration |  | 0
ControlMath: Controllable Data Generation Promotes Math Generalist Models |  | 0
CoRE: Enhancing Metacognition with Label-free Self-evaluation in LRMs |  | 0
CPL: Critical Plan Step Learning Boosts LLM Generalization in Reasoning Tasks |  | 0
DeepDistill: Enhancing LLM Reasoning Capabilities via Large-Scale Difficulty-Graded Data Training |  | 0
DeepSeek-Prover: Advancing Theorem Proving in LLMs through Large-Scale Synthetic Data |  | 0
Describe-then-Reason: Improving Multimodal Mathematical Reasoning through Visual Comprehension Training |  | 0
Diversity-Aware Policy Optimization for Large Language Model Reasoning |  | 0
Diversity of Thought Elicits Stronger Reasoning Capabilities in Multi-Agent Debate Frameworks |  | 0
Do Large Language Models Truly Grasp Mathematics? An Empirical Exploration From Cognitive Psychology |  | 0
Don't Look Only Once: Towards Multimodal Interactive Reasoning with Selective Visual Revisitation |  | 0
Don't Think Longer, Think Wisely: Optimizing Thinking Dynamics for Large Reasoning Models |  | 0
DRP: Distilled Reasoning Pruning with Skill-aware Step Decomposition for Efficient Large Reasoning Models |  | 0
Dual Instruction Tuning with Large Language Models for Mathematical Reasoning |  | 0
DynaMath: A Dynamic Visual Benchmark for Evaluating Mathematical Reasoning Robustness of Vision Language Models |  | 0
Dynamic Sampling that Adapts: Iterative DPO for Self-Aware Mathematical Reasoning |  | 0
Efficient Long CoT Reasoning in Small Language Models |  | 0
Efficient Model-agnostic Alignment via Bayesian Persuasion |  | 0
Efficient Tool Use with Chain-of-Abstraction Reasoning |  | 0
Eliciting Reasoning in Language Models with Cognitive Tools |  | 0
Embedding Self-Correction as an Inherent Ability in Large Language Models for Enhanced Mathematical Reasoning |  | 0
Enhancing Length Extrapolation in Sequential Models with Pointer-Augmented Neural Memory |  | 0
Page 10 of 17

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xolver | Acc | 94.4 |  | Unverified
2 | DeepSeek-r1 | Acc | 79.8 |  | Unverified
3 | Openai-o1 | Acc | 74.4 |  | Unverified
4 | Openai-o1-mini | Acc | 70 |  | Unverified
5 | s1-32B | Acc | 56.7 |  | Unverified
6 | Search-o1 | Acc | 56.7 |  | Unverified
7 | Openai-o1-preview | Acc | 44.6 |  | Unverified
8 | Qwen2.5-72B-Instruct | Acc | 23.3 |  | Unverified
9 | Claude3.5-Sonnet | Acc | 16 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | o3 | Accuracy | 0.25 |  | Unverified
2 | Gemini 1.5 Pro (002) | Accuracy | 0.02 |  | Unverified
3 | o1-preview | Accuracy | 0.01 |  | Unverified
4 | GPT-4o | Accuracy | 0.01 |  | Unverified
5 | Claude 3.5 Sonnet | Accuracy | 0.01 |  | Unverified
6 | o1-mini | Accuracy | 0.01 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Codex (Few-Shot, 175B) | Accuracy | 0.6 |  | Unverified
2 | Bhāskara-P (Fine-tuned, 2.7B) | Accuracy | 0.48 |  | Unverified
3 | Neo-P (Fine-tuned, 2.7B) | Accuracy | 0.39 |  | Unverified
4 | GPT-3 (Few-Shot, 175B) | Accuracy | 0.38 |  | Unverified
5 | Bhāskara-A (Fine-tuned, 2.7B) | Accuracy | 0.25 |  | Unverified
6 | Neo-A (Fine-tuned, 2.7B) | Accuracy | 0.2 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Codex (Few-Shot, 175B) | Accuracy | 0.59 |  | Unverified
2 | Bhāskara-P (Fine-tuned, 2.7B) | Accuracy | 0.45 |  | Unverified
3 | GPT-3 (Few-Shot, 175B) | Accuracy | 0.38 |  | Unverified
4 | Bhāskara-A (Fine-tuned, 2.7B) | Accuracy | 0.27 |  | Unverified
5 | Neo-P (Fine-tuned, 2.7B) | Accuracy | 0.24 |  | Unverified
6 | Neo-A (Fine-tuned, 2.7B) | Accuracy | 0.18 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GOLD | Completion accuracy | 65.8 |  | Unverified
2 | PGPSNet | Completion accuracy | 62.7 |  | Unverified
3 | GAPS | Completion accuracy | 61.2 |  | Unverified
4 | Inter-GPS | Completion accuracy | 59.8 |  | Unverified
5 | Geoformer | Completion accuracy | 35.6 |  | Unverified
6 | NGS | Completion accuracy | 34.1 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | QWQ-32B-preview | Acc | 82.5 |  | Unverified
2 | Math-Master | Acc | 82 |  | Unverified
3 | Qwen2.5-Math-7B-instruct | Acc | 62.5 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GOLD | Accuracy (%) | 75.2 |  | Unverified
2 | GAPS | Accuracy (%) | 67.8 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Search-o1 | Acc | 86.4 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GOLD | Accuracy (%) | 98.5 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GAPS | Accuracy (%) | 97.5 |  | Unverified