SOTAVerified

Mathematical Reasoning

Papers

Showing 26–50 of 805 papers

Title | Status | Hype
AIMO-2 Winning Solution: Building State-of-the-Art Mathematical Reasoning Models with OpenMathReasoning dataset | Code | 4
MM-PRM: Enhancing Multimodal Mathematical Reasoning with Scalable Step-Level Supervision | Code | 4
Knowledge Fusion of Large Language Models | Code | 4
ChatGPT for Robotics: Design Principles and Model Abilities | Code | 4
OpenMathInstruct-2: Accelerating AI for Math with Massive Open-Source Instruction Data | Code | 4
ReasonFlux: Hierarchical LLM Reasoning via Scaling Thought Templates | Code | 4
MAVIS: Mathematical Visual Instruction Tuning with an Automatic Data Engine | Code | 4
Galactica: A Large Language Model for Science | Code | 4
SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights | Code | 4
LEAN-GitHub: Compiling GitHub LEAN repositories for a versatile LEAN prover | Code | 4
How Abilities in Large Language Models are Affected by Supervised Fine-tuning Data Composition | Code | 3
Self-Refine: Iterative Refinement with Self-Feedback | Code | 3
Reinforcement Learning for Reasoning in Large Language Models with One Training Example | Code | 3
Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't | Code | 3
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Horizon Generation | Code | 3
General-Reasoner: Advancing LLM Reasoning Across All Domains | Code | 3
Reasoning with Language Model Prompting: A Survey | Code | 3
Self-rewarding correction for mathematical reasoning | Code | 3
MuMath-Code: Combining Tool-Use Large Language Models with Multi-perspective Data Augmentation for Mathematical Reasoning | Code | 3
PAL: Program-aided Language Models | Code | 3
MM-Agent: LLM as Agents for Real-world Mathematical Modeling Problem | Code | 3
MedReason: Eliciting Factual Medical Reasoning Steps in LLMs via Knowledge Graphs | Code | 3
MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning | Code | 3
MARIO: MAth Reasoning with code Interpreter Output -- A Reproducible Pipeline | Code | 3
DeepMath-103K: A Large-Scale, Challenging, Decontaminated, and Verifiable Mathematical Dataset for Advancing Reasoning | Code | 3
Page 2 of 33

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xolver | Acc | 94.4 | – | Unverified
2 | DeepSeek-R1 | Acc | 79.8 | – | Unverified
3 | OpenAI o1 | Acc | 74.4 | – | Unverified
4 | OpenAI o1-mini | Acc | 70.0 | – | Unverified
5 | s1-32B | Acc | 56.7 | – | Unverified
6 | Search-o1 | Acc | 56.7 | – | Unverified
7 | OpenAI o1-preview | Acc | 44.6 | – | Unverified
8 | Qwen2.5-72B-Instruct | Acc | 23.3 | – | Unverified
9 | Claude 3.5 Sonnet | Acc | 16.0 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | o3 | Accuracy | 0.25 | – | Unverified
2 | Gemini 1.5 Pro (002) | Accuracy | 0.02 | – | Unverified
3 | o1-preview | Accuracy | 0.01 | – | Unverified
4 | GPT-4o | Accuracy | 0.01 | – | Unverified
5 | Claude 3.5 Sonnet | Accuracy | 0.01 | – | Unverified
6 | o1-mini | Accuracy | 0.01 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Codex (Few-Shot, 175B) | Accuracy | 0.60 | – | Unverified
2 | Bhāskara-P (Fine-tuned, 2.7B) | Accuracy | 0.48 | – | Unverified
3 | Neo-P (Fine-tuned, 2.7B) | Accuracy | 0.39 | – | Unverified
4 | GPT-3 (Few-Shot, 175B) | Accuracy | 0.38 | – | Unverified
5 | Bhāskara-A (Fine-tuned, 2.7B) | Accuracy | 0.25 | – | Unverified
6 | Neo-A (Fine-tuned, 2.7B) | Accuracy | 0.20 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Codex (Few-Shot, 175B) | Accuracy | 0.59 | – | Unverified
2 | Bhāskara-P (Fine-tuned, 2.7B) | Accuracy | 0.45 | – | Unverified
3 | GPT-3 (Few-Shot, 175B) | Accuracy | 0.38 | – | Unverified
4 | Bhāskara-A (Fine-tuned, 2.7B) | Accuracy | 0.27 | – | Unverified
5 | Neo-P (Fine-tuned, 2.7B) | Accuracy | 0.24 | – | Unverified
6 | Neo-A (Fine-tuned, 2.7B) | Accuracy | 0.18 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GOLD | Completion accuracy | 65.8 | – | Unverified
2 | PGPSNet | Completion accuracy | 62.7 | – | Unverified
3 | GAPS | Completion accuracy | 61.2 | – | Unverified
4 | Inter-GPS | Completion accuracy | 59.8 | – | Unverified
5 | Geoformer | Completion accuracy | 35.6 | – | Unverified
6 | NGS | Completion accuracy | 34.1 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | QwQ-32B-Preview | Acc | 82.5 | – | Unverified
2 | Math-Master | Acc | 82.0 | – | Unverified
3 | Qwen2.5-Math-7B-Instruct | Acc | 62.5 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GOLD | Accuracy (%) | 75.2 | – | Unverified
2 | GAPS | Accuracy (%) | 67.8 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Search-o1 | Acc | 86.4 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GOLD | Accuracy (%) | 98.5 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | GAPS | Accuracy (%) | 97.5 | – | Unverified