SOTAVerified

Mathematical Reasoning

Papers

Showing 326–350 of 805 papers

| Title | Status | Hype |
|-------|--------|------|
| O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning | Code | 2 |
| DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning | Code | 15 |
| CDW-CoT: Clustered Distance-Weighted Chain-of-Thoughts Reasoning | — | 0 |
| InternLM-XComposer2.5-Reward: A Simple Yet Effective Multi-Modal Reward Model | — | 0 |
| Benchmarking Large Language Models via Random Variables | — | 0 |
| Chain-of-Reasoning: Towards Unified Mathematical Reasoning in Large Language Models via a Multi-Paradigm Perspective | — | 0 |
| Control LLM: Controlled Evolution for Intelligence Retention in LLM | Code | 1 |
| Step-KTO: Optimizing Mathematical Reasoning through Stepwise Binary Feedback | — | 0 |
| The Lessons of Developing Process Reward Models in Mathematical Reasoning | — | 0 |
| Open Eyes, Then Reason: Fine-grained Visual Mathematical Understanding in MLLMs | Code | 1 |
| Search-o1: Agentic Search-Enhanced Large Reasoning Models | Code | 5 |
| VoxEval: Benchmarking the Knowledge Understanding Capabilities of End-to-End Spoken Language Models | Code | 1 |
| URSA: Understanding and Verifying Chain-of-thought Reasoning in Multimodal Mathematics | Code | 2 |
| Quantization Meets Reasoning: Exploring LLM Low-Bit Quantization Degradation for Mathematical Reasoning | — | 0 |
| Understand, Solve and Translate: Bridging the Multilingual Mathematical Reasoning Gap | — | 0 |
| Table as Thought: Exploring Structured Thoughts in LLM Reasoning | — | 0 |
| Enhancing Reasoning through Process Supervision with Monte Carlo Tree Search | — | 0 |
| Plug-and-Play Training Framework for Preference Optimization | — | 0 |
| LLM2: Let Large Language Models Harness System 2 Reasoning | Code | 0 |
| Large Language Models for Mathematical Analysis | Code | 0 |
| LLM Reasoning Engine: Specialized Training for Enhanced Mathematical Reasoning | — | 0 |
| Multilingual Mathematical Reasoning: Advancing Open-Source LLMs in Hindi and English | Code | 0 |
| B-STaR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners | Code | 2 |
| Multi-Agent Sampling: Scaling Inference Compute for Data Synthesis with Tree Search-Based Agentic Collaboration | Code | 0 |
| System-2 Mathematical Reasoning via Enriched Instruction Tuning | — | 0 |
Page 14 of 33

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Xolver | Acc | 94.4 | — | Unverified |
| 2 | DeepSeek-r1 | Acc | 79.8 | — | Unverified |
| 3 | Openai-o1 | Acc | 74.4 | — | Unverified |
| 4 | Openai-o1-mini | Acc | 70 | — | Unverified |
| 5 | s1-32B | Acc | 56.7 | — | Unverified |
| 6 | Search-o1 | Acc | 56.7 | — | Unverified |
| 7 | Openai-o1-preview | Acc | 44.6 | — | Unverified |
| 8 | Qwen2.5-72B-Instruct | Acc | 23.3 | — | Unverified |
| 9 | Claude3.5-Sonnet | Acc | 16 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | o3 | Accuracy | 0.25 | — | Unverified |
| 2 | Gemini 1.5 Pro (002) | Accuracy | 0.02 | — | Unverified |
| 3 | o1-preview | Accuracy | 0.01 | — | Unverified |
| 4 | GPT-4o | Accuracy | 0.01 | — | Unverified |
| 5 | Claude 3.5 Sonnet | Accuracy | 0.01 | — | Unverified |
| 6 | o1-mini | Accuracy | 0.01 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Codex (Few-Shot, 175B) | Accuracy | 0.6 | — | Unverified |
| 2 | Bhāskara-P (Fine-tuned, 2.7B) | Accuracy | 0.48 | — | Unverified |
| 3 | Neo-P (Fine-tuned, 2.7B) | Accuracy | 0.39 | — | Unverified |
| 4 | GPT-3 (Few-Shot, 175B) | Accuracy | 0.38 | — | Unverified |
| 5 | Bhāskara-A (Fine-tuned, 2.7B) | Accuracy | 0.25 | — | Unverified |
| 6 | Neo-A (Fine-tuned, 2.7B) | Accuracy | 0.2 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Codex (Few-Shot, 175B) | Accuracy | 0.59 | — | Unverified |
| 2 | Bhāskara-P (Fine-tuned, 2.7B) | Accuracy | 0.45 | — | Unverified |
| 3 | GPT-3 (Few-Shot, 175B) | Accuracy | 0.38 | — | Unverified |
| 4 | Bhāskara-A (Fine-tuned, 2.7B) | Accuracy | 0.27 | — | Unverified |
| 5 | Neo-P (Fine-tuned, 2.7B) | Accuracy | 0.24 | — | Unverified |
| 6 | Neo-A (Fine-tuned, 2.7B) | Accuracy | 0.18 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | GOLD | Completion accuracy | 65.8 | — | Unverified |
| 2 | PGPSNet | Completion accuracy | 62.7 | — | Unverified |
| 3 | GAPS | Completion accuracy | 61.2 | — | Unverified |
| 4 | Inter-GPS | Completion accuracy | 59.8 | — | Unverified |
| 5 | Geoformer | Completion accuracy | 35.6 | — | Unverified |
| 6 | NGS | Completion accuracy | 34.1 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | QWQ-32B-preview | Acc | 82.5 | — | Unverified |
| 2 | Math-Master | Acc | 82 | — | Unverified |
| 3 | Qwen2.5-Math-7B-instruct | Acc | 62.5 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | GOLD | Accuracy (%) | 75.2 | — | Unverified |
| 2 | GAPS | Accuracy (%) | 67.8 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Search-o1 | Acc | 86.4 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | GOLD | Accuracy (%) | 98.5 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | GAPS | Accuracy (%) | 97.5 | — | Unverified |