SOTAVerified

Mathematical Reasoning

Papers

Showing 501–525 of 805 papers

| Title | Status | Hype |
|---|---|---|
| LoRA-Pro: Are Low-Rank Adapters Properly Optimized? | Code | 2 |
| Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning | Code | 2 |
| LEAN-GitHub: Compiling GitHub LEAN repositories for a versatile LEAN prover | Code | 4 |
| Toward Adaptive Reasoning in Large Language Models with Thought Rollback | Code | 1 |
| A Comprehensive Evaluation of Large Language Models on Temporal Event Forecasting | — | 0 |
| NeedleBench: Can LLMs Do Retrieval and Reasoning in Information-Dense Context? | Code | 9 |
| Reliable Reasoning Beyond Natural Language | — | 0 |
| Fine-Tuning and Prompt Optimization: Two Great Steps that Work Better Together | — | 0 |
| Key-Point-Driven Mathematical Reasoning Distillation of Large Language Model | — | 0 |
| OptiBench Meets ReSocratic: Measure and Improve LLMs for Optimization Modeling | Code | 1 |
| Token-Supervised Value Models for Enhancing Mathematical Reasoning Capabilities of Large Language Models | — | 0 |
| Skywork-Math: Data Scaling Laws for Mathematical Reasoning in Large Language Models -- The Story Goes On | — | 0 |
| MAVIS: Mathematical Visual Instruction Tuning with an Automatic Data Engine | Code | 4 |
| Is Your Model Really A Good Math Reasoner? Evaluating Mathematical Reasoning with Checklist | — | 0 |
| SOLO: A Single Transformer for Scalable Vision-Language Modeling | Code | 2 |
| Progress or Regress? Self-Improvement Reversal in Post-training | — | 0 |
| LogicVista: Multimodal LLM Logical Reasoning Benchmark in Visual Contexts | Code | 1 |
| Smart Vision-Language Reasoners | Code | 0 |
| DotaMath: Decomposition of Thought with Code Assistance and Self-correction for Mathematical Reasoning | Code | 1 |
| How Does Quantization Affect Multilingual LLMs? | — | 0 |
| TheoremLlama: Transforming General-Purpose LLMs into Lean4 Experts | Code | 1 |
| Integrate the Essence and Eliminate the Dross: Fine-Grained Self-Consistency for Free-Form Language Generation | Code | 0 |
| FRoG: Evaluating Fuzzy Reasoning of Generalized Quantifiers in Large Language Models | Code | 0 |
| We-Math: Does Your Large Multimodal Model Achieve Human-like Mathematical Reasoning? | Code | 2 |
| Step-Controlled DPO: Leveraging Stepwise Error for Enhanced Mathematical Reasoning | Code | 1 |
Page 21 of 33

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Xolver | Acc | 94.4 | — | Unverified |
| 2 | DeepSeek-r1 | Acc | 79.8 | — | Unverified |
| 3 | Openai-o1 | Acc | 74.4 | — | Unverified |
| 4 | Openai-o1-mini | Acc | 70 | — | Unverified |
| 5 | Search-o1 | Acc | 56.7 | — | Unverified |
| 6 | s1-32B | Acc | 56.7 | — | Unverified |
| 7 | Openai-o1-preview | Acc | 44.6 | — | Unverified |
| 8 | Qwen2.5-72B-Instruct | Acc | 23.3 | — | Unverified |
| 9 | Claude3.5-Sonnet | Acc | 16 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | o3 | Accuracy | 0.25 | — | Unverified |
| 2 | Gemini 1.5 Pro (002) | Accuracy | 0.02 | — | Unverified |
| 3 | GPT-4o | Accuracy | 0.01 | — | Unverified |
| 4 | o1-mini | Accuracy | 0.01 | — | Unverified |
| 5 | o1-preview | Accuracy | 0.01 | — | Unverified |
| 6 | Claude 3.5 Sonnet | Accuracy | 0.01 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Codex (Few-Shot, 175B) | Accuracy | 0.6 | — | Unverified |
| 2 | Bhāskara-P (Fine-tuned, 2.7B) | Accuracy | 0.48 | — | Unverified |
| 3 | Neo-P (Fine-tuned, 2.7B) | Accuracy | 0.39 | — | Unverified |
| 4 | GPT-3 (Few-Shot, 175B) | Accuracy | 0.38 | — | Unverified |
| 5 | Bhāskara-A (Fine-tuned, 2.7B) | Accuracy | 0.25 | — | Unverified |
| 6 | Neo-A (Fine-tuned, 2.7B) | Accuracy | 0.2 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Codex (Few-Shot, 175B) | Accuracy | 0.59 | — | Unverified |
| 2 | Bhāskara-P (Fine-tuned, 2.7B) | Accuracy | 0.45 | — | Unverified |
| 3 | GPT-3 (Few-Shot, 175B) | Accuracy | 0.38 | — | Unverified |
| 4 | Bhāskara-A (Fine-tuned, 2.7B) | Accuracy | 0.27 | — | Unverified |
| 5 | Neo-P (Fine-tuned, 2.7B) | Accuracy | 0.24 | — | Unverified |
| 6 | Neo-A (Fine-tuned, 2.7B) | Accuracy | 0.18 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GOLD | Completion accuracy | 65.8 | — | Unverified |
| 2 | PGPSNet | Completion accuracy | 62.7 | — | Unverified |
| 3 | GAPS | Completion accuracy | 61.2 | — | Unverified |
| 4 | Inter-GPS | Completion accuracy | 59.8 | — | Unverified |
| 5 | Geoformer | Completion accuracy | 35.6 | — | Unverified |
| 6 | NGS | Completion accuracy | 34.1 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | QWQ-32B-preview | Acc | 82.5 | — | Unverified |
| 2 | Math-Master | Acc | 82 | — | Unverified |
| 3 | Qwen2.5-Math-7B-instruct | Acc | 62.5 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GOLD | Accuracy (%) | 75.2 | — | Unverified |
| 2 | GAPS | Accuracy (%) | 67.8 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Search-o1 | Acc | 86.4 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GOLD | Accuracy (%) | 98.5 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GAPS | Accuracy (%) | 97.5 | — | Unverified |