SOTAVerified

Arithmetic Reasoning

Papers

Showing 76–100 of 175 papers

| Title | Status | Hype |
|---|---|---|
| SatLM: Satisfiability-Aided Language Models Using Declarative Prompting | Code | 1 |
| Rethinking Addressing in Language Models via Contextualized Equivariant Positional Encoding | Code | 1 |
| OVM, Outcome-supervised Value Models for Planning in Mathematical Reasoning | Code | 1 |
| Are Human-generated Demonstrations Necessary for In-context Learning? | Code | 1 |
| Distillation Contrastive Decoding: Improving LLMs Reasoning with Contrastive Decoding and Distillation | Code | 1 |
| Learning Math Reasoning from Self-Sampled Correct and Partially-Correct Solutions | Code | 1 |
| Prompt Optimization via Adversarial In-Context Learning | Code | 1 |
| Arithmetic Without Algorithms: Language Models Solve Math With a Bag of Heuristics | Code | 1 |
| DialCoT Meets PPO: Decomposing and Exploring Reasoning Paths in Smaller Language Models | Code | 1 |
| QuanTA: Efficient High-Rank Fine-Tuning of LLMs with Quantum-Informed Tensor Adaptation | Code | 1 |
| Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs | Code | 1 |
| Toward Adaptive Reasoning in Large Language Models with Thought Rollback | Code | 1 |
| Your Language Model May Think Too Rigidly: Achieving Reasoning Consistency with Symmetry-Enhanced Training | | 0 |
| Leveraging LLM Reasoning Enhances Personalized Recommender Systems | | 0 |
| Arithmetic Reasoning with LLM: Prolog Generation & Permutation | | 0 |
| Evaluating LLMs' Mathematical Reasoning in Financial Document Question Answering | | 0 |
| Fewer is More: Boosting LLM Reasoning with Reinforced Context Pruning | | 0 |
| Can LLMs Maintain Fundamental Abilities under KV Cache Compression? | | 0 |
| CLoQ: Enhancing Fine-Tuning of Quantized LLMs via Calibrated LoRA Initialization | | 0 |
| Code Prompting: a Neural Symbolic Method for Complex Reasoning in Large Language Models | | 0 |
| Composing Ensembles of Pre-trained Models via Iterative Consensus | | 0 |
| DiversiGATE: A Comprehensive Framework for Reliable Large Language Models | | 0 |
| DoTA: Weight-Decomposed Tensor Adaptation for Large Language Models | | 0 |
| Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment | | 0 |
| Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting | | 0 |
Page 4 of 7

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Claude 3.5 Sonnet (HPT) | Accuracy | 97.72 | | Unverified |
| 2 | DUP prompt upon GPT-4 | Accuracy | 97.1 | | Unverified |
| 3 | Qwen2-Math-72B-Instruct (greedy) | Accuracy | 96.7 | | Unverified |
| 4 | SFT-Mistral-7B (Metamath, OVM, Smart Ensemble) | Accuracy | 96.4 | | Unverified |
| 5 | OpenMath2-Llama3.1-70B (majority@256) | Accuracy | 96 | | Unverified |
| 6 | Jiutian-大模型 | Accuracy | 95.2 | | Unverified |
| 7 | DAMOMath-7B (MetaMath, OVM, BS, Ensemble) | Accuracy | 95.1 | | Unverified |
| 8 | Claude 3 Opus (0-shot chain-of-thought) | Accuracy | 95 | | Unverified |
| 9 | OpenMath2-Llama3.1-70B | Accuracy | 94.9 | | Unverified |
| 10 | GPT-4 (Teaching-Inspired) | Accuracy | 94.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Text-davinci-002 (175B) (zero-shot-cot) | Accuracy | 78.7 | | Unverified |
| 2 | Text-davinci-002 (175B) (zero-shot) | Accuracy | 17.7 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Tree of Thoughts (b=5) | Success | 0.74 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GPT-4 (Teaching-Inspired) | Accuracy | 92.2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GPT-4 (Teaching-Inspired) | Accuracy | 89.2 | | Unverified |