SOTAVerified

Arithmetic Reasoning

Papers

Showing 101–125 of 175 papers

| Title | Status | Hype |
| --- | --- | --- |
| KwaiYiiMath: Technical Report | | 0 |
| Mistral 7B | Code | 6 |
| MuggleMath: Assessing the Impact of Query and Response Augmentation on Math Reasoning | Code | 2 |
| DialCoT Meets PPO: Decomposing and Exploring Reasoning Paths in Smaller Language Models | Code | 1 |
| MathCoder: Seamless Code Integration in LLMs for Enhanced Mathematical Reasoning | Code | 2 |
| DOMINO: A Dual-System for Multi-step Visual Language Reasoning | Code | 1 |
| A Dynamic LLM-Powered Agent Network for Task-Oriented Agent Collaboration | Code | 1 |
| ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving | Code | 3 |
| Are Human-generated Demonstrations Necessary for In-context Learning? | Code | 1 |
| MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models | Code | 2 |
| OpenChat: Advancing Open-source Language Models with Mixed-Quality Data | Code | 0 |
| Query-Dependent Prompt Evaluation and Optimization with Offline Inverse RL | Code | 1 |
| WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct | Code | 5 |
| Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification | Code | 2 |
| Token-Scaled Logit Distillation for Ternary Weight Generative Language Models | Code | 1 |
| Scaling Relationship on Learning Mathematical Reasoning with Large Language Models | Code | 2 |
| Llama 2: Open Foundation and Fine-Tuned Chat Models | Code | 8 |
| Model Card and Evaluations for Claude Models | | 0 |
| On-Policy Distillation of Language Models: Learning from Self-Generated Mistakes | | 0 |
| Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs | Code | 1 |
| DiversiGATE: A Comprehensive Framework for Reliable Large Language Models | | 0 |
| Boosting Language Models Reasoning with Chain-of-Knowledge Prompting | Code | 1 |
| Prompt Space Optimizing Few-shot Reasoning Success with Large Language Models | Code | 0 |
| Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate | Code | 2 |
| Code Prompting: a Neural Symbolic Method for Complex Reasoning in Large Language Models | | 0 |
Page 5 of 7

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Claude 3.5 Sonnet (HPT) | Accuracy | 97.72 | | Unverified |
| 2 | DUP prompt upon GPT-4 | Accuracy | 97.1 | | Unverified |
| 3 | Qwen2-Math-72B-Instruct (greedy) | Accuracy | 96.7 | | Unverified |
| 4 | SFT-Mistral-7B (Metamath, OVM, Smart Ensemble) | Accuracy | 96.4 | | Unverified |
| 5 | OpenMath2-Llama3.1-70B (majority@256) | Accuracy | 96 | | Unverified |
| 6 | Jiutian-大模型 | Accuracy | 95.2 | | Unverified |
| 7 | DAMOMath-7B (MetaMath, OVM, BS, Ensemble) | Accuracy | 95.1 | | Unverified |
| 8 | Claude 3 Opus (0-shot chain-of-thought) | Accuracy | 95 | | Unverified |
| 9 | OpenMath2-Llama3.1-70B | Accuracy | 94.9 | | Unverified |
| 10 | GPT-4 (Teaching-Inspired) | Accuracy | 94.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Text-davinci-002 (175B) (zero-shot-cot) | Accuracy | 78.7 | | Unverified |
| 2 | Text-davinci-002 (175B) (zero-shot) | Accuracy | 17.7 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Tree of Thoughts (b=5) | Success | 0.74 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | GPT-4 (Teaching-Inspired) | Accuracy | 92.2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | GPT-4 (Teaching-Inspired) | Accuracy | 89.2 | | Unverified |