SOTAVerified

Arithmetic Reasoning

Papers

Showing 51–75 of 175 papers

| Title | Status | Hype |
|---|---|---|
| Achieving >97% on GSM8K: Deeply Understanding the Problems Makes LLMs Better Solvers for Math Word Problems | Code | 1 |
| Mastering Symbolic Operations: Augmenting Language Models with Compiled Neural Networks | Code | 1 |
| Gemini: A Family of Highly Capable Multimodal Models | Code | 1 |
| Generative Parameter-Efficient Fine-Tuning | Code | 1 |
| Batch Prompting: Efficient Inference with Large Language Model APIs | Code | 1 |
| Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs | Code | 1 |
| HALO: Hierarchical Autonomous Logic-Oriented Orchestration for Multi-Agent LLM Systems | Code | 1 |
| Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models Aligned with Human Cognitive Principles | Code | 1 |
| Empirical Study of Zero-Shot NER with ChatGPT | Code | 1 |
| MathPrompter: Mathematical Reasoning using Large Language Models | Code | 1 |
| Boosting Language Models Reasoning with Chain-of-Knowledge Prompting | Code | 1 |
| A Dynamic LLM-Powered Agent Network for Task-Oriented Agent Collaboration | Code | 1 |
| Arithmetic Without Algorithms: Language Models Solve Math With a Bag of Heuristics | Code | 1 |
| QuanTA: Efficient High-Rank Fine-Tuning of LLMs with Quantum-Informed Tensor Adaptation | Code | 1 |
| Bridging the Gap between Different Vocabularies for LLM Ensemble | Code | 1 |
| DOMINO: A Dual-System for Multi-step Visual Language Reasoning | Code | 1 |
| LEVER: Learning to Verify Language-to-Code Generation with Execution | Code | 1 |
| Large Language Models Can Be Easily Distracted by Irrelevant Context | Code | 1 |
| Large Language Models are Better Reasoners with Self-Verification | Code | 1 |
| Distillation Contrastive Decoding: Improving LLMs Reasoning with Contrastive Decoding and Distillation | Code | 1 |
| Learning Math Reasoning from Self-Sampled Correct and Partially-Correct Solutions | Code | 1 |
| Language Imbalance Driven Rewarding for Multilingual Self-improving | Code | 1 |
| DialCoT Meets PPO: Decomposing and Exploring Reasoning Paths in Smaller Language Models | Code | 1 |
| Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations | Code | 1 |
| Are Human-generated Demonstrations Necessary for In-context Learning? | Code | 1 |
Page 3 of 7

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Claude 3.5 Sonnet (HPT) | Accuracy | 97.72 | | Unverified |
| 2 | DUP prompt upon GPT-4 | Accuracy | 97.1 | | Unverified |
| 3 | Qwen2-Math-72B-Instruct (greedy) | Accuracy | 96.7 | | Unverified |
| 4 | SFT-Mistral-7B (MetaMath, OVM, Smart Ensemble) | Accuracy | 96.4 | | Unverified |
| 5 | OpenMath2-Llama3.1-70B (majority@256) | Accuracy | 96 | | Unverified |
| 6 | Jiutian-大模型 | Accuracy | 95.2 | | Unverified |
| 7 | DAMOMath-7B (MetaMath, OVM, BS, Ensemble) | Accuracy | 95.1 | | Unverified |
| 8 | Claude 3 Opus (0-shot chain-of-thought) | Accuracy | 95 | | Unverified |
| 9 | OpenMath2-Llama3.1-70B | Accuracy | 94.9 | | Unverified |
| 10 | GPT-4 (Teaching-Inspired) | Accuracy | 94.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Text-davinci-002 (175B) (zero-shot-CoT) | Accuracy | 78.7 | | Unverified |
| 2 | Text-davinci-002 (175B) (zero-shot) | Accuracy | 17.7 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Tree of Thoughts (b=5) | Success | 0.74 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GPT-4 (Teaching-Inspired) | Accuracy | 92.2 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GPT-4 (Teaching-Inspired) | Accuracy | 89.2 | | Unverified |