SOTAVerified

Arithmetic Reasoning

Papers

Showing 51–100 of 175 papers

Title | Status | Hype
WorkArena++: Towards Compositional Planning and Reasoning-based Common Knowledge Work Tasks | Code | 3
Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs | Code | 3
Improving Arithmetic Reasoning Ability of Large Language Models through Relation Tuples, Verification and Dynamic Feedback | Code | 0
DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving | Code | 2
Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models Aligned with Human Cognitive Principles | Code | 1
Breaking the Ceiling of the LLM Community by Treating Token Generation as a Classification for Ensembling | Code | 2
An Investigation of Neuron Activation as a Unified Lens to Explain Chain-of-Thought Eliciting Arithmetic Reasoning of LLMs | Code | 1
Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs | Code | 2
Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models | Code | 0
QuanTA: Efficient High-Rank Fine-Tuning of LLMs with Quantum-Informed Tensor Adaptation | Code | 1
Arithmetic Reasoning with LLM: Prolog Generation & Permutation | – | 0
Large Language Models Can Self-Correct with Key Condition Verification | – | 0
Skin-in-the-Game: Decision Making via Multi-Stakeholder Alignment in LLMs | – | 0
Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment | – | 0
Achieving >97% on GSM8K: Deeply Understanding the Problems Makes LLMs Better Solvers for Math Word Problems | Code | 1
Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing | Code | 1
Bridging the Gap between Different Vocabularies for LLM Ensemble | Code | 1
ReFT: Representation Finetuning for Language Models | Code | 5
Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM | Code | 0
The Claude 3 Model Family: Opus, Sonnet, Haiku | – | 0
An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning | Code | 2
Distillation Contrastive Decoding: Improving LLMs Reasoning with Contrastive Decoding and Distillation | Code | 1
SymBa: Symbolic Backward Chaining for Structured Natural Language Reasoning | – | 0
Evaluating LLMs' Mathematical Reasoning in Financial Document Question Answering | – | 0
Orca-Math: Unlocking the potential of SLMs in Grade School Math | – | 0
OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset | Code | 4
The Unreasonable Effectiveness of Eccentric Automatic Prompts | – | 0
Exploring Group and Symmetry Principles in Large Language Models | – | 0
DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models | Code | 9
Evaluating Gender Bias in Large Language Models via Chain-of-Thought Prompting | – | 0
Evaluating LLMs' Mathematical and Coding Competency through Ontology-guided Interventions | Code | 1
Large Language Models are Null-Shot Learners | – | 0
Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks | Code | 2
LLM Augmented LLMs: Expanding Capabilities through Composition | Code | 0
Turning Dust into Gold: Distilling Complex Reasoning Capabilities from LLMs by Leveraging Negative Data | Code | 1
Gemini: A Family of Highly Capable Multimodal Models | Code | 1
TinyGSM: achieving >80% on GSM8k with small language models | – | 0
Fewer is More: Boosting LLM Reasoning with Reinforced Context Pruning | – | 0
Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations | Code | 1
Frugal LMs Trained to Invoke Symbolic Solvers Achieve Parameter-Efficient Arithmetic Reasoning | Code | 0
Prompt Optimization via Adversarial In-Context Learning | Code | 1
ChatGPT as a Math Questioner? Evaluating ChatGPT on Generating Pre-university Math Questions | Code | 0
Generative Parameter-Efficient Fine-Tuning | Code | 1
Orca 2: Teaching Small Language Models How to Reason | – | 0
OVM, Outcome-supervised Value Models for Planning in Mathematical Reasoning | Code | 1
Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs | Code | 1
The ART of LLM Refinement: Ask, Refine, and Trust | – | 0
Prompt Sketching for Large Language Models | – | 0
Llemma: An Open Language Model For Mathematics | Code | 3
Empirical Study of Zero-Shot NER with ChatGPT | Code | 1
Page 2 of 4

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Claude 3.5 Sonnet (HPT) | Accuracy | 97.72 | – | Unverified
2 | DUP prompt upon GPT-4 | Accuracy | 97.1 | – | Unverified
3 | Qwen2-Math-72B-Instruct (greedy) | Accuracy | 96.7 | – | Unverified
4 | SFT-Mistral-7B (MetaMath, OVM, Smart Ensemble) | Accuracy | 96.4 | – | Unverified
5 | OpenMath2-Llama3.1-70B (majority@256) | Accuracy | 96 | – | Unverified
6 | Jiutian-大模型 | Accuracy | 95.2 | – | Unverified
7 | DAMOMath-7B (MetaMath, OVM, BS, Ensemble) | Accuracy | 95.1 | – | Unverified
8 | Claude 3 Opus (0-shot chain-of-thought) | Accuracy | 95 | – | Unverified
9 | OpenMath2-Llama3.1-70B | Accuracy | 94.9 | – | Unverified
10 | GPT-4 (Teaching-Inspired) | Accuracy | 94.8 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Text-davinci-002 (175B) (zero-shot-cot) | Accuracy | 78.7 | – | Unverified
2 | Text-davinci-002 (175B) (zero-shot) | Accuracy | 17.7 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Tree of Thoughts (b=5) | Success | 0.74 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GPT-4 (Teaching-Inspired) | Accuracy | 92.2 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GPT-4 (Teaching-Inspired) | Accuracy | 89.2 | – | Unverified