SOTAVerified

GSM8K

Papers

Showing 201–250 of 439 papers

Title | Status | Hype
S-GRPO: Early Exit via Reinforcement Learning in Reasoning Models | | 0
Elastic Weight Consolidation for Full-Parameter Continual Pre-Training of Gemma2 | | 0
Memory-Efficient LLM Training by Various-Grained Low-Rank Projection of Gradients | | 0
TutorGym: A Testbed for Evaluating AI Agents as Tutors and Students | Code | 0
Efficient Fine-Tuning of Quantized Models via Adaptive Rank and Bitwidth | | 0
Local Prompt Optimization | | 0
Trace-of-Thought Prompting: Investigating Prompt-Based Knowledge Distillation Through Question Decomposition | | 0
AutoJudge: Judge Decoding Without Manual Annotation | | 0
Training Large Language Models to Reason via EM Policy Gradient | | 0
Not All Rollouts are Useful: Down-Sampling Rollouts in LLM Reinforcement Learning | | 0
Entropy-Guided Watermarking for LLMs: A Test-Time Framework for Robust and Traceable Text Generation | | 0
Question Tokens Deserve More Attention: Enhancing Large Language Models without Training through Step-by-Step Reading and Question Attention Recalibration | | 0
Supervised Optimism Correction: Be Confident When LLMs Are Sure | | 0
Synthetic Data Generation & Multi-Step RL for Reasoning & Tool Use | | 0
Sustainable LLM Inference for Edge AI: Evaluating Quantized LLMs for Energy Efficiency, Output Accuracy, and Inference Latency | | 0
Sample, Don't Search: Rethinking Test-Time Alignment for Language Models | | 0
Reasoning Under 1 Billion: Memory-Augmented Reinforcement Learning for Large Language Models | Code | 0
Adaptive Rectification Sampling for Test-Time Compute Scaling | Code | 0
Exploring LLM Reasoning Through Controlled Prompt Variations | Code | 0
D^2LoRA: Data-Driven LoRA Initialization for Low Resource Tasks | | 0
Lost in Cultural Translation: Do LLMs Struggle with Math Across Cultural Contexts? | Code | 0
Tapered Off-Policy REINFORCE: Stable and Efficient Reinforcement Learning for LLMs | | 0
Improving Complex Reasoning with Dynamic Prompt Corruption: A Soft Prompt Optimization Approach | | 0
Rule-Guided Feedback: Enhancing Reasoning by Enforcing Rule Adherence in Large Language Models | | 0
Position-Aware Depth Decay Decoding (D^3): Boosting Large Language Model Inference Efficiency | | 0
SOLAR: Scalable Optimization of Large-scale Architecture for Reasoning | | 0
DeLTa: A Decoding Strategy based on Logit Trajectory Prediction Improves Factuality and Reasoning Ability | Code | 0
Self-Evolved Preference Optimization for Enhancing Mathematical Reasoning in Small Language Models | | 0
CODI: Compressing Chain-of-Thought into Continuous Space via Self-Distillation | Code | 0
Layer-Aware Task Arithmetic: Disentangling Task-Specific and Instruction-Following Knowledge | | 0
Weaker LLMs' Opinions Also Matter: Mixture of Opinions Enhances LLM's Mathematical Reasoning | | 0
Distill Not Only Data but Also Rewards: Can Smaller Language Models Surpass Larger Ones? | | 0
SECURA: Sigmoid-Enhanced CUR Decomposition with Uninterrupted Retention and Low-Rank Adaptation in Large Language Models | | 0
LED-Merging: Mitigating Safety-Utility Conflicts in Model Merging with Location-Election-Disjoint | | 0
Dynamic Parallel Tree Search for Efficient LLM Reasoning | | 0
Earlier Tokens Contribute More: Learning Direct Preference Optimization From Temporal Decay Perspective | Code | 0
NLoRA: Nyström-Initiated Low-Rank Adaptation for Large Language Models | Code | 0
From Correctness to Comprehension: AI Agents for Personalized Error Diagnosis in Education | | 0
TreeCut: A Synthetic Unanswerable Math Word Problem Dataset for LLM Hallucination Evaluation | Code | 0
Integrating Arithmetic Learning Improves Mathematical Reasoning in Smaller Models | | 0
MathFimer: Enhancing Mathematical Reasoning by Expanding Reasoning Steps through Fill-in-the-Middle Task | | 0
Leveraging Uncertainty Estimation for Efficient LLM Routing | | 0
Balancing the Budget: Understanding Trade-offs Between Supervised and Preference-Based Finetuning | | 0
Uncertainty-Aware Search and Value Models: Mitigating Search Scaling Flaws in LLMs | | 0
Don't Get Lost in the Trees: Streamlining LLM Reasoning by Overcoming Tree Search Exploration Pitfalls | Code | 0
Hybrid Offline-Online Scheduling Method for Large Language Model Inference Optimization | | 0
Cost-Saving LLM Cascades with Early Abstention | | 0
Mathematical Reasoning in Large Language Models: Assessing Logical and Arithmetic Errors across Wide Numerical Ranges | Code | 0
Self-Training Large Language Models for Tool-Use Without Demonstrations | | 0
Evolving LLMs' Self-Refinement Capability via Iterative Preference Optimization | | 0
Page 5 of 9

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Xolver | Accuracy | 98.1 | | Unverified
2 | Orange-mini | 0-shot MRR | 98 | | Unverified