SOTAVerified

Reinforcement Learning (RL)

Reinforcement Learning (RL) trains an agent to take actions in an environment so as to maximize a cumulative reward signal. The agent interacts with the environment and learns from feedback in the form of rewards or penalties for its actions. The goal is to find an optimal policy, i.e. a decision-making strategy that maximizes the expected long-term reward.
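The loop described above (act, receive reward, update the decision-making strategy) can be illustrated with tabular Q-learning, one of the simplest RL algorithms. The sketch below is an illustrative toy, not taken from any paper on this page: the chain environment, hyperparameters, and `q_learning` helper are all assumptions chosen for brevity.

```python
import random

def q_learning(n_states=5, episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy chain: the agent starts in state 0,
    action 0 moves left, action 1 moves right, and reaching the last
    state ends the episode with reward +1 (all other steps give 0)."""
    rng = random.Random(seed)
    # Q[s][a] estimates the long-term reward of taking action a in state s
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy: explore with probability epsilon, else exploit
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s_next == n_states - 1 else 0.0
            # Bellman update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
    return Q

Q = q_learning()
# The learned greedy policy should be "move right" in every non-terminal state.
policy = ["right" if q[1] >= q[0] else "left" for q in Q[:-1]]
print(policy)
```

The key design choice is the epsilon-greedy rule: without occasional random actions the agent could lock onto a suboptimal policy before it has ever seen the reward at the end of the chain.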

Papers

Showing 151–200 of 15,113 papers

Title | Status | Hype
Thinking vs. Doing: Agents that Reason by Scaling Test-Time Interaction | Code | 2
Learning to Clarify by Reinforcement Learning Through Reward-Weighted Fine-Tuning | - | 0
Reliable Critics: Monotonic Improvement and Convergence Guarantees for Reinforcement Learning | - | 0
On the Generalization of Data-Assisted Control in port-Hamiltonian Systems (DAC-pH) | - | 0
CARoL: Context-aware Adaptation for Robot Learning | - | 0
Safety-Aware Reinforcement Learning for Control via Risk-Sensitive Action-Value Iteration and Quantile Regression | - | 0
QForce-RL: Quantized FPGA-Optimized Reinforcement Learning Compute Engine | - | 0
Prompting Wireless Networks: Reinforced In-Context Learning for Power Control | - | 0
Gradual Transition from Bellman Optimality Operator to Bellman Operator in Online Reinforcement Learning | Code | 0
Towards Infant Sleep-Optimized Driving: Synergizing Wearable and Vehicle Sensing in Intelligent Cruise Control | - | 0
CodeContests+: High-Quality Test Case Generation for Competitive Programming | - | 0
Confidence Is All You Need: Few-Shot RL Fine-Tuning of Language Models | - | 0
Dissecting Long Reasoning Models: An Empirical Study | Code | 0
Safe Planning and Policy Optimization via World Model Learning | - | 0
Beyond Accuracy: Dissecting Mathematical Reasoning for LLMs Under Reinforcement Learning | - | 0
Improving Data Efficiency for LLM Reinforcement Fine-tuning Through Difficulty-targeted Online Data Selection and Rollout Replay | Code | 1
On the Mechanism of Reasoning Pattern Selection in Reinforcement Learning for Language Models | - | 0
Regret-Optimal Q-Learning with Low Cost for Single-Agent and Federated Reinforcement Learning | - | 0
Latent Guided Sampling for Combinatorial Optimization | Code | 0
Advancing Multimodal Reasoning: From Optimized Cold Start to Staged Reinforcement Learning | - | 0
A Lyapunov Drift-Plus-Penalty Method Tailored for Reinforcement Learning with Queue Stability | - | 0
Learning-at-Criticality in Large Language Models for Quantum Field Theory and Beyond | - | 0
SLAC: Simulation-Pretrained Latent Action Space for Whole-Body Real-World RL | - | 0
CORE: Constraint-Aware One-Step Reinforcement Learning for Simulation-Guided Neural Network Accelerator Design | - | 0
Joint Modeling for Learning Decision-Making Dynamics in Behavioral Experiments | - | 0
Critique-GRPO: Advancing LLM Reasoning with Natural Language and Numerical Feedback | - | 0
Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem | - | 0
Learned Controllers for Agile Quadrotors in Pursuit-Evasion Games | - | 0
Knowledge or Reasoning? A Close Look at How LLMs Think Across Domains | - | 0
SRPO: Enhancing Multimodal LLM Reasoning via Reflection-Aware Reinforcement Learning | - | 0
Incentivizing Reasoning for Advanced Instruction-Following of Large Language Models | Code | 1
KDRL: Post-Training Reasoning LLMs via Unified Knowledge Distillation and Reinforcement Learning | - | 0
Trajectory First: A Curriculum for Discovering Diverse Policies | - | 0
Data-assimilated model-informed reinforcement learning | - | 0
Reasoning-Table: Exploring Reinforcement Learning for Table Reasoning | Code | 2
A Reinforcement Learning Approach for RIS-aided Fair Communications | - | 0
DriveMind: A Dual-VLM based Reinforcement Learning Framework for Autonomous Driving | - | 0
ARIA: Training Language Agents with Intention-Driven Reward Aggregation | - | 0
MMedAgent-RL: Optimizing Multi-Agent Collaboration for Multimodal Medical Reasoning | - | 0
Reinforcement Learning for Hanabi | - | 0
Balancing Profit and Fairness in Risk-Based Pricing Markets | - | 0
MOFGPT: Generative Design of Metal-Organic Frameworks using Language Models | Code | 0
Reason-SVG: Hybrid Reward RL for Aha-Moments in Vector Graphics Generation | - | 0
Pangu DeepDiver: Adaptive Search Intensity Scaling via Open-Web Reinforcement Learning | - | 0
How Much Backtracking is Enough? Exploring the Interplay of SFT and RL in Enhancing LLM Reasoning | - | 0
Mixed-R1: Unified Reward Perspective For Reasoning Capability in Multimodal Large Language Models | Code | 0
AReaL: A Large-Scale Asynchronous Reinforcement Learning System for Language Reasoning | Code | 7
ReasonGen-R1: CoT for Autoregressive Image generation models through SFT and RL | Code | 2
ProRL: Prolonged Reinforcement Learning Expands Reasoning Boundaries in Large Language Models | Code | 5
Towards Effective Code-Integrated Reasoning | Code | 1
Page 4 of 303

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | PPG | Mean Normalized Performance | 0.76 | - | Unverified
2 | PPO | Mean Normalized Performance | 0.58 | - | Unverified