SOTAVerified

Reinforcement Learning (RL)

Reinforcement Learning (RL) trains an agent to take actions in an environment so as to maximize a cumulative reward signal. The agent interacts with the environment and learns from feedback in the form of rewards or penalties for its actions. The goal is to find an optimal policy, i.e., a decision-making strategy that maximizes long-term reward.
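The agent-environment loop above can be sketched with tabular Q-learning on a toy chain environment. This is an illustrative example, not from any paper listed below: states, actions, rewards, and hyperparameters are all assumptions chosen for clarity.

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.1, gamma=0.9,
                     epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy chain MDP (an illustrative assumption):
    the agent starts at state 0, actions are 0 (left) and 1 (right),
    and a reward of +1 is received on reaching the rightmost state."""
    rng = random.Random(seed)
    # Q-table: one (left, right) value pair per state
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection: explore with prob. epsilon
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s_next == n_states - 1 else 0.0
            # Q-learning update toward the bootstrapped Bellman target
            q[s][a] += alpha * (r + gamma * max(q[s_next]) - q[s][a])
            s = s_next
    return q

q = q_learning_chain()
# Greedy policy per non-terminal state: 1 means "move right" toward the goal
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(4)]
```

After training, the greedy policy moves right in every state, since only the rightmost state yields reward and the discount factor makes shorter paths more valuable.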

Papers

Showing 421–430 of 15,113 papers

Title | Status | Hype
Reinforcement Learning for AMR Charging Decisions: The Impact of Reward and Action Space Design | — | 0
Is PRM Necessary? Problem-Solving RL Implicitly Induces PRM Capability in LLMs | — | 0
Sample Efficient Reinforcement Learning via Large Vision Language Model Distillation | Code | 1
Bi-directional Recurrence Improves Transformer in Partially Observable Markov Decision Processes | — | 0
ShiQ: Bringing back Bellman to LLMs | — | 0
DexGarmentLab: Dexterous Garment Manipulation Environment with Generalizable Policy | Code | 2
Group-in-Group Policy Optimization for LLM Agent Training | Code | 5
Certifying Stability of Reinforcement Learning Policies using Generalized Lyapunov Functions | — | 0
Improving the Data-efficiency of Reinforcement Learning by Warm-starting with LLM | Code | 0
Learning When to Think: Shaping Adaptive Reasoning in R1-Style Models via Multi-Stage RL | Code | 0
Page 43 of 1512

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | PPG | Mean Normalized Performance | 0.76 | — | Unverified
2 | PPO | Mean Normalized Performance | 0.58 | — | Unverified