SOTAVerified

Reinforcement Learning (RL)

Reinforcement Learning (RL) trains an agent to take actions in an environment so as to maximize a cumulative reward signal. The agent interacts with the environment and learns from feedback in the form of rewards or penalties for its actions. The goal is to find an optimal policy, i.e. a decision-making strategy that maximizes long-term reward.
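The agent-environment loop described above can be sketched with tabular Q-learning. This is a minimal illustration, not code from any listed paper: the 5-state "chain" environment, the hyperparameters, and all function names are hypothetical choices for the example.

```python
import random

# Hypothetical toy environment: 5 states in a chain. Moving right (+1)
# toward the last state eventually yields reward 1.0; moving left (-1)
# yields nothing. An episode ends on reaching state 4.
N_STATES, ACTIONS = 5, (-1, +1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Environment transition: return (next_state, reward, done)."""
    nxt = max(0, min(N_STATES - 1, state + action))
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def train(episodes=500, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the current Q-values, sometimes explore.
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            # Temporal-difference update toward reward + discounted future value.
            target = reward + (0.0 if done else GAMMA * max(q[(nxt, a)] for a in ACTIONS))
            q[(state, action)] += ALPHA * (target - q[(state, action)])
            state = nxt
    return q

q = train()
# The learned greedy policy at each non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

After training, the greedy policy moves right at every state, which is the reward-maximizing behavior in this toy environment.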

Papers

Showing 2521–2530 of 15113 papers

| Title | Status | Hype |
| --- | --- | --- |
| VerifyBench: Benchmarking Reference-based Reward Systems for Large Language Models | | 0 |
| A Temporal Difference Method for Stochastic Continuous Dynamics | Code | 0 |
| Trajectory Bellman Residual Minimization: A Simple Value-Based Method for LLM Reasoning | | 0 |
| Learning-based Autonomous Oversteer Control and Collision Avoidance | | 0 |
| When Can Large Reasoning Models Save Thinking? Mechanistic Analysis of Behavioral Divergence in Reasoning | | 0 |
| Thought-Augmented Policy Optimization: Bridging External Guidance and Internal Capabilities | | 0 |
| STAR-R1: Spacial TrAnsformation Reasoning by Reinforcing Multimodal LLMs | Code | 0 |
| ViaRL: Adaptive Temporal Grounding via Visual Iterated Amplification Reinforcement Learning | | 0 |
| GRIT: Teaching MLLMs to Think with Images | | 0 |
| LLM-Explorer: A Plug-in Reinforcement Learning Policy Exploration Enhancement Driven by Large Language Models | | 0 |
Page 253 of 1512

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PPG | Mean Normalized Performance | 0.76 | | Unverified |
| 2 | PPO | Mean Normalized Performance | 0.58 | | Unverified |