SOTAVerified

Reinforcement Learning (RL)

Reinforcement Learning (RL) involves training an agent to take actions in an environment to maximize a cumulative reward signal. The agent interacts with the environment and learns by receiving feedback in the form of rewards or punishments for its actions. The goal of reinforcement learning is to find the optimal policy or decision-making strategy that maximizes the long-term reward.
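The loop described above — act, observe a reward, update the decision-making strategy — can be illustrated with tabular Q-learning. This is a generic sketch on an invented toy environment (a 5-state corridor where reaching the rightmost state pays reward 1); the environment, hyperparameters, and helper names are not from any paper listed below.

```python
import random

# Toy corridor: states 0..4, actions 0 = left, 1 = right.
# Reaching state 4 yields reward 1 and ends the episode.
N_STATES, N_ACTIONS = 5, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Environment transition: returns (next_state, reward, done)."""
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done

def greedy(qrow):
    """Pick a highest-valued action, breaking ties at random."""
    best = max(qrow)
    return random.choice([a for a, q in enumerate(qrow) if q == best])

random.seed(0)
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

for _ in range(500):  # episodes
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the current estimates, sometimes explore.
        if random.random() < EPSILON:
            action = random.randrange(N_ACTIONS)
        else:
            action = greedy(Q[state])
        next_state, reward, done = step(state, action)
        # Temporal-difference update toward the bootstrapped target.
        target = reward + (0.0 if done else GAMMA * max(Q[next_state]))
        Q[state][action] += ALPHA * (target - Q[state][action])
        state = next_state

# The learned greedy policy moves right from every non-terminal state.
policy = [greedy(Q[s]) for s in range(N_STATES - 1)]
print(policy)  # [1, 1, 1, 1]
```

The discounted values propagate backward from the terminal reward (Q[3][1] → 1, Q[2][1] → 0.9, and so on), which is exactly the "maximize long-term reward" objective stated above.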

Papers

Showing 601–625 of 15113 papers

Title | Status | Hype
HiMAP: Learning Heuristics-Informed Policies for Large-Scale Multi-Agent Pathfinding | Code | 1
Distinctive Image Captioning: Leveraging Ground Truth Captions in CLIP Guided Reinforcement Learning | Code | 1
Reflect-RL: Two-Player Online RL Fine-Tuning for LMs | Code | 1
XRL-Bench: A Benchmark for Evaluating and Comparing Explainable Reinforcement Learning Techniques | Code | 1
Policy Learning for Off-Dynamics RL with Deficient Support | Code | 1
Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment | Code | 1
Hybrid Inverse Reinforcement Learning | Code | 1
Entropy-Regularized Token-Level Policy Optimization for Language Agent Reinforcement | Code | 1
Deceptive Path Planning via Reinforcement Learning with Graph Neural Networks | Code | 1
QGFN: Controllable Greediness with Action Values | Code | 1
Safety Filters for Black-Box Dynamical Systems by Learning Discriminating Hyperplanes | Code | 1
SEABO: A Simple Search-Based Method for Offline Imitation Learning | Code | 1
Entropy-regularized Diffusion Policy with Q-Ensembles for Offline Reinforcement Learning | Code | 1
ODICE: Revealing the Mystery of Distribution Correction Estimation via Orthogonal-gradient Update | Code | 1
M2CURL: Sample-Efficient Multimodal Reinforcement Learning via Self-Supervised Representation Learning for Robotic Manipulation | Code | 1
DittoGym: Learning to Control Soft Shape-Shifting Robots | Code | 1
SEER: Facilitating Structured Reasoning and Explanation via Reinforcement Learning | Code | 1
HAZARD Challenge: Embodied Decision Making in Dynamically Changing Environments | Code | 1
Stable and Safe Human-aligned Reinforcement Learning through Neural Ordinary Differential Equations | Code | 1
Open the Black Box: Step-based Policy Updates for Temporally-Correlated Episodic Reinforcement Learning | Code | 1
Closing the Gap between TD Learning and Supervised Learning -- A Generalisation Point of View | Code | 1
UOEP: User-Oriented Exploration Policy for Enhancing Long-Term User Experiences in Recommender Systems | Code | 1
Bridging State and History Representations: Understanding Self-Predictive RL | Code | 1
Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint | Code | 1
Interpretable Concept Bottlenecks to Align Reinforcement Learning Agents | Code | 1
Page 25 of 605

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | PPG | Mean Normalized Performance | 0.76 | — | Unverified
2 | PPO | Mean Normalized Performance | 0.58 | — | Unverified