SOTAVerified

Offline RL

Papers

Showing 191-200 of 755 papers

Title | Status | Hype
Offline-Boosted Actor-Critic: Adaptively Blending Optimal Historical Behaviors in Deep Off-Policy RL | Code | 1
AlignIQL: Policy Alignment in Implicit Q-Learning through Constrained Optimization | Code | 0
Unified Preference Optimization: Language Model Alignment Beyond the Preference Frontier | - | 0
OPERA: Automatic Offline Policy Evaluation with Re-weighted Aggregates of Multiple Estimators | - | 0
Trajectory Data Suffices for Statistically Efficient Learning in Offline RL with Linear q^π-Realizability and Concentrability | - | 0
Any-step Dynamics Model Improves Future Predictions for Online and Offline Reinforcement Learning | Code | 2
Q-value Regularized Transformer for Offline Reinforcement Learning | Code | 1
GTA: Generative Trajectory Augmentation with Guidance for Offline Reinforcement Learning | Code | 1
Diffusion-based Reinforcement Learning via Q-weighted Variational Policy Optimization | Code | 2
Generating Code World Models with Large Language Models Guided by Monte Carlo Tree Search | Code | 1
Page 20 of 76

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | KFC | Average Reward | 81.8 | - | Unverified
2 | ADMPO | Average Reward | 81 | - | Unverified
3 | Decision Transformer (DT) | Average Reward | 73.5 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ParPI | D4RL Normalized Score | 151.4 | - | Unverified