SOTAVerified

Offline RL

Papers

Showing 150 of 755 papers

| Title | Status | Hype |
| --- | --- | --- |
| Differentiable Tree Search Network | Code | 5 |
| A Clean Slate for Offline Reinforcement Learning | Code | 3 |
| Flow Q-Learning | Code | 3 |
| DigiRL: Training In-The-Wild Device-Control Agents with Autonomous Reinforcement Learning | Code | 3 |
| Is Value Learning Really the Main Bottleneck in Offline RL? | Code | 3 |
| Diffusion Guidance Is a Controllable Policy Improvement Operator | Code | 2 |
| What Makes a Good Diffusion Planner for Decision Making? | Code | 2 |
| Offline Reinforcement Learning for LLM Multi-Step Reasoning | Code | 2 |
| Efficient Online Reinforcement Learning Fine-Tuning Need Not Retain Offline Data | Code | 2 |
| Revisiting Generative Policies: A Simpler Reinforcement Learning Algorithmic Perspective | Code | 2 |
| Pretrained LLM Adapted with LoRA as a Decision Transformer for Offline RL in Quantitative Trading | Code | 2 |
| LongReward: Improving Long-context Large Language Models with AI Feedback | Code | 2 |
| Enhancing Sample Efficiency and Exploration in Reinforcement Learning through the Integration of Diffusion Models and Proximal Policy Optimization | Code | 2 |
| Hokoff: Real Game Dataset from Honor of Kings and its Offline Reinforcement Learning Benchmarks | Code | 2 |
| A Simulation Benchmark for Autonomous Racing with Large-Scale Human Data | Code | 2 |
| Any-step Dynamics Model Improves Future Predictions for Online and Offline Reinforcement Learning | Code | 2 |
| Diffusion-based Reinforcement Learning via Q-weighted Variational Policy Optimization | Code | 2 |
| Unsupervised Zero-Shot Reinforcement Learning via Functional Reward Encodings | Code | 2 |
| Deep Generative Models for Offline Policy Learning: Tutorial, Survey, and Perspectives on Future Directions | Code | 2 |
| Safe Offline Reinforcement Learning with Feasibility-Guided Diffusion Model | Code | 2 |
| AlphaStar Unplugged: Large-Scale Offline Reinforcement Learning | Code | 2 |
| FurnitureBench: Reproducible Real-World Benchmark for Long-Horizon Complex Manipulation | Code | 2 |
| Dungeons and Data: A Large-Scale NetHack Dataset | Code | 2 |
| Diffusion Policies as an Expressive Policy Class for Offline Reinforcement Learning | Code | 2 |
| Towards Human-Level Bimanual Dexterous Manipulation with Reinforcement Learning | Code | 2 |
| Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations | Code | 2 |
| Offline RL for Natural Language Generation with Implicit Language Q Learning | Code | 2 |
| CHAI: A CHatbot AI for Task-Oriented Dialogue with Offline Reinforcement Learning | Code | 2 |
| VRL3: A Data-Driven Framework for Visual Deep Reinforcement Learning | Code | 2 |
| Flowformer: Linearizing Transformers with Conservation Flows | Code | 2 |
| Rethinking Attention with Performers | Code | 2 |
| D4RL: Datasets for Deep Data-Driven Reinforcement Learning | Code | 2 |
| Reformer: The Efficient Transformer | Code | 2 |
| ImagineBench: Evaluating Reinforcement Learning with Large Language Model Rollouts | Code | 1 |
| NeoRL-2: Near Real-World Benchmarks for Offline Reinforcement Learning with Extended Realistic Scenarios | Code | 1 |
| GNN-DT: Graph Neural Network Enhanced Decision Transformer for Efficient Optimization in Dynamic Environments | Code | 1 |
| Constraint-Adaptive Policy Switching for Offline Safe Reinforcement Learning | Code | 1 |
| Are Expressive Models Truly Necessary for Offline RL? | Code | 1 |
| In-Dataset Trajectory Return Regularization for Offline Preference-based Reinforcement Learning | Code | 1 |
| Doubly Mild Generalization for Offline Reinforcement Learning | Code | 1 |
| Offline Reinforcement Learning with OOD State Correction and OOD Action Suppression | Code | 1 |
| Steering Your Generalists: Improving Robotic Foundation Models via Value Guidance | Code | 1 |
| Scaling Offline Model-Based RL via Jointly-Optimized World-Action Model Pretraining | Code | 1 |
| DMC-VB: A Benchmark for Representation Learning for Control with Visual Distractors | Code | 1 |
| PlanDQ: Hierarchical Plan Orchestration via D-Conductor and Q-Performer | Code | 1 |
| Strategically Conservative Q-Learning | Code | 1 |
| Diffusion Policies creating a Trust Region for Offline Reinforcement Learning | Code | 1 |
| Reinforcement Learning in Dynamic Treatment Regimes Needs Critical Reexamination | Code | 1 |
| Offline-Boosted Actor-Critic: Adaptively Blending Optimal Historical Behaviors in Deep Off-Policy RL | Code | 1 |
| Q-value Regularized Transformer for Offline Reinforcement Learning | Code | 1 |
Page 1 of 16

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | KFC | Average Reward | 81.8 | — | Unverified |
| 2 | ADMPO | Average Reward | 81 | — | Unverified |
| 3 | Decision Transformer (DT) | Average Reward | 73.5 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ParPI | D4RL Normalized Score | 151.4 | — | Unverified |