| Title | Date | Tags | Code |
| --- | --- | --- | --- |
| CROP: Conservative Reward for Model-based Offline Policy Optimization | Oct 26, 2023 | D4RL, Offline RL | Code Available |
| Beyond Uniform Sampling: Offline Reinforcement Learning with Imbalanced Datasets | Oct 6, 2023 | D4RL, Decision Making | Code Available |
| Adaptive Behavior Cloning Regularization for Stable Offline-to-Online Reinforcement Learning | Oct 25, 2022 | D4RL, Offline RL | Code Available |
| Curricular Subgoals for Inverse Reinforcement Learning | Jun 14, 2023 | Autonomous Driving, D4RL | Code Available |
| Conservative Offline Distributional Reinforcement Learning | Jul 12, 2021 | D4RL, Distributional Reinforcement Learning | Code Available |
| Aligning Diffusion Behaviors with Q-functions for Efficient Continuous Control | Jul 12, 2024 | Continuous Control | Code Available |
| Adversarially Trained Actor Critic for Offline Reinforcement Learning | Feb 5, 2022 | Continuous Control | Code Available |
| Adaptive Advantage-Guided Policy Regularization for Offline Reinforcement Learning | May 30, 2024 | D4RL, Reinforcement Learning | Code Available |
| Behavior Proximal Policy Optimization | Feb 22, 2023 | D4RL, Offline RL | Code Available |
| Are Expressive Models Truly Necessary for Offline RL? | Dec 15, 2024 | D4RL, Offline RL | Code Available |