| Title | Date | Tags | Code |
| --- | --- | --- | --- |
| CROP: Conservative Reward for Model-based Offline Policy Optimization | Oct 26, 2023 | D4RL, Offline RL | Code Available |
| cosFormer: Rethinking Softmax in Attention | Feb 17, 2022 | D4RL, Language Modeling | Code Available |
| Adaptive Behavior Cloning Regularization for Stable Offline-to-Online Reinforcement Learning | Oct 25, 2022 | D4RL, Offline RL | Code Available |
| Curricular Subgoals for Inverse Reinforcement Learning | Jun 14, 2023 | Autonomous Driving, D4RL | Code Available |
| Critic-Guided Decision Transformer for Offline Reinforcement Learning | Dec 21, 2023 | D4RL, Offline RL | Code Available |
| Aligning Diffusion Behaviors with Q-functions for Efficient Continuous Control | Jul 12, 2024 | Continuous Control | Code Available |
| Adversarially Trained Actor Critic for Offline Reinforcement Learning | Feb 5, 2022 | Continuous Control | Code Available |
| Adaptive Advantage-Guided Policy Regularization for Offline Reinforcement Learning | May 30, 2024 | D4RL, Reinforcement Learning | Code Available |
| Contrastive Energy Prediction for Exact Energy-Guided Diffusion Sampling in Offline Reinforcement Learning | Apr 25, 2023 | D4RL, Image Generation | Code Available |
| Are Expressive Models Truly Necessary for Offline RL? | Dec 15, 2024 | D4RL, Offline RL | Code Available |