| Title | Date | Tags | Code |
| --- | --- | --- | --- |
| Conservative State Value Estimation for Offline Reinforcement Learning | Feb 14, 2023 | D4RL, reinforcement-learning | Code Available |
| Learning to Trust Bellman Updates: Selective State-Adaptive Regularization for Offline RL | May 26, 2025 | D4RL, Offline RL | Code Available |
| Double Check Your State Before Trusting It: Confidence-Aware Bidirectional Offline Model-Based Imagination | Jun 16, 2022 | D4RL, Offline RL | Code Available |
| Grid-Mapping Pseudo-Count Constraint for Offline Reinforcement Learning | Apr 3, 2024 | D4RL, reinforcement-learning | Code Available |
| Learning from Sparse Offline Datasets via Conservative Density Estimation | Jan 16, 2024 | D4RL, Density Estimation | Code Available |
| A Pragmatic Look at Deep Imitation Learning | Aug 4, 2021 | Behavioural cloning, D4RL | Code Available |
| Conservative Bayesian Model-Based Value Expansion for Offline Policy Optimization | Oct 7, 2022 | Continuous Control | Code Available |
| A2PO: Towards Effective Offline Reinforcement Learning from an Advantage-aware Perspective | Mar 12, 2024 | D4RL, reinforcement-learning | Code Available |
| Learning on One Mode: Addressing Multi-Modality in Offline Reinforcement Learning | Dec 4, 2024 | D4RL, Imitation Learning | Code Available |
| Mildly Constrained Evaluation Policy for Offline Reinforcement Learning | Jun 6, 2023 | D4RL, MuJoCo | Code Available |