SOTAVerified

Distributional Reinforcement Learning

The value distribution is the distribution of the random return received by a reinforcement learning agent. It has been used for specific purposes such as implementing risk-aware behaviour.

We have a random return Z whose expectation is the value Q. This random return also obeys a recursive equation, but one of a distributional nature: Z(x, a) = R(x, a) + γZ(X′, A′), where the equality holds in distribution.
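The distributional Bellman update can be sketched with a categorical (C51-style) representation, where the return distribution is a probability vector over fixed support atoms. The function and variable names below are illustrative, not from any particular library; this is a minimal sketch of the projection step, assuming a scalar reward and a single next-state distribution.

```python
import numpy as np

def project_distribution(probs, reward, gamma, atoms):
    """Project the shifted and scaled target distribution r + gamma * Z
    back onto the fixed support `atoms` by splitting each atom's
    probability mass between its two nearest neighbours."""
    n = len(atoms)
    v_min, v_max = atoms[0], atoms[-1]
    delta = atoms[1] - atoms[0]
    target = np.zeros(n)
    for p, z in zip(probs, atoms):
        tz = np.clip(reward + gamma * z, v_min, v_max)  # apply Bellman map, clip to support
        b = (tz - v_min) / delta                        # fractional index on the support
        lo, hi = int(np.floor(b)), int(np.ceil(b))
        if lo == hi:                                    # lands exactly on an atom
            target[lo] += p
        else:                                           # split mass between neighbours
            target[lo] += p * (hi - b)
            target[hi] += p * (b - lo)
    return target

atoms = np.linspace(-10.0, 10.0, 51)   # 51 evenly spaced atoms, as in C51
probs = np.full(51, 1.0 / 51)          # uniform next-state return distribution
target = project_distribution(probs, reward=1.0, gamma=0.99, atoms=atoms)
print(round(target.sum(), 6))          # projection preserves probability mass -> 1.0
```

Because the projected target lives on the same support as the predicted distribution, a learned network can be trained against it with a cross-entropy loss, which is how categorical distributional agents are typically fit in practice.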

Papers

Showing 26-50 of 137 papers

Title | Status | Hype
A Cramér Distance perspective on Quantile Regression based Distributional Reinforcement Learning | Code | 0
Information-Directed Exploration for Deep Reinforcement Learning | Code | 0
CTD4 -- A Deep Continuous Distributional Actor-Critic Agent with a Kalman Fusion of Multiple Critics | Code | 0
Tackling Uncertainties in Multi-Agent Reinforcement Learning through Integration of Agent Termination Dynamics | Code | 0
EX-DRL: Hedging Against Heavy Losses with EXtreme Distributional Reinforcement Learning | Code | 0
Distributional Model Equivalence for Risk-Sensitive Reinforcement Learning | Code | 0
Exploring the Training Robustness of Distributional Reinforcement Learning against Noisy State Observations | Code | 0
Distributional constrained reinforcement learning for supply chain optimization | Code | 0
Distributional Bellman Operators over Mean Embeddings | Code | 0
Echoes of Socratic Doubt: Embracing Uncertainty in Calibrated Evidential Reinforcement Learning | Code | 0
Distributional Off-policy Evaluation with Bellman Residual Minimization | Code | 0
Estimating Risk and Uncertainty in Deep Reinforcement Learning | Code | 0
Estimation and Inference in Distributional Reinforcement Learning | Code | 0
A Robust Quantile Huber Loss With Interpretable Parameter Adjustment In Distributional Reinforcement Learning | Code | 0
Distributional Reinforcement Learning for Energy-Based Sequential Models | Code | 0
IGN : Implicit Generative Networks | Code | 0
Fully Parameterized Quantile Function for Distributional Reinforcement Learning | Code | 0
The Benefits of Being Distributional: Small-Loss Bounds for Reinforcement Learning | Code | 0
Deep Distributional Learning with Non-crossing Quantile Network | | 0
CTRLS: Chain-of-Thought Reasoning via Latent State-Transition | | 0
A Point-Based Algorithm for Distributional Reinforcement Learning in Partially Observable Domains | | 0
Cramer Type Distances for Learning Gaussian Mixture Models by Gradient Descent | | 0
An introduction to reinforcement learning for neuroscience | | 0
Controlling Synthetic Characters in Simulations: A Case for Cognitive Architectures and Sigma | | 0
Distributional Reinforcement Learning with Ensembles | | 0
Page 2 of 6

No leaderboard results yet.