
Efficient Exploration

Efficient exploration is one of the central obstacles to scaling up modern deep reinforcement learning algorithms. The core challenge is balancing exploitation of current value estimates against gathering information about poorly understood states and actions.

Source: Randomized Value Functions via Multiplicative Normalizing Flows
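As a minimal illustration of the exploit-versus-explore trade-off described above (a generic sketch, not taken from any paper listed below), an epsilon-greedy agent on a toy multi-armed bandit explores uniformly at random with probability epsilon and otherwise exploits its current value estimates:

```python
import random

def epsilon_greedy(q_values, epsilon):
    """With probability epsilon pick a random action (explore),
    otherwise pick the action with the highest estimate (exploit)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def run_bandit(true_means, steps=5000, epsilon=0.1, seed=0):
    """Toy multi-armed bandit with incremental (running-mean) value estimates."""
    random.seed(seed)
    q = [0.0] * len(true_means)   # current value estimate per arm
    n = [0] * len(true_means)     # visit count per arm
    for _ in range(steps):
        a = epsilon_greedy(q, epsilon)
        reward = random.gauss(true_means[a], 1.0)  # noisy reward
        n[a] += 1
        q[a] += (reward - q[a]) / n[a]  # incremental mean update
    return q, n
```

With well-separated arm means, the agent concentrates most of its pulls on the best arm while the epsilon fraction of random pulls keeps refining the estimates of the others. Methods on this page replace this fixed random dithering with more directed mechanisms such as randomized value functions or intrinsic motivation.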

Papers

Showing 361–370 of 514 papers

Title | Status | Hype
Biased Estimates of Advantages over Path Ensembles | | 0
BooVI: Provably Efficient Bootstrapped Value Iteration | | 0
Braxlines: Fast and Interactive Toolkit for RL-driven Behavior Engineering beyond Reward Maximization | | 0
CAE: Repurposing the Critic as an Explorer in Deep Reinforcement Learning | | 0
Causal Information Prioritization for Efficient Reinforcement Learning | | 0
CBOL-Tuner: Classifier-pruned Bayesian optimization to explore temporally structured latent spaces for particle accelerator tuning | | 0
HelixMO: Sample-Efficient Molecular Optimization in Scene-Sensitive Latent Space | | 0
CIM: Constrained Intrinsic Motivation for Sparse-Reward Continuous Control | | 0
Clustered Reinforcement Learning | | 0
Comprehensive decision-strategy space exploration for efficient territorial planning strategies | | 0
Page 37 of 52

No leaderboard results yet.