
Efficient Exploration

Efficient exploration is one of the main obstacles to scaling up modern deep reinforcement learning algorithms. The central challenge is balancing exploitation of current value estimates against gaining information about poorly understood states and actions.

Source: Randomized Value Functions via Multiplicative Normalizing Flows
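The exploration–exploitation balance described above can be illustrated with epsilon-greedy action selection, one of the simplest exploration strategies (this is a generic sketch, not the method of the cited paper): with probability epsilon the agent tries a random action to gather information, and otherwise it exploits its current value estimates.

```python
import random

def epsilon_greedy(q_values, epsilon=0.1):
    """Epsilon-greedy action selection over a list of estimated action values.

    With probability epsilon, explore: pick a uniformly random action.
    Otherwise, exploit: pick the action with the highest current estimate.
    """
    if random.random() < epsilon:
        return random.randrange(len(q_values))  # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit
```

Setting `epsilon=0` recovers pure exploitation; `epsilon=1` is pure random exploration. Much of the work listed below replaces this undirected randomness with more targeted uncertainty estimates.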

Papers

Showing 291–300 of 514 papers

Title | Status | Hype
SFP: State-free Priors for Exploration in Off-Policy Reinforcement Learning | — | 0
Feature and Instance Joint Selection: A Reinforcement Learning Perspective | — | 0
Fire Burns, Sword Cuts: Commonsense Inductive Bias for Exploration in Text-based Games | Code | 0
On Machine Learning-Driven Surrogates for Sound Transmission Loss Simulations | Code | 0
A Variational Approach to Bayesian Phylogenetic Inference | Code | 0
Efficient Exploration via First-Person Behavior Cloning Assisted Rapidly-Exploring Random Trees | — | 0
TANDEM: Learning Joint Exploration and Decision Making with Tactile Sensors | — | 0
Collaborative Training of Heterogeneous Reinforcement Learning Agents in Environments with Sparse Rewards: What and When to Share? | Code | 0
Learning Causal Overhypotheses through Exploration in Children and Computational Models | — | 0
A Unified Perspective on Value Backup and Exploration in Monte-Carlo Tree Search | — | 0

No leaderboard results yet.