
Efficient Exploration

Efficient exploration is one of the main obstacles to scaling up modern deep reinforcement learning algorithms. The central challenge is balancing exploitation of current value estimates against gathering information about poorly understood states and actions.

Source: Randomized Value Functions via Multiplicative Normalizing Flows
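The exploration-exploitation balance described above can be illustrated with the simplest setting, a multi-armed bandit under an epsilon-greedy policy: with probability epsilon the agent tries a random arm, otherwise it picks the arm with the highest estimated reward. This is a minimal illustrative sketch, not any method from the listed papers; the arm means, epsilon, and step count are hypothetical.

```python
import random

def epsilon_greedy_bandit(true_means, steps=5000, epsilon=0.1, seed=0):
    """Run epsilon-greedy on a Gaussian multi-armed bandit.

    With probability epsilon, explore a uniformly random arm;
    otherwise exploit the arm with the highest estimated mean reward.
    Returns per-arm pull counts and running mean-reward estimates.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms        # how often each arm was pulled
    estimates = [0.0] * n_arms   # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)  # explore: random arm
        else:
            # exploit: current best estimate
            arm = max(range(n_arms), key=lambda a: estimates[a])
        reward = rng.gauss(true_means[arm], 1.0)  # noisy reward
        counts[arm] += 1
        # incremental mean update
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return counts, estimates

counts, estimates = epsilon_greedy_bandit([0.2, 0.5, 0.8])
```

With enough steps, the exploration fraction gives every arm accurate estimates, and the greedy choice concentrates pulls on the truly best arm; with epsilon too small the agent can lock onto a suboptimal arm after a few unlucky rewards.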

Papers

Showing 501-514 of 514 papers

Title | Status | Hype
Intrinsically Guided Exploration in Meta Reinforcement Learning | | 0
Intrinsic Rewards for Exploration without Harm from Observational Noise: A Simulation Study Based on the Free Energy Principle | | 0
Is a Good Foundation Necessary for Efficient Reinforcement Learning? The Computational Role of the Base Model in Exploration | | 0
Joint channel estimation and data detection in massive MIMO systems based on diffusion models | | 0
Joint Falsification and Fidelity Settings Optimization for Validation of Safety-Critical Systems: A Theoretical Analysis | | 0
JueWu-MC: Playing Minecraft with Sample-efficient Hierarchical Reinforcement Learning | | 0
KEA: Keeping Exploration Alive by Proactively Coordinating Exploration Strategies | | 0
K-Means Clustering using Tabu Search with Quantized Means | | 0
Language Agents Mirror Human Causal Reasoning Biases. How Can We Help Them Think Like Scientists? | | 0
Large-scale signatures of unconsciousness are consistent with a departure from critical dynamics | | 0
Latent Action Priors for Locomotion with Deep Reinforcement Learning | | 0
Learn2Hop: Learned Optimization on Rough Landscapes | | 0
Learning Action Translator for Meta Reinforcement Learning on Sparse-Reward Tasks | | 0
Learning Causal Overhypotheses through Exploration in Children and Computational Models | | 0
Page 11 of 11

No leaderboard results yet.