SOTAVerified

Efficient Exploration

Efficient exploration is one of the main challenges in scaling up modern deep reinforcement learning algorithms. The core difficulty is balancing exploitation of current value estimates against gathering information about poorly understood states and actions.
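The exploration-exploitation trade-off described above can be illustrated with a minimal epsilon-greedy bandit sketch. This is not a method from any listed paper, just a common baseline: with probability epsilon the agent explores a random arm, otherwise it exploits the arm with the highest current value estimate. The reward values and parameters below are hypothetical.

```python
import random

def epsilon_greedy_action(q_values, epsilon):
    """With probability epsilon, explore a random arm; otherwise exploit the argmax."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# Illustrative 3-armed bandit with fixed (deterministic) arm rewards.
true_rewards = [0.2, 0.5, 0.8]  # hypothetical arm means
q = [0.0, 0.0, 0.0]             # running value estimates
counts = [0, 0, 0]

random.seed(0)
for _ in range(1000):
    a = epsilon_greedy_action(q, epsilon=0.1)
    r = true_rewards[a]              # deterministic reward for simplicity
    counts[a] += 1
    q[a] += (r - q[a]) / counts[a]   # incremental running-mean update

print(max(range(3), key=lambda a: q[a]))  # index of the best arm found
```

With pure exploitation (epsilon = 0) the agent would lock onto the first arm it tries; the occasional random action is what lets the estimates for the better arms catch up, which is exactly the balance the definition refers to.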

Source: Randomized Value Functions via Multiplicative Normalizing Flows

Papers

Showing 191-200 of 514 papers

Title | Status | Hype
Noisy Spiking Actor Network for Exploration | - | 0
ACE: Off-Policy Actor-Critic with Causality-Aware Entropy Regularization | - | 0
Efficient Low-Rank Matrix Estimation, Experimental Design, and Arm-Set-Dependent Low-Rank Bandits | Code | 0
Diffusion Models Meet Contextual Bandits with Large Action Spaces | - | 0
Noise-Adaptive Confidence Sets for Linear Bandits and Application to Bayesian Optimization | Code | 0
TopoNav: Topological Navigation for Efficient Exploration in Sparse Reward Environments | - | 0
Q-Star Meets Scalable Posterior Sampling: Bridging Theory and Practice via HyperAgent | Code | 0
Efficient Exploration for LLMs | - | 0
Scheduled Curiosity-Deep Dyna-Q: Efficient Exploration for Dialog Policy Learning | - | 0
FIT-SLAM -- Fisher Information and Traversability estimation-based Active SLAM for exploration in 3D environments | - | 0
Page 20 of 52

No leaderboard results yet.