SOTAVerified

Efficient Exploration

Efficient exploration is one of the main obstacles to scaling up modern deep reinforcement learning algorithms. The core challenge is balancing exploitation of current value estimates against gathering information about poorly understood states and actions.

Source: Randomized Value Functions via Multiplicative Normalizing Flows
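The exploit/explore tradeoff described above can be illustrated with a minimal ε-greedy multi-armed bandit, a standard baseline rather than the method from the source paper; the function and parameter names below are illustrative:

```python
import random

def epsilon_greedy_bandit(true_means, epsilon=0.1, steps=1000, seed=0):
    """Run an epsilon-greedy agent on a Gaussian-reward bandit.

    With probability epsilon the agent explores (random arm);
    otherwise it exploits the arm with the highest estimated mean.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms       # number of pulls per arm
    estimates = [0.0] * n_arms  # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)  # explore: uniform random arm
        else:
            # exploit: arm with the best current estimate
            arm = max(range(n_arms), key=lambda a: estimates[a])
        # noisy reward drawn around the arm's true mean
        reward = true_means[arm] + rng.gauss(0.0, 1.0)
        counts[arm] += 1
        # incremental update of the running mean
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates, counts

estimates, counts = epsilon_greedy_bandit([0.1, 0.5, 0.9])
```

Pure exploitation (epsilon = 0) can lock onto a suboptimal arm forever; the small exploration rate keeps refining the estimates of every arm. Methods such as randomized value functions replace this fixed-rate random exploration with uncertainty-driven exploration.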

Papers

Showing 421–430 of 514 papers

Title | Hype
A Straightforward Gradient-Based Approach for High-Tc Superconductor Design: Leveraging Domain Knowledge via Adaptive Constraints | 0
Efficient Exploration of Image Classifier Failures with Bayesian Optimization and Text-to-Image Models | 0
Efficient Exploration of Reward Functions in Inverse Reinforcement Learning via Bayesian Optimization | 0
Efficient exploration of zero-sum stochastic games | 0
Efficient Exploration through Intrinsic Motivation Learning for Unsupervised Subgoal Discovery in Model-Free Hierarchical Reinforcement Learning | 0
Efficient Exploration Using Extra Safety Budget in Constrained Policy Optimization | 0
Efficient Exploration using Model-Based Quality-Diversity with Gradients | 0
Efficient Exploration via Epistemic-Risk-Seeking Policy Optimization | 0
Efficient exploration with Double Uncertain Value Networks | 0
Memory Based Trajectory-conditioned Policies for Learning from Sparse Rewards | 0
Page 43 of 52

No leaderboard results yet.