SOTAVerified

Montezuma's Revenge

Montezuma's Revenge is an Atari 2600 benchmark game known to be difficult for reinforcement learning algorithms because of its sparse rewards and long horizons. Solutions typically employ algorithms that incentivise environment exploration in different ways.
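To illustrate the exploration-incentive idea, here is a minimal toy sketch loosely modelled on Random Network Distillation (one of the listed approaches): a frozen random "target" network and a trained "predictor" network; the predictor's error on a state serves as an intrinsic novelty bonus. All dimensions, the linear networks, and the update rule are illustrative assumptions, not the setup of any particular paper.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, FEAT_DIM, LR = 8, 16, 0.05  # toy sizes, chosen for the demo

# Fixed, randomly initialised target network (never trained).
W_target = rng.normal(size=(STATE_DIM, FEAT_DIM))
# Predictor network, trained to imitate the target.
W_pred = np.zeros((STATE_DIM, FEAT_DIM))

def intrinsic_reward(state):
    """Prediction error vs. the frozen target: high for rarely
    seen states, shrinking as the predictor learns them."""
    err = state @ W_pred - state @ W_target
    return float(np.mean(err ** 2))

def update_predictor(state):
    """One gradient step on the mean squared prediction error."""
    global W_pred
    err = state @ W_pred - state @ W_target        # shape (FEAT_DIM,)
    grad = np.outer(state, err) * 2.0 / FEAT_DIM   # d(MSE)/d(W_pred)
    W_pred -= LR * grad

# A state visited repeatedly becomes "boring": its bonus decays,
# so the agent is pushed toward states it has not yet modelled.
s = rng.normal(size=STATE_DIM)
first_bonus = intrinsic_reward(s)
for _ in range(200):
    update_predictor(s)
later_bonus = intrinsic_reward(s)
print(first_bonus > later_bonus)  # novelty bonus shrank with familiarity
```

In practice the bonus is added to the environment reward when training the policy; count-based and curiosity-driven methods in the table below follow the same pattern with different novelty estimators.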

For the state-of-the-art tables, please consult the parent Atari Games task.

(Image credit: Q-map)

Papers

Showing 1–50 of 61 papers

Title | Status | Hype
Rainbow: Combining Improvements in Deep Reinforcement Learning | Code | 3
A Study of Global and Episodic Bonuses for Exploration in Contextual MDPs | Code | 1
PoE-World: Compositional World Modeling with Products of Programmatic Experts | Code | 1
Cell-Free Latent Go-Explore | Code | 1
Exploration by Random Network Distillation | Code | 1
First return, then explore | Code | 1
Flipping Coins to Estimate Pseudocounts for Exploration in Reinforcement Learning | Code | 1
Go-Explore: a New Approach for Hard-Exploration Problems | Code | 1
Hybrid RL: Using Both Offline and Online Data Can Make RL Efficient | Code | 1
NovelD: A Simple yet Effective Exploration Criterion | Code | 1
Open-Ended Reinforcement Learning with Neural Reward Functions | Code | 1
Playing hard exploration games by watching YouTube | Code | 1
Redeeming Intrinsic Rewards via Constrained Optimization | Code | 1
Reinforcement Learning with Latent Flow | Code | 1
Deep Curiosity Search: Intra-Life Exploration Can Improve Performance on Challenging Deep Reinforcement Learning Problems | | 0
Learning High-level Representations from Demonstrations | | 0
Learning Montezuma's Revenge from a Single Demonstration | | 0
Learning Representations in Model-Free Hierarchical Reinforcement Learning | | 0
Micro-Objective Learning: Accelerating Deep Reinforcement Learning through the Discovery of Continuous Subgoals | | 0
MIME: Mutual Information Minimisation Exploration | | 0
Observe and Look Further: Achieving Consistent Performance on Atari | | 0
On Bonus Based Exploration Methods In The Arcade Learning Environment | | 0
On Bonus-Based Exploration Methods in the Arcade Learning Environment | | 0
Parametrically Retargetable Decision-Makers Tend To Seek Power | | 0
Paused Agent Replay Refresh | | 0
Benchmarking Bonus-Based Exploration Methods on the Arcade Learning Environment | | 0
Bridging the Gap: Providing Post-Hoc Symbolic Explanations for Sequential Decision-Making Problems with Inscrutable Representations | | 0
Understanding and Preventing Capacity Loss in Reinforcement Learning | | 0
Contingency-Aware Exploration in Reinforcement Learning | | 0
Creativity of AI: Hierarchical Planning Model Learning for Facilitating Deep Reinforcement Learning | | 0
Curiosity in Hindsight: Intrinsic Exploration in Stochastic Environments | | 0
Deep Abstract Q-Networks | | 0
Memory Based Trajectory-conditioned Policies for Learning from Sparse Rewards | | 0
Entropic Desired Dynamics for Intrinsic Control | | 0
Escape Room: A Configurable Testbed for Hierarchical Reinforcement Learning | | 0
Exploration by Random Network Distillation | | 0
Exploration in Feature Space for Reinforcement Learning | | 0
Action-Dependent Optimality-Preserving Reward Shaping | | 0
GAN-based Intrinsic Exploration For Sample Efficient Reinforcement Learning | | 0
Generative Adversarial Exploration for Reinforcement Learning | | 0
Hierarchical Imitation and Reinforcement Learning | | 0
Sample Efficient Deep Reinforcement Learning via Local Planning | | 0
Int-HRL: Towards Intention-based Hierarchical Reinforcement Learning | | 0
Learning and Exploiting Multiple Subgoals for Fast Exploration in Hierarchical Reinforcement Learning | | 0
DeepSynth: Automata Synthesis for Automatic Task Segmentation in Deep Reinforcement Learning | Code | 0
Count-Based Exploration with Neural Density Models | Code | 0
Empowerment-driven Exploration using Mutual Information Estimation | Code | 0
Combining Experience Replay with Exploration by Random Network Distillation | Code | 0
Uncertainty-sensitive learning and planning with ensembles | Code | 0
Scaling All-Goals Updates in Reinforcement Learning Using Convolutional Neural Networks | Code | 0
Page 1 of 2

No leaderboard results yet.