
Montezuma's Revenge

Montezuma's Revenge is an Atari 2600 benchmark game that is notoriously difficult for reinforcement learning algorithms because its rewards are sparse. Solutions typically employ algorithms that incentivise environment exploration in different ways.
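Many of the papers below implement such an exploration incentive as an intrinsic reward added to the environment reward. As a minimal illustration, here is a toy numpy sketch in the spirit of Random Network Distillation (one of the listed methods): the bonus is the error of a trained predictor imitating a fixed random network, so it is large for novel observations and decays as a state is revisited. The network shapes and learning rate are illustrative assumptions, not any paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, FEAT_DIM = 8, 4  # illustrative sizes, not from any paper

# Fixed, randomly initialised target network (never trained).
W_target = rng.normal(size=(OBS_DIM, FEAT_DIM))
# Predictor network, trained online to imitate the target.
W_pred = np.zeros((OBS_DIM, FEAT_DIM))

def intrinsic_reward(obs, lr=0.01):
    """Return the predictor's error against the fixed target, then
    take one SGD step so the bonus shrinks with repeated visits."""
    global W_pred
    err = obs @ W_pred - obs @ W_target
    W_pred -= lr * np.outer(obs, err)  # gradient step on squared error
    return float(np.mean(err ** 2))

obs = rng.normal(size=OBS_DIM)   # stand-in for an observation embedding
first = intrinsic_reward(obs)    # novel state: large bonus
later = first
for _ in range(200):             # revisit the same state many times
    later = intrinsic_reward(obs)
assert later < first             # familiar state: bonus has decayed
```

In a full agent this bonus would be added to (or replace, early on) the sparse game reward, which is exactly the mechanism that lets such methods make progress on this game.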

For the state-of-the-art tables, please consult the parent Atari Games task.

(Image credit: Q-map)

Papers

Showing 1-50 of 61 papers

Title | Status | Hype
Action-Dependent Optimality-Preserving Reward Shaping | - | 0
PoE-World: Compositional World Modeling with Products of Programmatic Experts | Code | 1
A Study of Plasticity Loss in On-Policy Deep Reinforcement Learning | Code | 0
Fine-tuning Reinforcement Learning Models is Secretly a Forgetting Mitigation Problem | Code | 0
Int-HRL: Towards Intention-based Hierarchical Reinforcement Learning | - | 0
Flipping Coins to Estimate Pseudocounts for Exploration in Reinforcement Learning | Code | 1
A Study of Global and Episodic Bonuses for Exploration in Contextual MDPs | Code | 1
Sample Efficient Deep Reinforcement Learning via Local Planning | - | 0
Curiosity in Hindsight: Intrinsic Exploration in Stochastic Environments | - | 0
Redeeming Intrinsic Rewards via Constrained Optimization | Code | 1
Hybrid RL: Using Both Offline and Online Data Can Make RL Efficient | Code | 1
Paused Agent Replay Refresh | - | 0
Cell-Free Latent Go-Explore | Code | 1
GAN-based Intrinsic Exploration For Sample Efficient Reinforcement Learning | - | 0
Parametrically Retargetable Decision-Makers Tend To Seek Power | - | 0
Understanding and Preventing Capacity Loss in Reinforcement Learning | - | 0
Open-Ended Reinforcement Learning with Neural Reward Functions | Code | 1
Generative Adversarial Exploration for Reinforcement Learning | - | 0
Exploration by Random Network Distillation | - | 0
Creativity of AI: Hierarchical Planning Model Learning for Facilitating Deep Reinforcement Learning | - | 0
Entropic Desired Dynamics for Intrinsic Control | - | 0
NovelD: A Simple yet Effective Exploration Criterion | Code | 1
On Bonus-Based Exploration Methods in the Arcade Learning Environment | - | 0
Reinforcement Learning with Latent Flow | Code | 1
Learning Abstract Models for Strategic Exploration and Fast Reward Transfer | Code | 0
First return, then explore | Code | 1
Exploring Unknown States with Action Balance | Code | 0
Bridging the Gap: Providing Post-Hoc Symbolic Explanations for Sequential Decision-Making Problems with Inscrutable Representations | - | 0
MIME: Mutual Information Minimisation Exploration | - | 0
Uncertainty-sensitive Learning and Planning with Ensembles | Code | 0
DeepSynth: Automata Synthesis for Automatic Task Segmentation in Deep Reinforcement Learning | Code | 0
Benchmarking Bonus-Based Exploration Methods on the Arcade Learning Environment | - | 0
Memory Based Trajectory-conditioned Policies for Learning from Sparse Rewards | - | 0
Combining Experience Replay with Exploration by Random Network Distillation | Code | 0
Learning and Exploiting Multiple Subgoals for Fast Exploration in Hierarchical Reinforcement Learning | - | 0
Using Natural Language for Reward Shaping in Reinforcement Learning | Code | 0
Go-Explore: a New Approach for Hard-Exploration Problems | Code | 1
Escape Room: A Configurable Testbed for Hierarchical Reinforcement Learning | - | 0
Learning Montezuma's Revenge from a Single Demonstration | - | 0
Contingency-Aware Exploration in Reinforcement Learning | - | 0
Exploration by Random Network Distillation | Code | 1
Learning Representations in Model-Free Hierarchical Reinforcement Learning | - | 0
Empowerment-driven Exploration using Mutual Information Estimation | Code | 0
Scaling All-Goals Updates in Reinforcement Learning Using Convolutional Neural Networks | Code | 0
Deep Curiosity Search: Intra-Life Exploration Can Improve Performance on Challenging Deep Reinforcement Learning Problems | - | 0
Observe and Look Further: Achieving Consistent Performance on Atari | - | 0
Playing hard exploration games by watching YouTube | Code | 1
Hierarchical Imitation and Reinforcement Learning | - | 0

No leaderboard results yet.