SOTAVerified

Atari Games

The Atari 2600 Games task (and dataset) involves training an agent to achieve high scores across a suite of Atari 2600 games.

(Image credit: Playing Atari with Deep Reinforcement Learning)
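Raw game scores like the ones in the tables below are not directly comparable across games, since each game has its own score scale. Leaderboards in the Atari literature therefore often report the human-normalized score, where 0 corresponds to random play and 1 to the human reference score. A minimal sketch of that normalization (the scores used here are hypothetical, not taken from the tables below):

```python
def human_normalized_score(agent_score: float,
                           random_score: float,
                           human_score: float) -> float:
    """Human-normalized score: 0.0 = random play, 1.0 = human-level play.

    Computed as (agent - random) / (human - random).
    """
    return (agent_score - random_score) / (human_score - random_score)


# Hypothetical per-game reference values, for illustration only.
print(human_normalized_score(agent_score=800.9,
                             random_score=50.0,
                             human_score=1000.0))
```

Scores above 1.0 indicate superhuman play on that game; leaderboard aggregates are then typically the mean or median of these normalized scores over the full game suite.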

Papers

Showing 151–200 of 625 papers

| Title | Status | Hype |
| --- | --- | --- |
| Efficient Deep Reinforcement Learning with Predictive Processing Proximal Policy Optimization | Code | 0 |
| Is Mamba Compatible with Trajectory Optimization in Offline Reinforcement Learning? | Code | 0 |
| Echoes of Socratic Doubt: Embracing Uncertainty in Calibrated Evidential Reinforcement Learning | Code | 0 |
| Can Differentiable Decision Trees Enable Interpretable Reward Learning from Human Feedback? | Code | 0 |
| Increasing the Action Gap: New Operators for Reinforcement Learning | Code | 0 |
| Improving Experience Replay through Modeling of Similar Transitions' Sets | Code | 0 |
| Information-Directed Exploration for Deep Reinforcement Learning | Code | 0 |
| Improve Agents without Retraining: Parallel Tree Search with Off-Policy Correction | Code | 0 |
| An Atari Model Zoo for Analyzing, Visualizing, and Comparing Deep Reinforcement Learning Agents | Code | 0 |
| Improved robustness of reinforcement learning policies upon conversion to spiking neuronal network platforms applied to ATARI games | Code | 0 |
| Learning Abstract Models for Strategic Exploration and Fast Reward Transfer | Code | 0 |
| Distributional Reinforcement Learning with Quantile Regression | Code | 0 |
| Q-Star Meets Scalable Posterior Sampling: Bridging Theory and Practice via HyperAgent | Code | 0 |
| IGN: Implicit Generative Networks | Code | 0 |
| Benchmarking Perturbation-based Saliency Maps for Explaining Atari Agents | Code | 0 |
| Distributional Bellman Operators over Mean Embeddings | Code | 0 |
| Adaptive Action Duration with Contextual Bandits for Deep Reinforcement Learning in Dynamic Environments | Code | 0 |
| Distributed Deep Reinforcement Learning: Learn how to play Atari games in 21 minutes | Code | 0 |
| Implementing the Deep Q-Network | Code | 0 |
| How to Sense the World: Leveraging Hierarchy in Multimodal Perception for Robust Reinforcement Learning Agents | Code | 0 |
| Human-level control through deep reinforcement learning | Code | 0 |
| Distributional Reinforcement Learning with Regularized Wasserstein Loss | Code | 0 |
| Hindsight Trust Region Policy Optimization | Code | 0 |
| DinerDash Gym: A Benchmark for Policy Learning in High-Dimensional Action Space | Code | 0 |
| Beating the World's Best at Super Smash Bros. with Deep Reinforcement Learning | Code | 0 |
| Beating Atari with Natural Language Guided Reinforcement Learning | Code | 0 |
| Boosting Object Representation Learning via Motion and Object Continuity | Code | 0 |
| Importance Prioritized Policy Distillation | Code | 0 |
| Adapting to Reward Progressivity via Spectral Reinforcement Learning | Code | 0 |
| Hybrid Reinforcement Learning with Expert State Sequences | Code | 0 |
| Incentivizing Exploration In Reinforcement Learning With Deep Predictive Models | Code | 0 |
| Dueling Network Architectures for Deep Reinforcement Learning | Code | 0 |
| Implicit Quantile Networks for Distributional Reinforcement Learning | Code | 0 |
| Learning from the memory of Atari 2600 | Code | 0 |
| Active inference: demystified and compared | Code | 0 |
| Accelerating Reinforcement Learning through GPU Atari Emulation | Code | 0 |
| Deep Reinforcement Learning with Swin Transformers | Code | 0 |
| Is Deep Reinforcement Learning Really Superhuman on Atari? Leveling the playing field | Code | 0 |
| Reconciling λ-Returns with Experience Replay | Code | 0 |
| Challenges of Context and Time in Reinforcement Learning: Introducing Space Fortress as a Benchmark | Code | 0 |
| Adapting Auxiliary Losses Using Gradient Similarity | Code | 0 |
| Graph Backup: Data Efficient Backup Exploiting Markovian Transitions | Code | 0 |
| Characterizing Attacks on Deep Reinforcement Learning | Code | 0 |
| Generalization and Regularization in DQN | Code | 0 |
| A Metric Learning Approach to Anomaly Detection in Video Games | Code | 0 |
| ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games | Code | 0 |
| Combinational Q-Learning for Dou Di Zhu | Code | 0 |
| Learning Relational Rules from Rewards | Code | 0 |
| A Laplacian Framework for Option Discovery in Reinforcement Learning | Code | 0 |
| Generalization Tower Network: A Novel Deep Neural Network Architecture for Multi-Task Learning | Code | 0 |
Page 4 of 13

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | GDI-H3 (200M frames) | Score | 864 | | Unverified |
| 2 | GDI-H3 | Score | 864 | | Unverified |
| 3 | GDI-I3 (200M frames) | Score | 864 | | Unverified |
| 4 | GDI-I3 | Score | 864 | | Unverified |
| 5 | Bootstrapped DQN | Score | 855 | | Unverified |
| 6 | FQF | Score | 854.2 | | Unverified |
| 7 | R2D2 | Score | 837.7 | | Unverified |
| 8 | Ape-X | Score | 800.9 | | Unverified |
| 9 | Agent57 | Score | 790.4 | | Unverified |
| 10 | IMPALA (deep) | Score | 787.34 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | GDI-I3 | Score | 34 | | Unverified |
| 2 | NoisyNet-Dueling | Score | 34 | | Unverified |
| 3 | GDI-H3 | Score | 34 | | Unverified |
| 4 | TRPO-hash | Score | 34 | | Unverified |
| 5 | IQN | Score | 34 | | Unverified |
| 6 | QR-DQN-1 | Score | 34 | | Unverified |
| 7 | GDI-H3 (200M frames) | Score | 34 | | Unverified |
| 8 | Go-Explore | Score | 34 | | Unverified |
| 9 | ASL DDQN | Score | 33.9 | | Unverified |
| 10 | C51 noop | Score | 33.9 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Agent57 | Score | 580,328.14 | | Unverified |
| 2 | QR-DQN-1 | Score | 572,510 | | Unverified |
| 3 | R2D2 | Score | 408,850 | | Unverified |
| 4 | IMPALA (deep) | Score | 351,200.12 | | Unverified |
| 5 | Ape-X | Score | 302,391.3 | | Unverified |
| 6 | A2C + SIL | Score | 104,975.6 | | Unverified |
| 7 | MuZero (Res2 Adam) | Score | 94,906.25 | | Unverified |
| 8 | DreamerV2 | Score | 94,688 | | Unverified |
| 9 | MuZero | Score | 72,276 | | Unverified |
| 10 | DNA | Score | 52,398 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | GDI-H3 (200M frames) | Score | 1,000,000 | | Unverified |
| 2 | GDI-H3 | Score | 1,000,000 | | Unverified |
| 3 | Agent57 | Score | 999,997.63 | | Unverified |
| 4 | R2D2 | Score | 999,996.7 | | Unverified |
| 5 | MuZero | Score | 999,976.52 | | Unverified |
| 6 | MuZero (Res2 Adam) | Score | 999,659.18 | | Unverified |
| 7 | GDI-I3 | Score | 943,910 | | Unverified |
| 8 | Ape-X | Score | 392,952.3 | | Unverified |
| 9 | C51 noop | Score | 266,434 | | Unverified |
| 10 | Duel noop | Score | 50,254.2 | | Unverified |