SOTAVerified

Atari Games

The Atari 2600 Games task (and dataset) involves training an agent to achieve high game scores.

(Image credit: Playing Atari with Deep Reinforcement Learning)
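The "score" that the leaderboards below rank is the episode return: the total reward an agent accumulates from reset until the game ends. As a minimal sketch of that loop, the following uses a hypothetical `TinyGame` stub in place of a real Atari 2600 emulator (the class, its action set, and the reward rule are illustrative assumptions, not part of any published benchmark):

```python
import random


class TinyGame:
    """Hypothetical stand-in for an Atari 2600 emulator: a small
    discrete action set, scalar rewards, and a terminal condition."""
    ACTIONS = ["NOOP", "LEFT", "RIGHT", "FIRE"]

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.steps = 0

    def reset(self):
        self.steps = 0
        return 0  # placeholder observation

    def step(self, action):
        self.steps += 1
        # Illustrative reward rule only: FIRE sometimes scores a point.
        reward = 1.0 if action == "FIRE" and self.rng.random() < 0.5 else 0.0
        done = self.steps >= 100  # episode ends after 100 steps
        return 0, reward, done


def run_episode(env, policy):
    """Standard evaluation loop: the summed reward over one episode
    is the 'Score' figure reported on Atari leaderboards."""
    obs = env.reset()
    total, done = 0.0, False
    while not done:
        obs, reward, done = env.step(policy(obs))
        total += reward
    return total


score = run_episode(TinyGame(), policy=lambda obs: "FIRE")
```

Real evaluations additionally fix details such as the frame budget (e.g. the "200M frames" qualifier on some entries below), sticky actions, and no-op starts, which is why scores are only comparable under matching protocols.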

Papers

Showing 451–500 of 625 papers

Title | Status | Hype
Generalized Data Distribution Iteration | - | 0
Generalized Munchausen Reinforcement Learning using Tsallis KL Divergence | - | 0
GMAC: A Distributional Perspective on Actor-Critic Framework | - | 0
Gradient Monitored Reinforcement Learning | - | 0
GRIm-RePR: Prioritising Generating Important Features for Pseudo-Rehearsal | - | 0
Group-Agent Reinforcement Learning with Heterogeneous Agents | - | 0
High Performance Across Two Atari Paddle Games Using the Same Perceptual Control Architecture Without Training | - | 0
Machine versus Human Attention in Deep Reinforcement Learning Tasks | - | 0
Improving Experience Replay with Successor Representation | - | 0
Improving On-policy Learning with Statistical Reward Accumulation | - | 0
Improving Performance of Spike-based Deep Q-Learning using Ternary Neurons | - | 0
Improving the Diversity of Bootstrapped DQN by Replacing Priors With Noise | - | 0
Improving width-based planning with compact policies | - | 0
InferNet for Delayed Reinforcement Tasks: Addressing the Temporal Credit Assignment Problem | - | 0
In Hindsight: A Smooth Reward for Steady Exploration | - | 0
Interpretable end-to-end Neurosymbolic Reinforcement Learning agents | - | 0
Interpreting the Learned Model in MuZero Planning | - | 0
Investigating Recurrence and Eligibility Traces in Deep Q-Networks | - | 0
Iterated Q-Network: Beyond One-Step Bellman Updates in Deep Reinforcement Learning | - | 0
KF-LAX: Kronecker-factored curvature estimation for control variate optimization in reinforcement learning | - | 0
Latent forward model for Real-time Strategy game planning with incomplete information | - | 0
Lazy-MDPs: Towards Interpretable Reinforcement Learning by Learning When to Act | - | 0
Automatic Reward Shaping from Confounded Offline Data | - | 0
Learning Abstract Models for Long-Horizon Exploration | - | 0
Learning Abstract Models for Strategic Exploration and Fast Reward Transfer | - | 0
Learning Actions and Control of Focus of Attention with a Log-Polar-like Sensor | - | 0
Learning and Querying Fast Generative Models for Reinforcement Learning | - | 0
Learning Dialog Policies from Weak Demonstrations | - | 0
Learning Dynamic State Abstractions for Model-Based Reinforcement Learning | - | 0
Learning Efficient Planning-based Rewards for Imitation Learning | - | 0
Learning Finite State Representations of Recurrent Policy Networks | - | 0
Learning Key Steps to Attack Deep Reinforcement Learning Agents | - | 0
Learning objects from pixels | - | 0
Learning Self-Game-Play Agents for Combinatorial Optimization Problems | - | 0
Learning Shared Dynamics with Meta-World Models | - | 0
Learning to Control Visual Abstractions for Structured Exploration in Deep Reinforcement Learning | - | 0
Learning To Explore With Predictive World Model Via Self-Supervised Learning | - | 0
Learning to play slot cars and Atari 2600 games in just minutes | - | 0
Learning to predict where to look in interactive environments using deep recurrent q-learning | - | 0
Learning to Represent Action Values as a Hypergraph on the Action Vertices | - | 0
Learning to Stop: Dynamic Simulation Monte-Carlo Tree Search | - | 0
Learning values across many orders of magnitude | - | 0
LeDeepChef: Deep Reinforcement Learning Agent for Families of Text-Based Games | - | 0
Leveraging the Variance of Return Sequences for Exploration Policy | - | 0
Local-Guided Global: Paired Similarity Representation for Visual Reinforcement Learning | - | 0
The Indoor-Training Effect: unexpected gains from distribution shifts in the transition function | - | 0
Look Before Leap: Look-Ahead Planning with Uncertainty in Reinforcement Learning | - | 0
Loss of Plasticity in Continual Deep Reinforcement Learning | - | 0
Low Precision Policy Distillation with Application to Low-Power, Real-time Sensation-Cognition-Action Loop with Neuromorphic Computing | - | 0
Mask Atari for Deep Reinforcement Learning as POMDP Benchmarks | - | 0
Page 10 of 13

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | GDI-H3 (200M frames) | Score | 864 | - | Unverified
2 | GDI-H3 | Score | 864 | - | Unverified
3 | GDI-I3 (200M frames) | Score | 864 | - | Unverified
4 | GDI-I3 | Score | 864 | - | Unverified
5 | Bootstrapped DQN | Score | 855 | - | Unverified
6 | FQF | Score | 854.2 | - | Unverified
7 | R2D2 | Score | 837.7 | - | Unverified
8 | Ape-X | Score | 800.9 | - | Unverified
9 | Agent57 | Score | 790.4 | - | Unverified
10 | IMPALA (deep) | Score | 787.34 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GDI-I3 | Score | 34 | - | Unverified
2 | NoisyNet-Dueling | Score | 34 | - | Unverified
3 | GDI-H3 | Score | 34 | - | Unverified
4 | TRPO-hash | Score | 34 | - | Unverified
5 | IQN | Score | 34 | - | Unverified
6 | QR-DQN-1 | Score | 34 | - | Unverified
7 | GDI-H3 (200M frames) | Score | 34 | - | Unverified
8 | Go-Explore | Score | 34 | - | Unverified
9 | ASL DDQN | Score | 33.9 | - | Unverified
10 | C51 noop | Score | 33.9 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Agent57 | Score | 580,328.14 | - | Unverified
2 | QR-DQN-1 | Score | 572,510 | - | Unverified
3 | R2D2 | Score | 408,850 | - | Unverified
4 | IMPALA (deep) | Score | 351,200.12 | - | Unverified
5 | Ape-X | Score | 302,391.3 | - | Unverified
6 | A2C + SIL | Score | 104,975.6 | - | Unverified
7 | MuZero (Res2 Adam) | Score | 94,906.25 | - | Unverified
8 | DreamerV2 | Score | 94,688 | - | Unverified
9 | MuZero | Score | 72,276 | - | Unverified
10 | DNA | Score | 52,398 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | GDI-H3 (200M frames) | Score | 1,000,000 | - | Unverified
2 | GDI-H3 | Score | 1,000,000 | - | Unverified
3 | Agent57 | Score | 999,997.63 | - | Unverified
4 | R2D2 | Score | 999,996.7 | - | Unverified
5 | MuZero | Score | 999,976.52 | - | Unverified
6 | MuZero (Res2 Adam) | Score | 999,659.18 | - | Unverified
7 | GDI-I3 | Score | 943,910 | - | Unverified
8 | Ape-X | Score | 392,952.3 | - | Unverified
9 | C51 noop | Score | 266,434 | - | Unverified
10 | Duel noop | Score | 50,254.2 | - | Unverified
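The tables above report raw per-game scores, which vary by orders of magnitude across games. A common way to compare agents across games is the human-normalized score, (score − random) / (human − random), where 0.0 matches a random agent and 1.0 matches the human reference. A minimal sketch (the baseline values used here are placeholders for illustration, not the published per-game figures):

```python
def human_normalized(score, random_baseline, human_baseline):
    """Human-normalized score: 0.0 = random-agent level,
    1.0 = human reference level; values above 1.0 are super-human."""
    return (score - random_baseline) / (human_baseline - random_baseline)


# Placeholder baselines for illustration only.
hns = human_normalized(score=864.0, random_baseline=100.0, human_baseline=500.0)
```

Aggregates such as the median or mean human-normalized score over all games are what headline Atari results (e.g. for Agent57 or MuZero) typically quote.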