SOTAVerified

Atari Games

The Atari 2600 Games task (and dataset) involves training an agent to achieve high game scores.

(Image credit: Playing Atari with Deep Reinforcement Learning)
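Deep RL agents on Atari learn from raw frames, but the underlying objective is the same reward-maximization loop used in tabular methods. As a minimal, self-contained sketch of that loop (a toy chain environment of my own construction, not the Arcade Learning Environment API), here is epsilon-greedy Q-learning in plain Python:

```python
import random

# Toy stand-in for an Atari environment: a 5-state chain in which the
# agent earns a reward of 1 for reaching the rightmost state. This
# illustrates only the agent/environment loop and the Q-learning update,
# not the ALE interface or a deep network.
N_STATES = 5
ACTIONS = (0, 1)  # 0 = move left, 1 = move right

def step(state, action):
    """Advance the toy chain; returns (next_state, reward, done)."""
    nxt = max(0, state - 1) if action == 0 else state + 1
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def train(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: state x action
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)  # explore
            else:
                best = max(q[state])          # exploit, random tie-break
                action = rng.choice([a for a in ACTIONS if q[state][a] == best])
            nxt, reward, done = step(state, action)
            # One-step Q-learning target: r + gamma * max_a' Q(s', a')
            target = reward if done else reward + gamma * max(q[nxt])
            q[state][action] += alpha * (target - q[state][action])
            state = nxt
    return q

q = train()
# Greedy policy in each non-terminal state; 1 means "move right".
policy = [q[s].index(max(q[s])) for s in range(N_STATES - 1)]
print(policy)
```

Deep Q-learning (the approach credited above) replaces the Q-table with a convolutional network over stacked frames and stabilizes the same update with experience replay and a target network.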

Papers

Showing 151–200 of 625 papers

| Title | Status | Hype |
|---|---|---|
| GRIm-RePR: Prioritising Generating Important Features for Pseudo-Rehearsal | | 0 |
| High Performance Across Two Atari Paddle Games Using the Same Perceptual Control Architecture Without Training | | 0 |
| Interpreting the Learned Model in MuZero Planning | | 0 |
| From "What" to "When" -- a Spiking Neural Network Predicting Rare Events and Time to their Occurrence | | 0 |
| From Code to Play: Benchmarking Program Search for Games Using Large Language Models | | 0 |
| GDI: Rethinking What Makes Reinforcement Learning Different From Supervised Learning | | 0 |
| CASA: Bridging the Gap between Policy Improvement and Policy Evaluation with Conflict Averse Policy Iteration | | 0 |
| Adaptive Q-Network: On-the-fly Target Selection for Deep Reinforcement Learning | | 0 |
| FRESH: Interactive Reward Shaping in High-Dimensional State Spaces using Human Feedback | | 0 |
| GDI: Rethinking What Makes Reinforcement Learning Different from Supervised Learning | | 0 |
| An Entropy Regularization Free Mechanism for Policy-based Reinforcement Learning | | 0 |
| DQN with model-based exploration: efficient learning on environments with sparse rewards | | 0 |
| Fast Retinomorphic Event Stream for Video Recognition and Reinforcement Learning | | 0 |
| Double A3C: Deep Reinforcement Learning on OpenAI Gym Games | | 0 |
| Adaptive N-step Bootstrapping with Off-policy Data | | 0 |
| Biased Estimates of Advantages over Path Ensembles | | 0 |
| A Convergent Variant of the Boltzmann Softmax Operator in Reinforcement Learning | | 0 |
| Exploration by Uncertainty in Reward Space | | 0 |
| Beyond Exponentially Discounted Sum: Automatic Learning of Return Function | | 0 |
| Distributional Reinforcement Learning for Efficient Exploration | | 0 |
| An Approach to Partial Observability in Games: Learning to Both Act and Observe | | 0 |
| Expressiveness in Deep Reinforcement Learning | | 0 |
| Distributional Perturbation for Efficient Exploration in Distributional Reinforcement Learning | | 0 |
| Analysis of Q-learning with Adaptation and Momentum Restart for Gradient Descent | | 0 |
| Accelerated Target Updates for Q-learning | | 0 |
| Exploration by Random Network Distillation | | 0 |
| Disentangling the Causes of Plasticity Loss in Neural Networks | | 0 |
| Double Prioritized State Recycled Experience Replay | | 0 |
| Disentangling Controllable Object through Video Prediction Improves Visual Reinforcement Learning | | 0 |
| DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization | | 0 |
| DSAC-C: Constrained Maximum Entropy for Robust Discrete Soft-Actor Critic | | 0 |
| CAN ALTQ LEARN FASTER: EXPERIMENTS AND THEORY | | 0 |
| Dynamic Frame skip Deep Q Network | | 0 |
| A new Potential-Based Reward Shaping for Reinforcement Learning Agent | | 0 |
| Effects of Different Optimization Formulations in Evolutionary Reinforcement Learning on Diverse Behavior Generation | | 0 |
| Efficient decorrelation of features using Gramian in Reinforcement Learning | | 0 |
| Cautious Policy Programming: Exploiting KL Regularization in Monotonic Policy Improvement for Reinforcement Learning | | 0 |
| Efficient Diversity-based Experience Replay for Deep Reinforcement Learning | | 0 |
| An initial attempt of combining visual selective attention with deep reinforcement learning | | 0 |
| Efficient Entropy for Policy Gradient with Multidimensional Action Space | | 0 |
| Benchmarking Bonus-Based Exploration Methods on the Arcade Learning Environment | | 0 |
| Smaller World Models for Reinforcement Learning | | 0 |
| Efficiently Guiding Imitation Learning Agents with Human Gaze | | 0 |
| Analysing Results from AI Benchmarks: Key Indicators and How to Obtain Them | | 0 |
| A Comparison of learning algorithms on the Arcade Learning Environment | | 0 |
| Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | | 0 |
| Emergence of Novelty in Evolutionary Algorithms | | 0 |
| Emphatic Algorithms for Deep Reinforcement Learning | | 0 |
| Detecting Adversarial Directions in Deep Reinforcement Learning to Make Robust Decisions | | 0 |
| An advantage actor-critic algorithm for robotic motion planning in dense and dynamic scenarios | | 0 |
Page 4 of 13

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GDI-I3 (200M frames) | Score | 864 | | Unverified |
| 2 | GDI-H3 (200M frames) | Score | 864 | | Unverified |
| 3 | GDI-H3 | Score | 864 | | Unverified |
| 4 | GDI-I3 | Score | 864 | | Unverified |
| 5 | Bootstrapped DQN | Score | 855 | | Unverified |
| 6 | FQF | Score | 854.2 | | Unverified |
| 7 | R2D2 | Score | 837.7 | | Unverified |
| 8 | Ape-X | Score | 800.9 | | Unverified |
| 9 | Agent57 | Score | 790.4 | | Unverified |
| 10 | IMPALA (deep) | Score | 787.34 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | IQN | Score | 34 | | Unverified |
| 2 | QR-DQN-1 | Score | 34 | | Unverified |
| 3 | TRPO-hash | Score | 34 | | Unverified |
| 4 | NoisyNet-Dueling | Score | 34 | | Unverified |
| 5 | GDI-H3 | Score | 34 | | Unverified |
| 6 | GDI-H3 (200M frames) | Score | 34 | | Unverified |
| 7 | GDI-I3 | Score | 34 | | Unverified |
| 8 | Go-Explore | Score | 34 | | Unverified |
| 9 | Bootstrapped DQN | Score | 33.9 | | Unverified |
| 10 | C51 noop | Score | 33.9 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Agent57 | Score | 580,328.14 | | Unverified |
| 2 | QR-DQN-1 | Score | 572,510 | | Unverified |
| 3 | R2D2 | Score | 408,850 | | Unverified |
| 4 | IMPALA (deep) | Score | 351,200.12 | | Unverified |
| 5 | Ape-X | Score | 302,391.3 | | Unverified |
| 6 | A2C + SIL | Score | 104,975.6 | | Unverified |
| 7 | MuZero (Res2 Adam) | Score | 94,906.25 | | Unverified |
| 8 | DreamerV2 | Score | 94,688 | | Unverified |
| 9 | MuZero | Score | 72,276 | | Unverified |
| 10 | DNA | Score | 52,398 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GDI-H3 | Score | 1,000,000 | | Unverified |
| 2 | GDI-H3 (200M frames) | Score | 1,000,000 | | Unverified |
| 3 | Agent57 | Score | 999,997.63 | | Unverified |
| 4 | R2D2 | Score | 999,996.7 | | Unverified |
| 5 | MuZero | Score | 999,976.52 | | Unverified |
| 6 | MuZero (Res2 Adam) | Score | 999,659.18 | | Unverified |
| 7 | GDI-I3 | Score | 943,910 | | Unverified |
| 8 | Ape-X | Score | 392,952.3 | | Unverified |
| 9 | C51 noop | Score | 266,434 | | Unverified |
| 10 | Duel noop | Score | 50,254.2 | | Unverified |