SOTAVerified

Continuous Control

Continuous control, in the context of game playing and, more broadly, artificial intelligence (AI) and machine learning (ML), refers to making smooth, ongoing adjustments to steer a game or simulation. This contrasts with discrete control, where the agent chooses from a fixed set of distinct actions. Continuous control is crucial in environments where precision, timing, and the magnitude of actions matter, such as driving a car in a racing game, controlling a character in a physics simulation, or managing the flight of an aircraft in a flight simulator.
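The distinction can be sketched in a few lines of Python. Both policies below are illustrative placeholders (not taken from any paper listed on this page): the discrete one returns a single choice from a fixed action set, while the continuous one emits real-valued commands whose magnitude matters, bounded to an allowed range.

```python
import math

# Discrete control: the agent picks exactly one of a few distinct actions.
DISCRETE_ACTIONS = ["left", "right", "accelerate", "brake"]

def discrete_policy(observation):
    """Toy discrete policy: choose the action indexed by the largest
    observation component (illustrative only)."""
    best = observation.index(max(observation))
    return DISCRETE_ACTIONS[best % len(DISCRETE_ACTIONS)]

def continuous_policy(observation, low=-1.0, high=1.0):
    """Toy continuous policy: emit real-valued (steering, throttle)
    commands, squashed with tanh and clipped to the legal range so
    both direction and magnitude carry information."""
    return [max(low, min(high, math.tanh(x))) for x in observation[:2]]

obs = [0.3, -0.7, 0.1]
print(discrete_policy(obs))    # one label from DISCRETE_ACTIONS
print(continuous_policy(obs))  # two real values, each within [-1.0, 1.0]
```

In practice the continuous policy would be a trained function approximator (e.g. the actor network in SAC-style methods such as several papers below), but the interface is the same: observations in, bounded real-valued actions out.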

Papers

Showing 1–10 of 1,161 papers

Title | Status | Hype
Supervised Fine Tuning on Curated Data is Reinforcement Learning (and can be improved) | — | 0
rQdia: Regularizing Q-Value Distributions With Image Augmentation | — | 0
Sparse-Reg: Improving Sample Complexity in Offline Reinforcement Learning using Sparsity | Code | 0
Fractional Reasoning via Latent Steering Vectors Improves Inference Time Compute | — | 0
Scaling Algorithm Distillation for Continuous Control with Mamba | — | 0
DR-SAC: Distributionally Robust Soft Actor-Critic for Reinforcement Learning under Uncertainty | Code | 0
Wasserstein Barycenter Soft Actor-Critic | — | 0
Reinforcement Learning via Implicit Imitation Guidance | — | 0
BEAST: Efficient Tokenization of B-Splines Encoded Action Sequences for Imitation Learning | — | 0
AutoQD: Automatic Discovery of Diverse Behaviors with Quality-Diversity Optimization | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | DrQ | Return | 963 | — | Unverified
2 | PlaNet | Return | 914 | — | Unverified