SOTAVerified

Continuous Control

Continuous control, in the context of playing games and, more broadly, artificial intelligence (AI) and machine learning (ML), refers to selecting actions from a continuous range of values, allowing smooth, ongoing adjustments to a game or simulation. This contrasts with discrete control, where the agent chooses from a fixed set of distinct actions. Continuous control is crucial in environments where precision, timing, and the magnitude of actions matter, such as steering a car in a racing game, controlling a character in a physics simulation, or managing the flight of an aircraft in a flight simulator.
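The distinction can be sketched with a toy example (a minimal, stdlib-only Python sketch; the policies, dynamics, and all names here are invented for illustration and do not come from any benchmark on this page). A discrete policy can only pick from a few fixed actions, while a continuous policy can scale its action to the size of the error, so it settles closer to the target:

```python
def discrete_policy(error):
    # Discrete control: only three fixed actions are available,
    # with a small deadband around zero error.
    if error > 0.05:
        return 1.0
    if error < -0.05:
        return -1.0
    return 0.0

def continuous_policy(error, gain=2.0):
    # Continuous control: the action magnitude scales with the error,
    # clipped to the actuator range [-1, 1].
    return max(-1.0, min(1.0, gain * error))

def rollout(policy, target=0.73, steps=20, dt=0.1):
    # Simple first-order dynamics: position moves by action * dt each step.
    pos = 0.0
    for _ in range(steps):
        action = policy(target - pos)
        pos += action * dt
    return pos

final_discrete = rollout(discrete_policy)
final_continuous = rollout(continuous_policy)
# The continuous policy ends up closer to the target, because it can
# make arbitrarily small corrections that the discrete one cannot.
```

The discrete policy stalls once the error falls inside its deadband, while the continuous policy keeps shrinking the error geometrically; this is the precision-of-action-magnitude point made above.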

Papers

Showing 201–250 of 1161 papers

| Title | Status | Hype |
|-------|--------|------|
| Continuous control with deep reinforcement learning | Code | 1 |
| Generating Adjacency-Constrained Subgoals in Hierarchical Reinforcement Learning | Code | 1 |
| Softmax Deep Double Deterministic Policy Gradients | Code | 1 |
| Continuous Deep Q-Learning with Model-based Acceleration | Code | 1 |
| Continuous descriptor-based control for deep audio synthesis | Code | 1 |
| Amortizing intractable inference in diffusion models for vision, language, and control | Code | 1 |
| Towards Safe Reinforcement Learning via Constraining Conditional Value-at-Risk | Code | 1 |
| Continuous MDP Homomorphisms and Homomorphic Policy Gradient | Code | 1 |
| Transformers are Meta-Reinforcement Learners | Code | 1 |
| UAV Obstacle Avoidance by Human-in-the-Loop Reinforcement in Arbitrary 3D Environment | Code | 1 |
| Continuous-Time Fitted Value Iteration for Robust Policies | Code | 1 |
| Hierarchical Skills for Efficient Exploration | Code | 1 |
| Variational Imitation Learning with Diverse-quality Demonstrations | Code | 1 |
| Hamilton-Jacobi Deep Q-Learning for Deterministic Continuous-Time Systems with Lipschitz Continuous Controls | Code | 1 |
| Addressing Function Approximation Error in Actor-Critic Methods | Code | 1 |
| How to Learn a Useful Critic? Model-based Action-Gradient-Estimator Policy Optimization | Code | 1 |
| Contrastive Variational Reinforcement Learning for Complex Observations | Code | 1 |
| How Crucial is Transformer in Decision Transformer? | Code | 1 |
| Bisimulation metric for Model Predictive Control | Code | 0 |
| Dataset Clustering for Improved Offline Policy Learning | Code | 0 |
| Learning State Representations via Retracing in Reinforcement Learning | Code | 0 |
| Learning with Expert Abstractions for Efficient Multi-Task Continuous Control | Code | 0 |
| Learning Sparse Rewarded Tasks from Sub-Optimal Demonstrations | Code | 0 |
| Learning Stabilizable Nonlinear Dynamics with Contraction-Based Regularization | Code | 0 |
| Better Exploration with Optimistic Actor Critic | Code | 0 |
| Learning model-based planning from scratch | Code | 0 |
| Learning State Abstractions for Transfer in Continuous Control | Code | 0 |
| Leveraging exploration in off-policy algorithms via normalizing flows | Code | 0 |
| Learning Continuous Control Policies by Stochastic Value Gradients | Code | 0 |
| Learning Bellman Complete Representations for Offline Policy Evaluation | Code | 0 |
| Learning Continuous Control Policies for Information-Theoretic Active Perception | Code | 0 |
| Learning-Based Model Predictive Control for Piecewise Affine Systems with Feasibility Guarantees | Code | 0 |
| Benchmarking Reinforcement Learning Algorithms on Real-World Robots | Code | 0 |
| Learning Belief Representations for Imitation Learning in POMDPs | Code | 0 |
| Learning Provably Stabilizing Neural Controllers for Discrete-Time Stochastic Systems | Code | 0 |
| CrystalBox: Future-Based Explanations for Input-Driven Deep RL Systems | Code | 0 |
| Co-training for Policy Learning | Code | 0 |
| CTD4 -- A Deep Continuous Distributional Actor-Critic Agent with a Kalman Fusion of Multiple Critics | Code | 0 |
| Behaviour Distillation | Code | 0 |
| CO-PILOT: COllaborative Planning and reInforcement Learning On sub-Task curriculum | Code | 0 |
| Inverse reinforcement learning for video games | Code | 0 |
| Information Theoretic Regret Bounds for Online Nonlinear Control | Code | 0 |
| Improving Value Estimation Critically Enhances Vanilla Policy Gradient | Code | 0 |
| Learning Action-Transferable Policy with Action Embedding | Code | 0 |
| Understanding and Mitigating the Limitations of Prioritized Experience Replay | Code | 0 |
| Lipschitzness Is All You Need To Tame Off-policy Generative Adversarial Imitation Learning | Code | 0 |
| Control Regularization for Reduced Variance Reinforcement Learning | Code | 0 |
| Bayesian Policy Gradients via Alpha Divergence Dropout Inference | Code | 0 |
| Contrasting Exploration in Parameter and Action Space: A Zeroth-Order Optimization Perspective | Code | 0 |
| High-Dimensional Continuous Control Using Generalized Advantage Estimation | Code | 0 |
Page 5 of 24

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SAC gSDE | Return | 3,459 | | Unverified |
| 2 | TD3 gSDE | Return | 3,267 | | Unverified |
| 3 | TD3 | Return | 2,865 | | Unverified |
| 4 | SAC | Return | 2,859 | | Unverified |
| 5 | PPO gSDE | Return | 2,587 | | Unverified |
| 6 | A2C gSDE | Return | 2,560 | | Unverified |
| 7 | PPO | Return | 2,160 | | Unverified |
| 8 | A2C | Return | 1,967 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SAC | Return | 2,883 | | Unverified |
| 2 | SAC gSDE | Return | 2,850 | | Unverified |
| 3 | PPO + gSDE | Return | 2,760 | | Unverified |
| 4 | TD3 | Return | 2,687 | | Unverified |
| 5 | TD3 gSDE | Return | 2,578 | | Unverified |
| 6 | PPO | Return | 2,254 | | Unverified |
| 7 | A2C + gSDE | Return | 2,028 | | Unverified |
| 8 | A2C | Return | 1,652 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SAC gSDE | Return | 2,646 | | Unverified |
| 2 | PPO gSDE | Return | 2,508 | | Unverified |
| 3 | SAC | Return | 2,477 | | Unverified |
| 4 | TD3 | Return | 2,470 | | Unverified |
| 5 | TD3 gSDE | Return | 2,353 | | Unverified |
| 6 | PPO | Return | 1,622 | | Unverified |
| 7 | A2C | Return | 1,559 | | Unverified |
| 8 | A2C gSDE | Return | 1,448 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SAC gSDE | Return | 2,341 | | Unverified |
| 2 | SAC | Return | 2,215 | | Unverified |
| 3 | TD3 | Return | 2,106 | | Unverified |
| 4 | TD3 gSDE | Return | 1,989 | | Unverified |
| 5 | PPO gSDE | Return | 1,776 | | Unverified |
| 6 | PPO | Return | 1,238 | | Unverified |
| 7 | A2C gSDE | Return | 694 | | Unverified |
| 8 | A2C | Return | 443 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | DreamerV1 | Return | 800 | | Unverified |
| 2 | SLAC | Return | 700 | | Unverified |
| 3 | DrQ | Return | 660 | | Unverified |
| 4 | PlaNet | Return | 650 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SMuZero | Return | 998.14 | | Unverified |
| 2 | DREAMER | Return | 853 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SMuZero | Return | 868.87 | | Unverified |
| 2 | MuZero Unplugged | Return | 594.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SMuZero | Return | 914.39 | | Unverified |
| 2 | MuZero Unplugged | Return | 869.9 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | DrQ | Return | 963 | | Unverified |
| 2 | PlaNet | Return | 914 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | DrQ | Return | 921 | | Unverified |
| 2 | PlaNet | Return | 890 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SMuZero | Return | 963.07 | | Unverified |
| 2 | MuZero Unplugged | Return | 759 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SMuZero | Return | 987.79 | | Unverified |
| 2 | MuZero Unplugged | Return | 887.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SMuZero | Return | 975.46 | | Unverified |
| 2 | MuZero Unplugged | Return | 949.5 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | 1,353.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | -326 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | -83.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | -149.6 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SMuZero | Return | 417.52 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | -170.9 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | 730.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | -0.4 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | 0 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SMuZero | Return | 977.38 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | CURL | Score | 769 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | CURL | Score | 959 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SMuZero | Return | 984.86 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | 4,869.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | 960.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | 606.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | 980.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | MAC | Score | 178.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | CURL | Score | 582 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | CURL | Score | 841 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SMuZero | Return | 846.91 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | CURL | Score | 299 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | CURL | Score | 518 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | 4,412.4 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SMuZero | Return | 986.38 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | CURL | Score | 767 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | CURL | Score | 926 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SMuZero | Return | 972.53 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | MuZero Unplugged | Return | 681.6 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | 287 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | 1,914 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | 1,183.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SMuZero | Return | 528.24 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SMuZero | Return | 926.5 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | MuZero Unplugged | Return | 643.1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | 247.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | 4.5 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | 10.4 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | 14.1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | MAC | Score | 163.5 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | MuZero Unplugged | Return | 659.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | MuZero Unplugged | Return | 556 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | -61.7 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | -64.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | -60.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | -61.6 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SMuZero | Return | 837.76 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SMuZero | Return | 923.54 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SMuZero | Return | 933.77 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SMuZero | Return | 982.26 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | CURL | Score | 538 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | CURL | Score | 929 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SMuZero | Return | 971.53 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | 269.7 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | 96 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | 0 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | TRPO | Score | 0 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SMuZero | Return | 931.06 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | CURL | Score | 403 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | CURL | Score | 902 | | Unverified |