SOTAVerified

Control with Parameterised Actions

Most reinforcement learning research papers focus on environments where the agent’s actions are either discrete or continuous. However, when training an agent to play a video game, it is common to encounter actions with both discrete and continuous components: a set of high-level discrete actions (e.g. move, jump, fire), each associated with its own continuous parameters (e.g. target coordinates for move, a direction for jump, an aiming angle for fire). Tasks of this kind fall under control with parameterised actions.
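A minimal sketch of such a parameterised action space, with each discrete action owning its continuous parameter bounds. The action names and ranges are illustrative assumptions, not taken from any specific environment:

```python
import random

# Hypothetical parameterised action space: each discrete action maps to the
# bounds of its own continuous parameters (illustrative values only).
ACTION_SPACE = {
    "move": [(-1.0, 1.0), (-1.0, 1.0)],  # target (x, y) coordinates
    "jump": [(0.0, 360.0)],              # jump direction in degrees
    "fire": [(-90.0, 90.0)],             # aiming angle in degrees
}

def sample_action(rng=random):
    """Sample a (discrete action, continuous parameters) pair uniformly."""
    name = rng.choice(sorted(ACTION_SPACE))
    params = [rng.uniform(lo, hi) for lo, hi in ACTION_SPACE[name]]
    return name, params

name, params = sample_action()
```

An agent for this space must output both a discrete choice and the matching parameter vector, which is exactly the structure the papers below address.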

Papers

Showing 2 of 2 papers

| Title | Status | Hype |
|---|---|---|
| Discrete and Continuous Action Representation for Practical RL in Video Games | Code | 0 |
| Multi-Pass Q-Networks for Deep Reinforcement Learning with Parameterised Action Spaces | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | MP-DQN | Goal Probability | 0.79 | — | Unverified |
| 2 | Hybrid SAC | Goal Probability | 0.73 | — | Unverified |