
Imitation Learning

Imitation Learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action the expert took in the state being visited. To learn the behavior policy, the demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner (a minimal BC sketch follows below). The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which those decisions are optimal.
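The following is a minimal sketch of Behavior Cloning, assuming a discrete action space and hypothetical dimensions (STATE_DIM, NUM_ACTIONS) with a simple feed-forward policy; it is illustrative only, not a reference implementation.

```python
import torch
import torch.nn as nn

STATE_DIM, NUM_ACTIONS = 8, 4  # assumed dimensions, for illustration only

# Policy network: maps a state to logits over discrete actions.
policy = nn.Sequential(
    nn.Linear(STATE_DIM, 64),
    nn.ReLU(),
    nn.Linear(64, NUM_ACTIONS),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def bc_update(states, actions):
    """One supervised BC step.

    states:  float tensor of shape (N, STATE_DIM)
    actions: long tensor of shape (N,) with the demonstrated action indices
    """
    logits = policy(states)
    loss = loss_fn(logits, actions)  # each demonstrated action is a class label
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Iterating bc_update over mini-batches of demonstration pairs yields the supervised state-to-action mapping described above.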

Finally, a more recent methodology, Inverse Q-Learning, aims to learn a Q-function directly from expert data. The learned Q-function implicitly represents a reward, and the corresponding optimal policy is given as a Boltzmann distribution over Q-values, as in soft Q-learning; a small sketch of that policy is given below.
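As a rough illustration of the Boltzmann policy, the sketch below (hypothetical function name and temperature parameter alpha, assumed here) converts recovered Q-values for a single state into action probabilities via a softmax, the form used in soft Q-learning.

```python
import numpy as np

def boltzmann_policy(q_values, alpha=1.0):
    """Return pi(a|s) proportional to exp(Q(s, a) / alpha).

    q_values: array of shape (num_actions,) holding Q(s, a) for one state.
    alpha:    temperature; smaller values make the policy greedier.
    """
    z = np.asarray(q_values, dtype=np.float64) / alpha
    z -= z.max()                  # subtract the max for numerical stability
    probs = np.exp(z)
    return probs / probs.sum()
```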

Source: Learning to Imitate

Papers

Showing 1411-1420 of 2122 papers

Title | Status | Hype
Data-Driven Simulation of Ride-Hailing Services using Imitation and Reinforcement Learning | - | 0
AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control | Code | 2
No Need for Interactions: Robust Model-Based Imitation Learning using Neural ODE | Code | 0
UAV-Assisted Communication in Remote Disaster Areas using Imitation Learning | - | 0
Learning Online from Corrective Feedback: A Meta-Algorithm for Robotics | - | 0
Contrastively Learning Visual Attention as Affordance Cues from Demonstrations for Robotic Grasping | Code | 0
Topo-boundary: A Benchmark Dataset on Topological Road-boundary Detection Using Aerial Images for Autonomous Driving | Code | 1
DEALIO: Data-Efficient Adversarial Learning for Imitation from Observation | - | 0
iCurb: Imitation Learning-based Detection of Road Curbs using Aerial Images for Autonomous Driving | Code | 1
LazyDAgger: Reducing Context Switching in Interactive Imitation Learning | - | 0
