
Imitation Learning

Imitation Learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action to take in the state being visited. The demonstrated actions are typically used in one of two ways to learn the behavior policy. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which the demonstrated decisions are optimal.
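As a minimal sketch of Behavior Cloning (assuming a discrete action space and a small PyTorch MLP; the dataset, dimensions, and network here are all illustrative placeholders, not a reference implementation), the demonstrated actions serve directly as classification targets:

```python
import torch
import torch.nn as nn

# Hypothetical demonstration data: states paired with the expert's actions.
# Assumed shapes: states (N, state_dim), actions (N,) holding integer action ids.
states = torch.randn(1024, 8)
actions = torch.randint(0, 4, (1024,))

# A small MLP policy mapping states to action logits (4 discrete actions assumed).
policy = nn.Sequential(
    nn.Linear(8, 64),
    nn.ReLU(),
    nn.Linear(64, 4),
)

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Behavior Cloning: each demonstrated action is the supervised label for its
# state; minimize the classification loss over the demonstration dataset.
for epoch in range(100):
    logits = policy(states)
    loss = loss_fn(logits, actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At test time, act greedily with respect to the cloned policy.
greedy_action = policy(torch.randn(1, 8)).argmax(dim=-1)
```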

Finally, a newer methodology, Inverse Q-Learning, aims to learn Q-functions directly from expert data, implicitly representing the reward, under which the optimal policy is given by a Boltzmann distribution, similar to soft Q-learning.
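For intuition, a learned Q-function induces such a Boltzmann policy by taking a temperature-scaled softmax over the Q-values of the available actions. A minimal sketch (the function name, Q-values, and temperature below are illustrative, not from any particular Inverse Q-Learning implementation):

```python
import torch

def soft_policy(q_values: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    # Boltzmann (softmax) policy implied by a Q-function, as in soft Q-learning:
    #   pi(a|s) = exp(Q(s,a)/alpha) / sum_a' exp(Q(s,a')/alpha)
    # where alpha is a temperature controlling how greedy the policy is.
    return torch.softmax(q_values / alpha, dim=-1)

# Example: Q-values for 4 actions in one state (numbers are illustrative).
q = torch.tensor([1.0, 2.0, 0.5, 1.5])
print(soft_policy(q, alpha=0.5))  # lower alpha concentrates mass on argmax
```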

Source: Learning to Imitate

Papers

Showing 1021–1030 of 2122 papers

Good Data Is All Imitation Learning Needs
Lagrangian Generative Adversarial Imitation Learning with Safety
Crossing the Human-Robot Embodiment Gap with Sim-to-Real RL using One Human Demonstration
Good Better Best: Self-Motivated Imitation Learning for noisy Demonstrations
Cross-Episodic Curriculum for Transformer Agents
Autonomous Navigation through intersections with Graph Convolutional Networks and Conditional Imitation Learning for Self-driving Cars
AGIL: Learning Attention from Human for Visuomotor Tasks
Goal-Driven Imitation Learning from Observation by Inferring Goal Proximity
Goal-Directed Design Agents: Integrating Visual Imitation with One-Step Lookahead Optimization for Generative Design
Cross-Domain Imitation Learning with a Dual Structure
Page 103 of 213

No leaderboard results yet.