
Imitation Learning

Imitation Learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action to take at the state being visited. To learn the behavior policy, the demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which the demonstrated decisions are optimal.
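As a concrete illustration of the BC approach, here is a minimal supervised-learning sketch in PyTorch; the network shape, dimensions, and placeholder demonstration tensors are illustrative assumptions, not taken from the source.

import torch
import torch.nn as nn

# Behavior Cloning: treat each demonstrated (state, action) pair as a
# supervised (input, label) example and fit a policy network to it.
# The states/actions below are random placeholders standing in for
# expert demonstrations.
state_dim, n_actions = 4, 2                      # illustrative sizes
states = torch.randn(256, state_dim)             # placeholder demo states
actions = torch.randint(0, n_actions, (256,))    # placeholder demo actions

policy = nn.Sequential(
    nn.Linear(state_dim, 64),
    nn.ReLU(),
    nn.Linear(64, n_actions),                    # logits over actions
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()                  # action-classification loss

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(policy(states), actions)      # imitate the expert labels
    loss.backward()
    optimizer.step()

At deployment time the learned mapping is queried directly, e.g. policy(new_state).argmax(dim=-1) selects the imitated action; generalizing beyond the demonstrated states is exactly what the supervised formulation provides.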

Finally, a newer methodology, Inverse Q-Learning, aims to learn Q-functions directly from expert data, implicitly representing rewards, under which the optimal policy is given as a Boltzmann distribution, similar to soft Q-learning.
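Concretely, the Boltzmann policy referred to above takes the form pi(a|s) = exp(Q(s,a)/T) / sum over a' of exp(Q(s,a')/T). Below is a minimal sketch of sampling actions from such a policy; the temperature T and the placeholder Q-values are illustrative assumptions.

import torch

def boltzmann_policy(q_values: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    # pi(a|s) = softmax(Q(s, .) / T): a lower temperature acts greedier,
    # a higher temperature acts closer to uniformly random.
    return torch.softmax(q_values / temperature, dim=-1)

q = torch.tensor([1.0, 2.0, 0.5])                 # placeholder Q(s, .) values
probs = boltzmann_policy(q, temperature=0.5)      # action distribution at s
action = torch.multinomial(probs, num_samples=1)  # sample an action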

Source: Learning to Imitate

Papers

Showing 1291–1300 of 2122 papers

Title | Status | Hype
Stabilized Likelihood-based Imitation Learning via Denoising Continuous Normalizing Flow |  | 0
Imitation Learning from Pixel Observations for Continuous Control |  | 0
What Would the Expert do()?: Causal Imitation Learning |  | 0
Fight fire with fire: countering bad shortcuts in imitation learning with good shortcuts |  | 0
DemoDICE: Offline Imitation Learning with Supplementary Imperfect Demonstrations |  | 0
Meta-Imitation Learning by Watching Video Demonstrations |  | 0
Benchmarking Sample Selection Strategies for Batch Reinforcement Learning |  | 0
CrowdPlay: Crowdsourcing human demonstration data for offline learning in Atari games |  | 0
Language Model Pre-training Improves Generalization in Policy Learning |  | 0
Distributional Decision Transformer for Hindsight Information Matching |  | 0

Leaderboard

No leaderboard results yet.