
Imitation Learning

Imitation Learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action to take in the state being visited. The demonstrated actions are typically used in one of two ways to learn the behavior policy. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner, as sketched below. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which those decisions are optimal.
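
To make the BC formulation concrete, here is a minimal sketch in PyTorch: a classifier trained on (state, action) pairs, with each demonstrated action serving as the target label for its state. The network shape, hyperparameters, and the randomly generated demonstrations are placeholder assumptions for illustration, not a reference implementation.

```python
import torch
import torch.nn as nn

# Placeholder demonstration data: in practice, states and actions
# come from recorded expert trajectories.
state_dim, n_actions, n_demos = 4, 2, 256
states = torch.randn(n_demos, state_dim)
actions = torch.randint(0, n_actions, (n_demos,))

# Policy network: a generalized mapping from states to action logits.
policy = nn.Sequential(
    nn.Linear(state_dim, 64),
    nn.ReLU(),
    nn.Linear(64, n_actions),
)

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()  # demonstrated action = target label

for epoch in range(100):
    logits = policy(states)
    loss = loss_fn(logits, actions)  # supervised learning on (state, action) pairs
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Acting: pick the highest-scoring action for a new state.
with torch.no_grad():
    action = policy(torch.randn(1, state_dim)).argmax(dim=-1)
```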

Finally, a newer methodology, Inverse Q-Learning, aims to learn Q-functions directly from expert data. These Q-functions implicitly represent rewards, and the corresponding optimal policy can be given as a Boltzmann distribution over them, as in soft Q-learning.
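
The sketch below illustrates the two quantities this paragraph refers to, under soft Q-learning conventions: the Boltzmann policy pi(a|s) proportional to exp(Q(s,a)/alpha), and the soft state value V(s) = alpha * logsumexp_a Q(s,a)/alpha through which a learned Q implicitly encodes a reward, r(s,a) = Q(s,a) - gamma * V(s'). The temperature alpha and the example Q-values are illustrative assumptions.

```python
import torch

def boltzmann_policy(q_values: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    # pi(a|s) proportional to exp(Q(s, a) / alpha), as in soft Q-learning.
    # alpha is a temperature: lower values approach the greedy policy.
    return torch.softmax(q_values / alpha, dim=-1)

def soft_value(q_values: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    # V(s) = alpha * logsumexp_a Q(s, a) / alpha. With discount gamma, the
    # reward implicitly represented by Q is r(s, a) = Q(s, a) - gamma * V(s').
    return alpha * torch.logsumexp(q_values / alpha, dim=-1)

# Illustrative Q-values for three actions at one state (placeholder numbers).
q = torch.tensor([2.0, 1.0, 0.5])
print(boltzmann_policy(q, alpha=1.0))  # soft distribution over actions
print(boltzmann_policy(q, alpha=0.1))  # nearly greedy as alpha -> 0
print(soft_value(q))                   # soft state value
```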

Source: Learning to Imitate

Papers

Showing 101–110 of 2122 papers

Title | Status | Hype
DeformPAM: Data-Efficient Learning for Long-horizon Deformable Object Manipulation via Preference-based Action Alignment | Code | 1
DEMO: Reframing Dialogue Interaction with Fine-grained Element Modeling | Code | 1
All You Need Is Supervised Learning: From Imitation Learning to Meta-RL With Upside Down RL | Code | 1
Confidence-Aware Imitation Learning from Demonstrations with Varying Optimality | Code | 1
Learning Constrained Adaptive Differentiable Predictive Control Policies With Guarantees | Code | 1
Diffusing States and Matching Scores: A New Framework for Imitation Learning | Code | 1
Cross-Domain Imitation Learning via Optimal Transport | Code | 1
Discriminator Soft Actor Critic without Extrinsic Rewards | Code | 1
Coherent Soft Imitation Learning | Code | 1
A GAN-Like Approach for Physics-Based Imitation Learning and Interactive Character Control | Code | 1

No leaderboard results yet.