
Imitation Learning

Imitation Learning is a framework for learning a behavior policy from demonstrations. Usually, demonstrations are presented as state-action trajectories, with each pair indicating the action to take at the state being visited. To learn the behavior policy, the demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats the action as the target label for each state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which the demonstrated decisions are optimal.
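
As a concrete illustration of the BC approach, here is a minimal sketch (PyTorch) that fits a policy network to demonstration (state, action) pairs with a standard supervised loss; the network sizes, dimensions, and the random placeholder data are illustrative assumptions, not from the source.

import torch
import torch.nn as nn

state_dim, action_dim = 8, 4          # hypothetical dimensions

# Policy network: maps a state to logits over discrete actions.
policy = nn.Sequential(
    nn.Linear(state_dim, 64),
    nn.ReLU(),
    nn.Linear(64, action_dim),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()       # demonstrated action = target label

# Placeholder demonstration data: (N, state_dim) states, (N,) action indices.
demo_states = torch.randn(1024, state_dim)
demo_actions = torch.randint(0, action_dim, (1024,))

for epoch in range(10):
    logits = policy(demo_states)
    loss = loss_fn(logits, demo_actions)   # supervised state-to-action fit
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()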

Finally, a newer methodology, Inverse Q-Learning, aims to learn Q-functions directly from expert data. These Q-functions implicitly represent rewards, and the optimal policy under them can be expressed as a Boltzmann distribution over Q-values, similar to soft Q-learning.
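
The sketch below shows how such a Boltzmann policy can be recovered from a learned Q-function, pi(a|s) proportional to exp(Q(s,a) / alpha), as in soft Q-learning; the Q-network and temperature value are illustrative assumptions, not the specifics of any particular inverse Q-learning method cited on this page.

import torch
import torch.nn as nn

state_dim, action_dim, alpha = 8, 4, 0.1   # hypothetical sizes and temperature

# Q-network: outputs Q(s, a) for every discrete action at once.
q_net = nn.Sequential(
    nn.Linear(state_dim, 64),
    nn.ReLU(),
    nn.Linear(64, action_dim),
)

def boltzmann_policy(state):
    """Return pi(.|s) as a softmax over Q-values scaled by the temperature alpha."""
    with torch.no_grad():
        q_values = q_net(state)
        return torch.softmax(q_values / alpha, dim=-1)

probs = boltzmann_policy(torch.randn(state_dim))
action = torch.multinomial(probs, num_samples=1)   # sample an action from the policy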

Source: Learning to Imitate

Papers

Showing 351-360 of 2122 papers

Title | Status | Hype
Latent Weight Diffusion: Generating reactive policies instead of trajectories | - | 0
Diffusing States and Matching Scores: A New Framework for Imitation Learning | Code | 1
DDIL: Diversity Enhancing Diffusion Distillation With Imitation Learning | - | 0
DeformPAM: Data-Efficient Learning for Long-horizon Deformable Object Manipulation via Preference-based Action Alignment | Code | 1
ILAEDA: An Imitation Learning Based Approach for Automatic Exploratory Data Analysis | - | 0
How to Leverage Demonstration Data in Alignment for Large Language Model? A Self-Imitation Learning Perspective | Code | 0
Conformalized Interactive Imitation Learning: Handling Expert Shift and Intermittent Feedback | - | 0
Zero-Shot Offline Imitation Learning via Optimal Transport | Code | 1
ARCap: Collecting High-quality Human Demonstrations for Robot Learning with Augmented Reality Feedback | - | 0
UNIQ: Offline Inverse Q-learning for Avoiding Undesirable Demonstrations | Code | 0
Page 36 of 213

No leaderboard results yet.