
Imitation Learning

Imitation Learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action the expert took in the state being visited. The demonstrated actions are typically used in one of two ways to learn the behavior policy. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized state-to-action mapping in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which those decisions are optimal.
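
As a minimal sketch of the BC recipe described above, assuming a small dataset of discrete expert (state, action) pairs and PyTorch; the network shape, dimensions, and training loop here are illustrative placeholders, not taken from any particular paper:

```python
import torch
import torch.nn as nn

# Hypothetical demonstration data: states of dimension 8 and
# discrete expert actions from a 4-way action space (illustrative).
states = torch.randn(1024, 8)
actions = torch.randint(0, 4, (1024,))

# Behavior Cloning: treat the expert action as a supervised label
# and fit a state -> action classifier.
policy = nn.Sequential(
    nn.Linear(8, 64), nn.ReLU(),
    nn.Linear(64, 4),  # logits over the 4 discrete actions
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(100):
    logits = policy(states)
    loss = loss_fn(logits, actions)  # standard classification loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At test time, act greedily (or sample) from the learned mapping.
with torch.no_grad():
    action = policy(torch.randn(8)).argmax().item()
```

Because BC reduces imitation to supervised learning, it inherits the usual caveat: small action errors compound at test time, since the policy visits states the expert never demonstrated.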

Finally, a newer methodology, Inverse Q-Learning, aims to learn Q-functions directly from expert data, implicitly representing rewards; under the learned Q-function, the optimal policy is given as a Boltzmann distribution, as in soft Q-learning.
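
As a hedged sketch of the policy-recovery step only: given a learned Q-function over discrete actions, the Boltzmann policy is pi(a|s) proportional to exp(Q(s,a)/alpha). The function below and its temperature and Q-values are illustrative assumptions, not the procedure of any specific inverse Q-learning paper:

```python
import numpy as np

def boltzmann_policy(q_values, alpha=1.0):
    """Soft (Boltzmann) policy over discrete actions:
    pi(a|s) = exp(Q(s,a)/alpha) / sum_a' exp(Q(s,a')/alpha).
    alpha is the temperature; alpha -> 0 approaches the greedy policy."""
    logits = np.asarray(q_values, dtype=float) / alpha
    logits -= logits.max()  # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Example: Q-values for 4 actions at some state (made-up numbers).
print(boltzmann_policy([1.0, 2.0, 0.5, 1.5], alpha=0.5))
```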

Source: Learning to Imitate

Papers

Showing 1131–1140 of 2122 papers

| Title | Status | Hype |
| --- | --- | --- |
| Divide & Conquer Imitation Learning | Code | 0 |
| Understanding Game-Playing Agents with Natural Language Annotations | Code | 0 |
| What Matters in Language Conditioned Robotic Imitation Learning over Unstructured Data | Code | 1 |
| Causal Confusion and Reward Misidentification in Preference-Based Reward Learning | | 0 |
| When Should We Prefer Offline Reinforcement Learning Over Behavioral Cloning? | | 0 |
| Habitat-Web: Learning Embodied Object-Search Strategies from Human Demonstrations at Scale | | 0 |
| Imitating, Fast and Slow: Robust learning from demonstrations via decision-time planning | | 0 |
| Demonstrate Once, Imitate Immediately (DOME): Learning Visual Servoing for One-Shot Imitation Learning | | 0 |
| Learning to Drive by Watching YouTube Videos: Action-Conditioned Contrastive Policy Pretraining | Code | 1 |
| Learning Generalizable Dexterous Manipulation from Human Grasp Affordance | | 0 |

No leaderboard results yet.