
Imitation Learning

Imitation Learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action to take in the state being visited. To learn the behavior policy, the demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner (a minimal sketch follows this paragraph). The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and seeks a reward/cost function under which the demonstrated decisions are optimal.
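As a minimal sketch of BC (not from the source), assuming a discrete action space and demonstrations already collected as tensors: the policy network, layer sizes, hyperparameters, and names (`BCPolicy`, `train_bc`) are illustrative assumptions, not taken from any particular paper.

```python
import torch
import torch.nn as nn

class BCPolicy(nn.Module):
    """Maps states to logits over a discrete action set."""
    def __init__(self, state_dim, num_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64),
            nn.ReLU(),
            nn.Linear(64, num_actions),  # logits, one per action
        )

    def forward(self, states):
        return self.net(states)

def train_bc(policy, demo_states, demo_actions, epochs=100, lr=1e-3):
    """Fit the policy to demonstrated actions with cross-entropy,
    treating each (state, action) pair as an (input, label) example."""
    optimizer = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(policy(demo_states), demo_actions)
        loss.backward()
        optimizer.step()
    return policy
```

Because BC reduces to ordinary supervised learning, training needs no interaction with the environment; any classifier or regressor could stand in for the network above.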

Finally, a newer methodology, Inverse Q-Learning, aims to learn Q-functions directly from expert data, implicitly representing rewards; under the learned Q-function, the optimal policy can be given as a Boltzmann distribution, as in soft Q-learning.
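As a minimal sketch of that Boltzmann policy (not from the source), assuming a discrete action space and Q-values already computed for one state: pi(a|s) is proportional to exp(Q(s,a) / alpha), where alpha is an assumed temperature hyperparameter and the example values are illustrative.

```python
import torch

def boltzmann_policy(q_values, alpha=1.0):
    # pi(a|s) proportional to exp(Q(s,a) / alpha);
    # alpha is a temperature controlling how greedy the policy is.
    return torch.softmax(q_values / alpha, dim=-1)

# Usage sketch: hypothetical Q-values for 4 actions in one state.
q = torch.tensor([1.2, 0.3, -0.5, 2.0])
probs = boltzmann_policy(q, alpha=0.5)
action = torch.distributions.Categorical(probs=probs).sample()
```

As alpha approaches 0 the policy approaches argmax over Q-values; larger alpha yields more uniform, exploratory behavior.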

Source: Learning to Imitate

Papers

Showing 1931–1940 of 2122 papers

Title | Status | Hype
Addressing reward bias in Adversarial Imitation Learning with neutral reward functions | Code | 0
Policy Improvement using Language Feedback Models | Code | 0
Decentralized policy learning with partial observation and mechanical constraints for multiperson modeling | Code | 0
InfoGAIL: Interpretable Imitation Learning from Visual Demonstrations | Code | 0
Third-Person Imitation Learning | Code | 0
Inferring Versatile Behavior from Demonstrations by Matching Geometric Descriptors | Code | 0
Robust Question Answering against Distribution Shifts with Test-Time Adaptation: An Empirical Study | Code | 0
Third-Person Visual Imitation Learning via Decoupled Hierarchical Controller | Code | 0
Beyond spiking networks: the computational advantages of dendritic amplification and input segregation | Code | 0
Imitation Learning with Human Eye Gaze via Multi-Objective Prediction | Code | 0

No leaderboard results yet.