Imitation Learning

Imitation Learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action to take in the state being visited. To learn the behavior policy, the demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which the demonstrated decisions are optimal.
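To make the BC half of this concrete, here is a minimal sketch of behavior cloning as supervised regression. The network architecture, dimensions, data, and hyperparameters are illustrative stand-ins (real demonstrations would replace the random tensors), not details from the source.

```python
# Minimal behavior-cloning sketch: fit a policy network to expert
# (state, action) pairs by supervised regression.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 2  # hypothetical dimensions

policy = nn.Sequential(
    nn.Linear(STATE_DIM, 64),
    nn.ReLU(),
    nn.Linear(64, ACTION_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Stand-in batch of expert demonstrations (random here for illustration).
states = torch.randn(256, STATE_DIM)
actions = torch.randn(256, ACTION_DIM)

for _ in range(100):
    optimizer.zero_grad()
    # The demonstrated action is the target label for each state.
    loss = nn.functional.mse_loss(policy(states), actions)
    loss.backward()
    optimizer.step()
```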

Finally, a newer methodology, Inverse Q-Learning, aims to learn a Q-function directly from expert data, implicitly representing the reward, under which the optimal policy is given by a Boltzmann distribution, similar to soft Q-learning.
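To make the Boltzmann policy concrete, here is a small sketch: given Q-values for a state's discrete actions, the policy is a softmax over Q/α. The temperature α and the Q-values below are made-up illustrations, not values from the source.

```python
# Boltzmann (softmax) policy over a learned Q-function, as in soft
# Q-learning: pi(a|s) is proportional to exp(Q(s, a) / alpha).
import torch

alpha = 0.1                                # temperature (assumed hyperparameter)
q_values = torch.tensor([1.2, 0.4, -0.3])  # Q(s, a) for three discrete actions
policy = torch.softmax(q_values / alpha, dim=0)  # action probabilities
action = torch.multinomial(policy, num_samples=1)  # sample an action
```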

Source: Learning to Imitate

Papers

Showing 501–510 of 2122 papers

Title | Status | Hype
Vision-and-Language Navigation Generative Pretrained Transformer |  | 0
Provably Efficient Off-Policy Adversarial Imitation Learning with Convergence Guarantees |  | 0
Multi-Agent Inverse Reinforcement Learning in Real World Unstructured Pedestrian Crowds |  | 0
Diffusion-Reward Adversarial Imitation Learning |  | 0
How to Leverage Diverse Demonstrations in Offline Imitation Learning | Code | 1
OLLIE: Imitation Learning from Offline Pretraining to Online Finetuning | Code | 1
Amortized nonmyopic active search via deep imitation learning |  | 0
Efficient Imitation Learning with Conservative World Models |  | 0
RuleFuser: An Evidential Bayes Approach for Rule Injection in Imitation Learned Planners and Predictors for Robustness under Distribution Shifts |  | 0
Reducing Risk for Assistive Reinforcement Learning Policies with Diffusion Models |  | 0

No leaderboard results yet.