Imitation Learning

Imitation Learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, where each pair indicates the action the expert took in the state being visited. To learn the behavior policy, the demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which the demonstrated decisions are optimal.
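As a concrete illustration of the BC formulation, here is a minimal sketch in PyTorch that fits a small network to map states to demonstrated actions with a standard cross-entropy loss. The state/action dimensions and the synthetic demonstration data are placeholders, not any particular benchmark or paper's setup.

import torch
import torch.nn as nn

# Placeholder demonstration set: 1000 (state, action) pairs.
# In practice these come from expert trajectories.
state_dim, n_actions = 8, 4
states = torch.randn(1000, state_dim)
actions = torch.randint(0, n_actions, (1000,))

# Behavior Cloning: treat the demonstrated action as a class label
# and fit a state -> action classifier in a supervised manner.
policy = nn.Sequential(
    nn.Linear(state_dim, 64), nn.ReLU(),
    nn.Linear(64, n_actions),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(50):
    logits = policy(states)
    loss = nn.functional.cross_entropy(logits, actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The cloned policy acts by picking the most probable action.
with torch.no_grad():
    action = policy(states[:1]).argmax(dim=-1)

Because BC reduces imitation to supervised learning, small prediction errors can drive the policy into states unseen in the demonstrations, a weakness that partly motivates the IRL and inverse Q-learning views.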

Finally, a newer methodology, Inverse Q-Learning, aims to learn Q-functions directly from expert data, implicitly representing the reward; under the learned Q-function, the optimal policy is given by a Boltzmann distribution, as in soft Q-learning.
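To make the last point concrete, the snippet below is a sketch of how a policy is recovered from a Q-function as a Boltzmann distribution, pi(a|s) proportional to exp(Q(s,a) / alpha). The Q-values and the temperature alpha here are placeholders; in inverse Q-learning, the Q-function itself would be the one learned from expert data.

import torch

# Placeholder Q-values for a single state over 4 actions; in inverse
# Q-learning these would come from a Q-network trained on expert data.
q_values = torch.tensor([1.2, 0.3, -0.5, 2.0])
alpha = 0.5  # temperature; lower values make the policy greedier

# Boltzmann policy: pi(a|s) is proportional to exp(Q(s,a) / alpha).
policy = torch.softmax(q_values / alpha, dim=-1)

# Acting: sample an action from the soft policy.
action = torch.multinomial(policy, num_samples=1)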

Source: Learning to Imitate

Papers

Showing 551–560 of 2122 papers

Title | Status | Hype
Grounding Language Plans in Demonstrations Through Counterfactual Perturbations | - | 0
Dyna-LfLH: Learning Agile Navigation in Dynamic Environments from Learned Hallucination | - | 0
IBCB: Efficient Inverse Batched Contextual Bandit for Behavioral Evolution History | - | 0
Interpretable Modeling of Deep Reinforcement Learning Driven Scheduling | - | 0
Automated Feature Selection for Inverse Reinforcement Learning | - | 0
Self-Improvement for Neural Combinatorial Optimization: Sample without Replacement, but Improvement | Code | 1
Rethinking Adversarial Inverse Reinforcement Learning: Policy Imitation, Transferable Reward Recovery and Algebraic Equilibrium Proof | Code | 0
Augmented Reality Demonstrations for Scalable Robot Imitation Learning | - | 0
Information-Theoretic Distillation for Reference-less Summarization | - | 0
What AIs are not Learning (and Why) | - | 0

No leaderboard results yet.