
Imitation Learning

Imitation Learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action to take in the state being visited. To learn the behavior policy, the demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats each demonstrated action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to recover a reward/cost function under which those decisions are optimal.
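
As a rough illustration of the BC recipe described above, the sketch below trains a small policy network on placeholder demonstrations with PyTorch. The state dimension, action count, network size, and the random "demonstrations" are all assumptions made for the example, not part of any specific method.

```python
# Minimal Behavior Cloning sketch (PyTorch): treat each (state, action)
# pair from the demonstrations as a supervised training example.
import torch
import torch.nn as nn

STATE_DIM, NUM_ACTIONS = 4, 2  # assumed dimensions for illustration

policy = nn.Sequential(
    nn.Linear(STATE_DIM, 64),
    nn.ReLU(),
    nn.Linear(64, NUM_ACTIONS),  # logits over discrete actions
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()  # demonstrated action is the target label

# Placeholder demonstrations; in practice these come from an expert.
states = torch.randn(256, STATE_DIM)
actions = torch.randint(0, NUM_ACTIONS, (256,))

for epoch in range(100):
    logits = policy(states)
    loss = loss_fn(logits, actions)  # push the policy toward expert actions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```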

Finally, a newer methodology, Inverse Q-Learning, aims to learn Q-functions directly from expert data, implicitly representing rewards. Under the learned Q-function, the optimal policy is given as a Boltzmann distribution over actions, similar to soft Q-learning.
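
To make the Boltzmann-policy statement concrete, here is a minimal sketch: given Q-values Q(s, ·) for a state, the policy is pi(a|s) ∝ exp(Q(s, a) / alpha), where alpha is a temperature parameter. The Q-values below are made-up numbers for illustration only.

```python
# Sketch: a learned Q-function induces a Boltzmann policy,
# pi(a|s) proportional to exp(Q(s, a) / alpha), as in soft Q-learning.
import torch

def boltzmann_policy(q_values: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Action distribution from Q-values; alpha is the temperature."""
    return torch.softmax(q_values / alpha, dim=-1)

q = torch.tensor([1.2, 0.3, -0.5])  # hypothetical Q(s, .) for one state
print(boltzmann_policy(q))          # higher-Q actions get more probability mass
```

As alpha shrinks the distribution concentrates on the greedy action; as it grows the policy approaches uniform.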

Source: Learning to Imitate

Papers

Showing 131–140 of 2,122 papers

Title | Status | Hype
DeeCap: Dynamic Early Exiting for Efficient Image Captioning | Code | 1
Active Imitation Learning with Noisy Guidance | Code | 1
Adversarial Soft Advantage Fitting: Imitation Learning without Policy Optimization | Code | 1
Curriculum Offline Imitation Learning | Code | 1
Adversarial Option-Aware Hierarchical Imitation Learning | Code | 1
Cross-Domain Imitation Learning via Optimal Transport | Code | 1
Crossway Diffusion: Improving Diffusion-based Visuomotor Policy via Self-supervised Learning | Code | 1
CRIL: Continual Robot Imitation Learning via Generative and Prediction Model | Code | 1
When should we prefer Decision Transformers for Offline Reinforcement Learning? | Code | 1
Critic Guided Segmentation of Rewarding Objects in First-Person Views | Code | 1
