
Imitation Learning

Imitation Learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, where each pair indicates the action to take in the state being visited. The demonstrated actions are typically used in one of two ways to learn the behavior policy. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which the demonstrated decisions are optimal.
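For intuition, here is a minimal Behavior Cloning sketch in PyTorch, assuming discrete actions and demonstrations given as (state, action) pairs; the dimensions, network, and `bc_update` helper are hypothetical placeholders, not a reference implementation:

```python
import torch
import torch.nn as nn

# Hypothetical dimensions for illustration only.
STATE_DIM, NUM_ACTIONS = 8, 4

# BC treats each demonstrated action as the supervised target for its state.
policy = nn.Sequential(
    nn.Linear(STATE_DIM, 64),
    nn.ReLU(),
    nn.Linear(64, NUM_ACTIONS),  # logits over discrete actions
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def bc_update(states, actions):
    """One supervised step on a batch of (state, action) demonstration pairs."""
    logits = policy(states)          # (batch, NUM_ACTIONS)
    loss = loss_fn(logits, actions)  # actions: (batch,) integer labels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example batch of demonstrations (random placeholders, not real data).
states = torch.randn(32, STATE_DIM)
actions = torch.randint(0, NUM_ACTIONS, (32,))
bc_update(states, actions)
```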

Finally, a newer methodology, Inverse Q-Learning, aims to learn Q-functions directly from expert data, implicitly representing rewards; under the learned Q-functions, the optimal policy can be given as a Boltzmann distribution, similar to soft Q-learning.
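As a sketch of the Boltzmann policy mentioned above, the action distribution is a softmax over Q-values, pi(a|s) proportional to exp(Q(s,a)/alpha); the temperature `alpha` is an assumed hyperparameter here:

```python
import numpy as np

def boltzmann_policy(q_values, alpha=1.0):
    """Action distribution pi(a|s) proportional to exp(Q(s,a) / alpha),
    as in soft Q-learning.

    q_values: array of Q(s, a) for each action at the current state.
    alpha: temperature; lower values make the policy greedier.
    """
    logits = q_values / alpha
    logits -= logits.max()  # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Example: Q-values for three actions at some state.
print(boltzmann_policy(np.array([1.0, 2.0, 0.5]), alpha=0.5))
```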

Source: Learning to Imitate

Papers

Showing 821–830 of 2122 papers

| Title | Status | Hype |
| --- | --- | --- |
| SequenceMatch: Imitation Learning for Autoregressive Sequence Modelling with Backtracking | | 0 |
| Divide and Repair: Using Options to Improve Performance of Imitation Learning Against Adversarial Demonstrations | | 0 |
| Orca: Progressive Learning from Complex Explanation Traces of GPT-4 | Code | 1 |
| Data Quality in Imitation Learning | | 0 |
| On the Sample Complexity of Imitation Learning for Smoothed Model Predictive Control | | 0 |
| PAGAR: Taming Reward Misalignment in Inverse Reinforcement Learning-Based Imitation Learning with Protagonist Antagonist Guided Adversarial Reward | | 0 |
| Preference-grounded Token-level Guidance for Language Model Fine-tuning | Code | 1 |
| Thought Cloning: Learning to Think while Acting by Imitating Human Thinking | Code | 2 |
| LIV: Language-Image Representations and Rewards for Robotic Control | Code | 1 |
| Causal Imitability Under Context-Specific Independence Relations | | 0 |

No leaderboard results yet.