
Imitation Learning

Imitation Learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action to take at the state being visited. To learn the behavior policy, the demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which the demonstrated decisions are optimal.
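As a concrete illustration of the BC route, the sketch below fits a small network to (state, action) pairs with a standard supervised objective. This is a minimal sketch, not code from the source; the dimensions, network size, and synthetic demonstration data are all assumptions for illustration.

```python
# Minimal behavior cloning sketch: learn a state -> action mapping
# from demonstration pairs by supervised learning.
# All sizes and the random "expert" data below are illustrative assumptions.
import torch
import torch.nn as nn

state_dim, n_actions = 4, 2

# Stand-in for expert demonstrations: (state, action) pairs.
states = torch.randn(256, state_dim)
actions = torch.randint(0, n_actions, (256,))

policy = nn.Sequential(
    nn.Linear(state_dim, 64), nn.ReLU(),
    nn.Linear(64, n_actions),            # logits over discrete actions
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()          # demonstrated action = target label

for epoch in range(100):
    logits = policy(states)
    loss = loss_fn(logits, actions)      # supervised objective on expert pairs
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Acting at a new state: take the most likely action under the cloned policy.
with torch.no_grad():
    action = policy(torch.randn(state_dim)).argmax().item()
```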

Finally, a newer methodology, Inverse Q-Learning, aims to learn Q-functions directly from expert data, implicitly representing rewards, under which the optimal policy is given as a Boltzmann distribution, similar to soft Q-learning.
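The Boltzmann policy mentioned here takes the form pi(a|s) proportional to exp(Q(s,a)/alpha), with temperature alpha. The sketch below shows only that final step, with made-up Q-values; in inverse Q-learning the Q-function would instead be recovered from expert data.

```python
# Sketch of the Boltzmann policy used in soft Q-learning style methods:
# pi(a|s) is proportional to exp(Q(s,a) / alpha).
# The Q-values and temperature here are illustrative assumptions.
import numpy as np

def boltzmann_policy(q_values: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Action distribution pi(a|s) proportional to exp(Q(s,a)/alpha)."""
    z = q_values / alpha
    z -= z.max()                       # shift for numerical stability
    probs = np.exp(z)
    return probs / probs.sum()

q = np.array([1.0, 2.0, 0.5])          # hypothetical Q(s, .) for three actions
print(boltzmann_policy(q, alpha=0.5))  # lower temperature -> closer to greedy
```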

Source: Learning to Imitate

Papers

Showing 801–810 of 2122 papers (page 81 of 213)

Title | Status | Hype
Fighting Uncertainty with Gradients: Offline Reinforcement Learning via Diffusion Score Matching | - | 0
Comparing the Efficacy of Fine-Tuning and Meta-Learning for Few-Shot Policy Imitation | Code | 0
CLUE: Calibrated Latent Guidance for Offline Reinforcement Learning | - | 0
One-shot Imitation Learning via Interaction Warping | - | 0
Reasoning over the Air: A Reasoning-based Implicit Semantic-Aware Communication Framework | Code | 1
SeMAIL: Eliminating Distractors in Visual Imitation via Separated Models | - | 0
Active Policy Improvement from Multiple Black-box Oracles | Code | 0
Learning Space-Time Semantic Correspondences | - | 0
Mimicking Better by Matching the Approximate Action Distribution | Code | 0
Residual Q-Learning: Offline and Online Policy Customization without Value | - | 0
