Imitation Learning

Imitation Learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action the expert took at the visited state. The demonstrated actions are typically used in two ways to learn the behavior policy. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which those decisions are optimal.
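As a concrete illustration of the BC approach, the sketch below fits a small network to map states to demonstrated actions with a standard supervised loss. The network size, data shapes, and hyperparameters are illustrative assumptions, not taken from any particular paper.

```python
# Minimal behavior-cloning sketch (shapes and hyperparameters are assumptions).
import torch
import torch.nn as nn

state_dim, num_actions = 8, 4           # assumed dimensions of the demonstration data
policy = nn.Sequential(                  # policy network: state -> action logits
    nn.Linear(state_dim, 64), nn.ReLU(),
    nn.Linear(64, num_actions),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()          # demonstrated action serves as the target label

# Placeholder demonstrations: in practice these come from expert state-action trajectories.
states = torch.randn(1024, state_dim)
actions = torch.randint(0, num_actions, (1024,))

for epoch in range(10):
    logits = policy(states)              # predicted action distribution at each visited state
    loss = loss_fn(logits, actions)      # supervised imitation of the demonstrated action
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```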

Finally, a newer methodology, Inverse Q-Learning, aims to learn Q-functions directly from expert data, implicitly representing the reward; under the learned Q-function, the optimal policy is given as a Boltzmann distribution over Q-values, as in soft Q-learning.
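To make the Boltzmann-policy connection concrete, the sketch below converts Q-values at a single state into action probabilities via a temperature-scaled softmax; the specific Q-values and temperature are placeholder assumptions.

```python
# Recovering a Boltzmann (softmax) policy from a learned Q-function, as in soft Q-learning.
# The Q-values and temperature below are illustrative assumptions.
import numpy as np

def boltzmann_policy(q_values: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """pi(a|s) proportional to exp(Q(s, a) / temperature)."""
    z = q_values / temperature
    z -= z.max()                       # subtract max for numerical stability
    probs = np.exp(z)
    return probs / probs.sum()

q_s = np.array([1.2, 0.3, -0.5])       # Q(s, a) for three actions at some state s
print(boltzmann_policy(q_s, temperature=0.5))
```

Lower temperatures concentrate the policy on the highest-valued action, while higher temperatures yield more uniform, exploratory behavior.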

Source: Learning to Imitate

Papers

Showing 611–620 of 2122 papers

Concave Utility Reinforcement Learning: the Mean-Field Game Viewpoint
ConBaT: Control Barrier Transformer for Safe Policy Learning
Accelerating Federated Edge Learning via Topology Optimization
Feedback in Imitation Learning: The Three Regimes of Covariate Shift
f-GAIL: Learning f-Divergence for Generative Adversarial Imitation Learning
A Survey on Imitation Learning for Contact-Rich Tasks in Robotics
Computational-Statistical Tradeoffs at the Next-Token Prediction Barrier: Autoregressive and Imitation Learning under Misspecification
FDPP: Fine-tune Diffusion Policy with Human Preference
Compressed imitation learning
A Survey on Autonomous Vehicle Control in the Era of Mixed-Autonomy: From Physics-Based to AI-Guided Driving Policy Learning
