
Imitation Learning

Imitation Learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action to take at the visited state. To learn the behavior policy, the demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which the demonstrated decisions are optimal.
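
To make the supervised view taken by BC concrete, here is a minimal sketch that fits a small MLP policy to state-action pairs with a regression loss. Everything specific in it (the state/action dimensions, network size, and the random tensors standing in for real expert demonstrations) is an illustrative assumption, not taken from the cited source or any paper listed below.

```python
# Minimal Behavior Cloning sketch (PyTorch). Dimensions, network size,
# and the placeholder "demonstrations" are illustrative assumptions.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 4  # hypothetical continuous-control task

policy = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, ACTION_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# State-action pairs flattened from expert trajectories; random tensors
# here stand in for a real demonstration dataset.
states = torch.randn(1024, STATE_DIM)
actions = torch.randn(1024, ACTION_DIM)

for epoch in range(100):
    pred = policy(states)
    # Each demonstrated action is the supervised target for its state.
    loss = nn.functional.mse_loss(pred, actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```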

Finally, a newer methodology, Inverse Q-Learning, aims to learn Q-functions directly from expert data, implicitly representing the reward, under which the optimal policy can be expressed as a Boltzmann distribution over Q-values, similar to soft Q-learning.
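
As a sketch of the Boltzmann form mentioned above (not drawn from any specific Inverse Q-Learning paper), the policy induced by a Q-function is a temperature-scaled softmax over per-action Q-values. The Q-values and temperature below are hypothetical; discrete actions are assumed.

```python
# Boltzmann policy induced by a learned Q-function, as in soft Q-learning:
# pi(a|s) is proportional to exp(Q(s, a) / alpha).
import torch

def boltzmann_policy(q_values: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Turn per-action Q-values for one state into action probabilities."""
    return torch.softmax(q_values / alpha, dim=-1)

q = torch.tensor([1.0, 2.0, 0.5])       # hypothetical Q(s, a) for 3 actions
probs = boltzmann_policy(q, alpha=0.5)  # lower alpha -> closer to greedy argmax
```

Lower temperatures concentrate probability on the highest-Q action, while higher temperatures flatten the distribution toward uniform.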

Source: Learning to Imitate

Papers

Showing 1601–1610 of 2122 papers

- Navigation with QPHIL: Quantizing Planner for Hierarchical Implicit Q-Learning (Hype: 0)
- PLANRL: A Motion Planning and Imitation Learning Framework to Bootstrap Reinforcement Learning (Hype: 0)
- NEARL: Non-Explicit Action Reinforcement Learning for Robotic Control (Hype: 0)
- On Generalization of Adversarial Imitation Learning and Beyond (Hype: 0)
- NeRF in the Palm of Your Hand: Corrective Augmentation for Robotics via Novel-View Synthesis (Hype: 0)
- Nested-Wasserstein Self-Imitation Learning for Sequence Generation (Hype: 0)
- Neural Column Generation for Capacitated Vehicle Routing (Hype: 0)
- Neural Differentiable Integral Control Barrier Functions for Unknown Nonlinear Systems with Input Constraints (Hype: 0)
- Neural Dynamic Policies for End-to-End Sensorimotor Learning (Hype: 0)
- Neural Multivariate Regression: Qualitative Insights from the Unconstrained Feature Model (Hype: 0)

No leaderboard results yet.