
Imitation Learning

Imitation Learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action to take at the state being visited. To learn the behavior policy, the demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats the action as the target label for each state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which the demonstrated decisions are optimal.
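
The sketch below illustrates the Behavior Cloning idea in its simplest form: treat each demonstrated action as a supervised label for its state and fit a policy network with a classification loss. It is a minimal example, assuming PyTorch and a discrete action space; the tensors `demo_states` and `demo_actions` are placeholder stand-ins for real expert data, not part of any specific method's API.

```python
import torch
import torch.nn as nn

state_dim, num_actions = 8, 4  # placeholder dimensions

# Policy network: maps a state to logits over actions.
policy = nn.Sequential(
    nn.Linear(state_dim, 64),
    nn.ReLU(),
    nn.Linear(64, num_actions),
)

# Random stand-ins for expert state-action pairs from demonstrations.
demo_states = torch.randn(256, state_dim)
demo_actions = torch.randint(0, num_actions, (256,))

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()  # demonstrated action = supervised label

for epoch in range(100):
    logits = policy(demo_states)
    loss = loss_fn(logits, demo_actions)  # imitate the expert's choices
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At deployment, act greedily (or sample) from the learned state-to-action mapping.
with torch.no_grad():
    action = policy(demo_states[:1]).argmax(dim=-1)
```

IRL, by contrast, would use the same demonstrations to recover a reward function and then derive the policy by (forward) reinforcement learning under that reward.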

Finally, a newer methodology, Inverse Q-Learning, aims to learn Q-functions directly from expert data, implicitly representing rewards, under which the optimal policy can be given as a Boltzmann distribution, similar to soft Q-learning.
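
The following is a brief sketch of how such a Boltzmann policy is read off a learned Q-function, i.e. pi(a|s) proportional to exp(Q(s, a) / alpha). It assumes PyTorch; `q_net` and the temperature `alpha` are illustrative placeholders rather than the API of any particular inverse Q-learning method.

```python
import torch
import torch.nn as nn

state_dim, num_actions = 8, 4
alpha = 0.1  # temperature of the Boltzmann distribution

# Stand-in for a Q-function that would be learned from expert data.
q_net = nn.Sequential(
    nn.Linear(state_dim, 64),
    nn.ReLU(),
    nn.Linear(64, num_actions),
)

state = torch.randn(1, state_dim)
q_values = q_net(state)                        # Q(s, a) for every action a
policy = torch.softmax(q_values / alpha, -1)   # Boltzmann distribution over actions
action = torch.multinomial(policy, 1)          # sample an action from pi(a|s)
```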

Source: Learning to Imitate

Papers

Showing 1261–1270 of 2122 papers

Title | Status | Hype
Tackling the Low-resource Challenge for Canonical Segmentation | - | 0
TAIL: Task-specific Adapters for Imitation Learning with Large Pretrained Models | - | 0
TamedPUMA: safe and stable imitation learning with geometric fabrics | - | 0
TarGF: Learning Target Gradient Field to Rearrange Objects without Explicit Goal Specification | - | 0
Task-Driven Semantic Quantization and Imitation Learning for Goal-Oriented Communications | - | 0
Task-Induced Representation Learning | - | 0
Task-Relevant Adversarial Imitation Learning | - | 0
Task Tokens: A Flexible Approach to Adapting Behavior Foundation Models | - | 0
TASTE-Rob: Advancing Video Generation of Task-Oriented Hand-Object Interaction for Generalizable Robotic Manipulation | - | 0
Teaching UAVs to Race: End-to-End Regression of Agile Controls in Simulation | - | 0
Page 127 of 213

No leaderboard results yet.