
Imitation Learning

Imitation Learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action to take at the state being visited. To learn the behavior policy, the demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats each demonstrated action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which the demonstrated decisions are optimal.
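
To make the BC branch concrete, below is a minimal sketch of Behavior Cloning in PyTorch. The state dimension, action count, network sizes, and the random stand-in "demonstrations" are illustrative assumptions, not a reference implementation.

```python
import torch
import torch.nn as nn

# Illustrative assumptions: 8-dim continuous states, 4 discrete actions,
# and random tensors standing in for expert (state, action) pairs.
state_dim, n_actions = 8, 4
states = torch.randn(1024, state_dim)
actions = torch.randint(0, n_actions, (1024,))

# Policy network: a generalized mapping from states to actions.
policy = nn.Sequential(
    nn.Linear(state_dim, 64),
    nn.ReLU(),
    nn.Linear(64, n_actions),
)

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()  # demonstrated action = supervised target label

for epoch in range(50):
    logits = policy(states)          # predicted action scores per state
    loss = loss_fn(logits, actions)  # standard supervised classification loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At deployment, act greedily with respect to the cloned policy.
with torch.no_grad():
    action = policy(states[:1]).argmax(dim=-1)
```

The key point of BC is that nothing reinforcement-specific appears in the training loop: it is ordinary supervised learning on the demonstration pairs.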

Finally, a newer methodology, Inverse Q-Learning, aims to learn Q-functions directly from expert data, implicitly representing the reward, under which the optimal policy is given as a Boltzmann distribution over actions, as in soft Q-learning.
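
The policy-recovery step mentioned above amounts to a softmax over Q-values, π(a|s) ∝ exp(Q(s,a)/α). The sketch below shows this conversion; the Q-values and the temperature α are placeholder assumptions for illustration only.

```python
import numpy as np

def boltzmann_policy(q_values: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Return pi(a|s) proportional to exp(Q(s,a)/alpha), as in soft Q-learning.

    q_values: Q(s, a) for a fixed state s, one entry per action.
    alpha: temperature; smaller alpha -> closer to the greedy policy.
    """
    logits = q_values / alpha
    logits -= logits.max()       # subtract the max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Placeholder Q-values for a single state (assumed, not from a real model).
q = np.array([1.0, 2.0, 0.5])
print(boltzmann_policy(q, alpha=0.5))  # action 1 receives most of the mass
```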

Source: Learning to Imitate

Papers

Showing 591–600 of 2122 papers

| Title | Status | Hype |
| --- | --- | --- |
| HiMAP: Learning Heuristics-Informed Policies for Large-Scale Multi-Agent Pathfinding | Code | 1 |
| Path Planning based on 2D Object Bounding-box | | 0 |
| BeTAIL: Behavior Transformer Adversarial Imitation Learning from Human Racing Gameplay | | 0 |
| CyberDemo: Augmenting Simulated Human Demonstration for Real-World Dexterous Manipulation | | 0 |
| Deep Generative Models for Offline Policy Learning: Tutorial, Survey, and Perspectives on Future Directions | Code | 2 |
| DINOBot: Robot Manipulation via Retrieval and Alignment with Vision Foundation Models | | 0 |
| Align Your Intents: Offline Imitation Learning via Optimal Transport | | 0 |
| Tiny Reinforcement Learning for Quadruped Locomotion using Decision Transformers | Code | 0 |
| SPRINQL: Sub-optimal Demonstrations driven Offline Imitation Learning | Code | 0 |
| PRISE: LLM-Style Sequence Compression for Learning Temporal Action Abstractions in Control | Code | 1 |

No leaderboard results yet.