
Imitation Learning

Imitation Learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action to take at the state being visited. To learn the behavior policy, the demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The other, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which the demonstrated decisions are optimal.
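
As a rough illustration of the Behavior Cloning view, here is a minimal sketch, assuming a discrete action space and hypothetical `demo_states`/`demo_actions` tensors extracted from the demonstration trajectories; it treats each demonstrated action as a classification target and fits a state-to-action mapping with ordinary supervised learning (PyTorch is used purely for illustration).

```python
# Minimal Behavior Cloning sketch (illustrative; assumes discrete actions and
# hypothetical demo_states / demo_actions tensors built from demonstrations).
import torch
import torch.nn as nn

class BCPolicy(nn.Module):
    """Maps states to action logits, learned by supervised imitation."""
    def __init__(self, state_dim: int, num_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        return self.net(states)  # unnormalized action logits

def behavior_cloning(policy, demo_states, demo_actions, epochs=100, lr=1e-3):
    """Treat each demonstrated action as the target label for its state."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(policy(demo_states), demo_actions)
        loss.backward()
        opt.step()
    return policy

# Toy demonstration data (random here, purely to show the expected shapes).
demo_states = torch.randn(256, 8)           # 256 visited states, 8-dim observations
demo_actions = torch.randint(0, 4, (256,))  # demonstrated discrete actions
policy = behavior_cloning(BCPolicy(state_dim=8, num_actions=4), demo_states, demo_actions)
```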

Finally, a newer methodology, Inverse Q-Learning, aims at directly learning Q-functions from expert data, implicitly representing rewards, under which the optimal policy can be expressed as a Boltzmann distribution, similar to soft Q-learning.
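
Once a Q-function has been fit to the expert data, the corresponding policy is recovered as a Boltzmann (softmax) distribution over the Q-values, as in soft Q-learning: pi(a|s) is proportional to exp(Q(s,a)/alpha). A minimal sketch of this recovery step, assuming a hypothetical temperature `alpha` and Q-values produced by some already-learned Q-network:

```python
import torch

def boltzmann_policy(q_values: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Recover a stochastic policy from Q-values, as in soft Q-learning:
    pi(a|s) = exp(Q(s,a)/alpha) / sum_a' exp(Q(s,a')/alpha)."""
    return torch.softmax(q_values / alpha, dim=-1)

# Example: sample an action for a single state from (hypothetical) learned Q-values.
q_values = torch.tensor([[1.2, 0.3, -0.5, 2.0]])     # Q(s, a) for one state, four actions
action_probs = boltzmann_policy(q_values, alpha=0.5) # Boltzmann distribution over actions
action = torch.multinomial(action_probs, num_samples=1)
```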

Source: Learning to Imitate

Papers

Showing 1-10 of 2122 papers

Title | Status | Hype
Supervised Fine Tuning on Curated Data is Reinforcement Learning (and can be improved) | - | 0
The Imitation Game: Turing Machine Imitator is Length Generalizable Reasoner | - | 0
Fast Bilateral Teleoperation and Imitation Learning Using Sensorless Force Control via Accurate Dynamics Model | - | 0
LeAD: The LLM Enhanced Planning System Converged with End-to-end Autonomous Driving | - | 0
EC-Flow: Enabling Versatile Robotic Manipulation from Action-Unlabeled Videos via Embodiment-Centric Flow | - | 0
Advancing Learnable Multi-Agent Pathfinding Solvers with Active Fine-Tuning | Code | 2
World-aware Planning Narratives Enhance Large Vision-Language Model Planner | - | 0
Beyond-Expert Performance with Limited Demonstrations: Efficient Imitation Learning with Double Exploration | - | 0
Ark: An Open-source Python-based Framework for Robot Learning | - | 0
CodeDiffuser: Attention-Enhanced Diffusion Policy via VLM-Generated Code for Instruction Ambiguity | - | 0

No leaderboard results yet.