SOTAVerified

Imitation Learning

Imitation Learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action taken at the state being visited. The demonstrated actions are typically used in one of two ways to learn the behavior policy. The first, known as Behavior Cloning (BC), treats each demonstrated action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which the demonstrated decisions are optimal.
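
As a rough illustration of the BC setting described above, here is a minimal sketch of fitting a policy network to expert state-action pairs by supervised learning. The network architecture, dimensions, and the random placeholder demonstrations are assumptions for illustration only, not part of any particular paper listed below.

```python
# Minimal behavior-cloning sketch (hypothetical setup): fit a policy network
# to expert (state, action) pairs with a standard supervised loss.
import torch
import torch.nn as nn

state_dim, action_dim = 4, 2          # assumed environment dimensions
policy = nn.Sequential(               # simple MLP mapping states to action logits
    nn.Linear(state_dim, 64), nn.ReLU(),
    nn.Linear(64, action_dim),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Placeholder demonstrations -- in practice these come from an expert:
# expert_states: (N, state_dim) floats; expert_actions: (N,) discrete labels.
expert_states = torch.randn(256, state_dim)
expert_actions = torch.randint(0, action_dim, (256,))

for epoch in range(100):
    logits = policy(expert_states)
    # Treat the demonstrated action as the target label for each state.
    loss = nn.functional.cross_entropy(logits, expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```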

Finally, a newer methodology, Inverse Q-Learning, aims to learn Q-functions directly from expert data, implicitly representing the reward, under which the optimal policy can be expressed as a Boltzmann distribution over Q-values, as in soft Q-learning.
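
The sketch below shows the Boltzmann policy referred to above, i.e. action probabilities proportional to exp(Q(s, a)). The Q-values and temperature are hypothetical placeholders; in inverse Q-learning they would be recovered from expert data rather than given.

```python
# Boltzmann (softmax) policy over Q-values, as used in soft Q-learning.
import numpy as np

def boltzmann_policy(q_values, temperature=1.0):
    """Return action probabilities proportional to exp(Q(s, a) / temperature)."""
    logits = np.asarray(q_values, dtype=float) / temperature
    logits -= logits.max()            # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

q_s = [1.2, 0.4, -0.3]                # hypothetical Q(s, a) for three actions
print(boltzmann_policy(q_s))          # higher-Q actions get higher probability
```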

Source: Learning to Imitate

Papers

Showing 1121–1130 of 2122 papers

Title | Status | Hype
ASE: Large-Scale Reusable Adversarial Skill Embeddings for Physically Simulated Characters | Code | 3
KING: Generating Safety-Critical Driving Scenarios for Robust Imitation via Kinematics Gradients | Code | 1
Learning Value Functions from Undirected State-only Experience | | 0
From One Hand to Multiple Hands: Imitation Learning for Dexterous Manipulation from Single-Camera Teleoperation | | 0
Task-Induced Representation Learning | | 0
Imitation Learning from Observations under Transition Model Disparity | Code | 0
The Boltzmann Policy Distribution: Accounting for Systematic Suboptimality in Human Models | Code | 1
Learning to Fold Real Garments with One Arm: A Case Study in Cloud-Based Robotics Research | | 0
Non-Parallel Text Style Transfer with Self-Parallel Supervision | Code | 0
Evaluating the Effectiveness of Corrective Demonstrations and a Low-Cost Sensor for Dexterous Manipulation | Code | 0
Page 113 of 213

No leaderboard results yet.