
Imitation Learning

Imitation Learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action to take in the state being visited. To learn the behavior policy, the demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which the demonstrated decisions are optimal.
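
As an illustration of the BC approach, here is a minimal supervised-learning sketch in PyTorch. Everything in it (the network architecture, dimensions, hyperparameters, and the randomly generated demonstration data) is a placeholder assumption for the sake of a runnable example, not a reference implementation.

# Minimal Behavior Cloning sketch: treat each demonstrated action as the
# target label for its state and fit a state-to-action mapping by
# supervised learning. Shapes and hyperparameters are assumptions.
import torch
import torch.nn as nn

STATE_DIM, NUM_ACTIONS = 8, 4  # assumed dimensions

# Placeholder demonstrations; in practice these come from an expert.
states = torch.randn(1024, STATE_DIM)
actions = torch.randint(0, NUM_ACTIONS, (1024,))

# Policy network: a generalized mapping from states to action logits.
policy = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, NUM_ACTIONS)
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()  # demonstrated action = target label

for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(policy(states), actions)  # supervised objective
    loss.backward()
    optimizer.step()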

Finally, a newer methodology, Inverse Q-Learning, aims to learn Q-functions directly from expert data, implicitly representing the reward; under such a Q-function, the optimal policy is given by a Boltzmann distribution, as in soft Q-learning.

Source: Learning to Imitate
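
To make the Boltzmann policy mentioned above concrete, the sketch below converts a Q-function's action values into action probabilities via a temperature-scaled softmax, pi(a|s) proportional to exp(Q(s,a)/alpha). The example Q-values and the temperature are illustrative assumptions.

# Boltzmann policy used in soft Q-learning / inverse Q-learning:
# pi(a|s) = softmax(Q(s, .) / alpha). Smaller alpha -> greedier policy.
import torch

def boltzmann_policy(q_values: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Turn Q(s, .) into action probabilities via a temperature softmax."""
    return torch.softmax(q_values / alpha, dim=-1)

q = torch.tensor([1.2, 0.3, -0.5, 0.9])  # assumed Q(s, a) for 4 actions
print(boltzmann_policy(q, alpha=0.5))    # sharpens as alpha decreases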

Papers

Showing 1241-1250 of 2122 papers

Title | Status | Hype
Latent Policies for Adversarial Imitation Learning | - | 0
Fighting Fire with Fire: Avoiding DNN Shortcuts through Priming | - | 0
Imitate then Transcend: Multi-Agent Optimal Execution with Dual-Window Denoise PPO | - | 0
Model-Based Imitation Learning Using Entropy Regularization of Model and Policy | - | 0
Learning Multi-Task Transferable Rewards via Variational Inverse Reinforcement Learning | - | 0
Deep Reinforcement Learning for Exact Combinatorial Optimization: Learning to Branch | - | 0
Case-Based Inverse Reinforcement Learning Using Temporal Coherence | Code | 0
Model-based Offline Imitation Learning with Non-expert Data | - | 0
Optimal Solutions for Joint Beamforming and Antenna Selection: From Branch and Bound to Graph Neural Imitation Learning | - | 0
Precise Affordance Annotation for Egocentric Action Video Datasets | - | 0
