
Imitation Learning

Imitation Learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action the expert took in the state being visited. To learn the behavior policy, the demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which the demonstrated decisions are optimal.
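
As a concrete illustration of the BC formulation, the sketch below fits a small policy network to state-action pairs with a supervised cross-entropy loss. It is a minimal sketch only: the PyTorch model, hyperparameters, and the random stand-in for expert demonstrations are illustrative assumptions, not taken from any particular paper.

```python
# Minimal behavior cloning sketch: treat each demonstrated action as the
# supervised target for its state and fit a policy network to the pairs.
import torch
import torch.nn as nn

state_dim, n_actions = 4, 3

policy = nn.Sequential(
    nn.Linear(state_dim, 64), nn.ReLU(),
    nn.Linear(64, n_actions),            # logits over discrete actions
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for expert trajectories: states and the actions taken in them.
demo_states = torch.randn(1000, state_dim)
demo_actions = torch.randint(0, n_actions, (1000,))

for epoch in range(20):
    logits = policy(demo_states)
    loss = loss_fn(logits, demo_actions)  # action is the target label per state
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```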

Finally, a newer methodology, Inverse Q-Learning, aims to learn Q-functions directly from expert data, implicitly representing the reward, under which the optimal policy can be given as a Boltzmann distribution, similar to soft Q-learning.
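
To make the Boltzmann policy concrete, the sketch below turns the Q-values of a single state into action probabilities via a temperature-scaled softmax, pi(a|s) = exp(Q(s,a)/alpha) / sum_a' exp(Q(s,a')/alpha). The function name, temperature, and example Q-values are illustrative assumptions rather than part of any specific Inverse Q-Learning implementation.

```python
# Recover a Boltzmann (softmax) policy from Q-values for one state.
import numpy as np

def boltzmann_policy(q_values, alpha=1.0):
    """q_values: Q(s, a) for each action a in a fixed state s."""
    z = q_values / alpha
    z = z - z.max()                # subtract max for numerical stability
    probs = np.exp(z)
    return probs / probs.sum()

print(boltzmann_policy(np.array([1.0, 2.0, 0.5]), alpha=0.5))
```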

Source: Learning to Imitate

Papers

Showing 1521-1530 of 2122 papers

Title | Status | Hype
An Analysis of Logit Learning with the r-Lambert Function | | 0
An Energy-Aware Online Learning Framework for Resource Management in Heterogeneous Platforms | | 0
A New Corpus and Imitation Learning Framework for Context-Dependent Semantic Parsing | | 0
A New Framework for Query Efficient Active Imitation Learning | | 0
A New Path: Scaling Vision-and-Language Navigation with Synthetic Instructions and Imitation Learning | | 0
An Imitation Learning Approach for Cache Replacement | | 0
An Imitation Learning Based Algorithm Enabling Priori Knowledge Transfer in Modern Electricity Markets for Bayesian Nash Equilibrium Estimation | | 0
An Imitation Learning Curriculum for Text Editing with Non-Autoregressive Models | | 0
An Improved Reinforcement Learning Algorithm for Learning to Branch | | 0
An Integrated Imitation and Reinforcement Learning Methodology for Robust Agile Aircraft Control with Limited Pilot Demonstration Data | | 0

No leaderboard results yet.