Model-based Reinforcement Learning with Parametrized Physical Models and Optimism-Driven Exploration

2015-09-23

Christopher Xie, Sachin Patil, Teodor Moldovan, Sergey Levine, Pieter Abbeel

Abstract

In this paper, we present a robotic model-based reinforcement learning method that combines ideas from model identification and model predictive control. We use a feature-based representation of the dynamics that allows the dynamics model to be fitted with a simple least squares procedure, and the features are identified from a high-level specification of the robot's morphology, consisting of the number and connectivity structure of its links. Model predictive control is then used to choose actions under an optimistic model of the dynamics, which produces an efficient and goal-directed exploration strategy. We present real-time experimental results on standard benchmark problems involving the pendulum, cartpole, and double pendulum systems. Experiments indicate that our method is able to learn a range of benchmark tasks substantially faster than the previous best methods. To evaluate our approach on a realistic robotic control task, we also demonstrate real-time control of a simulated 7-degree-of-freedom arm.
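The core modeling idea in the abstract — a dynamics model that is linear in a set of physically motivated features, so it can be fitted by ordinary least squares — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pendulum parameters, the random-torque data collection, and the feature map are all assumptions made here for demonstration (the paper derives its features from the robot's morphology specification rather than hand-picking them).

```python
import numpy as np

# Hypothetical pendulum simulator (illustrative parameters, not the paper's
# benchmark setup): state s = (theta, theta_dot), torque action a.
def pendulum_step(s, a, dt=0.05, g=9.81, l=1.0, m=1.0):
    theta, theta_dot = s
    theta_ddot = (-g / l) * np.sin(theta) + a / (m * l ** 2)
    return np.array([theta + dt * theta_dot, theta_dot + dt * theta_ddot])

# Hand-picked feature map (an assumption for this sketch). It contains
# sin(theta), so the true Euler-discretized dynamics are exactly linear
# in these features and least squares can recover them.
def features(s, a):
    theta, theta_dot = s
    return np.array([theta, theta_dot, np.sin(theta), np.cos(theta), a, 1.0])

# Collect transitions under random torques.
rng = np.random.default_rng(0)
S, A, S_next = [], [], []
s = np.array([0.1, 0.0])
for _ in range(200):
    a = rng.uniform(-2.0, 2.0)
    s_next = pendulum_step(s, a)
    S.append(s); A.append(a); S_next.append(s_next)
    s = s_next

# Fit the dynamics model s' = W^T phi(s, a) by ordinary least squares.
Phi = np.array([features(s_i, a_i) for s_i, a_i in zip(S, A)])
Y = np.array(S_next)
W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)

# One-step prediction error on the training data; near zero here because
# the true dynamics lie in the span of the chosen features.
err = np.abs(Phi @ W - Y).max()
print(err)
```

Because the model is linear in the features, refitting after each new batch of data is cheap, which is what makes the combination with model predictive control practical in real time; the optimistic-exploration component described in the abstract would additionally maintain uncertainty over `W` rather than a point estimate.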
