Reinforcement Learning for Contact-Rich Tasks: Robotic Peg Insertion Strategies

2020-12-14 · CUHK Course IERG5350 2020 · Code Available

Jianbang Liu, Ang ZHANG


Abstract

Vision and touch are especially important in contact-rich manipulation tasks in unstructured environments. It is non-trivial to manually design a robot controller that combines these modalities, which have very different characteristics. In this project, to connect vision and touch, we first equip robots with both visual and tactile sensors and collect a large-scale dataset of corresponding vision and tactile sequences. We use self-supervision to learn a compact multimodal representation of our sensory inputs, which can then be used to improve the sample efficiency of our policy learning. We train a policy in a simulation environment using deep reinforcement learning algorithms. The learned policy is also transferable to real-world tasks. Peg insertion is chosen as the demonstration task for this project. A preliminary version of our Python implementation is available at: https://github.com/Henry1iu/ierg5350_rl_course_project. A video introducing our project is available at: https://mycuhk-my.sharepoint.com/:v:/g/personal/1155071948_link_cuhk_edu_hk/EaKiGmkUvjJOoSqdWxrqjXYBpz3dCSAfOD9Co8krttyqUQ?e=RXsHD2
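The abstract describes encoding each modality separately and fusing the results into a compact multimodal representation that serves as the policy's state input. The sketch below illustrates that data flow only; the encoders, feature dimensions, and fusion layer are illustrative assumptions, not the authors' actual architecture (which the abstract does not specify).

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """Toy encoder: linear projection followed by ReLU (stand-in for a learned network)."""
    return np.maximum(x @ w, 0.0)

# Hypothetical per-modality features (shapes are assumptions for illustration):
vision = rng.normal(size=(1, 128))   # e.g. flattened image features
tactile = rng.normal(size=(1, 32))   # e.g. force/torque and tactile readings

# Random weights stand in for parameters that self-supervision would learn.
w_v = rng.normal(size=(128, 16)) * 0.1   # vision branch -> 16-dim
w_t = rng.normal(size=(32, 16)) * 0.1    # tactile branch -> 16-dim
w_fuse = rng.normal(size=(32, 8)) * 0.1  # concatenated 32-dim -> 8-dim compact state

# Fuse the two modalities into one compact representation for the RL policy.
fused = np.concatenate([encode(vision, w_v), encode(tactile, w_t)], axis=1)
state = encode(fused, w_fuse)
print(state.shape)  # (1, 8): the compact multimodal state fed to the policy
```

In the project described above, the fusion weights would be trained with a self-supervised objective on the collected vision/tactile sequences, and the resulting compact state would replace raw sensor inputs during policy learning to improve sample efficiency.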
