
Multi-View Dreaming: Multi-View World Model with Contrastive Learning

2022-03-15

Akira Kinose, Masashi Okada, Ryo Okumura, Tadahiro Taniguchi


Abstract

In this paper, we propose Multi-View Dreaming, a novel reinforcement learning agent for integrated recognition and control from multi-view observations, built by extending Dreaming. Most current reinforcement learning methods assume a single-view observation space, which limits the observed data through, for example, missing spatial information and occlusions. This makes it difficult to obtain ideal observational information from the environment and is a bottleneck for real-world robotics applications. In this paper, we use contrastive learning to train a latent space shared between different viewpoints, and show how the Products of Experts approach can be used to integrate and control the probability distributions of latent states for multiple viewpoints. We also propose Multi-View DreamingV2, a variant of Multi-View Dreaming that models the latent state with a categorical distribution instead of a Gaussian distribution. Experiments show that the proposed method outperforms simple extensions of existing methods in a realistic robot control task.
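The Products-of-Experts fusion the abstract refers to has a closed form for Gaussian latents: the product of Gaussians is itself Gaussian, with precision equal to the sum of the per-view precisions and mean equal to the precision-weighted average of the per-view means. The sketch below illustrates only that standard fact; the function name `poe_fuse` and the toy numbers are our own, not from the paper's implementation.

```python
import numpy as np

def poe_fuse(means, variances):
    """Hypothetical sketch: fuse per-view Gaussian latent beliefs
    N(mu_i, sigma_i^2) via a Product of Experts.

    Product of Gaussians: precision adds, mean is precision-weighted.
    """
    means = np.asarray(means, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    fused_var = 1.0 / precisions.sum(axis=0)          # 1 / sum of precisions
    fused_mean = fused_var * (precisions * means).sum(axis=0)
    return fused_mean, fused_var

# Two views disagree about a 1-D latent; the more confident
# (lower-variance) view dominates the fused estimate.
mu, var = poe_fuse(means=[[0.0], [1.0]], variances=[[0.1], [0.9]])
```

With these toy numbers the fused mean lands close to the low-variance view's estimate (0.1 rather than 0.5), which is exactly the behavior that makes PoE attractive for combining viewpoints of unequal reliability, e.g. when one camera is occluded.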
