Rotation Transformation Network: Learning View-Invariant Point Cloud for Classification and Segmentation

2021-07-07

Shuang Deng, Bo Liu, Qiulei Dong, Zhanyi Hu

Abstract

Many recent works show that a spatial manipulation module can boost the performance of deep neural networks (DNNs) for 3D point cloud analysis. In this paper, we aim to provide insight into spatial manipulation modules. First, we find that the smaller the rotational degree of freedom (RDF) of an object, the more easily it is handled by these DNNs. Then, we investigate the effect of the popular T-Net module and find that it cannot reduce the RDF of objects. Motivated by these two findings, we propose a rotation transformation network for point cloud analysis, called RTN, which can reduce the RDF of input 3D objects to 0. The RTN can be seamlessly inserted into many existing DNNs for point cloud analysis. Extensive experimental results on 3D point cloud classification and segmentation tasks demonstrate that the proposed RTN significantly improves the performance of several state-of-the-art methods.
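The abstract's central idea, transforming each input object into a canonical pose so that its rotational degrees of freedom are reduced to 0 before a downstream classifier sees it, can be illustrated with a classical, non-learned stand-in. The PCA-based alignment below is only an assumption for illustration: RTN itself *learns* the rotation with a network, whereas this sketch derives it from the point cloud's principal axes. The sign-fixing and determinant steps are heuristics chosen here to make the mapping deterministic.

```python
import numpy as np

def canonical_pose(points):
    """Rotate an (N, 3) point cloud into a deterministic canonical frame.

    Illustrative stand-in for a pose-normalization module: the output is
    identical (given distinct principal variances and nonzero per-axis
    skewness) no matter how the input cloud was rotated.
    """
    centered = points - points.mean(axis=0)
    # Principal axes of the covariance give a data-dependent frame.
    eigvals, eigvecs = np.linalg.eigh(centered.T @ centered)
    order = np.argsort(eigvals)[::-1]      # sort axes by decreasing variance
    R = eigvecs[:, order]
    aligned = centered @ R
    # Resolve each axis' sign ambiguity using the third moment (skewness),
    # which flips sign together with the axis.
    for i in range(3):
        if (aligned[:, i] ** 3).sum() < 0:
            aligned[:, i] *= -1
            R[:, i] *= -1
    # Make the transform a proper rotation (det = +1), not a reflection.
    if np.linalg.det(R) < 0:
        aligned[:, 2] *= -1
        R[:, 2] *= -1
    return aligned
```

A module like this would be inserted in front of an existing point cloud network (as the abstract says RTN can be), so the classifier or segmenter only ever sees objects in one pose rather than having to learn rotation invariance from data.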
