BeyondRPC: A Contrastive and Augmentation-Driven Framework for Robust Point Cloud Understanding
Oddy Virgantara Putra, Hanugra Aulia Sidharta, Diah Risqiwati, Moch. Iskandar Riansyah, Yuni Yamasari
Code: github.com/virgantara/BeyondRPC (PyTorch)
Abstract
Robust perception of 3D point clouds remains a significant challenge in real-world environments, where sensor data is often corrupted. While recent models and augmentation strategies have each improved robustness, using them in isolation still limits performance under severe distortions. In this work, we introduce BeyondRPC, a contrastive and augmentation-driven framework for robust point cloud classification. Our approach combines AdaCrossNet, for adaptive cross-modal contrastive pretraining, with WOLFMix-based fine-tuning to improve generalization under corruption. Specifically, AdaCrossNet employs a dynamic weighting mechanism to balance intra- and cross-modal learning, while WOLFMix integrates both deformation-based and rigid-mix augmentations. Experiments on the ModelNet-C benchmark demonstrate that BeyondRPC achieves a mean Corruption Error (mCE) of 0.455, outperforming state-of-the-art methods including RPC, GDANet, and CurveNet, while maintaining a high clean overall accuracy of 0.930. These results underscore the importance of joint contrastive representation learning and corruption-aware augmentation for robust 3D point cloud understanding.
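The abstract describes AdaCrossNet's dynamic weighting mechanism for balancing the intra-modal and cross-modal contrastive objectives. A minimal sketch of one plausible such scheme is shown below; the inverse-magnitude weighting rule and the function names are illustrative assumptions, not the authors' exact formulation.

```python
def dynamic_weights(loss_intra, loss_cross, eps=1e-8):
    """Weight each contrastive term inversely to its current magnitude,
    so neither objective is allowed to dominate training.
    (Assumed weighting rule, for illustration only.)"""
    w_intra = 1.0 / (loss_intra + eps)
    w_cross = 1.0 / (loss_cross + eps)
    total = w_intra + w_cross
    # Normalize so the two weights always sum to 1.
    return w_intra / total, w_cross / total


def combined_loss(loss_intra, loss_cross):
    """Dynamically weighted sum of the intra- and cross-modal losses."""
    w_i, w_c = dynamic_weights(loss_intra, loss_cross)
    return w_i * loss_intra + w_c * loss_cross
```

With equal losses the scheme reduces to an even 50/50 split; as one term grows, its weight shrinks, pulling the optimization back toward balance.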
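The headline result is reported as a mean Corruption Error (mCE) of 0.455 on ModelNet-C. On that benchmark, mCE averages per-corruption error rates after normalizing each by a reference model's error on the same corruption; the helper and the example numbers below are a sketch of that computation, not values from the paper.

```python
def mean_corruption_error(model_err, baseline_err):
    """mCE: average of per-corruption error rates, each normalized by a
    reference model's error rate on the same corruption type.
    Both arguments map corruption name -> error rate in [0, 1]."""
    ratios = [model_err[c] / baseline_err[c] for c in model_err]
    return sum(ratios) / len(ratios)


# Illustrative (made-up) per-corruption error rates:
model = {"jitter": 0.20, "dropout": 0.40}
baseline = {"jitter": 0.40, "dropout": 0.40}
```

An mCE below 1.0 means the model is, on average, more robust than the reference model across the corruption suite.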