
ObitoNet: Multimodal High-Resolution Point Cloud Reconstruction

2024-12-25 · Code Available

Apoorv Thapliyal, Vinay Lanka, Swathi Baskaran


Abstract

ObitoNet employs a cross-attention mechanism to integrate multimodal inputs: a Vision Transformer (ViT) extracts semantic features from images, while a point cloud tokenizer captures spatial structure using Farthest Point Sampling (FPS) and K-Nearest Neighbors (KNN). The fused multimodal features are fed into a transformer-based decoder for high-resolution point cloud reconstruction. This approach leverages the complementary strengths of the two modalities, rich image features and precise geometric detail, to ensure robust point cloud generation even under challenging conditions such as sparse or noisy input.
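The FPS-plus-KNN tokenization step described above can be sketched in plain NumPy. This is an illustrative reconstruction, not the authors' implementation: the function names, patch counts, and neighborhood sizes below are assumed for the example, and a real tokenizer would run batched on the GPU and project each patch to an embedding.

```python
import numpy as np

def farthest_point_sampling(points, n_samples):
    """Iteratively select the point farthest from all previously selected centers."""
    n = points.shape[0]
    selected = np.zeros(n_samples, dtype=int)
    # Distance from each point to its nearest already-selected center.
    dist = np.full(n, np.inf)
    selected[0] = 0  # start from an arbitrary seed point
    for i in range(1, n_samples):
        d = np.linalg.norm(points - points[selected[i - 1]], axis=1)
        dist = np.minimum(dist, d)
        selected[i] = int(np.argmax(dist))
    return selected

def knn_group(points, center_idx, k):
    """Gather the k nearest neighbors of each sampled center as a local patch."""
    centers = points[center_idx]                                            # (m, 3)
    d = np.linalg.norm(points[None, :, :] - centers[:, None, :], axis=-1)   # (m, n)
    nn = np.argsort(d, axis=1)[:, :k]                                       # (m, k)
    return points[nn]                                                       # (m, k, 3)

# Example: tokenize a 1024-point cloud into 64 patches of 32 neighbors each.
cloud = np.random.rand(1024, 3)
centers = farthest_point_sampling(cloud, 64)
patches = knn_group(cloud, centers, 32)
print(patches.shape)  # (64, 32, 3)
```

Each `(k, 3)` patch would then be embedded into a token vector, giving the geometric token sequence that the cross-attention module fuses with the ViT image features.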
