LOSC: LiDAR Open-voc Segmentation Consolidator
Nermin Samet, Gilles Puy, Renaud Marlet
Abstract
We study the use of image-based Vision-Language Models (VLMs) for open-vocabulary segmentation of LiDAR scans in driving settings. Classically, image semantics can be back-projected onto 3D point clouds. However, the resulting point labels are noisy and sparse. We consolidate these labels to enforce both spatio-temporal consistency and robustness to image-level augmentations. We then train a 3D network based on these refined labels. This simple method, called LOSC, outperforms the state of the art in zero-shot open-vocabulary semantic and panoptic segmentation on both nuScenes and SemanticKITTI, by significant margins. Code is available at https://github.com/valeoai/LOSC.
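The consolidation step described above can be illustrated with a minimal sketch. Assuming each 3D point receives candidate labels back-projected from several images (adjacent frames or augmented views), a simple per-point majority vote yields one consolidated label. The function and array layout below are illustrative assumptions, not the LOSC implementation.

```python
# Hypothetical sketch: consolidate noisy back-projected labels by per-point
# majority vote across views/augmentations. Names are illustrative only.
import numpy as np

def consolidate_labels(votes: np.ndarray, ignore: int = -1) -> np.ndarray:
    """votes: (num_points, num_views) int array; `ignore` marks points
    a given view did not label. Returns one label per point
    (`ignore` when no view labeled the point)."""
    out = np.full(votes.shape[0], ignore, dtype=np.int64)
    for i, row in enumerate(votes):
        valid = row[row != ignore]       # drop views that missed this point
        if valid.size:
            labels, counts = np.unique(valid, return_counts=True)
            out[i] = labels[np.argmax(counts)]  # most frequent label wins
    return out

# Example: 4 points, labels from 3 views
votes = np.array([
    [2, 2, 3],     # majority -> 2
    [-1, 1, 1],    # majority -> 1
    [-1, -1, -1],  # never labeled -> -1
    [0, 5, 5],     # majority -> 5
])
print(consolidate_labels(votes).tolist())  # [2, 1, -1, 5]
```

In the actual method, such consolidated labels would then serve as training targets for the 3D segmentation network; the sparse, unlabeled points are simply ignored in the loss.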