
Modeling and Measuring Redundancy in Multisource Multimodal Data for Autonomous Driving

2026-03-06

Yuhan Zhou, Mehri Sattari, Haihua Chen, Kewei Sha


Abstract

Next-generation autonomous vehicles (AVs) rely on large volumes of multisource and multimodal (M^2) data to support real-time decision-making. In practice, data quality (DQ) varies across sources and modalities due to environmental conditions and sensor limitations, yet AV research has largely prioritized algorithm design over DQ analysis. This work focuses on redundancy as a fundamental but underexplored DQ issue in AV datasets. Using the nuScenes and Argoverse 2 (AV2) datasets, we model and measure redundancy in multisource camera data and multimodal image-LiDAR data, and evaluate how removing redundant labels affects YOLOv8 object detection. Experimental results show that selectively removing redundant multisource image object labels from cameras with shared fields of view improves detection: in nuScenes, mAP50 improves from 0.66 to 0.70, from 0.64 to 0.67, and from 0.53 to 0.55 on three representative overlap regions, while detection on other overlapping camera pairs remains at the baseline even under stronger pruning. In AV2, 4.1-8.6% of labels are removed and mAP50 stays near the 0.64 baseline. Multimodal analysis also reveals substantial redundancy between image and LiDAR data. These findings demonstrate that redundancy is a measurable and actionable DQ factor with direct implications for AV perception, and they motivate a data-centric perspective for evaluating and improving AV datasets. Code, data, and implementation details are publicly available at: https://github.com/yhZHOU515/RedundancyAD
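The paper's pipeline, as described in the abstract, is not reproduced here, but the core pruning idea can be illustrated with a minimal sketch: when two cameras share a field of view, an object annotated in both views contributes a redundant label, and only one copy needs to be kept. The function below is a hypothetical simplification (the names `prune_redundant_labels`, the `(camera, object_id)` label format, and the `overlap_pairs` set are all assumptions for illustration, not the authors' actual interface, which matches objects in full nuScenes/AV2 annotations).

```python
# Hypothetical sketch of redundant-label pruning across cameras with
# shared fields of view. A label is a (camera, object_id) pair; an
# object already labeled by an overlapping camera is treated as
# redundant and dropped, keeping only its first occurrence.

def prune_redundant_labels(labels, overlap_pairs):
    """labels: list of (camera, object_id) tuples.
    overlap_pairs: set of frozensets naming camera pairs with shared FoV.
    Returns (kept_labels, removed_fraction)."""
    kept = []
    first_cam = {}   # object_id -> camera that first labeled it
    removed = 0
    for cam, obj in labels:
        prev = first_cam.get(obj)
        if prev is not None and frozenset((prev, cam)) in overlap_pairs:
            removed += 1          # duplicate view of the same object
        else:
            first_cam.setdefault(obj, cam)
            kept.append((cam, obj))
    total = len(labels)
    return kept, (removed / total if total else 0.0)

# Example with assumed nuScenes-style camera names:
labels = [("CAM_FRONT", 1), ("CAM_FRONT_LEFT", 1), ("CAM_BACK", 2)]
pairs = {frozenset(("CAM_FRONT", "CAM_FRONT_LEFT"))}
kept, frac = prune_redundant_labels(labels, pairs)
# object 1 is labeled by two overlapping cameras, so one copy is pruned
```

The returned fraction corresponds to the kind of removal rate the abstract reports (e.g. the 4.1-8.6% of labels pruned in AV2), though the actual matching there is geometric rather than ID-based.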
