
RELLIS-3D Dataset: Data, Benchmarks and Analysis

2020-11-17 · Code Available

Peng Jiang, Philip Osteen, Maggie Wigness, Srikanth Saripalli

Abstract

Semantic scene understanding is crucial for robust and safe autonomous navigation, particularly in off-road environments. Recent deep learning advances for 3D semantic segmentation rely heavily on large sets of training data; however, existing autonomy datasets either represent urban environments or lack multimodal off-road data. We fill this gap with RELLIS-3D, a multimodal dataset collected in an off-road environment, which contains annotations for 13,556 LiDAR scans and 6,235 images. The data was collected on the RELLIS Campus of Texas A&M University and presents challenges to existing algorithms related to class imbalance and environmental topography. Additionally, we evaluate the current state-of-the-art deep learning semantic segmentation models on this dataset. Experimental results show that RELLIS-3D presents challenges for algorithms designed for segmentation in urban environments. This novel dataset provides the resources needed by researchers to continue to develop more advanced algorithms and investigate new research directions to enhance autonomous navigation in off-road environments. RELLIS-3D is available at https://github.com/unmannedlab/RELLIS-3D
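
The repository distributes the annotated LiDAR scans in a SemanticKITTI-style binary format. As a minimal sketch of how one scan and its per-point labels can be inspected, the snippet below assumes that layout (float32 x, y, z, intensity per point in the .bin file; one uint32 label per point in the .label file); the paths are placeholders, and the field layout should be confirmed against the repository documentation.

```python
# Minimal sketch for inspecting one RELLIS-3D LiDAR scan. Assumes the
# SemanticKITTI-style binary layout described in the repository; paths
# below are placeholders, not actual dataset paths.
import numpy as np

def load_scan(bin_path: str, label_path: str):
    # Point cloud: N x 4 array of float32 (x, y, z, intensity).
    points = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)
    # Labels: one uint32 per point; the lower 16 bits carry the class id.
    labels = np.fromfile(label_path, dtype=np.uint32) & 0xFFFF
    assert points.shape[0] == labels.shape[0], "points and labels must align"
    return points, labels

points, labels = load_scan("path/to/000000.bin", "path/to/000000.label")
print(points.shape, np.unique(labels))
```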

Benchmark Results

Dataset            Model      Metric            Claimed  Verified  Status
RELLIS-3D Dataset  salsanext  Mean IoU (class)  43.07    -         Unverified
RELLIS-3D Dataset  kpconv     Mean IoU (class)  19.97    -         Unverified
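
Both entries report mean IoU over classes. As a reference for how that metric is computed, here is a short, self-contained sketch of per-class IoU averaged over classes from a confusion matrix; the class count and label arrays are toy values for illustration, not RELLIS-3D results.

```python
# Illustrative sketch of "Mean IoU (class)": per-class intersection-over-union
# averaged over classes that appear in the data. Toy values only.
import numpy as np

def mean_iou(pred, target, num_classes, ignore_index=0):
    # Mask out ignored points, then build a num_classes x num_classes confusion matrix.
    mask = target != ignore_index if ignore_index is not None else np.ones_like(target, bool)
    conf = np.bincount(num_classes * target[mask] + pred[mask],
                       minlength=num_classes ** 2).reshape(num_classes, num_classes)
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    denom = tp + fp + fn
    # IoU per class; classes never seen in target or pred are excluded from the mean.
    iou = np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)
    return np.nanmean(iou)

# Toy usage with 4 classes on a handful of points.
pred   = np.array([1, 2, 2, 3, 1, 3])
target = np.array([1, 2, 3, 3, 1, 1])
print(f"mean IoU: {mean_iou(pred, target, num_classes=4):.3f}")
```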

Reproductions