Toronto-3D: A Large-scale Mobile LiDAR Dataset for Semantic Segmentation of Urban Roadways
Weikai Tan, Nannan Qin, Lingfei Ma, Ying Li, Jing Du, Guorong Cai, Ke Yang, Jonathan Li
- Code: github.com/WeikaiTan/Toronto-3D — official implementation (TensorFlow), linked in the paper
Abstract
Semantic segmentation of large-scale outdoor point clouds is essential for urban scene understanding in various applications, especially autonomous driving and urban high-definition (HD) mapping. With the rapid development of mobile laser scanning (MLS) systems, massive point clouds are available for scene understanding, but publicly accessible large-scale labeled datasets, which are essential for developing learning-based methods, are still limited. This paper introduces Toronto-3D, a large-scale urban outdoor point cloud dataset acquired by an MLS system in Toronto, Canada for semantic segmentation. The dataset covers approximately 1 km of roadway and consists of about 78.3 million points with 8 labeled object classes. Baseline experiments for semantic segmentation were conducted, and the results confirm that this dataset can train deep learning models effectively. Toronto-3D is released to encourage new research, and the labels will be improved and updated with feedback from the research community.
Benchmark Results
| Dataset | Model | Metric | Claimed (%) | Verified | Status |
|---|---|---|---|---|---|
| Toronto-3D | KPFCNN | OA | 91.71 | — | Unverified |
| Toronto-3D | TGNet | OA | 91.64 | — | Unverified |
| Toronto-3D | MS-PCNN | OA | 91.53 | — | Unverified |
| Toronto-3D | PointNet++ | OA | 91.21 | — | Unverified |
| Toronto-3D | DGCNN | OA | 89 | — | Unverified |
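The benchmark metric above is overall accuracy (OA), the fraction of points whose predicted label matches the ground truth. A minimal sketch of computing it with NumPy is shown below; the `overall_accuracy` helper and the synthetic 8-class labels are illustrative assumptions, not part of the Toronto-3D toolkit.

```python
import numpy as np

def overall_accuracy(pred, gt):
    """Overall Accuracy (OA): fraction of points whose predicted
    label equals the ground-truth label."""
    pred = np.asarray(pred)
    gt = np.asarray(gt)
    return float((pred == gt).mean())

# Tiny synthetic example with 8 classes, as in Toronto-3D.
rng = np.random.default_rng(0)
gt = rng.integers(0, 8, size=1000)        # ground-truth labels
pred = gt.copy()
flip = rng.random(1000) < 0.1             # corrupt ~10% of predictions
pred[flip] = (pred[flip] + 1) % 8
oa = overall_accuracy(pred, gt)           # roughly 0.9 on this toy data
```

Papers on this benchmark typically report OA as a percentage, i.e. `100 * oa`.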