TwinLiteNetPlus: A Stronger Model for Real-time Drivable Area and Lane Segmentation
Quang-Huy Che, Duc-Tri Le, Minh-Quan Pham, Vinh-Tiep Nguyen, Duc-Khai Lam
Code Available
- github.com/chequanghuy/TwinLiteNetPlus (official, PyTorch, ★ 55)
- github.com/chequanghuy/TwinLiteNet (PyTorch, ★ 179)
Abstract
Semantic segmentation is crucial for autonomous driving, particularly for Drivable Area and Lane Segmentation, which underpin safe navigation. To address the high computational cost of current state-of-the-art (SOTA) models, this paper introduces TwinLiteNetPlus (TwinLiteNet^+), a model designed to balance efficiency and accuracy. TwinLiteNet^+ combines standard and depth-wise separable dilated convolutions, reducing complexity while maintaining high accuracy. It is available in four configurations, from the 1.94 million-parameter TwinLiteNet^+_Large to the ultra-compact 34K-parameter TwinLiteNet^+_Nano. Notably, TwinLiteNet^+_Large attains 92.9\% mIoU for Drivable Area Segmentation and 34.2\% IoU for Lane Segmentation, outperforming current SOTA models while requiring approximately 11 times fewer Floating Point Operations (FLOPs) than the existing SOTA model. Extensively tested on various embedded devices, TwinLiteNet^+ demonstrates promising latency and power efficiency, underscoring its suitability for real-world autonomous vehicle applications.
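The efficiency gain from depth-wise separable convolutions can be seen directly in parameter counts. The sketch below compares a standard convolution against its depth-wise separable factorization (a depth-wise spatial filter followed by a 1×1 point-wise mix); the channel and kernel sizes are illustrative, not taken from the paper's architecture, and note that dilation enlarges the receptive field without adding parameters, so the same counts apply to the dilated variants:

```python
def standard_conv_params(c_in: int, c_out: int, k: int) -> int:
    # Standard convolution: one k x k filter per (input, output) channel pair.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    # Depth-wise: one k x k filter per input channel (spatial filtering only).
    # Point-wise: a 1 x 1 convolution that mixes channels.
    return c_in * k * k + c_in * c_out

# Illustrative example: 128 -> 128 channels with a 3 x 3 kernel.
std = standard_conv_params(128, 128, 3)        # 147456 parameters
sep = depthwise_separable_params(128, 128, 3)  # 1152 + 16384 = 17536 parameters
print(std, sep, round(std / sep, 1))           # roughly 8.4x fewer parameters
```

The same factorization reduces FLOPs by a similar ratio, which is the kind of saving that lets the Nano configuration fit in 34K parameters.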
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| BDD100K val | TwinLiteNetPlus-Large | mIoU | 92.9 | — | Unverified |
| BDD100K val | TwinLiteNetPlus-Medium | mIoU | 92.0 | — | Unverified |
| BDD100K val | TwinLiteNetPlus-Small | mIoU | 90.6 | — | Unverified |
| BDD100K val | TwinLiteNetPlus-Nano | mIoU | 87.3 | — | Unverified |