
DLNet: Direction-Aware Feature Integration for Robust Lane Detection in Complex Environments

2025-06-09 · Prepared for publication in TITS 2025 · Code Available

Zhaoxuan Lu, Lyuchao Liao, Ruimin Li, Fumin Zou, Sijing Cai and Guangjie Han.


Abstract

The rapid advancement of autonomous driving systems has created a pressing need for accurate and robust lane detection to ensure driving safety and reliability. However, lane detection still faces several critical challenges in real-world scenarios: (1) severe occlusions caused by urban traffic and complex road layouts; (2) the difficulty of handling sharp curves and large curvature variations; and (3) varying lighting conditions that blur or degrade lane markings. To address these challenges, we propose DLNet, a novel direction-aware feature integration framework that fuses both low-level geometric details and high-level semantic cues. In particular, the approach includes: (i) a Multi-Skip Feature Attention Block (MSFAB) to refine local lane features by adaptively fusing multi-scale representations, (ii) a Context-Aware Feature Pyramid Network (CAFPN) to enhance global context modeling under adverse conditions, and (iii) a Directional Lane IoU (DLIoU) loss function that explicitly encodes lane directionality and curvature, providing more accurate lane overlap estimation. Extensive experiments on two benchmark datasets, CULane and CurveLanes, show that DLNet achieves new state-of-the-art results, with F1@50 and F1@75 scores of 81.23% and 64.75% on CULane, an F1@50 score of 86.51% on CurveLanes, and an F1 score of 97.62% on the TUSimple dataset. The source code and pretrained models will be made publicly available at https://github.com/RDXiaoLu/DLNet.git.
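To give intuition for the kind of overlap measure the abstract describes, here is a minimal, hypothetical sketch of a direction-aware lane IoU. The abstract does not define DLIoU, so everything below is an assumption for illustration only: lanes are represented as x-coordinates sampled at shared image rows (a common lane-detection convention), each point is widened along its row, and the width is scaled by the local lane slope so that steep or curved segments keep a consistent geometric footprint. The function name and formula are ours, not the paper's.

```python
import numpy as np

def directional_lane_iou(xs_a, xs_b, ys, base_w=2.0):
    """Toy row-wise lane IoU with a direction-aware width.
    NOT the paper's DLIoU -- an illustrative sketch only.
    xs_a, xs_b: lane x-coordinates sampled at the shared rows ys.
    Each lane point is extended to a horizontal segment whose half-width
    grows with the local slope dx/dy, so tilted/curved segments are not
    under-weighted relative to vertical ones.
    """
    def half_widths(xs):
        dx = np.gradient(xs, ys)              # local slope dx/dy per row
        return base_w * np.sqrt(1.0 + dx**2)  # widen along the lane direction

    wa, wb = half_widths(xs_a), half_widths(xs_b)
    lo = np.maximum(xs_a - wa, xs_b - wb)     # per-row segment overlap
    hi = np.minimum(xs_a + wa, xs_b + wb)
    inter = np.clip(hi - lo, 0.0, None)
    union = (2 * wa + 2 * wb) - inter
    return inter.sum() / union.sum()
```

A loss based on such a measure would simply be `1 - directional_lane_iou(pred, gt, ys)`; identical lanes give an IoU of 1, and non-overlapping lanes give 0.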
