
How Do Drivers Allocate Their Potential Attention? Driving Fixation Prediction via Convolutional Neural Networks

2019-05-30 · IEEE Transactions on Intelligent Transportation Systems, 2019 · Code Available

Tao Deng, Hongmei Yan, Long Qin, Thuyen Ngo, and B. S. Manjunath


Abstract

The traffic driving environment is a complex, dynamically changing scene in which drivers must pay close attention to salient and important targets or regions for safe driving. Modeling drivers’ eye movements and attention allocation during driving can also help guide unmanned intelligent vehicles. To date, however, few studies have modeled drivers’ true fixations and attention allocation while driving. To this end, we collect an eye-tracking dataset from 28 experienced drivers viewing 16 traffic driving videos. Based on this multi-driver attention allocation dataset, we propose a convolutional-deconvolutional neural network (CDNN) to predict drivers’ eye fixations. The experimental results indicate that the proposed CDNN outperforms state-of-the-art saliency models and predicts drivers’ attentional locations more accurately. The CDNN predicts the primary fixation location and also reliably detects secondary important information or regions that cannot be ignored during driving, when they exist. Compared with current object detection models in autonomous and assisted driving systems, our human-like driving model does not detect every object appearing in the driving scene; instead, it provides the most relevant regions or targets, which can greatly reduce interference from irrelevant scene information.
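The encoder-decoder idea behind a convolutional-deconvolutional network can be sketched in a few lines: a strided convolution compresses the input frame into a feature map, and a transposed ("deconvolution") layer maps it back to the input resolution, where a sigmoid yields a per-pixel fixation probability. This is a minimal illustrative numpy sketch of that pattern, not the paper's actual CDNN architecture; the kernel sizes, strides, and random weights are assumptions for demonstration only.

```python
import numpy as np

def conv2d(x, k, stride=2):
    """Valid 2-D convolution with stride (encoder: downsamples the frame)."""
    kh, kw = k.shape
    oh = (x.shape[0] - kh) // stride + 1
    ow = (x.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i*stride:i*stride+kh, j*stride:j*stride+kw] * k)
    return out

def deconv2d(x, k, stride=2):
    """Transposed convolution (decoder: upsamples back to input resolution)."""
    kh, kw = k.shape
    oh = (x.shape[0] - 1) * stride + kh
    ow = (x.shape[1] - 1) * stride + kw
    out = np.zeros((oh, ow))
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i*stride:i*stride+kh, j*stride:j*stride+kw] += x[i, j] * k
    return out

rng = np.random.default_rng(0)
frame = rng.random((32, 32))               # toy grayscale "driving frame"
k_enc = rng.standard_normal((4, 4)) * 0.1  # hypothetical encoder kernel
k_dec = rng.standard_normal((4, 4)) * 0.1  # hypothetical decoder kernel

feat = np.maximum(conv2d(frame, k_enc), 0)   # encoder: conv + ReLU, 32x32 -> 15x15
logits = deconv2d(feat, k_dec)               # decoder: deconv, 15x15 -> 32x32
saliency = 1 / (1 + np.exp(-logits))         # per-pixel fixation probability map
print(saliency.shape)                        # -> (32, 32)
```

In the real model, such a saliency map would be trained against the recorded fixation maps of the 28 drivers, so high-probability pixels mark the regions a human driver would most likely attend to.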
