SOTAVerified

Faster gaze prediction with dense networks and Fisher pruning

2018-01-17 · Twitter 2018 · Code Available

Lucas Theis, Iryna Korshunova, Alykhan Tejani, Ferenc Huszár


Abstract

Predicting human fixations from images has recently seen large improvements by leveraging deep representations pretrained for object recognition. However, as we show in this paper, these networks are highly overparameterized for the task of fixation prediction. We first present a simple yet principled greedy pruning method, which we call Fisher pruning. Through a combination of knowledge distillation and Fisher pruning, we obtain much more runtime-efficient architectures for saliency prediction, achieving a 10x speedup at the same AUC performance as a state-of-the-art network on the CAT2000 dataset. Speeding up single-image gaze prediction is important for many real-world applications, but it is also a crucial step in the development of video saliency models, where the amount of data to be processed is substantially larger.
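The core idea of Fisher pruning, as the abstract describes it, is to greedily remove the parts of a network whose removal increases the loss the least, estimated via an empirical Fisher approximation. A minimal sketch of that scoring rule is below; the function names and data shapes are assumptions for illustration, not the paper's actual code. The importance of a feature map is approximated by the average squared product of its activation and the gradient of the loss with respect to that activation, and the lowest-scoring maps are pruned first.

```python
import numpy as np

def fisher_scores(activations, gradients):
    """Empirical-Fisher importance per feature map.

    activations, gradients: arrays of shape (n_examples, n_feature_maps),
    where gradients are d(loss)/d(activation). The score approximates the
    increase in loss if the corresponding feature map were set to zero.
    """
    return 0.5 * np.mean((activations * gradients) ** 2, axis=0)

def prune_order(activations, gradients):
    """Feature-map indices ordered from least to most important."""
    return np.argsort(fisher_scores(activations, gradients))

# Toy example: the loss is nearly insensitive to feature map 1, so its
# Fisher score is tiny and it becomes the first pruning candidate.
rng = np.random.default_rng(0)
acts = rng.normal(size=(100, 3))
grads = rng.normal(size=(100, 3))
grads[:, 1] *= 1e-3  # near-zero gradient signal for map 1
print(prune_order(acts, grads)[0])  # prints 1
```

In the greedy scheme the abstract outlines, this scoring and removal step would be repeated, retraining or distilling between rounds, until the desired speed/accuracy trade-off is reached.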
