Task-Specific Normalization for Continual Learning of Blind Image Quality Models
Weixia Zhang, Kede Ma, Guangtao Zhai, Xiaokang Yang
Code
- github.com/zwx8981/tsn-iqa (official, PyTorch)
- github.com/yangyucheng000/MSpaper/tree/main/tsn (MindSpore)
Abstract
In this paper, we present a simple yet effective continual learning method for blind image quality assessment (BIQA) with improved quality prediction accuracy, plasticity-stability trade-off, and robustness to task order and task length. The key step in our approach is to freeze all convolution filters of a pre-trained deep neural network (DNN) for an explicit promise of stability, and to learn task-specific normalization parameters for plasticity. We assign each new IQA dataset (i.e., task) a prediction head and load the corresponding normalization parameters to produce a quality score. The final quality estimate is computed by a weighted summation of the predictions from all heads, with the weights determined by a lightweight K-means gating mechanism. Extensive experiments on six IQA datasets demonstrate the advantages of the proposed method over previous training techniques for BIQA.
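The weighted summation over task heads described above can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: it assumes each task contributes one K-means centroid in feature space, and that gating weights are obtained by a softmax over negative squared distances (the function name, distance measure, and temperature parameter are illustrative assumptions).

```python
import numpy as np

def gated_quality_score(feature, centroids, head_scores, tau=1.0):
    """Hypothetical sketch of K-means gating for multi-head BIQA.

    feature:     (d,) image feature vector
    centroids:   (K, d) one K-means centroid per task
    head_scores: (K,) quality prediction from each task-specific head
    tau:         assumed softmax temperature (not from the paper)
    """
    # Squared Euclidean distance from the feature to each task centroid.
    d = np.sum((centroids - feature) ** 2, axis=1)
    # Softmax over negative distances: closer centroids get larger weights.
    w = np.exp(-d / tau)
    w /= w.sum()
    # Final estimate: weighted summation of all head predictions.
    return float(np.dot(w, head_scores))
```

For instance, a feature lying at a task's centroid receives nearly all of the gating weight, so the final score essentially reduces to that task head's prediction.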