
AIA: Attention in Attention Within Collaborate Domains

2022-10-07 · Pattern Recognition and Computer Vision 2022 · Code Available

Le Zhang, Qi Feng, Yao Lu, Chang Liu, and Guangming Lu


Abstract

Attention mechanisms can effectively improve the performance of mobile networks at a limited computational cost. However, existing attention methods extract importance from only one domain of the network, hindering further performance improvement. In this paper, we propose the Attention in Attention (AIA) mechanism, which integrates One-Dimensional Frequency Channel Attention (1D FCA) with Joint Coordinate Attention (JCA) to collaboratively adjust channel and coordinate weights in the frequency and spatial domains, respectively. Specifically, 1D FCA uses a 1D Discrete Cosine Transform (DCT) to adaptively extract and enhance the necessary channel information in the frequency domain. JCA uses explicit and implicit coordinate information to extract position features and embed them into the frequency channel attention. Extensive experiments on different datasets demonstrate that the proposed AIA mechanism effectively improves accuracy at only a limited computational cost.
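To make the frequency-domain idea concrete, here is a minimal sketch of DCT-based channel weighting in the spirit of 1D FCA. This is not the authors' implementation: the function names (`dct_1d`, `fca_channel_weights`), the single-frequency selection, and the sigmoid gating are illustrative assumptions; the paper's actual mechanism operates on learned feature maps inside a network.

```python
import math

def dct_1d(x, k):
    """DCT-II coefficient k of a 1D sequence x (the basis used by
    frequency-channel-attention-style methods)."""
    n = len(x)
    return sum(v * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
               for i, v in enumerate(x))

def fca_channel_weights(pooled, freq_idx=0):
    """Hypothetical sketch of 1D frequency channel attention.

    `pooled` holds one 1D sequence per channel (e.g. a feature map
    pooled along one spatial axis). Each channel's attention weight is
    the sigmoid of its chosen DCT frequency component, so channels
    whose pooled response carries more energy at that frequency are
    amplified relative to the others.
    """
    coeffs = [dct_1d(seq, freq_idx) for seq in pooled]
    return [1.0 / (1.0 + math.exp(-c)) for c in coeffs]

# Toy example: 3 channels, each with a length-4 pooled sequence.
# With freq_idx=0 the DCT coefficient reduces to the channel's sum,
# so the all-zero channel gets a neutral weight of 0.5.
pooled = [[1.0, 0.5, 0.25, 0.0],
          [0.0, 0.0, 0.0, 0.0],
          [2.0, 2.0, 2.0, 2.0]]
weights = fca_channel_weights(pooled, freq_idx=0)
```

In a full network these weights would multiply the corresponding channels, and the resulting attention map would then be refined by the coordinate branch (JCA), which is the "attention in attention" composition the paper proposes.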
