
SAGA: Selective Adaptive Gating for Efficient and Expressive Linear Attention

2026-03-07

Yuan Cao, Dong Wang


Abstract

While Transformer architectures excel at modeling long-range dependencies, contributing to their widespread adoption in vision tasks, the quadratic complexity of softmax-based attention imposes a major bottleneck, particularly when processing high-resolution images. Linear attention offers a promising alternative by reformulating the attention computation from (QK)V to Q(KV), reducing the complexity from O(N^2) to O(N) while preserving the global receptive field. However, most existing methods compress historical key-value (KV) information uniformly, which can lead to feature redundancy and the loss of directional alignment with the query (Q). This uniform compression results in low-rank KV feature maps, contributing to a performance gap relative to softmax attention. To mitigate this limitation, we propose Selective Adaptive GAting for Efficient and Expressive Linear Attention (SAGA), which introduces input-adaptive learnable gates to selectively modulate information aggregation into the KV feature map. These gates enhance semantic diversity and alleviate the low-rank constraint inherent in conventional linear attention. Additionally, we propose an efficient Hadamard-product decomposition for gate computation that introduces no additional memory overhead. Experiments demonstrate that SAGA achieves a 1.76× improvement in throughput and a 2.69× reduction in peak GPU memory compared to PVT-T at a resolution of 1280 × 1280. Moreover, it improves top-1 accuracy by up to 4.4% on ImageNet, demonstrating both computational efficiency and model effectiveness.
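The reordering from (QK)V to Q(KV) that the abstract describes, and the idea of gating the KV aggregation with an elementwise (Hadamard) product, can be sketched as follows. This is a minimal illustration only: the kernel feature map `phi` and the gate `gate` below are placeholder assumptions, not SAGA's actual learned design.

```python
import numpy as np

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Kernelized linear attention: phi(Q) (phi(K)^T V), O(N) in sequence length N.

    Q, K, V: arrays of shape (N, d). phi is a positive feature map
    (here an illustrative shifted ReLU, not the paper's choice).
    """
    Qp, Kp = phi(Q), phi(K)
    KV = Kp.T @ V                    # (d, d) summary, size independent of N
    Z = Qp @ Kp.sum(axis=0)          # per-query normalizer, shape (N,)
    return (Qp @ KV) / Z[:, None]

def gated_linear_attention(Q, K, V, gate, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Same computation, but each key's contribution to the KV summary is
    modulated elementwise by a gate of shape (N, d) before aggregation.
    In SAGA the gates are input-adaptive and learned; here `gate` is
    simply passed in to show where the Hadamard product enters.
    """
    Qp, Kp = phi(Q), phi(K)
    Kg = gate * Kp                   # Hadamard product: selective aggregation
    KV = Kg.T @ V
    Z = Qp @ Kg.sum(axis=0)
    return (Qp @ KV) / Z[:, None]
```

With an all-ones gate, the gated variant reduces exactly to plain linear attention; non-uniform gates let different keys contribute unequally to the (d, d) KV summary, which is the mechanism the abstract credits with raising the effective rank of the KV feature map.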
