Beyond Linearity in Attention Projections: The Case for Nonlinear Queries
Marko Karbevski
Abstract
Recent algebraic analysis shows that in decoder-only and encoder-only transformers, the Query projection W_Q may be set to the identity without noticeable performance deterioration. This is possible because attention depends on X only through the products XW_Q, XW_K, XW_V, allowing basis transformations to be absorbed by adjacent layers and propagated through the network. We replace the linear map W_Q ∈ R^{d×d} with a nonlinear residual of the form Q(X) = X + f_θ(X), where f_θ is a bottleneck MLP with d^2 + O(d) parameters. The identity term anchors the nonlinearity to a known-good prior. Experiments on GPT-3-small-scale models show consistent improvement over the baseline, comfortably outperforming a model with 12.5% more non-embedding parameters. These results motivate investigation at larger scales and across modalities.
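The residual query map can be sketched in a few lines. The exact architecture of f_θ (activation function, bottleneck width, initialization) is not specified in the abstract; the sketch below assumes a two-layer MLP with bottleneck width r = d/2 and a ReLU, which gives the stated d^2 + O(d) parameter count (d·r + r + r·d + d), and small random weights are a hypothetical choice:

```python
import numpy as np

def nonlinear_query(X, W1, b1, W2, b2):
    """Residual nonlinear query Q(X) = X + f_theta(X), where f_theta is a
    bottleneck MLP: down-project to width r, nonlinearity, up-project to d.
    The identity term X anchors the map to the known-good prior W_Q = I."""
    h = np.maximum(0.0, X @ W1 + b1)   # ReLU (illustrative activation choice)
    return X + h @ W2 + b2

d, r, n = 8, 4, 3                      # model dim d, bottleneck r = d/2, n tokens
rng = np.random.default_rng(0)
X  = rng.standard_normal((n, d))
W1 = rng.standard_normal((d, r)) * 0.02
W2 = rng.standard_normal((r, d)) * 0.02
b1 = np.zeros(r)
b2 = np.zeros(d)

Q = nonlinear_query(X, W1, b1, W2, b2)

# Parameter count: d*r + r + r*d + d = d^2 + O(d) when r = d/2.
n_params = W1.size + b1.size + W2.size + b2.size
```

With near-zero weights the map starts close to Q(X) = X, i.e. close to the W_Q = I baseline that the algebraic analysis showed to be sufficient.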