SOTAVerified

Exclusive Self Attention

2026-03-10

Shuangfei Zhai


Abstract

We introduce exclusive self attention (XSA), a simple modification of self attention (SA) that improves the Transformer's sequence modeling performance. The key idea is to constrain attention to capture only information orthogonal to the token's own value vector (thus excluding information already present at the token's own position), encouraging better context modeling. Evaluated on the standard language modeling task, XSA consistently outperforms SA across model sizes up to 2.7B parameters and shows increasingly larger gains as sequence length grows.
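The abstract does not give the exact formulation, but one plausible reading of the orthogonality constraint can be sketched as follows: compute ordinary scaled dot-product attention, then project out the component of each token's output that lies along its own value vector. The function name `exclusive_self_attention` and the post-hoc projection step are assumptions for illustration, not the paper's verified method.

```python
import numpy as np

def exclusive_self_attention(q, k, v):
    """Hypothetical XSA sketch: standard single-head attention followed by
    removal of each output's component parallel to that token's own value
    vector, leaving only information orthogonal to v_i."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                 # (T, T) attention logits
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    out = weights @ v                             # standard SA output, (T, d)
    # Project out the component along each token's own value vector.
    coef = (out * v).sum(-1, keepdims=True) / ((v * v).sum(-1, keepdims=True) + 1e-9)
    return out - coef * v

rng = np.random.default_rng(0)
T, d = 5, 8
q, k, v = rng.normal(size=(3, T, d))
o = exclusive_self_attention(q, k, v)
# Each output row is numerically orthogonal to that token's value vector.
print(np.allclose((o * v).sum(-1), 0.0, atol=1e-6))
```

Under this reading, the projection guarantees `dot(o_i, v_i) = 0` for every position `i`, which matches the abstract's claim that attention captures only information orthogonal to the token's own value.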
