A Probabilistic Interpretation of Transformers
2022-04-28
Alexander Shim
Abstract
We propose a probabilistic interpretation, based on exponential families, of the exponential dot-product attention of transformers and of contrastive learning. The attention sublayer of a transformer is equivalent to a gradient ascent step on the log normalizer, which is the log-sum-exp term in the Hopfield theory of attention. This ascent step induces a parallel expansion of points, which is counterbalanced by a contraction from layer normalization. We also state theoretical limitations of both our theory and the Hopfield theory, and suggest directions for resolving them.
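As a minimal sketch of the identity behind this claim (notation ours, not the paper's: $q$ a query vector, $K = [k_1, \dots, k_N]^{\top}$ the key matrix, $\beta$ an inverse temperature, with values tied to keys as in the Hopfield formulation):

\[
A(q) \;=\; \beta^{-1} \log \sum_{i=1}^{N} \exp\bigl(\beta\, k_i^{\top} q\bigr),
\qquad
\nabla_q A(q) \;=\; K^{\top} \operatorname{softmax}(\beta K q).
\]

The gradient of the log normalizer is exactly the softmax-weighted attention readout, so the residual update $q \leftarrow q + \nabla_q A(q)$ is a unit-step gradient ascent on $A$; because every query moves this way simultaneously, the points spread apart, which is the parallel expansion that layer normalization then contracts. In a full transformer the values are a separate linear projection rather than the keys themselves, one source of the gap between this theory and the architecture.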