| Title | Date | Tags | Code | # |
| --- | --- | --- | --- | --- |
| Learning Texture Transformer Network for Image Super-Resolution | Jun 7, 2020 | Hard Attention, Image Generation | Available | 1 |
| Exact Hard Monotonic Attention for Character-Level Transduction | May 15, 2019 | Hard Attention, Inductive Bias | Available | 1 |
| Hard Non-Monotonic Attention for Character-Level Transduction | Aug 29, 2018 | Hard Attention, Image Captioning | Available | 1 |
| Recurrent Models of Visual Attention | Jun 24, 2014 | Hard Attention, Image Classification | Available | 1 |
| Comparison of different Unique hard attention transformer models by the formal languages they can recognize | Jun 3, 2025 | Hard Attention, Survey | Unverified | 0 |
| Characterizing the Expressivity of Transformer Language Models | May 29, 2025 | Hard Attention | Unverified | 0 |
| Exact Expressive Power of Transformers with Padding | May 25, 2025 | Hard Attention | Unverified | 0 |
| Emergence of Fixational and Saccadic Movements in a Multi-Level Recurrent Attention Model for Vision | May 19, 2025 | Hard Attention, Image Classification | Unverified | 0 |
| NoPE: The Counting Power of Transformers with No Positional Encodings | May 16, 2025 | Hard Attention | Unverified | 0 |
| Neuroevolution of Self-Attention Over Proto-Objects | Apr 30, 2025 | Hard Attention, Image Segmentation | Unverified | 0 |