SOTAVerified

Hard Attention

Papers

Showing 81–90 of 100 papers

Title | Status | Hype
Theoretical Limitations of Self-Attention in Neural Sequence Models | – | 0
Near-Optimal Glimpse Sequences for Improved Hard Attention Neural Network Training | – | 0
Robust Sequence-to-Sequence Acoustic Modeling with Stepwise Monotonic Attention for Neural TTS | Code | 0
Patchwork: A Patch-wise Attention Network for Efficient Object Detection and Segmentation in Video Streams | – | 0
Surprisingly Easy Hard-Attention for Sequence to Sequence Learning | Code | 0
A Differentiable Self-disambiguated Sense Embedding Model via Scaled Gumbel Softmax | – | 0
Extractive Adversarial Networks: High-Recall Explanations for Identifying Personal Attacks in Social Media Posts | – | 0
Learning Visual Question Answering by Bootstrapping Hard Attention | Code | 0
Latent Alignment and Variational Attention | Code | 0
Sequence-to-sequence Models for Cache Transition Systems | Code | 0
Page 9 of 10

No leaderboard results yet.