SOTAVerified

Hard Attention

Papers

Showing 76–100 of 100 papers

Title | Status | Hype
Graph Representation Learning via Hard and Channel-Wise Attention Networks | Code | 0
Look Harder: A Neural Machine Translation Model with Hard Attention | | 0
Theoretical Limitations of Self-Attention in Neural Sequence Models | | 0
Near-Optimal Glimpse Sequences for Improved Hard Attention Neural Network Training | | 0
Robust Sequence-to-Sequence Acoustic Modeling with Stepwise Monotonic Attention for Neural TTS | Code | 0
Exact Hard Monotonic Attention for Character-Level Transduction | Code | 1
Patchwork: A Patch-wise Attention Network for Efficient Object Detection and Segmentation in Video Streams | | 0
Surprisingly Easy Hard-Attention for Sequence to Sequence Learning | Code | 0
A Differentiable Self-disambiguated Sense Embedding Model via Scaled Gumbel Softmax | | 0
Extractive Adversarial Networks: High-Recall Explanations for Identifying Personal Attacks in Social Media Posts | | 0
Hard Non-Monotonic Attention for Character-Level Transduction | Code | 1
Learning Visual Question Answering by Bootstrapping Hard Attention | Code | 0
Latent Alignment and Variational Attention | Code | 0
Sequence-to-sequence Models for Cache Transition Systems | Code | 0
Reinforced Self-Attention Network: a Hybrid of Hard and Soft Attention for Sequence Modeling | Code | 0
Overcoming catastrophic forgetting with hard attention to the task | Code | 0
Hierarchical Multi-scale Attention Networks for Action Recognition | | 0
An Exploration of Neural Sequence-to-Sequence Architectures for Automatic Post-Editing | | 0
Learning Hard Alignments with Variational Inference | | 0
Morphological Inflection Generation with Hard Monotonic Attention | Code | 0
Word Representation Models for Morphologically Rich Languages in Neural Machine Translation | | 0
Progressive Attention Networks for Visual Attribute Prediction | Code | 0
Hierarchical Memory Networks | | 0
Deep Attention Recurrent Q-Network | Code | 0
Recurrent Models of Visual Attention | Code | 1
Page 4 of 4

No leaderboard results yet.