SOTAVerified

Hard Attention

Papers

Showing 51–75 of 100 papers

Title | Status | Hype
Neuroevolution of Self-Attention Over Proto-Objects | | 0
Object Guided External Memory Network for Video Object Detection | | 0
Saturated Transformers are Constant-Depth Threshold Circuits | | 0
Patchwork: A Patch-wise Attention Network for Efficient Object Detection and Segmentation in Video Streams | | 0
Robust Brain Magnetic Resonance Image Segmentation for Hydrocephalus Patients: Hard and Soft Attention | | 0
Sharp Attention for Sequence to Sequence Learning | | 0
Simulating Hard Attention Using Soft Attention | | 0
Soft-Hard Attention U-Net Model and Benchmark Dataset for Multiscale Image Shadow Removal | | 0
Specialized Transformers: Faster, Smaller and more Accurate NLP Models | | 0
Text as Environment: A Deep Reinforcement Learning Text Readability Assessment Model | | 0
Theoretical Limitations of Self-Attention in Neural Sequence Models | | 0
Transformers as Transducers | | 0
Transformers in Uniform TC^0 | | 0
Unique Hard Attention: A Tale of Two Sides | | 0
Upper, Middle and Lower Region Learning for Facial Action Unit Detection | | 0
Video Violence Recognition and Localization Using a Semi-Supervised Hard Attention Model | | 0
Word Representation Models for Morphologically Rich Languages in Neural Machine Translation | | 0
Hierarchical Memory Networks | | 0
Hierarchical Multi-scale Attention Networks for Action Recognition | | 0
Improved Attention Models for Memory Augmented Neural Network Adaptive Controllers | | 0
Learning deep graph matching with channel-independent embedding and Hungarian attention | | 0
Learning Hard Alignments with Variational Inference | | 0
Logical Languages Accepted by Transformer Encoders with Hard Attention | | 0
Graph Representation Learning via Hard and Channel-Wise Attention Networks | Code | 0
Dual Attention Networks for Few-Shot Fine-Grained Recognition | Code | 0
Page 3 of 4

No leaderboard results yet.