SOTAVerified

Hard Attention

Papers

Showing 51–100 of 100 papers

| Title | Status | Hype |
| --- | --- | --- |
| Neuroevolution of Self-Attention Over Proto-Objects | | 0 |
| Object Guided External Memory Network for Video Object Detection | | 0 |
| Saturated Transformers are Constant-Depth Threshold Circuits | | 0 |
| Patchwork: A Patch-wise Attention Network for Efficient Object Detection and Segmentation in Video Streams | | 0 |
| Robust Brain Magnetic Resonance Image Segmentation for Hydrocephalus Patients: Hard and Soft Attention | | 0 |
| Sharp Attention for Sequence to Sequence Learning | | 0 |
| Simulating Hard Attention Using Soft Attention | | 0 |
| Soft-Hard Attention U-Net Model and Benchmark Dataset for Multiscale Image Shadow Removal | | 0 |
| Specialized Transformers: Faster, Smaller and more Accurate NLP Models | | 0 |
| Text as Environment: A Deep Reinforcement Learning Text Readability Assessment Model | | 0 |
| Theoretical Limitations of Self-Attention in Neural Sequence Models | | 0 |
| Transformers as Transducers | | 0 |
| Transformers in Uniform TC^0 | | 0 |
| Unique Hard Attention: A Tale of Two Sides | | 0 |
| Upper, Middle and Lower Region Learning for Facial Action Unit Detection | | 0 |
| Video Violence Recognition and Localization Using a Semi-Supervised Hard Attention Model | | 0 |
| Word Representation Models for Morphologically Rich Languages in Neural Machine Translation | | 0 |
| Hierarchical Memory Networks | | 0 |
| Hierarchical Multi-scale Attention Networks for Action Recognition | | 0 |
| Improved Attention Models for Memory Augmented Neural Network Adaptive Controllers | | 0 |
| Learning deep graph matching with channel-independent embedding and Hungarian attention | | 0 |
| Learning Hard Alignments with Variational Inference | | 0 |
| Logical Languages Accepted by Transformer Encoders with Hard Attention | | 0 |
| Look Harder: A Neural Machine Translation Model with Hard Attention | | 0 |
| Graph Representation Learning via Hard and Channel-Wise Attention Networks | Code | 0 |
| HAT-CL: A Hard-Attention-to-the-Task PyTorch Library for Continual Learning | Code | 0 |
| Robust Sequence-to-Sequence Acoustic Modeling with Stepwise Monotonic Attention for Neural TTS | Code | 0 |
| Recurrent Alignment with Hard Attention for Hierarchical Text Rating | Code | 0 |
| On the Learning Dynamics of Attention Networks | Code | 0 |
| Dual Attention Networks for Few-Shot Fine-Grained Recognition | Code | 0 |
| Latent Alignment and Variational Attention | Code | 0 |
| Surprisingly Easy Hard-Attention for Sequence to Sequence Learning | Code | 0 |
| AxFormer: Accuracy-driven Approximation of Transformers for Faster, Smaller and more Accurate NLP Models | Code | 0 |
| Deep Attention Recurrent Q-Network | Code | 0 |
| Learning to Perceive in Deep Model-Free Reinforcement Learning | Code | 0 |
| Learning Visual Question Answering by Bootstrapping Hard Attention | Code | 0 |
| Overcoming catastrophic forgetting with hard attention to the task | Code | 0 |
| Saccader: Improving Accuracy of Hard Attention Models for Vision | Code | 0 |
| Progressive Attention Networks for Visual Attribute Prediction | Code | 0 |
| Read, Highlight and Summarize: A Hierarchical Neural Semantic Encoder-based Approach | Code | 0 |
| Binding Actions to Objects in World Models | Code | 0 |
| Reinforced Self-Attention Network: a Hybrid of Hard and Soft Attention for Sequence Modeling | Code | 0 |
| Morphological Inflection Generation with Hard Monotonic Attention | Code | 0 |
| TRIP: Trainable Region-of-Interest Prediction for Hardware-Efficient Neuromorphic Processing on Event-based Vision | Code | 0 |
| A Probabilistic Hard Attention Model For Sequentially Observed Scenes | Code | 0 |
| Sequence-to-sequence Models for Cache Transition Systems | Code | 0 |
| Consistency driven Sequential Transformers Attention Model for Partially Observable Scenes | Code | 0 |
| Understanding Interlocking Dynamics of Cooperative Rationalization | Code | 0 |
| Vamos: Versatile Action Models for Video Understanding | Code | 0 |
| Neural Architectures for Nested NER through Linearization | Code | 0 |
Page 2 of 2

No leaderboard results yet.