SOTAVerified

Hard Attention

Papers

Showing 1–50 of 100 papers

Title | Status | Hype
Table Retrieval May Not Necessitate Table-specific Model Design | Code | 1
Recurrent Models of Visual Attention | Code | 1
FANet: A Feedback Attention Network for Improved Biomedical Image Segmentation | Code | 1
Hard Non-Monotonic Attention for Character-Level Transduction | Code | 1
AMR Parsing with Action-Pointer Transformer | Code | 1
Exact Hard Monotonic Attention for Character-Level Transduction | Code | 1
Self-Attention Networks Can Process Bounded Hierarchical Languages | Code | 1
Hard-Attention for Scalable Image Classification | Code | 1
Hard-Attention Gates with Gradient Routing for Endoscopic Image Computing | Code | 1
Investigation of Architectures and Receptive Fields for Appearance-based Gaze Estimation | Code | 1
Learning Texture Transformer Network for Image Super-Resolution | Code | 1
A Hybrid Attention Mechanism for Weakly-Supervised Temporal Action Localization | Code | 1
Mutual Distillation Learning For Person Re-Identification | Code | 1
Coherent Concept-based Explanations in Medical Image and Its Application to Skin Lesion Diagnosis | Code | 1
Learning to Perceive in Deep Model-Free Reinforcement Learning | Code | 0
Surprisingly Easy Hard-Attention for Sequence to Sequence Learning | Code | 0
Learning Visual Question Answering by Bootstrapping Hard Attention | Code | 0
Deep Attention Recurrent Q-Network | Code | 0
Recurrent Alignment with Hard Attention for Hierarchical Text Rating | Code | 0
TRIP: Trainable Region-of-Interest Prediction for Hardware-Efficient Neuromorphic Processing on Event-based Vision | Code | 0
Understanding Interlocking Dynamics of Cooperative Rationalization | Code | 0
Dual Attention Networks for Few-Shot Fine-Grained Recognition | Code | 0
Saccader: Improving Accuracy of Hard Attention Models for Vision | Code | 0
Vamos: Versatile Action Models for Video Understanding | Code | 0
Graph Representation Learning via Hard and Channel-Wise Attention Networks | Code | 0
Morphological Inflection Generation with Hard Monotonic Attention | Code | 0
Robust Sequence-to-Sequence Acoustic Modeling with Stepwise Monotonic Attention for Neural TTS | Code | 0
HAT-CL: A Hard-Attention-to-the-Task PyTorch Library for Continual Learning | Code | 0
Neural Architectures for Nested NER through Linearization | Code | 0
Consistency driven Sequential Transformers Attention Model for Partially Observable Scenes | Code | 0
On the Learning Dynamics of Attention Networks | Code | 0
AxFormer: Accuracy-driven Approximation of Transformers for Faster, Smaller and more Accurate NLP Models | Code | 0
Overcoming catastrophic forgetting with hard attention to the task | Code | 0
Progressive Attention Networks for Visual Attribute Prediction | Code | 0
Read, Highlight and Summarize: A Hierarchical Neural Semantic Encoder-based Approach | Code | 0
Reinforced Self-Attention Network: a Hybrid of Hard and Soft Attention for Sequence Modeling | Code | 0
Latent Alignment and Variational Attention | Code | 0
Sequence-to-sequence Models for Cache Transition Systems | Code | 0
Binding Actions to Objects in World Models | Code | 0
A Probabilistic Hard Attention Model For Sequentially Observed Scenes | Code | 0
Multi-View Unsupervised Image Generation with Cross Attention Guidance | | 0
Near-Optimal Glimpse Sequences for Improved Hard Attention Neural Network Training | | 0
Near-Optimal Glimpse Sequences for Training Hard Attention Neural Networks | | 0
Neuroevolution of Self-Attention Over Proto-Objects | | 0
Object Guided External Memory Network for Video Object Detection | | 0
Saturated Transformers are Constant-Depth Threshold Circuits | | 0
Patchwork: A Patch-wise Attention Network for Efficient Object Detection and Segmentation in Video Streams | | 0
Robust Brain Magnetic Resonance Image Segmentation for Hydrocephalus Patients: Hard and Soft Attention | | 0
Sharp Attention for Sequence to Sequence Learning | | 0
Simulating Hard Attention Using Soft Attention | | 0
Page 1 of 2

No leaderboard results yet.