SOTAVerified

Hard Attention

Papers

Showing 1–25 of 100 papers

| Title | Status | Hype |
| --- | --- | --- |
| Hard-Attention Gates with Gradient Routing for Endoscopic Image Computing | Code | 1 |
| Mutual Distillation Learning For Person Re-Identification | Code | 1 |
| Investigation of Architectures and Receptive Fields for Appearance-based Gaze Estimation | Code | 1 |
| Coherent Concept-based Explanations in Medical Image and Its Application to Skin Lesion Diagnosis | Code | 1 |
| Table Retrieval May Not Necessitate Table-specific Model Design | Code | 1 |
| Self-Attention Networks Can Process Bounded Hierarchical Languages | Code | 1 |
| AMR Parsing with Action-Pointer Transformer | Code | 1 |
| FANet: A Feedback Attention Network for Improved Biomedical Image Segmentation | Code | 1 |
| Hard-Attention for Scalable Image Classification | Code | 1 |
| A Hybrid Attention Mechanism for Weakly-Supervised Temporal Action Localization | Code | 1 |
| Learning Texture Transformer Network for Image Super-Resolution | Code | 1 |
| Exact Hard Monotonic Attention for Character-Level Transduction | Code | 1 |
| Hard Non-Monotonic Attention for Character-Level Transduction | Code | 1 |
| Recurrent Models of Visual Attention | Code | 1 |
| Comparison of different Unique hard attention transformer models by the formal languages they can recognize | | 0 |
| Characterizing the Expressivity of Transformer Language Models | | 0 |
| Exact Expressive Power of Transformers with Padding | | 0 |
| Emergence of Fixational and Saccadic Movements in a Multi-Level Recurrent Attention Model for Vision | | 0 |
| NoPE: The Counting Power of Transformers with No Positional Encodings | | 0 |
| Neuroevolution of Self-Attention Over Proto-Objects | | 0 |
| AttentionDrop: A Novel Regularization Method for Transformer Models | | 0 |
| Center-guided Classifier for Semantic Segmentation of Remote Sensing Images | Code | 0 |
| Unique Hard Attention: A Tale of Two Sides | | 0 |
| Lower Bounds for Chain-of-Thought Reasoning in Hard-Attention Transformers | | 0 |
| Ehrenfeucht-Haussler Rank and Chain of Thought | | 0 |
Page 1 of 4

No leaderboard results yet.