| Title | Date | Tags | Code | # |
| --- | --- | --- | --- | --- |
| Logical Languages Accepted by Transformer Encoders with Hard Attention | Oct 5, 2023 | Hard Attention | Unverified | 0 |
| Investigation of Architectures and Receptive Fields for Appearance-based Gaze Estimation | Aug 18, 2023 | Contrastive Learning, Disentanglement | Code Available | 1 |
| Average-Hard Attention Transformers are Constant-Depth Uniform Threshold Circuits | Aug 6, 2023 | Hard Attention | Unverified | 0 |
| On the Learning Dynamics of Attention Networks | Jul 25, 2023 | Hard Attention | Code Available | 0 |
| HAT-CL: A Hard-Attention-to-the-Task PyTorch Library for Continual Learning | Jul 18, 2023 | Continual Learning, Hard Attention | Code Available | 0 |
| Coherent Concept-based Explanations in Medical Image and Its Application to Skin Lesion Diagnosis | Apr 10, 2023 | Diagnostic, Hard Attention | Code Available | 1 |
| MESAHA-Net: Multi-Encoders based Self-Adaptive Hard Attention Network with Maximum Intensity Projections for Lung Nodule Segmentation in CT Scan | Apr 4, 2023 | Computed Tomography (CT), Decoder | Unverified | 0 |
| Dual Attention Model with Reinforcement Learning for Classification of Histology Whole-Slide Images | Feb 19, 2023 | Hard Attention, Whole Slide Images | Unverified | 0 |
| Learning to Perceive in Deep Model-Free Reinforcement Learning | Jan 10, 2023 | Atari Games, Hard Attention | Code Available | 0 |
| Deep Pneumonia: Attention-Based Contrastive Learning for Class-Imbalanced Pneumonia Lesion Recognition in Chest X-rays | Jul 23, 2022 | Contrastive Learning, Hard Attention | Unverified | 0 |
| Dual Attention Networks for Few-Shot Fine-Grained Recognition | Jun 28, 2022 | Hard Attention, Meta-Learning | Code Available | 0 |
| Table Retrieval May Not Necessitate Table-specific Model Design | May 19, 2022 | Hard Attention, Natural Questions | Code Available | 1 |
| Binding Actions to Objects in World Models | Apr 27, 2022 | Hard Attention, Object | Code Available | 0 |
| Formal Language Recognition by Hard Attention Transformers: Perspectives from Circuit Complexity | Apr 13, 2022 | Hard Attention | Unverified | 0 |
| Consistency driven Sequential Transformers Attention Model for Partially Observable Scenes | Apr 1, 2022 | Hard Attention | Code Available | 0 |
| Video Violence Recognition and Localization Using a Semi-Supervised Hard Attention Model | Feb 4, 2022 | Activity Recognition, Hard Attention | Unverified | 0 |
| You Only Need One Model for Open-domain Question Answering | Dec 14, 2021 | Hard Attention, Natural Questions | Unverified | 0 |
| CLAWS: Contrastive Learning with hard Attention and Weak Supervision | Dec 1, 2021 | Anomaly Detection, Contrastive Learning | Unverified | 0 |
| A Probabilistic Hard Attention Model For Sequentially Observed Scenes | Nov 15, 2021 | Hard Attention | Code Available | 0 |
| Understanding Interlocking Dynamics of Cooperative Rationalization | Oct 26, 2021 | Hard Attention | Code Available | 0 |
| Sharp Attention for Sequence to Sequence Learning | Sep 29, 2021 | Hard Attention, Scene Text Recognition | Unverified | 0 |
| Specialized Transformers: Faster, Smaller and more Accurate NLP Models | Sep 29, 2021 | Hard Attention, Quantization | Unverified | 0 |
| Saturated Transformers are Constant-Depth Threshold Circuits | Jun 30, 2021 | Hard Attention | Unverified | 0 |
| Multimodal Emergent Fake News Detection via Meta Neural Process Networks | Jun 22, 2021 | Fake News Detection, Hard Attention | Unverified | 0 |
| Self-Attention Networks Can Process Bounded Hierarchical Languages | May 24, 2021 | Hard Attention | Code Available | 1 |