SOTAVerified

Deep Attention

Papers

Showing 50 of 109 papers

Title | Status | Hype
SGFormer: Single-Layer Graph Transformers with Approximation-Free Linear Complexity | Code | 3
AllWeatherNet: Unified Image Enhancement for Autonomous Driving under Adverse Weather and Low-light Conditions | Code | 2
Towards Deep Attention in Graph Neural Networks: Problems and Remedies | Code | 1
Weight Excitation: Built-in Attention Mechanisms in Convolutional Neural Networks | Code | 1
Bridging Textual and Tabular Data for Cross-Domain Text-to-SQL Semantic Parsing | Code | 1
Detecting Attended Visual Targets in Video | Code | 1
Image Search With Text Feedback by Visiolinguistic Attention Learning | Code | 1
Deep multi-stations weather forecasting: explainable recurrent convolutional neural networks | Code | 1
Hard-Attention for Scalable Image Classification | Code | 1
Deep Attention-guided Graph Clustering with Dual Self-supervision | Code | 1
Open-CyKG: An Open Cyber Threat Intelligence Knowledge Graph | Code | 1
Learning to Segment from Scribbles using Multi-scale Adversarial Attention Gates | Code | 1
Whole Slide Images based Cancer Survival Prediction using Attention Guided Deep Multiple Instance Learning Networks | Code | 1
End-to-end Prostate Cancer Detection in bpMRI via 3D CNNs: Effects of Attention Mechanisms, Clinical Priori and Decoupled False Positive Reduction | Code | 1
Label Cleaning Multiple Instance Learning: Refining Coarse Annotations on Single Whole-Slide Images | Code | 1
PREDATOR: Registration of 3D Point Clouds with Low Overlap | Code | 1
DAN: Deep Attention Neural Network for News Recommendation | Code | 1
Lipschitz Normalization for Self-Attention Layers with Application to Graph Neural Networks | Code | 1
Deep Attention Based Semi-Supervised 2D-Pose Estimation for Surgical Instruments | Code | 0
Processing Megapixel Images with Deep Attention-Sampling Models | Code | 0
Deep Attention Driven Reinforcement Learning (DAD-RL) for Autonomous Decision-Making in Dynamic Environment | Code | 0
Visual Attention Methods in Deep Learning: An In-Depth Survey | Code | 0
Thermal Image Super-Resolution Using Second-Order Channel Attention with Varying Receptive Fields | Code | 0
Do Transformers Need Deep Long-Range Memory? | Code | 0
Deep attention-based classification network for robust depth prediction | Code | 0
An Ensemble Framework for Probabilistic Short-Term Load Forecasting Based on BiTCN and Deep Attention Networks | Code | 0
RA-UNet: A hybrid deep attention-aware network to extract liver and tumor in CT scans | Code | 0
Deep Attention Aware Feature Learning for Person Re-Identification | Code | 0
Your Local GAN: Designing Two Dimensional Local Attention Mechanisms for Generative Models | Code | 0
Restoring Snow-Degraded Single Images With Wavelet in Vision Transformer | Code | 0
Multi-scale self-guided attention for medical image segmentation | Code | 0
Multi-modal Collaborative Optimization and Expansion Network for Event-assisted Single-eye Expression Recognition | Code | 0
PDANet: Polarity-consistent Deep Attention Network for Fine-grained Visual Emotion Regression | Code | 0
Searching for Ambiguous Objects in Videos using Relational Referring Expressions | Code | 0
Deep Attention Recurrent Q-Network | Code | 0
HIT: A Hierarchically Fused Deep Attention Network for Robust Code-mixed Language Representation | Code | 0
Deep Attention Q-Network for Personalized Treatment Recommendation | Code | 0
AIA: Attention in Attention Within Collaborate Domains | Code | 0
Infinite attention: NNGP and NTK for deep attention networks | Code | 0
Deep attention networks reveal the rules of collective motion in zebrafish | Code | 0
Compact Global Descriptor for Neural Networks | Code | 0
Geometry of Lightning Self-Attention: Identifiability and Dimension | Code | 0
Fundamental limits of learning in sequence multi-index models and deep attention networks: High-dimensional asymptotics and sharp thresholds | Code | 0
Bayes optimal learning of attention-indexed models | Code | 0
Grid Partitioned Attention: Efficient Transformer Approximation with Inductive Bias for High Resolution Detail Generation | Code | 0
Deep attention-guided fusion network for lesion segmentation | — | 0
Classification of Hand Movements from EEG using a Deep Attention-based LSTM Network | — | 0
Deep Attention Fusion Feature for Speech Separation with End-to-End Post-filter Method | — | 0
Call Attention to Rumors: Deep Attention Based Recurrent Neural Networks for Early Rumor Detection | — | 0
Show: 10 / 25 / 50
← Prev · Page 1 of 3 · Next →

No leaderboard results yet.