SOTAVerified

Multimodal Deep Learning

Multimodal deep learning is a branch of deep learning that combines information from multiple modalities, such as text, images, audio, and video, to make more accurate and comprehensive predictions. It involves training deep neural networks on data that spans several types of information and using the resulting model to make predictions from the combined input.

One of the key challenges in multimodal deep learning is how to effectively combine information from multiple modalities. A variety of techniques exist for this, such as fusing the features extracted from each modality, or using attention mechanisms to weight each modality's contribution according to its importance for the task at hand.
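To make the two fusion strategies above concrete, here is a minimal sketch in plain NumPy. The feature vectors and relevance scores are made up for illustration; in a real model, each modality's features would come from an encoder (e.g. a text and an image network) and the attention scores would be produced by a learned scoring module.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature vectors, standing in for encoder outputs.
text_feat = rng.standard_normal(8)
image_feat = rng.standard_normal(8)

# Strategy 1: concatenation fusion - stack modality features into one joint
# vector and feed it to downstream layers.
fused_concat = np.concatenate([text_feat, image_feat])  # shape (16,)

def attention_fuse(feats, scores):
    """Strategy 2: attention fusion.

    Softmax the per-modality relevance scores into weights that sum to 1,
    then return the weighted sum of the modality features, so that more
    informative modalities contribute more to the fused representation.
    """
    weights = np.exp(scores - scores.max())  # subtract max for stability
    weights /= weights.sum()
    return weights @ feats, weights

feats = np.stack([text_feat, image_feat])  # shape (2, 8)
scores = np.array([2.0, 0.5])              # hypothetical relevance scores
fused_attn, weights = attention_fuse(feats, scores)
```

Here concatenation preserves every feature but grows the input dimension with each added modality, while attention fusion keeps the dimension fixed and lets the weights adapt per example.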

Multimodal deep learning has many applications, including image captioning, speech recognition, natural language processing, and autonomous vehicles. By combining information from multiple modalities, multimodal deep learning can improve the accuracy and robustness of models, enabling them to perform better in real-world scenarios where multiple types of information are present.

Papers

Showing 51–75 of 213 papers

Title | Status | Hype
Creation and Validation of a Chest X-Ray Dataset with Eye-tracking and Report Dictation for AI Development | Code | 1
MMEA: Entity Alignment for Multi-Modal Knowledge Graphs | Code | 1
Jointly Fine-Tuning “BERT-like” Self Supervised Models to Improve Multimodal Speech Emotion Recognition | Code | 1
More Diverse Means Better: Multimodal Deep Learning Meets Remote Sensing Imagery Classification | Code | 1
Image Search With Text Feedback by Visiolinguistic Attention Learning | Code | 1
HYDRA: A multimodal deep learning framework for malware classification | Code | 1
Analysis of Social Media Data using Multimodal Deep Learning for Disaster Response | Code | 1
Are These Birds Similar: Learning Branched Networks for Fine-grained Representations | Code | 1
Audio-Conditioned U-Net for Position Estimation in Full Sheet Images | Code | 1
Ontology-based knowledge representation for bone disease diagnosis: a foundation for safe and sustainable medical artificial intelligence systems | | 0
Unified Cross-Modal Attention-Mixer Based Structural-Functional Connectomics Fusion for Neuropsychiatric Disorder Diagnosis | | 0
Multimodal Fusion of Glucose Monitoring and Food Imagery for Caloric Content Prediction | | 0
NewsNet-SDF: Stochastic Discount Factor Estimation with Pretrained Language Model News Embeddings via Adversarial Networks | | 0
BMMDetect: A Multimodal Deep Learning Framework for Comprehensive Biomedical Misconduct Detection | | 0
Multimodal Deep Learning for Stroke Prediction and Detection using Retinal Imaging and Clinical Data | | 0
Multimodal Deep Learning-Empowered Beam Prediction in Future THz ISAC Systems | | 0
Timing Is Everything: Finding the Optimal Fusion Points in Multimodal Medical Imaging | | 0
Multimodal Doctor-in-the-Loop: A Clinically-Guided Explainable Framework for Predicting Pathological Response in Non-Small Cell Lung Cancer | | 0
A Multimodal Deep Learning Approach for White Matter Shape Prediction in Diffusion MRI Tractography | | 0
Integrating Vision and Location with Transformers: A Multimodal Deep Learning Framework for Medical Wound Analysis | | 0
Gaze-Guided Learning: Avoiding Shortcut Bias in Visual Classification | Code | 0
Improving Neonatal Care: An Active Dry-Contact Electrode-based Continuous EEG Monitoring System with Seizure Detection | | 0
Multimodal Deep Learning for Subtype Classification in Breast Cancer Using Histopathological Images and Gene Expression Data | Code | 0
TabulaTime: A Novel Multimodal Deep Learning Framework for Advancing Acute Coronary Syndrome Prediction through Environmental and Clinical Data Integration | | 0
Evolution of Data-driven Single- and Multi-Hazard Susceptibility Mapping and Emergence of Deep Learning Methods | | 0
Page 3 of 9

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Two Branch Network (Text: BERT + Image: NTS-Net) | Accuracy | 96.81 | | Unverified