SOTAVerified

Multimodal Deep Learning

Multimodal deep learning combines information from multiple modalities, such as text, images, audio, and video, to make more accurate and comprehensive predictions. It involves training deep neural networks on data that spans several input types and using the combined representation to make predictions.
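As a minimal sketch of the idea, the snippet below fuses a text embedding and an image embedding by projecting each into a shared space and concatenating them before a classifier head. The encoder weights, feature sizes, and three-class output are illustrative assumptions; in practice each branch would be a full deep network (e.g. a text transformer and an image CNN).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality encoders: stand-ins for deep networks,
# here reduced to a single random linear projection into 64 dims.
W_text = rng.normal(size=(300, 64))    # 300-dim text features -> 64
W_image = rng.normal(size=(2048, 64))  # 2048-dim image features -> 64

def encode(x, W):
    """Project raw modality features and apply a ReLU."""
    return np.maximum(x @ W, 0.0)

def fuse_predict(text_feat, image_feat, W_out):
    """Early fusion: concatenate modality embeddings, then classify."""
    z = np.concatenate([encode(text_feat, W_text),
                        encode(image_feat, W_image)])  # shape (128,)
    logits = z @ W_out
    return int(logits.argmax())

W_out = rng.normal(size=(128, 3))  # three hypothetical classes
pred = fuse_predict(rng.normal(size=300), rng.normal(size=2048), W_out)
```

Because both branches are projected to the same width before concatenation, the classifier sees a single fixed-size vector regardless of how differently sized the raw modality features are.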

A key challenge in multimodal deep learning is combining information from multiple modalities effectively. Common approaches include fusing the features extracted from each modality, or using attention mechanisms to weight each modality's contribution according to its importance for the task at hand.

Multimodal deep learning has many applications, including image captioning, speech recognition, natural language processing, and autonomous vehicles. By combining information from multiple modalities, multimodal deep learning can improve the accuracy and robustness of models, enabling them to perform better in real-world scenarios where multiple types of information are present.

Papers

Showing 1–25 of 213 papers

Title | Status | Hype
Ontology-based knowledge representation for bone disease diagnosis: a foundation for safe and sustainable medical artificial intelligence systems | - | 0
Unified Cross-Modal Attention-Mixer Based Structural-Functional Connectomics Fusion for Neuropsychiatric Disorder Diagnosis | - | 0
Multimodal Fusion of Glucose Monitoring and Food Imagery for Caloric Content Prediction | - | 0
NewsNet-SDF: Stochastic Discount Factor Estimation with Pretrained Language Model News Embeddings via Adversarial Networks | - | 0
BMMDetect: A Multimodal Deep Learning Framework for Comprehensive Biomedical Misconduct Detection | - | 0
Multimodal Deep Learning-Empowered Beam Prediction in Future THz ISAC Systems | - | 0
Multimodal Deep Learning for Stroke Prediction and Detection using Retinal Imaging and Clinical Data | - | 0
Timing Is Everything: Finding the Optimal Fusion Points in Multimodal Medical Imaging | - | 0
Multimodal Doctor-in-the-Loop: A Clinically-Guided Explainable Framework for Predicting Pathological Response in Non-Small Cell Lung Cancer | - | 0
A Multimodal Deep Learning Approach for White Matter Shape Prediction in Diffusion MRI Tractography | - | 0
Integrating Vision and Location with Transformers: A Multimodal Deep Learning Framework for Medical Wound Analysis | - | 0
Gaze-Guided Learning: Avoiding Shortcut Bias in Visual Classification | Code | 0
Improving Neonatal Care: An Active Dry-Contact Electrode-based Continuous EEG Monitoring System with Seizure Detection | - | 0
Multimodal Deep Learning for Subtype Classification in Breast Cancer Using Histopathological Images and Gene Expression Data | Code | 0
TabulaTime: A Novel Multimodal Deep Learning Framework for Advancing Acute Coronary Syndrome Prediction through Environmental and Clinical Data Integration | - | 0
Evolution of Data-driven Single- and Multi-Hazard Susceptibility Mapping and Emergence of Deep Learning Methods | - | 0
ADMN: A Layer-Wise Adaptive Multimodal Network for Dynamic Input Noise and Compute Resources | - | 0
A Multimodal PDE Foundation Model for Prediction and Scientific Text Descriptions | Code | 0
Innovative Framework for Early Estimation of Mental Disorder Scores to Enable Timely Interventions | - | 0
A Self-supervised Multimodal Deep Learning Approach to Differentiate Post-radiotherapy Progression from Pseudoprogression in Glioblastoma | - | 0
Multimodal Prescriptive Deep Learning | - | 0
Multimodal Marvels of Deep Learning in Medical Diagnosis: A Comprehensive Review of COVID-19 Detection | Code | 0
CLASP: Contrastive Language-Speech Pretraining for Multilingual Multimodal Information Retrieval | Code | 1
Frozen Large-scale Pretrained Vision-Language Models are the Effective Foundational Backbone for Multimodal Breast Cancer Prediction | Code | 0
CardioLab: Laboratory Values Estimation and Monitoring from Electrocardiogram Signals -- A Multimodal Deep Learning Approach | Code | 1
Page 1 of 9

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Two Branch Network (Text: BERT + Image: NTS-Net) | Accuracy | 96.81 | - | Unverified