SOTAVerified

Multimodal Deep Learning

Multimodal deep learning combines information from multiple modalities, such as text, images, audio, and video, to make more accurate and comprehensive predictions. It involves training deep neural networks on data that spans several types of information, so that predictions can draw on the combined signal rather than on any single source.

One of the key challenges in multimodal deep learning is how to effectively combine information from multiple modalities. This can be done using a variety of techniques, such as fusing the features extracted from each modality, or using attention mechanisms to weight the contribution of each modality based on its importance for the task at hand.
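The two strategies above can be sketched concretely. The snippet below is a minimal, illustrative example (not taken from any specific paper on this page): it assumes per-modality features have already been extracted by separate encoders, and the feature dimensions, the random inputs, and the scoring vector `score_w` are all hypothetical stand-ins for learned components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: 64-dim features already produced by per-modality
# encoders (e.g. a text encoder and an image encoder).
text_feat = rng.standard_normal(64)
image_feat = rng.standard_normal(64)

# 1) Feature-level fusion: concatenate the modality features and feed
#    the joint vector to a shared downstream predictor.
fused_concat = np.concatenate([text_feat, image_feat])  # shape (128,)

# 2) Attention-weighted fusion: score each modality's relevance, turn the
#    scores into weights that sum to 1, and take a convex combination.
def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

score_w = rng.standard_normal(64)  # hypothetical learned scoring vector
scores = np.array([score_w @ text_feat, score_w @ image_feat])
alpha = softmax(scores)            # per-modality attention weights
fused_attn = alpha[0] * text_feat + alpha[1] * image_feat  # shape (64,)
```

In a trained model, the concatenation would feed fully connected layers and `score_w` would be learned end to end; attention-style weighting lets the model lean on the more informative modality per example, which matters when one modality is noisy or missing.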

Multimodal deep learning has many applications, including image captioning, speech recognition, natural language processing, and autonomous vehicles. By combining information from multiple modalities, multimodal deep learning can improve the accuracy and robustness of models, enabling them to perform better in real-world scenarios where multiple types of information are present.

Papers

Showing 101–125 of 213 papers

Title | Status | Hype
--- | --- | ---
Exploring Multimodal Features and Fusion Strategies for Analyzing Disaster Tweets | | 0
Fine-grained Video Attractiveness Prediction Using Multimodal Deep Learning on a Large Real-world Dataset | | 0
From Multimodal to Unimodal Attention in Transformers using Knowledge Distillation | | 0
Geometric Multimodal Deep Learning with Multi-Scaled Graph Wavelet Convolutional Network | | 0
How to select and use tools? : Active Perception of Target Objects Using Multimodal Deep Learning | | 0
Hybrid Attention based Multimodal Network for Spoken Language Classification | | 0
Identification of Cognitive Workload during Surgical Tasks with Multimodal Deep Learning | | 0
Improved Multimodal Deep Learning with Variation of Information | | 0
Improving Neonatal Care: An Active Dry-Contact Electrode-based Continuous EEG Monitoring System with Seizure Detection | | 0
Innovative Framework for Early Estimation of Mental Disorder Scores to Enable Timely Interventions | | 0
Integrating Chemical Language and Molecular Graph in Multimodal Fused Deep Learning for Drug Property Prediction | | 0
Integrating Vision and Location with Transformers: A Multimodal Deep Learning Framework for Medical Wound Analysis | | 0
Integrating Wearable Sensor Data and Self-reported Diaries for Personalized Affect Forecasting | | 0
Leveraging Audio Gestalt to Predict Media Memorability | | 0
Listen to Your Favorite Melodies with img2Mxml, Producing MusicXML from Sheet Music Image by Measure-based Multimodal Deep Learning-driven Assembly | | 0
M2D: A Multi-modal Framework for Automatic Medical Diagnosis | | 0
MalFake: A Multimodal Fake News Identification for Malayalam using Recurrent Neural Networks and VGG-16 | | 0
MDL-CW: A Multimodal Deep Learning Framework With Cross Weights | | 0
P-Transformer: A Prompt-based Multimodal Transformer Architecture For Medical Tabular Data | | 0
Multimodal Deep Unfolding for Guided Image Super-Resolution | | 0
Multi-Modal Detection of Alzheimer's Disease from Speech and Text | | 0
Multimodal Doctor-in-the-Loop: A Clinically-Guided Explainable Framework for Predicting Pathological Response in Non-Small Cell Lung Cancer | | 0
Multimodal Emotion Recognition Using Multimodal Deep Learning | | 0
Multimodal Fusion of Glucose Monitoring and Food Imagery for Caloric Content Prediction | | 0
Multimodal Intelligence: Representation Learning, Information Fusion, and Applications | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | Two Branch Network (Text: BERT + Image: NTS-Net) | Accuracy | 96.81 | | Unverified