SOTAVerified

Multimodal Deep Learning

Multimodal deep learning is a branch of deep learning that combines information from multiple modalities, such as text, images, audio, and video, to make more accurate and comprehensive predictions. It involves training deep neural networks on data that spans several types of information and using the trained network to make predictions from this combined input.

One of the key challenges in multimodal deep learning is how to effectively combine information from multiple modalities. This can be done using a variety of techniques, such as fusing the features extracted from each modality, or using attention mechanisms to weight the contribution of each modality based on its importance for the task at hand.
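The two fusion strategies mentioned above can be sketched in a few lines. This is a minimal NumPy illustration, not a reference implementation: the feature vectors and the relevance scores are hypothetical stand-ins for outputs of per-modality encoders and a learned scoring layer.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Hypothetical per-modality feature vectors (e.g. from pretrained encoders)
text_feat = np.array([0.2, 0.7, 0.1])
image_feat = np.array([0.9, 0.3, 0.5])
audio_feat = np.array([0.4, 0.4, 0.6])

# 1) Feature-level fusion: concatenate the modality features into one vector
fused = np.concatenate([text_feat, image_feat, audio_feat])  # shape (9,)

# 2) Attention-style fusion: weight each modality by a relevance score
scores = np.array([1.5, 0.3, 0.8])   # hypothetical learned relevance scores
weights = softmax(scores)             # normalized so the weights sum to 1
attended = (weights[:, None] *
            np.stack([text_feat, image_feat, audio_feat])).sum(axis=0)
```

In a real model both `fused` and `attended` would feed a downstream classifier head, and the attention scores would be produced by a small network conditioned on the inputs rather than fixed constants.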

Multimodal deep learning has many applications, including image captioning, speech recognition, natural language processing, and autonomous vehicles. By combining information from multiple modalities, multimodal deep learning can improve the accuracy and robustness of models, enabling them to perform better in real-world scenarios where multiple types of information are present.

Papers

Showing 76-100 of 213 papers

Title | Status | Hype
Deep Learning for Technical Document Classification |  | 0
A scoping review on multimodal deep learning in biomedical images and texts |  | 0
A Multimodal Deep Learning Approach for White Matter Shape Prediction in Diffusion MRI Tractography |  | 0
A Systematic Review of Intermediate Fusion in Multimodal Deep Learning for Biomedical Applications |  | 0
MultiCrossViT: Multimodal Vision Transformer for Schizophrenia Prediction using Structural MRI and Functional Network Connectivity Data |  | 0
A multimodal deep learning approach for named entity recognition from social media |  | 0
A Review on Methods and Applications in Multimodal Deep Learning |  | 0
MalFake: A Multimodal Fake News Identification for Malayalam using Recurrent Neural Networks and VGG-16 |  | 0
Deep Coupled-Representation Learning for Sparse Linear Inverse Problems with Side Information |  | 0
Improving Neonatal Care: An Active Dry-Contact Electrode-based Continuous EEG Monitoring System with Seizure Detection |  | 0
Improved Multimodal Deep Learning with Variation of Information |  | 0
Data-driven geophysics: from dictionary learning to deep learning |  | 0
MDL-CW: A Multimodal Deep Learning Framework With Cross Weights |  | 0
Innovative Framework for Early Estimation of Mental Disorder Scores to Enable Timely Interventions |  | 0
Integrating Chemical Language and Molecular Graph in Multimodal Fused Deep Learning for Drug Property Prediction |  | 0
Integrating Vision and Location with Transformers: A Multimodal Deep Learning Framework for Medical Wound Analysis |  | 0
Integrating Wearable Sensor Data and Self-reported Diaries for Personalized Affect Forecasting |  | 0
Deep learning evaluation using deep linguistic processing |  | 0
AJILE Movement Prediction: Multimodal Deep Learning for Natural Human Neural Recordings and Video |  | 0
Identification of Cognitive Workload during Surgical Tasks with Multimodal Deep Learning |  | 0
ARC-NLP at Multimodal Hate Speech Event Detection 2023: Multimodal Methods Boosted by Ensemble Learning, Syntactical and Entity Features |  | 0
ADMN: A Layer-Wise Adaptive Multimodal Network for Dynamic Input Noise and Compute Resources |  | 0
A Novel Site-Agnostic Multimodal Deep Learning Model to Identify Pro-Eating Disorder Content on Social Media |  | 0
Hybrid Attention based Multimodal Network for Spoken Language Classification |  | 0
How to select and use tools? : Active Perception of Target Objects Using Multimodal Deep Learning |  | 0
Page 4 of 9

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Two Branch Network (Text - Bert + Image - Nts-Net) | Accuracy | 96.81 |  | Unverified