SOTAVerified

Multimodal Deep Learning

Multimodal deep learning combines information from multiple modalities, such as text, images, audio, and video, to make more accurate and comprehensive predictions. It involves training deep neural networks on data that spans several input types and using the combined representation to make predictions.

A key challenge in multimodal deep learning is how to combine information from multiple modalities effectively. Common techniques include fusing the features extracted from each modality (for example, by concatenation) and using attention mechanisms to weight each modality's contribution according to its importance for the task at hand.
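The attention-weighted fusion described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a production model: the feature vectors and relevance scores are hypothetical stand-ins for what upstream encoders (e.g. a text model and an image model) would produce, and the weights here would normally be learned.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fusion(features, scores):
    """Fuse per-modality feature vectors (all the same dimension)
    by a softmax-weighted sum over modality relevance scores.

    Returns the fused vector and the attention weights."""
    weights = softmax(np.asarray(scores, dtype=float))
    stacked = np.stack(features)          # shape: (n_modalities, dim)
    fused = weights @ stacked             # shape: (dim,)
    return fused, weights

# Hypothetical pre-extracted embeddings for two modalities.
text_feat = np.array([0.2, 0.9, 0.4])
image_feat = np.array([0.7, 0.1, 0.5])

# Higher score for text -> text contributes more to the fused vector.
fused, weights = attention_fusion([text_feat, image_feat], scores=[2.0, 1.0])
```

Simple feature fusion by concatenation would instead be `np.concatenate([text_feat, image_feat])`, which preserves all features but fixes each modality's contribution; the attention variant lets the model down-weight a modality that is noisy or uninformative for a given input.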

Multimodal deep learning has many applications, including image captioning, speech recognition, natural language processing, and autonomous vehicles. By combining information from multiple modalities, multimodal deep learning can improve the accuracy and robustness of models, enabling them to perform better in real-world scenarios where multiple types of information are present.

Papers

Showing 151–175 of 213 papers

Multimodal Fusion of Glucose Monitoring and Food Imagery for Caloric Content Prediction
Multimodal Intelligence: Representation Learning, Information Fusion, and Applications
Toxicity Prediction by Multimodal Deep Learning
Multimodal Urban Areas of Interest Generation via Remote Sensing Imagery and Geographical Prior
A Systematic Review of Intermediate Fusion in Multimodal Deep Learning for Biomedical Applications
Multimodal Learning To Improve Cardiac Late Mechanical Activation Detection From Cine MR Images
Where and When: Space-Time Attention for Audio-Visual Explanations
A Novel Site-Agnostic Multimodal Deep Learning Model to Identify Pro-Eating Disorder Content on Social Media
Multimodal Prescriptive Deep Learning
Multimodal self-supervised learning for lesion localization
A Multimodal Intermediate Fusion Network with Manifold Learning for Stress Detection
Multi-objective optimization determines when, which and how to fuse deep networks: an application to predict COVID-19 outcomes
A Multimodal Deep Learning Model for Cardiac Resynchronisation Therapy Response Prediction
Unified Cross-Modal Attention-Mixer Based Structural-Functional Connectomics Fusion for Neuropsychiatric Disorder Diagnosis
New Ideas and Trends in Deep Multimodal Content Understanding: A Review
NewsNet-SDF: Stochastic Discount Factor Estimation with Pretrained Language Model News Embeddings via Adversarial Networks
A multimodal deep learning architecture for smoking detection with a small data approach
A Multimodal Deep Learning Approach for White Matter Shape Prediction in Diffusion MRI Tractography
Performance Optimization using Multimodal Modeling and Heterogeneous GNN
A multimodal deep learning approach for named entity recognition from social media
AJILE Movement Prediction: Multimodal Deep Learning for Natural Human Neural Recordings and Video
Predicting Online Video Advertising Effects with Multimodal Deep Learning
Using Neural Architecture Search for Improving Software Flaw Detection in Multimodal Deep Learning Models
Predicting the Skies: A Novel Model for Flight-Level Passenger Traffic Forecasting
Language-Assisted Deep Learning for Autistic Behaviors Recognition
Page 7 of 9

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Two Branch Network (Text: BERT + Image: NTS-Net) | Accuracy | 96.81 | | Unverified