SOTAVerified

Multimodal Deep Learning

Multimodal deep learning is a branch of deep learning that combines information from multiple modalities, such as text, images, audio, and video, to make more accurate and comprehensive predictions. Deep neural networks are trained on data that spans several types of input, and the trained network makes predictions from this combined representation.

One of the key challenges in multimodal deep learning is how to effectively combine information from multiple modalities. Common techniques include fusing the features extracted from each modality (early or late fusion) and using attention mechanisms to weight each modality's contribution according to its importance for the task at hand.
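The two fusion strategies above can be sketched in a few lines. This is a minimal illustration, not a production model: the feature vectors and the scoring vector `w` are random stand-ins for what would, in practice, be produced and learned by modality-specific encoders and end-to-end training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature vectors, e.g. from a text encoder
# and an image encoder, projected to a shared dimension of 8.
text_feat = rng.standard_normal(8)
image_feat = rng.standard_normal(8)

def fuse_concat(feats):
    """Feature fusion: concatenate modality features into one vector."""
    return np.concatenate(feats)

def fuse_attention(feats, w):
    """Attention fusion: weight each modality by a softmax-normalized score.

    `w` is a scoring vector; here it is random for illustration, but in a
    real model it is learned jointly with the rest of the network.
    """
    scores = np.array([f @ w for f in feats])
    alphas = np.exp(scores) / np.exp(scores).sum()  # weights sum to 1
    return sum(a * f for a, f in zip(alphas, feats))

w = rng.standard_normal(8)
fused_cat = fuse_concat([text_feat, image_feat])       # shape (16,)
fused_att = fuse_attention([text_feat, image_feat], w)  # shape (8,)
print(fused_cat.shape, fused_att.shape)
```

Concatenation preserves every feature but grows the input size with each added modality; attention fusion keeps a fixed dimensionality and lets the model down-weight a modality that is noisy or missing for a given example.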

Multimodal deep learning has many applications, including image captioning, speech recognition, natural language processing, and autonomous vehicles. Combining modalities improves the accuracy and robustness of models, enabling them to perform better in real-world scenarios where multiple types of information are present.

Papers

Showing 201–213 of 213 papers

Title | Status | Hype
Emotion Based Hate Speech Detection using Multimodal Learning | | 0
EmoNets: Multimodal deep learning approaches for emotion recognition in video | | 0
Improved Multimodal Deep Learning with Variation of Information | | 0
Improving Neonatal Care: An Active Dry-Contact Electrode-based Continuous EEG Monitoring System with Seizure Detection | | 0
Innovative Framework for Early Estimation of Mental Disorder Scores to Enable Timely Interventions | | 0
Integrating Chemical Language and Molecular Graph in Multimodal Fused Deep Learning for Drug Property Prediction | | 0
Integrating Vision and Location with Transformers: A Multimodal Deep Learning Framework for Medical Wound Analysis | | 0
Integrating Wearable Sensor Data and Self-reported Diaries for Personalized Affect Forecasting | | 0
TabulaTime: A Novel Multimodal Deep Learning Framework for Advancing Acute Coronary Syndrome Prediction through Environmental and Clinical Data Integration | | 0
ECC Analyzer: Extract Trading Signal from Earnings Conference Calls using Large Language Model for Stock Performance Prediction | | 0
Digital Taxonomist: Identifying Plant Species in Community Scientists' Photographs | | 0
Detection of Propaganda Techniques in Visuo-Lingual Metaphor in Memes | | 0
Temporal Multimodal Learning in Audiovisual Speech Recognition | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Two Branch Network (Text: BERT + Image: NTS-Net) | Accuracy | 96.81 | | Unverified