SOTAVerified

Multimodal Deep Learning

Multimodal deep learning combines information from multiple modalities, such as text, images, audio, and video, to make more accurate and comprehensive predictions. It involves training deep neural networks on data that spans several types of information and using the trained network to make predictions from this combined input.
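As a minimal sketch of this idea, the snippet below combines pre-extracted text and image features into one joint representation and feeds it to a toy prediction head. The feature dimensions, random weights, and the 10-class head are illustrative assumptions, not taken from any specific model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-extracted features for one example
# (e.g. from a text encoder and an image encoder).
text_feat = rng.standard_normal(128)
image_feat = rng.standard_normal(256)

# Fuse by concatenation: the joint representation feeds a shared
# prediction head that is trained on the combined multimodal data.
joint = np.concatenate([text_feat, image_feat])   # shape (384,)

# Toy linear classifier over the joint features (10 classes, assumed).
W = rng.standard_normal((10, joint.size)) * 0.01
logits = W @ joint                                # shape (10,)
```

In a real system the two encoders and the head would be trained jointly end to end; here the weights are random purely to show the data flow.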

One of the key challenges in multimodal deep learning is how to effectively combine information from multiple modalities. This can be done using a variety of techniques, such as fusing the features extracted from each modality, or using attention mechanisms to weight the contribution of each modality based on its importance for the task at hand.
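The attention-based weighting mentioned above can be sketched as follows: each modality embedding is scored against a learned query vector, the scores are normalized with a softmax, and the fused representation is the weighted sum. The shared dimension, the query vector, and the three modalities are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
d = 64  # assumed shared embedding dimension

# Hypothetical modality embeddings, already projected to dimension d.
modalities = {
    "text": rng.standard_normal(d),
    "image": rng.standard_normal(d),
    "audio": rng.standard_normal(d),
}

# A (normally learned) query vector scores each modality's relevance;
# softmax turns the scores into attention weights that sum to 1.
query = rng.standard_normal(d)
names = list(modalities)
scores = np.array([modalities[n] @ query / np.sqrt(d) for n in names])
weights = softmax(scores)

# Fused representation: attention-weighted sum over modalities.
fused = sum(w * modalities[n] for n, w in zip(names, weights))
```

With learned weights, this lets the model downweight a noisy or missing modality instead of treating all inputs as equally informative, which is the motivation for attention-based fusion over plain concatenation.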

Multimodal deep learning has many applications, including image captioning, speech recognition, natural language processing, and autonomous vehicles. By combining information from multiple modalities, multimodal deep learning can improve the accuracy and robustness of models, enabling them to perform better in real-world scenarios where multiple types of information are present.

Papers

Showing 121–130 of 213 papers

| Title | Status | Hype |
| --- | --- | --- |
| TextAug: Test time Text Augmentation for Multimodal Person Re-identification | | 0 |
| Multimodal deep learning for mapping forest dominant height by fusing GEDI with earth observation data | | 0 |
| Advancing Drug Discovery with Enhanced Chemical Understanding via Asymmetric Contrastive Multimodal Learning | Code | 0 |
| Dynamic Task and Weight Prioritization Curriculum Learning for Multimodal Imagery | Code | 0 |
| MalFake: A Multimodal Fake News Identification for Malayalam using Recurrent Neural Networks and VGG-16 | | 0 |
| HyMNet: a Multimodal Deep Learning System for Hypertension Classification using Fundus Photographs and Cardiometabolic Risk Factors | Code | 0 |
| Multimodal Deep Learning for Scientific Imaging Interpretation | | 0 |
| A multimodal deep learning architecture for smoking detection with a small data approach | | 0 |
| Multimodal Guidance Network for Missing-Modality Inference in Content Moderation | Code | 0 |
| ARC-NLP at Multimodal Hate Speech Event Detection 2023: Multimodal Methods Boosted by Ensemble Learning, Syntactical and Entity Features | | 0 |
Page 13 of 22

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Two Branch Network (Text: BERT + Image: NTS-Net) | Accuracy | 96.81 | | Unverified |