SOTAVerified

Multimodal Deep Learning

Multimodal deep learning combines information from multiple modalities, such as text, images, audio, and video, to make more accurate and comprehensive predictions. It involves training deep neural networks on data that spans several types of information and using the trained network to make predictions from the combined input.

One of the key challenges in multimodal deep learning is how to combine information from multiple modalities effectively. This can be done with a variety of techniques, such as fusing the features extracted from each modality, or using attention mechanisms to weight each modality's contribution according to its importance for the task at hand.
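The two fusion strategies above can be sketched in a few lines. This is a minimal illustration, not a production implementation: the feature vectors and the attention scores are made-up placeholders standing in for the outputs of real text and image encoders and for learned parameters.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical per-modality feature vectors, standing in for the
# outputs of a text encoder and an image encoder.
text_feat = np.array([0.2, 0.8, 0.5])
image_feat = np.array([0.9, 0.1, 0.4])

# Feature fusion: concatenate the modality features into one
# joint representation for a downstream classifier.
fused_concat = np.concatenate([text_feat, image_feat])

# Attention-style fusion: scores (learned in practice, fixed here
# for illustration) are normalized into weights that scale each
# modality's contribution before summing.
scores = np.array([1.2, 0.3])  # e.g. text deemed more informative
weights = softmax(scores)
fused_attn = weights[0] * text_feat + weights[1] * image_feat
```

Concatenation preserves every feature but leaves the network to learn cross-modal interactions; the attention variant instead lets the model down-weight a modality that is noisy or uninformative for a given input.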

Multimodal deep learning has many applications, including image captioning, speech recognition, natural language processing, and autonomous vehicles. Combining modalities can improve the accuracy and robustness of models, enabling them to perform better in real-world scenarios where several types of information are present at once.

Papers

Showing 25 of 213 papers

Title | Status | Hype
Multimodal Deep Learning | Code | 1
Learning Semantic Relationship Among Instances for Image-Text Matching | Code | 1
Learning Multimodal Data Augmentation in Feature Space | Code | 1
Common Practices and Taxonomy in Deep Multi-view Fusion for Remote Sensing Applications | Code | 1
Medical Diagnosis with Large Scale Multimodal Transformers: Leveraging Diverse Data for More Accurate Diagnosis | Code | 1
aiMotive Dataset: A Multimodal Dataset for Robust Autonomous Driving with Long-Range Perception | Code | 1
Bayesian Prompt Learning for Image-Language Model Generalization | Code | 1
LASP: Text-to-Text Optimization for Language-Aware Soft Prompting of Vision & Language Models | Code | 1
TMSS: An End-to-End Transformer-based Multimodal Network for Segmentation and Survival Prediction | Code | 1
Multi-Modal Experience Inspired AI Creation | Code | 1
Multimodal Attention-based Deep Learning for Alzheimer's Disease Diagnosis | Code | 1
Contrastive Language-Image Pre-training for the Italian Language | Code | 1
Pan-Cancer Integrative Histology-Genomic Analysis via Interpretable Multimodal Deep Learning | Code | 1
Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis | Code | 1
Multi-modal Understanding and Generation for Medical Images and Text via Vision-Language Pre-Training | Code | 1
Supervised Video Summarization via Multiple Feature Sets with Parallel Attention | Code | 1
Distilling Audio-Visual Knowledge by Compositional Contrastive Learning | Code | 1
"Subverting the Jewtocracy": Online Antisemitism Detection Using Multimodal Deep Learning | Code | 1
MinkLoc++: Lidar and Monocular Image Fusion for Place Recognition | Code | 1
Deep Learning for Android Malware Defenses: a Systematic Literature Review | Code | 1
Piano Skills Assessment | Code | 1
Detecting Hate Speech in Memes Using Multimodal Deep Learning Approaches: Prize-winning solution to Hateful Memes Challenge | Code | 1
Image and Text fusion for UPMC Food-101 \ BERT and CNNs | Code | 1
Detecting Video Game Player Burnout with the Use of Sensor Data and Machine Learning | Code | 1
Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion | Code | 1
Page 2 of 9

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Two Branch Network (Text: BERT + Image: NTS-Net) | Accuracy | 96.81 | | Unverified