SOTAVerified

Multimodal Deep Learning

Multimodal deep learning is a branch of deep learning that combines information from multiple modalities, such as text, images, audio, and video, to make more accurate and comprehensive predictions. It involves training deep neural networks on data that spans several types of information and using the trained network to make predictions from this combined input.

One of the key challenges in multimodal deep learning is how to effectively combine information from multiple modalities. A variety of techniques exist, such as fusing the features extracted from each modality, or using attention mechanisms to weight each modality's contribution according to its importance for the task at hand.
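The two fusion strategies mentioned above can be sketched in a few lines of plain Python. This is a minimal illustration, not any specific paper's method: `concat_fusion` stands in for simple feature concatenation, and `attention_fusion` weights each modality's feature vector by a softmax over (hypothetical) relevance scores that a real model would learn.

```python
import math

def softmax(scores):
    """Normalize raw relevance scores into attention weights that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def concat_fusion(text_feat, image_feat):
    """Early fusion: concatenate the per-modality feature vectors."""
    return text_feat + image_feat

def attention_fusion(modality_feats, scores):
    """Attention-style fusion: take a weighted sum of equal-length
    modality feature vectors, so more relevant modalities dominate."""
    weights = softmax(scores)
    dim = len(modality_feats[0])
    fused = [0.0] * dim
    for w, feat in zip(weights, modality_feats):
        for i, v in enumerate(feat):
            fused[i] += w * v
    return fused

# Toy 3-dimensional features; real models would extract these with
# modality-specific encoders (e.g. a text model and an image model).
text_feat = [0.2, 0.8, 0.1]
image_feat = [0.9, 0.4, 0.3]

print(concat_fusion(text_feat, image_feat))                      # 6-dim joint vector
print(attention_fusion([text_feat, image_feat], [2.0, 0.5]))     # 3-dim weighted vector
```

In practice the fused vector would feed a downstream prediction head, and the attention scores would be produced by a learned sub-network rather than fixed by hand.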

Multimodal deep learning has many applications, including image captioning, speech recognition, natural language processing, and autonomous vehicles. By combining information from multiple modalities, multimodal deep learning can improve the accuracy and robustness of models, enabling them to perform better in real-world scenarios where multiple types of information are present.

Papers

Showing 51–75 of 213 papers

Title | Status | Hype
HEALNet: Multimodal Fusion for Heterogeneous Biomedical Data | Code | 1
Multimodal Deep Learning | Code | 1
HoneyBee: A Scalable Modular Framework for Creating Multimodal Oncology Datasets with Foundational Embedding Models | Code | 1
MinkLoc++: Lidar and Monocular Image Fusion for Place Recognition | Code | 1
Pan-Cancer Integrative Histology-Genomic Analysis via Interpretable Multimodal Deep Learning | Code | 1
Audio-Conditioned U-Net for Position Estimation in Full Sheet Images | Code | 1
Multimodal-Enhanced Objectness Learner for Corner Case Detection in Autonomous Driving | Code | 1
Distilling Audio-Visual Knowledge by Compositional Contrastive Learning | Code | 1
Jointly Fine-Tuning "BERT-like" Self Supervised Models to Improve Multimodal Speech Emotion Recognition | Code | 1
Multimodal Guidance Network for Missing-Modality Inference in Content Moderation | Code | 0
Dynamic Task and Weight Prioritization Curriculum Learning for Multimodal Imagery | Code | 0
Dual-Level Cross-Modal Contrastive Clustering | Code | 0
Automatic Fused Multimodal Deep Learning for Plant Identification | Code | 0
Multimodal Learning for Hateful Memes Detection | Code | 0
A Multimodal Deep Learning Framework for Scalable Content Based Visual Media Retrieval | Code | 0
Multimodal Marvels of Deep Learning in Medical Diagnosis: A Comprehensive Review of COVID-19 Detection | Code | 0
Advancing Drug Discovery with Enhanced Chemical Understanding via Asymmetric Contrastive Multimodal Learning | Code | 0
Multimodal Deep Learning for Robust RGB-D Object Recognition | Code | 0
Multimodal Deep Learning for Personalized Renal Cell Carcinoma Prognosis: Integrating CT Imaging and Clinical Data | Code | 0
Multimodal Deep Learning for Subtype Classification in Breast Cancer Using Histopathological Images and Gene Expression Data | Code | 0
Modeling of Spatially Embedded Networks via Regional Spatial Graph Convolutional Networks | Code | 0
Multimodal Age and Gender Classification Using Ear and Profile Face Images | Code | 0
Multimodal Deep Networks for Text and Image-Based Document Classification | Code | 0
DeepGraviLens: a Multi-Modal Architecture for Classifying Gravitational Lensing Data | Code | 0
Learn to Combine Modalities in Multimodal Deep Learning | Code | 0
Page 3 of 9

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Two Branch Network (Text - Bert + Image - Nts-Net) | Accuracy | 96.81 | | Unverified