SOTAVerified

Multimodal Deep Learning

Multimodal deep learning is a branch of deep learning that combines information from multiple modalities, such as text, images, audio, and video, to make more accurate and comprehensive predictions. It involves training deep neural networks on data that spans several types of information and using the trained network to make predictions from this combined input.

One of the key challenges in multimodal deep learning is how to effectively combine information from multiple modalities. This can be done using a variety of techniques, such as fusing the features extracted from each modality, or using attention mechanisms to weight the contribution of each modality based on its importance for the task at hand.
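The two fusion strategies mentioned above can be illustrated with a minimal sketch. This is an assumption-laden toy example using NumPy: the feature vectors are made up, and in a real system the relevance scores fed to the attention weighting would come from a learned sub-network rather than being fixed constants.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax for converting raw scores into weights.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def concat_fusion(text_feat, image_feat):
    # Feature-level fusion: concatenate the per-modality feature vectors
    # and let downstream layers learn cross-modal interactions.
    return np.concatenate([text_feat, image_feat])

def attention_fusion(feats, scores):
    # Attention-style fusion: weight each modality's features by a softmax
    # over relevance scores, then sum, so that modalities judged more
    # important for the task contribute more to the fused representation.
    weights = softmax(scores)
    fused = sum(w * f for w, f in zip(weights, feats))
    return fused, weights

# Toy 4-dimensional features for two modalities (hypothetical values).
text_feat = np.array([1.0, 0.0, 0.5, 0.2])
image_feat = np.array([0.3, 0.8, 0.1, 0.9])

fused_concat = concat_fusion(text_feat, image_feat)   # shape (8,)
fused_attn, weights = attention_fusion(
    [text_feat, image_feat], scores=np.array([2.0, 1.0])
)                                                     # shape (4,)
```

Concatenation preserves every feature but grows the input dimension with each added modality; attention-weighted fusion keeps the dimension fixed and adapts the mixing weights per example.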

Multimodal deep learning has many applications, including image captioning, speech recognition, natural language processing, and autonomous vehicles. By combining information from multiple modalities, multimodal deep learning can improve the accuracy and robustness of models, enabling them to perform better in real-world scenarios where multiple types of information are present.

Papers

Showing 201–213 of 213 papers

Title | Status | Hype
Uncertainty-Aware Hardware Trojan Detection Using Multimodal Deep Learning | Code | 0
LAVIS: A Library for Language-Vision Intelligence | Code | 0
In the Search for Optimal Multi-view Learning Models for Crop Classification with Global Remote Sensing Data | Code | 0
HyMNet: a Multimodal Deep Learning System for Hypertension Classification using Fundus Photographs and Cardiometabolic Risk Factors | Code | 0
Gaze-Guided Learning: Avoiding Shortcut Bias in Visual Classification | Code | 0
Uncovering the Genetic Basis of Glioblastoma Heterogeneity through Multimodal Analysis of Whole Slide Images and RNA Sequencing Data | Code | 0
A multimodal deep learning framework for scalable content based visual media retrieval | Code | 0
Multimodal Guidance Network for Missing-Modality Inference in Content Moderation | Code | 0
XFlow: Cross-modal Deep Neural Networks for Audiovisual Classification | Code | 0
Multimodal Learning for Hateful Memes Detection | Code | 0
ViTextVQA: A Large-Scale Visual Question Answering Dataset for Evaluating Vietnamese Text Comprehension in Images | Code | 0
Multimodal Marvels of Deep Learning in Medical Diagnosis: A Comprehensive Review of COVID-19 Detection | Code | 0
Frozen Large-scale Pretrained Vision-Language Models are the Effective Foundational Backbone for Multimodal Breast Cancer Prediction | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Two Branch Network (Text - BERT + Image - NTS-Net) | Accuracy | 96.81 | | Unverified