SOTAVerified

Multimodal Deep Learning

Multimodal deep learning combines information from multiple modalities, such as text, images, audio, and video, to make more accurate and comprehensive predictions. It involves training deep neural networks on data that spans several types of information and using the trained network to make predictions from the combined input.

One of the key challenges in multimodal deep learning is how to combine information from multiple modalities effectively. This can be done with a variety of techniques, such as fusing the features extracted from each modality, or using attention mechanisms to weight each modality's contribution according to its importance for the task at hand.
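As a minimal sketch of the two fusion strategies above, the snippet below contrasts simple feature concatenation with a softmax-weighted attention combination. All names here (the toy feature vectors and the `scorer` vector standing in for a learned attention parameter) are illustrative assumptions, not a specific published architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-modality feature vectors, as would come from pretrained encoders.
text_feat = rng.normal(size=16)
image_feat = rng.normal(size=16)
audio_feat = rng.normal(size=16)

def concat_fusion(feats):
    """Feature-level fusion: concatenate all modality features."""
    return np.concatenate(feats)

def attention_fusion(feats, scorer):
    """Weight each modality by a softmax-normalized importance score."""
    scores = np.array([scorer @ f for f in feats])       # one scalar per modality
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                              # weights sum to 1
    fused = sum(w * f for w, f in zip(weights, feats))    # weighted average
    return fused, weights

scorer = rng.normal(size=16)  # stand-in for a learned attention vector
fused_cat = concat_fusion([text_feat, image_feat, audio_feat])
fused_att, weights = attention_fusion([text_feat, image_feat, audio_feat], scorer)

print(fused_cat.shape)  # (48,) - dimensionality grows with modality count
print(fused_att.shape)  # (16,) - stays fixed regardless of modality count
```

Concatenation preserves everything but grows the input dimension with each added modality; attention-style weighting keeps a fixed size and lets the model down-weight uninformative modalities per example.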

Multimodal deep learning has many applications, including image captioning, speech recognition, natural language processing, and autonomous vehicles. By combining information from multiple modalities, multimodal deep learning can improve the accuracy and robustness of models, enabling them to perform better in real-world scenarios where multiple types of information are present.

Papers

Showing 51–100 of 213 papers

| Title | Status | Hype |
| --- | --- | --- |
| "Subverting the Jewtocracy": Online Antisemitism Detection Using Multimodal Deep Learning | Code | 1 |
| TMSS: An End-to-End Transformer-based Multimodal Network for Segmentation and Survival Prediction | Code | 1 |
| Supervised Video Summarization via Multiple Feature Sets with Parallel Attention | Code | 1 |
| Detecting Hate Speech in Memes Using Multimodal Deep Learning Approaches: Prize-winning solution to Hateful Memes Challenge | Code | 1 |
| Learning Semantic Relationship Among Instances for Image-Text Matching | Code | 1 |
| LASP: Text-to-Text Optimization for Language-Aware Soft Prompting of Vision & Language Models | Code | 1 |
| On the Adversarial Robustness of Multi-Modal Foundation Models | Code | 1 |
| Distilling Audio-Visual Knowledge by Compositional Contrastive Learning | Code | 1 |
| HEALNet: Multimodal Fusion for Heterogeneous Biomedical Data | Code | 1 |
| ViTextVQA: A Large-Scale Visual Question Answering Dataset for Evaluating Vietnamese Text Comprehension in Images | Code | 0 |
| Dynamic Task and Weight Prioritization Curriculum Learning for Multimodal Imagery | Code | 0 |
| Dual-Level Cross-Modal Contrastive Clustering | Code | 0 |
| Automatic Fused Multimodal Deep Learning for Plant Identification | Code | 0 |
| XFlow: Cross-modal Deep Neural Networks for Audiovisual Classification | Code | 0 |
| A multimodal deep learning framework for scalable content based visual media retrieval | Code | 0 |
| Uncertainty-Aware Hardware Trojan Detection Using Multimodal Deep Learning | Code | 0 |
| Towards Precision Healthcare: Robust Fusion of Time Series and Image Data | Code | 0 |
| Uncovering the Genetic Basis of Glioblastoma Heterogeneity through Multimodal Analysis of Whole Slide Images and RNA Sequencing Data | Code | 0 |
| Zorro: the masked multimodal transformer | Code | 0 |
| Advancing Drug Discovery with Enhanced Chemical Understanding via Asymmetric Contrastive Multimodal Learning | Code | 0 |
| ShapeWorld - A new test methodology for multimodal language understanding | Code | 0 |
| Robust Sensor Fusion Algorithms Against Voice Command Attacks in Autonomous Vehicles | Code | 0 |
| Restoring Ancient Ideograph: A Multimodal Multitask Neural Network Approach | Code | 0 |
| LAVIS: A Library for Language-Vision Intelligence | Code | 0 |
| Predicting the Leading Political Ideology of YouTube Channels Using Acoustic, Textual, and Metadata Information | Code | 0 |
| In the Search for Optimal Multi-view Learning Models for Crop Classification with Global Remote Sensing Data | Code | 0 |
| DeepGraviLens: a Multi-Modal Architecture for Classifying Gravitational Lensing Data | Code | 0 |
| MVX-ViT: Multimodal Collaborative Perception for 6G V2X Network Management Decisions Using Vision Transformer | Code | 0 |
| Multimodal Guidance Network for Missing-Modality Inference in Content Moderation | Code | 0 |
| Multimodal Learning for Hateful Memes Detection | Code | 0 |
| Multimodal Marvels of Deep Learning in Medical Diagnosis: A Comprehensive Review of COVID-19 Detection | Code | 0 |
| Cultural-Aware AI Model for Emotion Recognition | Code | 0 |
| Cross-Modal Attribute Insertions for Assessing the Robustness of Vision-and-Language Learning | Code | 0 |
| Multimodal Deep Learning for Subtype Classification in Breast Cancer Using Histopathological Images and Gene Expression Data | Code | 0 |
| Multimodal deep networks for text and image-based document classification | Code | 0 |
| HyMNet: a Multimodal Deep Learning System for Hypertension Classification using Fundus Photographs and Cardiometabolic Risk Factors | Code | 0 |
| Multimodal Deep Learning for Personalized Renal Cell Carcinoma Prognosis: Integrating CT Imaging and Clinical Data | Code | 0 |
| Multimodal Deep Learning for Robust RGB-D Object Recognition | Code | 0 |
| An Interpretable Adaptive Multiscale Attention Deep Neural Network for Tabular Data | Code | 0 |
| Gaze-Guided Learning: Avoiding Shortcut Bias in Visual Classification | Code | 0 |
| Frozen Large-scale Pretrained Vision-Language Models are the Effective Foundational Backbone for Multimodal Breast Cancer Prediction | Code | 0 |
| Modeling of spatially embedded networks via regional spatial graph convolutional networks | Code | 0 |
| Focus on Focus: Focus-oriented Representation Learning and Multi-view Cross-modal Alignment for Glioma Grading | Code | 0 |
| Feature importance to explain multimodal prediction models. A clinical use case | Code | 0 |
| A Multimodal PDE Foundation Model for Prediction and Scientific Text Descriptions | Code | 0 |
| Building Multimodal AI Chatbots | Code | 0 |
| Learn to Combine Modalities in Multimodal Deep Learning | Code | 0 |
| Multimodal Age and Gender Classification Using Ear and Profile Face Images | Code | 0 |
| EmoNets: Multimodal deep learning approaches for emotion recognition in video | | 0 |
| ECC Analyzer: Extract Trading Signal from Earnings Conference Calls using Large Language Model for Stock Performance Prediction | | 0 |
Page 2 of 5

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Two Branch Network (Text - BERT + Image - NTS-Net) | Accuracy | 96.81 | | Unverified |