SOTAVerified

Multimodal Deep Learning

Multimodal deep learning combines information from multiple modalities, such as text, images, audio, and video, to make more accurate and comprehensive predictions. Deep neural networks are trained on data containing several types of information and learn to make predictions from the combined input.

A key challenge in multimodal deep learning is how to effectively combine information from the different modalities. Common techniques include fusing the features extracted from each modality (for example, by concatenation) and using attention mechanisms to weight each modality's contribution according to its importance for the task at hand.
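The two techniques above can be sketched in a few lines. This is a minimal illustration, not any particular paper's method: the modality feature vectors are random placeholders standing in for encoder outputs, and the attention score here is a single learned projection vector (a deliberately simplified stand-in for a full attention module).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(features, score_weights):
    """Fuse per-modality feature vectors via attention-style importance weights."""
    scores = np.array([f @ score_weights for f in features])  # one scalar score per modality
    alphas = softmax(scores)                                  # importance weights, sum to 1
    fused = sum(a * f for a, f in zip(alphas, features))      # weighted sum in the shared space
    return fused, alphas

# Hypothetical encoder outputs for three modalities, projected into a shared 8-dim space
text_feat  = rng.standard_normal(8)
image_feat = rng.standard_normal(8)
audio_feat = rng.standard_normal(8)

# Technique 1: feature fusion by concatenation -> one long joint vector, shape (24,)
concat = np.concatenate([text_feat, image_feat, audio_feat])

# Technique 2: attention weighting; score_weights would normally be learned end to end
w = rng.standard_normal(8)
fused, alphas = attention_fuse([text_feat, image_feat, audio_feat], w)
```

Concatenation preserves everything but grows the input dimension with each added modality; attention fusion keeps a fixed output size and exposes interpretable per-modality weights (`alphas`), which is why it is often preferred when modality importance varies per example.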

Multimodal deep learning has many applications, including image captioning, speech recognition, natural language processing, and autonomous vehicles. Combining modalities improves the accuracy and robustness of models, enabling them to perform better in real-world scenarios where several types of information are present at once.

Papers

Showing 51–100 of 213 papers

Title | Status | Hype
In the Search for Optimal Multi-view Learning Models for Crop Classification with Global Remote Sensing Data | Code | 0
Describe-and-Dissect: Interpreting Neurons in Vision Networks with Language Models | | 0
Integrating Wearable Sensor Data and Self-reported Diaries for Personalized Affect Forecasting | | 0
MoPE: Mixture of Prompt Experts for Parameter-Efficient and Scalable Multimodal Fusion | Code | 1
A Multimodal Intermediate Fusion Network with Manifold Learning for Stress Detection | | 0
Restoring Ancient Ideograph: A Multimodal Multitask Neural Network Approach | Code | 0
Multimodal deep learning approach to predicting neurological recovery from coma after cardiac arrest | | 0
DeepSeek-VL: Towards Real-World Vision-Language Understanding | Code | 7
Cultural-Aware AI Model for Emotion Recognition | Code | 0
Multimodal Learning To Improve Cardiac Late Mechanical Activation Detection From Cine MR Images | | 0
Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models | Code | 2
Multimodal-Enhanced Objectness Learner for Corner Case Detection in Autonomous Driving | Code | 1
InstructIR: High-Quality Image Restoration Following Human Instructions | Code | 4
Multimodal Deep Learning of Word-of-Mouth Text and Demographics to Predict Customer Rating: Handling Consumer Heterogeneity in Marketing | | 0
Uncertainty-Aware Hardware Trojan Detection Using Multimodal Deep Learning | Code | 0
Multimodal Urban Areas of Interest Generation via Remote Sensing Imagery and Geographical Prior | | 0
Predicting the Skies: A Novel Model for Flight-Level Passenger Traffic Forecasting | | 0
Multimodal self-supervised learning for lesion localization | | 0
Linguistic-Aware Patch Slimming Framework for Fine-grained Cross-Modal Alignment | Code | 2
Integrating Chemical Language and Molecular Graph in Multimodal Fused Deep Learning for Drug Property Prediction | | 0
A graph-based multimodal framework to predict gentrification | | 0
SynthScribe: Deep Multimodal Tools for Synthesizer Sound Retrieval and Exploration | | 0
TextAug: Test time Text Augmentation for Multimodal Person Re-identification | | 0
Enhancing Scene Graph Generation with Hierarchical Relationships and Commonsense Knowledge | Code | 1
Multimodal deep learning for mapping forest dominant height by fusing GEDI with earth observation data | | 0
HEALNet: Multimodal Fusion for Heterogeneous Biomedical Data | Code | 1
Advancing Drug Discovery with Enhanced Chemical Understanding via Asymmetric Contrastive Multimodal Learning | Code | 0
Dynamic Task and Weight Prioritization Curriculum Learning for Multimodal Imagery | Code | 0
MalFake: A Multimodal Fake News Identification for Malayalam using Recurrent Neural Networks and VGG-16 | | 0
LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment | Code | 4
HyMNet: a Multimodal Deep Learning System for Hypertension Classification using Fundus Photographs and Cardiometabolic Risk Factors | Code | 0
Multimodal Deep Learning for Scientific Imaging Interpretation | | 0
A multimodal deep learning architecture for smoking detection with a small data approach | | 0
Formalizing Multimedia Recommendation through Multimodal Deep Learning | Code | 1
Multimodal Guidance Network for Missing-Modality Inference in Content Moderation | Code | 0
Multimodal Foundation Models For Echocardiogram Interpretation | Code | 1
On the Adversarial Robustness of Multi-Modal Foundation Models | Code | 1
PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization | Code | 1
ARC-NLP at Multimodal Hate Speech Event Detection 2023: Multimodal Methods Boosted by Ensemble Learning, Syntactical and Entity Features | | 0
A scoping review on multimodal deep learning in biomedical images and texts | | 0
Multimodal Deep Learning for Personalized Renal Cell Carcinoma Prognosis: Integrating CT Imaging and Clinical Data | Code | 0
A Novel Site-Agnostic Multimodal Deep Learning Model to Identify Pro-Eating Disorder Content on Social Media | | 0
MultiZoo & MultiBench: A Standardized Toolkit for Multimodal Deep Learning | Code | 2
Cross-Modal Attribute Insertions for Assessing the Robustness of Vision-and-Language Learning | Code | 0
Towards Balanced Active Learning for Multimodal Classification | Code | 1
ImageBind: One Embedding Space To Bind Them All | Code | 5
Multimodal Neural Databases | Code | 1
Performance Optimization using Multimodal Modeling and Heterogeneous GNN | | 0
Building Multimodal AI Chatbots | Code | 0
Towards Unified AI Drug Discovery with Multiple Knowledge Modalities | | 0
Page 2 of 5

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Two Branch Network (Text: BERT + Image: NTS-Net) | Accuracy | 96.81 | | Unverified