
Multimodal Deep Learning

Multimodal deep learning is a branch of deep learning that combines information from multiple modalities, such as text, images, audio, and video, to produce more accurate and comprehensive predictions. It involves training deep neural networks on data that spans several input types and using the trained network to make predictions from the combined signal.
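As a concrete illustration, the sketch below shows a minimal two-branch PyTorch model that learns from two modalities at once. It assumes pre-extracted feature vectors (for example, 768-d text embeddings and 2048-d image features); all dimensions, names, and the class count are illustrative assumptions, not a specific published architecture.

```python
import torch
import torch.nn as nn

class TwoBranchClassifier(nn.Module):
    # One encoder per modality; a shared head classifies the
    # concatenation of the two encoded representations.
    def __init__(self, text_dim=768, image_dim=2048, hidden_dim=256, num_classes=10):
        super().__init__()
        self.text_encoder = nn.Sequential(nn.Linear(text_dim, hidden_dim), nn.ReLU())
        self.image_encoder = nn.Sequential(nn.Linear(image_dim, hidden_dim), nn.ReLU())
        self.head = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, text_feats, image_feats):
        t = self.text_encoder(text_feats)    # (batch, hidden_dim)
        v = self.image_encoder(image_feats)  # (batch, hidden_dim)
        return self.head(torch.cat([t, v], dim=-1))

# Random tensors stand in for real pre-extracted features.
model = TwoBranchClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 2048))
print(logits.shape)  # torch.Size([4, 10])
```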

A key challenge in multimodal deep learning is how to combine information from multiple modalities effectively. This can be done with a variety of techniques, such as fusing the features extracted from each modality into a joint representation, or using attention mechanisms to weight each modality's contribution according to its importance for the task at hand.
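The concatenation in the sketch above is the simplest fusion strategy. For the attention-based alternative, here is a minimal sketch that assumes each modality has already been projected into a common 256-d feature space; the module and its dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    # Scores each modality's feature vector, softmax-normalizes the
    # scores across modalities, and returns the weighted sum.
    def __init__(self, dim=256):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, feats):
        # feats: (batch, num_modalities, dim)
        weights = torch.softmax(self.score(feats), dim=1)  # (batch, M, 1)
        return (weights * feats).sum(dim=1)                # (batch, dim)

# Fuse three modalities for a batch of 4 examples.
fusion = AttentionFusion(dim=256)
feats = torch.stack([torch.randn(4, 256) for _ in range(3)], dim=1)
print(fusion(feats).shape)  # torch.Size([4, 256])
```

Because the weights are computed per example, such a module can down-weight a noisy or missing modality at inference time rather than relying on a fixed mixing ratio.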

Multimodal deep learning has many applications, including image captioning, speech recognition, natural language processing, and autonomous driving. By combining information from multiple modalities, these models gain accuracy and robustness, which helps them perform better in real-world scenarios where several types of information are present at once.

Papers

Showing 1–25 of 213 papers

Title | Status | Hype
DeepSeek-VL: Towards Real-World Vision-Language Understanding | Code | 7
ImageBind: One Embedding Space To Bind Them All | Code | 5
LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention | Code | 5
InstructIR: High-Quality Image Restoration Following Human Instructions | Code | 4
LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment | Code | 4
PHemoNet: A Multimodal Network for Physiological Signals | Code | 2
Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models | Code | 2
Linguistic-Aware Patch Slimming Framework for Fine-grained Cross-Modal Alignment | Code | 2
MultiZoo & MultiBench: A Standardized Toolkit for Multimodal Deep Learning | Code | 2
Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering | Code | 2
Learning Multi-View Aggregation In the Wild for Large-Scale 3D Semantic Segmentation | Code | 2
CLASP: Contrastive Language-Speech Pretraining for Multilingual Multimodal Information Retrieval | Code | 1
CardioLab: Laboratory Values Estimation and Monitoring from Electrocardiogram Signals -- A Multimodal Deep Learning Approach | Code | 1
LUMA: A Benchmark Dataset for Learning from Uncertain and Multimodal Data | Code | 1
HoneyBee: A Scalable Modular Framework for Creating Multimodal Oncology Datasets with Foundational Embedding Models | Code | 1
MoPE: Mixture of Prompt Experts for Parameter-Efficient and Scalable Multimodal Fusion | Code | 1
Multimodal-Enhanced Objectness Learner for Corner Case Detection in Autonomous Driving | Code | 1
Enhancing Scene Graph Generation with Hierarchical Relationships and Commonsense Knowledge | Code | 1
HEALNet: Multimodal Fusion for Heterogeneous Biomedical Data | Code | 1
Formalizing Multimedia Recommendation through Multimodal Deep Learning | Code | 1
Multimodal Foundation Models For Echocardiogram Interpretation | Code | 1
On the Adversarial Robustness of Multi-Modal Foundation Models | Code | 1
PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization | Code | 1
Towards Balanced Active Learning for Multimodal Classification | Code | 1
Multimodal Neural Databases | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Two Branch Network (Text: BERT + Image: NTS-Net) | Accuracy | 96.81 | – | Unverified