SOTAVerified

Multimodal Deep Learning

Multimodal deep learning is a branch of deep learning that combines information from multiple modalities, such as text, images, audio, and video, to make more accurate and comprehensive predictions. It involves training deep neural networks on data that spans several types of information and using the trained network to make predictions from this combined input.

A key challenge in multimodal deep learning is combining information from multiple modalities effectively. Common approaches include fusing the features extracted from each modality (early or late fusion) and using attention mechanisms to weight each modality's contribution according to its importance for the task at hand.
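The two fusion strategies above can be sketched in a few lines. This is an illustrative NumPy example, not code from any of the listed papers; the function names and the fixed relevance scores are hypothetical, and a real model would learn the attention scores from the inputs.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def concat_fusion(text_feat, image_feat):
    # Early fusion: concatenate per-modality feature vectors into one vector
    # that a downstream classifier consumes.
    return np.concatenate([text_feat, image_feat])

def attention_fusion(feats, scores):
    # Attention-style fusion: softmax the per-modality relevance scores into
    # weights, then form a weighted sum of same-dimensional modality features.
    weights = softmax(scores)
    return sum(w * f for w, f in zip(weights, feats))

# Stand-ins for encoder outputs (e.g. a text encoder and an image encoder).
text_feat = np.random.default_rng(0).normal(size=128)
image_feat = np.random.default_rng(1).normal(size=128)

fused_cat = concat_fusion(text_feat, image_feat)                              # shape (256,)
fused_att = attention_fusion([text_feat, image_feat], np.array([2.0, 0.5]))   # shape (128,)
```

Concatenation preserves all per-modality information but grows the input dimension; attention-weighted fusion keeps the dimension fixed and lets the model down-weight a modality that is uninformative for the current example.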

Multimodal deep learning has many applications, including image captioning, speech recognition, natural language processing, and autonomous driving. By combining information from multiple modalities, models gain accuracy and robustness, performing better in real-world scenarios where several types of information are present at once.

Papers

Showing 51–75 of 213 papers

Title | Status | Hype
TMSS: An End-to-End Transformer-based Multimodal Network for Segmentation and Survival Prediction | Code | 1
Towards Balanced Active Learning for Multimodal Classification | Code | 1
HoneyBee: A Scalable Modular Framework for Creating Multimodal Oncology Datasets with Foundational Embedding Models | Code | 1
Detecting Hate Speech in Memes Using Multimodal Deep Learning Approaches: Prize-winning solution to Hateful Memes Challenge | Code | 1
Enhancing Scene Graph Generation with Hierarchical Relationships and Commonsense Knowledge | Code | 1
Jointly Fine-Tuning "BERT-like" Self Supervised Models to Improve Multimodal Speech Emotion Recognition | Code | 1
Learning Multimodal Data Augmentation in Feature Space | Code | 1
LUMA: A Benchmark Dataset for Learning from Uncertain and Multimodal Data | Code | 1
More Diverse Means Better: Multimodal Deep Learning Meets Remote Sensing Imagery Classification | Code | 1
EmoNets: Multimodal deep learning approaches for emotion recognition in video | — | 0
ECC Analyzer: Extract Trading Signal from Earnings Conference Calls using Large Language Model for Stock Performance Prediction | — | 0
A C++ library for Multimodal Deep Learning | — | 0
A Multimodal Deep Learning Model for Cardiac Resynchronisation Therapy Response Prediction | — | 0
Improved Multimodal Deep Learning with Variation of Information | — | 0
Digital Taxonomist: Identifying Plant Species in Community Scientists' Photographs | — | 0
Audio-Visual Approach For Multimodal Concurrent Speaker Detection | — | 0
Detection of Propaganda Techniques in Visuo-Lingual Metaphor in Memes | — | 0
A graph-based multimodal framework to predict gentrification | — | 0
Improving Neonatal Care: An Active Dry-Contact Electrode-based Continuous EEG Monitoring System with Seizure Detection | — | 0
Detecting Deception in Political Debates Using Acoustic and Textual Features | — | 0
A multimodal deep learning architecture for smoking detection with a small data approach | — | 0
Describe-and-Dissect: Interpreting Neurons in Vision Networks with Language Models | — | 0
DeepStroke: An Efficient Stroke Screening Framework for Emergency Rooms with Multimodal Adversarial Deep Learning | — | 0
A survey on knowledge-enhanced multimodal learning | — | 0
Advanced Multimodal Deep Learning Architecture for Image-Text Matching | — | 0
Page 3 of 9

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Two Branch Network (Text: BERT + Image: NTS-Net) | Accuracy | 96.81 | — | Unverified