SOTAVerified

Multimodal Deep Learning

Multimodal deep learning combines information from multiple modalities, such as text, images, audio, and video, to make more accurate and comprehensive predictions. Deep neural networks are trained on data that spans several types of information and learn to predict from the combined signal.
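The idea above can be sketched in a few lines. The following is a minimal NumPy toy, not a real model: random vectors stand in for the outputs of a text branch and an image branch, and a shared classifier head consumes their concatenation. All names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy embeddings standing in for per-modality encoder outputs
# (e.g. a text encoder and an image encoder); sizes are illustrative.
text_emb = rng.standard_normal(128)    # output of a text branch
image_emb = rng.standard_normal(256)   # output of an image branch

# Feature-level fusion: concatenate the per-modality features,
# then feed the joint vector to a shared prediction head.
joint = np.concatenate([text_emb, image_emb])      # shape (384,)

W = rng.standard_normal((10, joint.size)) * 0.01   # toy classifier weights
logits = W @ joint
probs = np.exp(logits - logits.max())
probs /= probs.sum()                               # softmax over 10 classes
print(probs.shape)  # (10,)
```

In practice each branch would be a trained encoder (e.g. a CNN for images, a transformer for text) and the head would be learned end to end; the sketch only shows where the modalities meet.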

One of the key challenges in multimodal deep learning is how to effectively combine information from multiple modalities. This can be done with a variety of techniques, such as fusing the features extracted from each modality or using attention mechanisms to weight each modality's contribution according to its importance for the task at hand.
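The attention-based weighting mentioned above can be illustrated with a small NumPy sketch. Everything here is a hypothetical toy, assuming three modality embeddings already projected to a common dimension: a query vector scores each modality, a softmax turns the scores into weights, and the fused representation is the weighted sum.

```python
import numpy as np

rng = np.random.default_rng(1)

# Per-modality embeddings projected to a common dimension d
# (names and sizes are illustrative, not from any specific paper).
d = 64
modalities = {
    "text": rng.standard_normal(d),
    "image": rng.standard_normal(d),
    "audio": rng.standard_normal(d),
}

# A learned query vector scores each modality's relevance to the task;
# softmax turns the scores into attention weights that sum to 1.
query = rng.standard_normal(d)
scores = np.array([emb @ query for emb in modalities.values()]) / np.sqrt(d)
weights = np.exp(scores - scores.max())
weights /= weights.sum()

# The fused representation is the attention-weighted sum of the modalities.
fused = sum(w * emb for w, emb in zip(weights, modalities.values()))
print({name: round(float(w), 3) for name, w in zip(modalities, weights)})
```

A trained model would learn the query (and per-modality projections) jointly with the rest of the network, so the weights adapt to how informative each modality is for a given input.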

Multimodal deep learning has many applications, including image captioning, speech recognition, natural language processing, and autonomous driving. By combining information from multiple modalities, models gain accuracy and robustness, performing better in real-world scenarios where several types of information are present.

Papers

Showing 151–175 of 213 papers

Title | Status | Hype
The Influence of Audio on Video Memorability with an Audio Gestalt Regulated Video Memorability System |  | 0
Supervised Video Summarization via Multiple Feature Sets with Parallel Attention | Code | 1
Distilling Audio-Visual Knowledge by Compositional Contrastive Learning | Code | 1
Robust Sensor Fusion Algorithms Against Voice Command Attacks in Autonomous Vehicles | Code | 0
"Subverting the Jewtocracy": Online Antisemitism Detection Using Multimodal Deep Learning | Code | 1
MinkLoc++: Lidar and Monocular Image Fusion for Place Recognition | Code | 1
Deep Learning for Android Malware Defenses: a Systematic Literature Review | Code | 1
Piano Skills Assessment | Code | 1
Leveraging Audio Gestalt to Predict Media Memorability |  | 0
Detecting Hate Speech in Memes Using Multimodal Deep Learning Approaches: Prize-winning solution to Hateful Memes Challenge | Code | 1
Predicting Online Video Advertising Effects with Multimodal Deep Learning |  | 0
Image and Text fusion for UPMC Food-101 \ BERT and CNNs | Code | 1
Multi-Modal Detection of Alzheimer's Disease from Speech and Text |  | 0
Detecting Video Game Player Burnout with the Use of Sensor Data and Machine Learning | Code | 1
Multimodal Learning for Hateful Memes Detection | Code | 0
Exploring Multimodal Features and Fusion Strategies for Analyzing Disaster Tweets |  | 0
Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion | Code | 1
M2D: A Multi-modal Framework for Automatic Medical Diagnosis |  | 0
New Ideas and Trends in Deep Multimodal Content Understanding: A Review |  | 0
Using Neural Architecture Search for Improving Software Flaw Detection in Multimodal Deep Learning Models |  | 0
Creation and Validation of a Chest X-Ray Dataset with Eye-tracking and Report Dictation for AI Development | Code | 1
Multimodal Deep Learning for Flaw Detection in Software Programs |  | 0
MMEA: Entity Alignment for Multi-Modal Knowledge Graphs | Code | 1
Jointly Fine-Tuning “BERT-like” Self Supervised Models to Improve Multimodal Speech Emotion Recognition | Code | 1
More Diverse Means Better: Multimodal Deep Learning Meets Remote Sensing Imagery Classification | Code | 1
Page 7 of 9

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Two Branch Network (Text: BERT + Image: NTS-Net) | Accuracy | 96.81 |  | Unverified