SOTAVerified

Multimodal Deep Learning

Multimodal deep learning is a branch of deep learning that combines information from multiple modalities, such as text, images, audio, and video, to make more accurate and comprehensive predictions. It involves training deep neural networks on data spanning several types of information and using the trained network to make predictions from this combined input.

One of the key challenges in multimodal deep learning is how to effectively combine information from multiple modalities. This can be done using a variety of techniques, such as fusing the features extracted from each modality, or using attention mechanisms to weight the contribution of each modality based on its importance for the task at hand.
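The two fusion strategies mentioned above can be sketched in a few lines. This is a minimal numpy illustration, not any specific paper's method: the feature vectors, dimensions, and the `concat_fusion`/`attention_fusion` names are hypothetical, and the per-modality scores would normally be produced by a learned network rather than supplied by hand.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of scores.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def concat_fusion(text_feat, image_feat):
    # Feature-level fusion: simply concatenate the modality features
    # before passing them to a downstream classifier.
    return np.concatenate([text_feat, image_feat])

def attention_fusion(feats, scores):
    # Attention-style fusion: weight each modality's feature vector by a
    # softmax over (normally learned) relevance scores, then sum.
    weights = softmax(np.asarray(scores, dtype=float))
    stacked = np.stack(feats)          # shape: (num_modalities, dim)
    return weights @ stacked           # weighted sum, shape: (dim,)

# Toy 4-dimensional embeddings standing in for real encoder outputs.
text_feat = np.ones(4)
image_feat = np.full(4, 2.0)

fused_cat = concat_fusion(text_feat, image_feat)                  # shape (8,)
fused_att = attention_fusion([text_feat, image_feat], [0.0, 0.0]) # equal weights
```

With equal scores the attention weights are 0.5 each, so `fused_att` is the elementwise mean of the two embeddings; in a trained model the scores would shift toward whichever modality is more informative for the task.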

Multimodal deep learning has many applications, including image captioning, speech recognition, natural language processing, and autonomous vehicles. By combining information from multiple modalities, multimodal deep learning can improve the accuracy and robustness of models, enabling them to perform better in real-world scenarios where multiple types of information are present.

Papers

Showing 151–200 of 213 papers

Title | Status | Hype
The Influence of Audio on Video Memorability with an Audio Gestalt Regulated Video Memorability System | - | 0
Supervised Video Summarization via Multiple Feature Sets with Parallel Attention | Code | 1
Distilling Audio-Visual Knowledge by Compositional Contrastive Learning | Code | 1
Robust Sensor Fusion Algorithms Against Voice Command Attacks in Autonomous Vehicles | Code | 0
"Subverting the Jewtocracy": Online Antisemitism Detection Using Multimodal Deep Learning | Code | 1
MinkLoc++: Lidar and Monocular Image Fusion for Place Recognition | Code | 1
Deep Learning for Android Malware Defenses: a Systematic Literature Review | Code | 1
Piano Skills Assessment | Code | 1
Leveraging Audio Gestalt to Predict Media Memorability | - | 0
Detecting Hate Speech in Memes Using Multimodal Deep Learning Approaches: Prize-winning solution to Hateful Memes Challenge | Code | 1
Predicting Online Video Advertising Effects with Multimodal Deep Learning | - | 0
Image and Text fusion for UPMC Food-101 \ BERT and CNNs | Code | 1
Multi-Modal Detection of Alzheimer's Disease from Speech and Text | - | 0
Detecting Video Game Player Burnout with the Use of Sensor Data and Machine Learning | Code | 1
Multimodal Learning for Hateful Memes Detection | Code | 0
Exploring Multimodal Features and Fusion Strategies for Analyzing Disaster Tweets | - | 0
Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion | Code | 1
M2D: A Multi-modal Framework for Automatic Medical Diagnosis | - | 0
New Ideas and Trends in Deep Multimodal Content Understanding: A Review | - | 0
Using Neural Architecture Search for Improving Software Flaw Detection in Multimodal Deep Learning Models | - | 0
Creation and Validation of a Chest X-Ray Dataset with Eye-tracking and Report Dictation for AI Development | Code | 1
Multimodal Deep Learning for Flaw Detection in Software Programs | - | 0
MMEA: Entity Alignment for Multi-Modal Knowledge Graphs | Code | 1
Jointly Fine-Tuning "BERT-like" Self Supervised Models to Improve Multimodal Speech Emotion Recognition | Code | 1
More Diverse Means Better: Multimodal Deep Learning Meets Remote Sensing Imagery Classification | Code | 1
Data-driven geophysics: from dictionary learning to deep learning | - | 0
Image Search With Text Feedback by Visiolinguistic Attention Learning | Code | 1
HYDRA: A multimodal deep learning framework for malware classification | Code | 1
Analysis of Social Media Data using Multimodal Deep Learning for Disaster Response | Code | 1
Multimodal Deep Unfolding for Guided Image Super-Resolution | - | 0
A multimodal deep learning approach for named entity recognition from social media | - | 0
Are These Birds Similar: Learning Branched Networks for Fine-grained Representations | Code | 1
Multimodal Intelligence: Representation Learning, Information Fusion, and Applications | - | 0
Predicting the Leading Political Ideology of YouTube Channels Using Acoustic, Textual, and Metadata Information | Code | 0
Audio-Conditioned U-Net for Position Estimation in Full Sheet Images | Code | 1
Detecting Deception in Political Debates Using Acoustic and Textual Features | - | 0
Multimodal Deep Learning for Mental Disorders Prediction from Audio Speech Samples | - | 0
Multimodal Age and Gender Classification Using Ear and Profile Face Images | Code | 0
Toxicity Prediction by Multimodal Deep Learning | - | 0
Multimodal deep networks for text and image-based document classification | Code | 0
Deep Coupled-Representation Learning for Sparse Linear Inverse Problems with Side Information | - | 0
Multimodal Deep Learning for Finance: Integrating and Forecasting International Stock Markets | - | 0
Multimodal deep learning for short-term stock volatility prediction | - | 0
Hybrid Attention based Multimodal Network for Spoken Language Classification | - | 0
Correlation Net: Spatiotemporal multimodal deep learning for action recognition | - | 0
Learn to Combine Modalities in Multimodal Deep Learning | Code | 0
Fine-grained Video Attractiveness Prediction Using Multimodal Deep Learning on a Large Real-world Dataset | - | 0
A Hybrid Method for Traffic Flow Forecasting Using Multimodal Deep Learning | - | 0
AJILE Movement Prediction: Multimodal Deep Learning for Natural Human Neural Recordings and Video | - | 0
XFlow: Cross-modal Deep Neural Networks for Audiovisual Classification | Code | 0
Page 4 of 5

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Two Branch Network (Text: BERT + Image: NTS-Net) | Accuracy | 96.81 | - | Unverified