SOTAVerified

Multimodal Deep Learning

Multimodal deep learning combines information from multiple modalities, such as text, images, audio, and video, to make more accurate and comprehensive predictions. It involves training deep neural networks on data that spans several types of information and using the trained network to make predictions from the combined input.

One of the key challenges in multimodal deep learning is how to effectively combine information from multiple modalities. This can be done using a variety of techniques, such as fusing the features extracted from each modality, or using attention mechanisms to weight the contribution of each modality based on its importance for the task at hand.
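The two techniques above can be sketched in a few lines. The following is a minimal, illustrative example (using NumPy, with toy 4-dimensional feature vectors and hypothetical relevance scores, none of which come from any specific paper listed here): concatenation fusion simply joins the per-modality features, while attention-style fusion computes softmax weights over the modalities and takes a weighted sum.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def concat_fusion(features):
    # Fuse by concatenating per-modality feature vectors into one long vector.
    return np.concatenate(features)

def attention_fusion(features, scores):
    # Weight each modality's features by a softmax over relevance scores,
    # then sum: modalities judged more important contribute more.
    weights = softmax(np.asarray(scores, dtype=float))
    stacked = np.stack(features)          # shape (n_modalities, dim)
    return weights @ stacked              # shape (dim,)

# Toy per-modality embeddings (hypothetical values for illustration).
text  = np.array([0.2, 0.8, 0.1, 0.4])
image = np.array([0.9, 0.1, 0.7, 0.3])
audio = np.array([0.5, 0.5, 0.2, 0.6])

fused_cat = concat_fusion([text, image, audio])        # 12-d vector
fused_att = attention_fusion([text, image, audio],
                             scores=[2.0, 0.5, 0.1])   # text weighted most
```

In practice the relevance scores would themselves be learned (for example, produced by a small network conditioned on the input), but the weighting mechanism is the same.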

Multimodal deep learning has many applications, including image captioning, speech recognition, natural language processing, and autonomous vehicles. By combining information from multiple modalities, multimodal deep learning can improve the accuracy and robustness of models, enabling them to perform better in real-world scenarios where multiple types of information are present.

Papers

Showing 151-200 of 213 papers

Title | Status | Hype
Timing Is Everything: Finding the Optimal Fusion Points in Multimodal Medical Imaging | - | 0
Toxicity Prediction by Multimodal Deep Learning | - | 0
Unified Cross-Modal Attention-Mixer Based Structural-Functional Connectomics Fusion for Neuropsychiatric Disorder Diagnosis | - | 0
Using Neural Architecture Search for Improving Software Flaw Detection in Multimodal Deep Learning Models | - | 0
Validation & Exploration of Multimodal Deep-Learning Camera-Lidar Calibration models | - | 0
Variational methods for Conditional Multimodal Deep Learning | - | 0
Vision-Aided Frame-Capture-Based CSI Recomposition for WiFi Sensing: A Multimodal Approach | - | 0
Where and When: Space-Time Attention for Audio-Visual Explanations | - | 0
Multimodal Approach for Metadata Extraction from German Scientific Publications | - | 0
Multimodal Co-learning: Challenges, Applications with Datasets, Recent Advances and Future Directions | - | 0
Multimodal deep learning approach for joint EEG-EMG data compression and classification | - | 0
Multimodal deep learning approach to predicting neurological recovery from coma after cardiac arrest | - | 0
Multimodal Deep Learning-Empowered Beam Prediction in Future THz ISAC Systems | - | 0
Multimodal Deep Learning for Finance: Integrating and Forecasting International Stock Markets | - | 0
Multimodal Deep Learning for Flaw Detection in Software Programs | - | 0
Multimodal Deep Learning for Low-Resource Settings: A Vector Embedding Alignment Approach for Healthcare Applications | - | 0
Multimodal deep learning for mapping forest dominant height by fusing GEDI with earth observation data | - | 0
Multimodal Deep Learning for Mental Disorders Prediction from Audio Speech Samples | - | 0
Multimodal Deep Learning for Scientific Imaging Interpretation | - | 0
Multimodal deep learning for short-term stock volatility prediction | - | 0
Multimodal Deep Learning for Stroke Prediction and Detection using Retinal Imaging and Clinical Data | - | 0
Multimodal Deep Learning Framework for Image Popularity Prediction on Social Media | - | 0
Multimodal Deep Learning of Word-of-Mouth Text and Demographics to Predict Customer Rating: Handling Consumer Heterogeneity in Marketing | - | 0
Multimodal Deep Learning to Differentiate Tumor Recurrence from Treatment Effect in Human Glioblastoma | - | 0
Multimodal Age and Gender Classification Using Ear and Profile Face Images | Code | 0
Restoring Ancient Ideograph: A Multimodal Multitask Neural Network Approach | Code | 0
Modeling of spatially embedded networks via regional spatial graph convolutional networks | Code | 0
Building Multimodal AI Chatbots | Code | 0
Learn to Combine Modalities in Multimodal Deep Learning | Code | 0
Focus on Focus: Focus-oriented Representation Learning and Multi-view Cross-modal Alignment for Glioma Grading | Code | 0
Robust Sensor Fusion Algorithms Against Voice Command Attacks in Autonomous Vehicles | Code | 0
Feature importance to explain multimodal prediction models. A clinical use case | Code | 0
MVX-ViT: Multimodal Collaborative Perception for 6G V2X Network Management Decisions Using Vision Transformer. | Code | 0
An Interpretable Adaptive Multiscale Attention Deep Neural Network for Tabular Data | Code | 0
ShapeWorld - A new test methodology for multimodal language understanding | Code | 0
Dynamic Task and Weight Prioritization Curriculum Learning for Multimodal Imagery | Code | 0
Dual-Level Cross-Modal Contrastive Clustering | Code | 0
Multimodal Deep Learning for Personalized Renal Cell Carcinoma Prognosis: Integrating CT Imaging and Clinical Data | Code | 0
Multimodal Deep Learning for Robust RGB-D Object Recognition | Code | 0
A Multimodal PDE Foundation Model for Prediction and Scientific Text Descriptions | Code | 0
DeepGraviLens: a Multi-Modal Architecture for Classifying Gravitational Lensing Data | Code | 0
Cultural-Aware AI Model for Emotion Recognition | Code | 0
Multimodal Deep Learning for Subtype Classification in Breast Cancer Using Histopathological Images and Gene Expression Data | Code | 0
Automatic Fused Multimodal Deep Learning for Plant Identification | Code | 0
Predicting the Leading Political Ideology of YouTube Channels Using Acoustic, Textual, and Metadata Information | Code | 0
Advancing Drug Discovery with Enhanced Chemical Understanding via Asymmetric Contrastive Multimodal Learning | Code | 0
Multimodal deep networks for text and image-based document classification | Code | 0
Towards Precision Healthcare: Robust Fusion of Time Series and Image Data | Code | 0
Zorro: the masked multimodal transformer | Code | 0
Cross-Modal Attribute Insertions for Assessing the Robustness of Vision-and-Language Learning | Code | 0
Page 4 of 5

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Two Branch Network (Text - BERT + Image - NTS-Net) | Accuracy | 96.81 | - | Unverified