Multimodal Deep Learning

Multimodal deep learning combines information from multiple modalities, such as text, images, audio, and video, to make more accurate and comprehensive predictions. It involves training deep neural networks on data that spans several of these input types and using the trained network to make predictions from the combined inputs.
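To make the idea concrete, here is a minimal PyTorch sketch of a two-branch multimodal classifier. It assumes pre-extracted feature vectors for each modality; the class name, layer sizes, and feature dimensions (768 for text and 2048 for images, roughly BERT- and ResNet-sized) are illustrative assumptions, not a reference implementation.

```python
import torch
import torch.nn as nn

class MultimodalClassifier(nn.Module):
    """Two-branch network: one encoder per modality, one joint prediction head."""

    def __init__(self, text_dim=768, image_dim=2048, hidden_dim=256, num_classes=10):
        super().__init__()
        # Each branch projects its modality into a shared hidden space.
        self.text_encoder = nn.Sequential(nn.Linear(text_dim, hidden_dim), nn.ReLU())
        self.image_encoder = nn.Sequential(nn.Linear(image_dim, hidden_dim), nn.ReLU())
        # The head predicts from the combined representation.
        self.head = nn.Linear(hidden_dim * 2, num_classes)

    def forward(self, text_feats, image_feats):
        t = self.text_encoder(text_feats)
        v = self.image_encoder(image_feats)
        fused = torch.cat([t, v], dim=-1)  # simple feature-level fusion
        return self.head(fused)

model = MultimodalClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 2048))  # a batch of 4 examples
print(logits.shape)  # torch.Size([4, 10])
```

Projecting both branches into the same hidden size before fusing keeps the two modalities on a comparable footing for the joint head.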

A key challenge in multimodal deep learning is combining information from the different modalities effectively. Common techniques include fusing the features extracted from each modality, for example by concatenating them, and using attention mechanisms to weight each modality's contribution according to its importance for the task at hand.
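The sketch above uses simple concatenation; an attention-based alternative is outlined below. This is a hypothetical design for illustration (the AttentionFusion module and its scoring layer are assumed, not taken from any specific paper): a small scoring network rates each modality embedding, and the fused vector is the softmax-weighted sum, so the model can down-weight a modality that is uninformative for a given example.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Fuse modality embeddings via a learned, per-example weighting."""

    def __init__(self, dim=256):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # scalar relevance score per modality

    def forward(self, embeddings):
        # embeddings: (batch, num_modalities, dim)
        scores = self.score(embeddings)            # (batch, num_modalities, 1)
        weights = torch.softmax(scores, dim=1)     # normalize across modalities
        fused = (weights * embeddings).sum(dim=1)  # weighted sum -> (batch, dim)
        return fused, weights.squeeze(-1)

fusion = AttentionFusion(dim=256)
text_emb, image_emb = torch.randn(4, 256), torch.randn(4, 256)
fused, weights = fusion(torch.stack([text_emb, image_emb], dim=1))
print(fused.shape, weights.shape)  # torch.Size([4, 256]) torch.Size([4, 2])
```

The returned weights also give a rough per-example indication of which modality dominated the fused representation.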

Multimodal deep learning has many applications, including image captioning, speech recognition, natural language processing, and autonomous driving. By combining information from multiple modalities, models become more accurate and robust, performing better in real-world scenarios where several types of information are present.

Papers

Showing 26–50 of 213 papers

Title | Status | Hype
--- | --- | ---
Uncovering the Genetic Basis of Glioblastoma Heterogeneity through Multimodal Analysis of Whole Slide Images and RNA Sequencing Data | Code | 0
Reducing Overtreatment of Indeterminate Thyroid Nodules Using a Multimodal Deep Learning Model | – | 0
Validation & Exploration of Multimodal Deep-Learning Camera-Lidar Calibration models | – | 0
PHemoNet: A Multimodal Network for Physiological Signals | Code | 2
Dual-Level Cross-Modal Contrastive Clustering | Code | 0
MVX-ViT: Multimodal Collaborative Perception for 6G V2X Network Management Decisions Using Vision Transformer | Code | 0
Focus on Focus: Focus-oriented Representation Learning and Multi-view Cross-modal Alignment for Glioma Grading | Code | 0
A Systematic Review of Intermediate Fusion in Multimodal Deep Learning for Biomedical Applications | – | 0
Audio-Visual Approach For Multimodal Concurrent Speaker Detection | – | 0
Modeling of spatially embedded networks via regional spatial graph convolutional networks | Code | 0
LUMA: A Benchmark Dataset for Learning from Uncertain and Multimodal Data | Code | 1
Advanced Multimodal Deep Learning Architecture for Image-Text Matching | – | 0
Research on Optimization of Natural Language Processing Model Based on Multimodal Deep Learning | – | 0
CarbonSense: A Multimodal Dataset and Baseline for Carbon Flux Modelling | – | 0
Automatic Fused Multimodal Deep Learning for Plant Identification | Code | 0
Multimodal Deep Learning for Low-Resource Settings: A Vector Embedding Alignment Approach for Healthcare Applications | – | 0
Towards Precision Healthcare: Robust Fusion of Time Series and Image Data | Code | 0
The Role of Emotions in Informational Support Question-Response Pairs in Online Health Communities: A Multimodal Deep Learning Approach | – | 0
Comprehensive Multimodal Deep Learning Survival Prediction Enabled by a Transformer Architecture: A Multicenter Study in Glioblastoma | – | 0
An Interpretable Adaptive Multiscale Attention Deep Neural Network for Tabular Data | Code | 0
HoneyBee: A Scalable Modular Framework for Creating Multimodal Oncology Datasets with Foundational Embedding Models | Code | 1
Research on Image Recognition Technology Based on Multimodal Deep Learning | – | 0
Feature importance to explain multimodal prediction models. A clinical use case | Code | 0
ECC Analyzer: Extract Trading Signal from Earnings Conference Calls using Large Language Model for Stock Performance Prediction | – | 0
ViTextVQA: A Large-Scale Visual Question Answering Dataset for Evaluating Vietnamese Text Comprehension in Images | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | Two Branch Network (Text - BERT + Image - NTS-Net) | Accuracy | 96.81 | – | Unverified