SOTAVerified

Multimodal Deep Learning

Multimodal deep learning combines information from multiple modalities, such as text, images, audio, and video, to make more accurate and comprehensive predictions. Deep neural networks are trained on data that includes several types of information, and the trained network makes predictions from the combined input.
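As a minimal sketch of the idea (not from any specific paper on this page), the snippet below assumes two hypothetical pre-extracted feature vectors, one per modality, and passes their concatenation through a toy classifier layer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-extracted features for one sample per modality,
# e.g. from a text encoder and an image encoder (assumed sizes).
text_feat = rng.standard_normal(128)
image_feat = rng.standard_normal(256)

# Feature-level fusion: concatenate, then apply a shared linear layer.
fused = np.concatenate([text_feat, image_feat])    # shape (384,)
W = rng.standard_normal((10, fused.size)) * 0.01   # toy 10-class weights
logits = W @ fused                                 # shape (10,)
probs = np.exp(logits) / np.exp(logits).sum()      # softmax over classes
```

In a real system the encoders and the classifier would be trained jointly end to end; here the weights are random purely to illustrate the data flow.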

A key challenge in multimodal deep learning is combining information from multiple modalities effectively. Common techniques include fusing the features extracted from each modality, or using attention mechanisms to weight each modality's contribution according to its importance for the task at hand.
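The attention-weighted variant can be sketched as follows. This is an illustrative toy, not a specific published architecture: the per-modality relevance scores here are computed by an arbitrary stand-in, whereas a real model would produce them with a learned attention module.

```python
import numpy as np

def attention_fuse(features, scores):
    """Softmax the relevance scores, then return the weighted sum
    of the per-modality feature vectors (all must share one size)."""
    w = np.exp(scores - np.max(scores))  # numerically stable softmax
    w = w / w.sum()
    return sum(wi * f for wi, f in zip(w, features)), w

rng = np.random.default_rng(1)
d = 64
text_f = rng.standard_normal(d)
audio_f = rng.standard_normal(d)

# Stand-in scalar scores; a learned scorer would replace this.
scores = np.array([text_f.mean(), audio_f.mean()])
fused, weights = attention_fuse([text_f, audio_f], scores)
```

Because the weights come from a softmax, they sum to one, so the fused vector stays on the same scale as the inputs while letting the more relevant modality dominate.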

Multimodal deep learning has many applications, including image captioning, speech recognition, natural language processing, and autonomous vehicles. By combining information from multiple modalities, multimodal deep learning can improve the accuracy and robustness of models, enabling them to perform better in real-world scenarios where multiple types of information are present.

Papers

Showing 176–200 of 213 papers

Title | Status | Hype
Progress Estimation and Phase Detection for Sequential Processes | | 0
A Hybrid Method for Traffic Flow Forecasting Using Multimodal Deep Learning | | 0
R2D2 at SemEval-2022 Task 5: Attention is only as good as its Values! A multimodal system for identifying misogynist memes | | 0
Recent Advances and Trends in Multimodal Deep Learning: A Review | | 0
Reducing Overtreatment of Indeterminate Thyroid Nodules Using a Multimodal Deep Learning Model | | 0
Research on Image Recognition Technology Based on Multimodal Deep Learning | | 0
Research on Optimization of Natural Language Processing Model Based on Multimodal Deep Learning | | 0
Validation & Exploration of Multimodal Deep-Learning Camera-Lidar Calibration models | | 0
A graph-based multimodal framework to predict gentrification | | 0
Variational methods for Conditional Multimodal Deep Learning | | 0
Scalable multimodal convolutional networks for brain tumour segmentation | | 0
Ontology-based knowledge representation for bone disease diagnosis: a foundation for safe and sustainable medical artificial intelligence systems | | 0
Show me your NFT and I tell you how it will perform: Multimodal representation learning for NFT selling price prediction | | 0
Advanced Multimodal Deep Learning Architecture for Image-Text Matching | | 0
ADMN: A Layer-Wise Adaptive Multimodal Network for Dynamic Input Noise and Compute Resources | | 0
SynthScribe: Deep Multimodal Tools for Synthesizer Sound Retrieval and Exploration | | 0
Geometric Multimodal Deep Learning with Multi-Scaled Graph Wavelet Convolutional Network | | 0
From Multimodal to Unimodal Attention in Transformers using Knowledge Distillation | | 0
Fine-grained Video Attractiveness Prediction Using Multimodal Deep Learning on a Large Real-world Dataset | | 0
Exploring Multimodal Features and Fusion Strategies for Analyzing Disaster Tweets | | 0
How to select and use tools? : Active Perception of Target Objects Using Multimodal Deep Learning | | 0
Hybrid Attention based Multimodal Network for Spoken Language Classification | | 0
Evolution of Data-driven Single- and Multi-Hazard Susceptibility Mapping and Emergence of Deep Learning Methods | | 0
TabulaTime: A Novel Multimodal Deep Learning Framework for Advancing Acute Coronary Syndrome Prediction through Environmental and Clinical Data Integration | | 0
Identification of Cognitive Workload during Surgical Tasks with Multimodal Deep Learning | | 0
Page 8 of 9

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Two Branch Network (Text - Bert + Image - Nts-Net) | Accuracy | 96.81 | | Unverified