
Multimodal Association

Multimodal association is the process of relating multiple modalities, or types of data, in time series analysis. A single application may collect several kinds of data, such as sensor readings, images, audio, and text. Multimodal association aims to integrate these heterogeneous sources to improve understanding and prediction of the underlying time series.

For example, in a smart home application, data from temperature, humidity, and motion sensors can be combined with images from cameras to monitor the activities of residents. By analyzing the modalities together, the system can detect anomalies or patterns that are not visible in any single modality alone.
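The cross-checking idea above can be sketched in a few lines. This is a hypothetical, rule-based illustration, not a method from any of the papers below: the function name, the readings, and the agreement rule are all invented for the example.

```python
# Hypothetical sketch: flag anomalies by cross-checking two modalities
# in a smart-home setting. Readings and the agreement rule are invented.

def detect_anomalies(motion_events, camera_person_counts):
    """Flag time steps where the motion sensor and the camera disagree.

    motion_events: list of bools, True if motion was sensed at step t.
    camera_person_counts: list of ints, people the camera saw at step t.
    """
    anomalies = []
    for t, (motion, people) in enumerate(zip(motion_events, camera_person_counts)):
        # A visible person with no motion event (or vice versa) may look
        # normal in either modality alone, but stands out once the two
        # streams are associated in time.
        if motion != (people > 0):
            anomalies.append(t)
    return anomalies

# Steps 0, 1, and 3 agree; at step 2 the camera sees people but no motion fired.
print(detect_anomalies([True, False, False, True], [1, 0, 2, 1]))  # → [2]
```

A real system would replace the boolean rule with a learned model of cross-modal agreement, but the association step, aligning the streams on a shared time axis and comparing them, is the same.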

Multimodal association can be achieved with various techniques, including deep learning models, statistical models, and graph-based models. These models are trained on the combined data to learn the associations and dependencies between the different modalities.
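One common deep-learning pattern for this is late fusion: encode each modality into a fixed-size feature vector, concatenate the vectors, and score the joint representation. The sketch below uses pure Python with toy hand-written encoders and arbitrary weights; it is a minimal illustration of the fusion structure, not an implementation from the listed papers.

```python
# Minimal late-fusion sketch (pure Python, no ML framework). Every encoder
# and weight here is a placeholder chosen for illustration only.

def encode_sensor(window):
    # Toy encoder: summarize a numeric sensor window as (mean, range).
    return [sum(window) / len(window), max(window) - min(window)]

def encode_text(tokens, vocab=("alarm", "door", "idle")):
    # Toy encoder: bag-of-words counts over a tiny fixed vocabulary.
    return [tokens.count(w) for w in vocab]

def late_fusion_score(window, tokens, weights):
    # Concatenate per-modality features, then apply a linear scorer.
    # In a trained model, the encoders and weights would be learned jointly.
    features = encode_sensor(window) + encode_text(tokens)
    return sum(w * x for w, x in zip(weights, features))

# 2 sensor features + 3 text features = 5 weights (values are arbitrary).
weights = [0.5, 1.0, 2.0, 1.0, -1.0]
score = late_fusion_score([20.0, 22.0, 21.0], ["door", "alarm"], weights)
print(score)  # → 15.5
```

In practice the hand-written encoders would be neural networks (e.g. a CNN for images, an RNN or transformer for sequences) and the linear scorer a trained head, but the concatenate-then-score structure is the same.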

Papers

Showing 7 of 7 papers

Title | Code | Hype
MASS: Overcoming Language Bias in Image-Text Matching | - | 0
Personalized 2D Binary Patient Codes of Tissue Images and Immunogenomic Data Through Multimodal Self-Supervised Fusion | - | 0
ViTag: Online WiFi Fine Time Measurements Aided Vision-Motion Identity Association in Multi-person Environments | Code | 0
WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models | Code | 0
Vi-Fi: Associating Moving Subjects across Vision and Wireless Sensors | Code | 0
A unified software/hardware scalable architecture for brain-inspired computing based on self-organizing neural models | - | 0
Brain-inspired self-organization with cellular neuromorphic computing for multimodal unsupervised learning | - | 0

No leaderboard results yet.