Weakly-Supervised Multimodal Learning on MIMIC-CXR
2024-11-15
Andrea Agostini, Daphné Chopard, Yang Meng, Norbert Fortin, Babak Shahbaba, Stephan Mandt, Thomas M. Sutter, Julia E. Vogt
- Official PyTorch implementation: github.com/agostini335/mmvmvae-mimic (★ 6)
Abstract
Multimodal data integration and label scarcity pose significant challenges for machine learning in medical settings. To address these issues, we conduct an in-depth evaluation of the newly proposed Multimodal Variational Mixture-of-Experts (MMVM) VAE on the challenging MIMIC-CXR dataset. Our analysis demonstrates that the MMVM VAE consistently outperforms other multimodal VAEs and fully supervised approaches, highlighting its strong potential for real-world medical applications.
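The core idea behind the MMVM VAE is to keep a separate encoder per modality (e.g. frontal X-ray and lateral X-ray, or image and report) and softly align their latent posteriors through a shared mixture-of-experts distribution. The sketch below illustrates this aggregation step only, in NumPy with hypothetical, hand-picked Gaussian posterior parameters standing in for encoder outputs; it is not the paper's implementation. Because the KL divergence between a Gaussian and a mixture of Gaussians has no closed form, it is estimated by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality Gaussian posteriors q(z | x_m) -- in the real
# model these means/log-variances would come from modality-specific encoders.
mus = np.array([[0.0, 1.0], [0.5, -0.5]])      # (M modalities, D latent dims)
logvars = np.array([[0.0, 0.0], [-1.0, 0.5]])  # log-variances, same shape

def mixture_sample(mus, logvars, n, rng):
    """Draw n samples from the uniform mixture over the unimodal posteriors."""
    m_count, d = mus.shape
    idx = rng.integers(0, m_count, size=n)       # pick one expert per sample
    eps = rng.standard_normal((n, d))
    return mus[idx] + np.exp(0.5 * logvars[idx]) * eps

def log_gauss(z, mu, logvar):
    """Log-density of a diagonal Gaussian, summed over latent dimensions."""
    return -0.5 * np.sum(logvar + np.log(2 * np.pi)
                         + (z - mu) ** 2 / np.exp(logvar), axis=-1)

def mc_kl_to_mixture(m, mus, logvars, n=10_000, rng=rng):
    """Monte Carlo estimate of KL(q_m || uniform mixture of all q_k)."""
    d = mus.shape[1]
    z = mus[m] + np.exp(0.5 * logvars[m]) * rng.standard_normal((n, d))
    log_qm = log_gauss(z, mus[m], logvars[m])
    comps = np.stack([log_gauss(z, mus[k], logvars[k])
                      for k in range(len(mus))])
    log_mix = np.log(np.mean(np.exp(comps), axis=0))  # fine for small M
    return float(np.mean(log_qm - log_mix))

kl = mc_kl_to_mixture(0, mus, logvars)
samples = mixture_sample(mus, logvars, 5, rng)
```

Note that this divergence is bounded above by log M (here log 2), since each unimodal posterior contributes weight 1/M to the mixture; driving it down during training is what pulls the modality-specific latent spaces together.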