
Explore the Limits of Omni-modal Pretraining at Scale

2024-06-13

Yiyuan Zhang, Handong Li, Jing Liu, Xiangyu Yue


Abstract

We propose to build omni-modal intelligence capable of understanding any modality and learning universal representations. Specifically, we introduce a scalable pretraining paradigm named Multimodal Context (MiCo), which scales up the number of modalities and the amount of data together with the model parameters during pretraining. With MiCo, the pretrained models show significant emergent abilities in multimodal learning, evaluated on three groups of tasks: i) single-modality perception benchmarks across 10 different modalities; ii) 25 cross-modality understanding tasks covering retrieval, question answering, and captioning; and iii) 18 multimodal large language model benchmarks. Our models establish 37 new state-of-the-art records. We hope this research contributes to the development of omni-modal intelligence. Code and models are available at https://github.com/invictus717/MiCo
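
The abstract describes the paradigm only at a high level. As a rough illustration of what pretraining many modalities against a shared representation space can look like, the sketch below aligns two modalities through per-modality tokenizers and one shared Transformer encoder with a symmetric contrastive objective. This is not the MiCo implementation: `ModalityTokenizer`, `OmniEncoder`, `EMBED_DIM`, and the InfoNCE loss are all hypothetical stand-ins chosen for the example.

```python
# Hypothetical sketch of shared-encoder omni-modal pretraining.
# NOT the authors' MiCo code; all names and dimensions are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM = 256  # assumed width of the shared token space


class ModalityTokenizer(nn.Module):
    """Projects raw features of one modality into the shared token space."""

    def __init__(self, in_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, EMBED_DIM)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)


class OmniEncoder(nn.Module):
    """A single Transformer encoder shared by every modality."""

    def __init__(self, depth: int = 4, heads: int = 4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=EMBED_DIM, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # Mean-pool encoded tokens into one vector per sample.
        return self.encoder(tokens).mean(dim=1)


def contrastive_loss(a: torch.Tensor, b: torch.Tensor, temp: float = 0.07):
    """Symmetric InfoNCE between paired samples from two modalities."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / temp
    targets = torch.arange(a.size(0))
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2


if __name__ == "__main__":
    # Toy usage: align paired "image" and "audio" features of 8 samples.
    tokenizers = {"image": ModalityTokenizer(512),
                  "audio": ModalityTokenizer(128)}
    encoder = OmniEncoder()
    image = torch.randn(8, 16, 512)  # 8 samples, 16 tokens, 512-dim features
    audio = torch.randn(8, 32, 128)
    z_img = encoder(tokenizers["image"](image))
    z_aud = encoder(tokenizers["audio"](audio))
    loss = contrastive_loss(z_img, z_aud)
    loss.backward()
    print(f"alignment loss: {loss.item():.4f}")
```

Scaling the number of modalities in such a setup amounts to adding one lightweight tokenizer per modality while the shared encoder and objective stay fixed, which is one plausible reading of how a paradigm like MiCo can grow modalities, data, and parameters together.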

Benchmark Results

| Dataset | Model        | Metric      | Claimed | Verified | Status     |
|---------|--------------|-------------|---------|----------|------------|
| MM-Vet  | MiCo-Chat-7B | GPT-4 score | 31.4    |          | Unverified |

Reproductions

No reproductions yet. Be the first to reproduce this paper.