SOTAVerified

MindOmni: Unleashing Reasoning Generation in Vision Language Models with RGPO

2025-05-19 · Code Available

Yicheng Xiao, Lin Song, Yukang Chen, Yingmin Luo, Yuxin Chen, Yukang Gan, Wei Huang, Xiu Li, Xiaojuan Qi, Ying Shan

Abstract

Recent text-to-image systems face limitations in handling multimodal inputs and complex reasoning tasks. We introduce MindOmni, a unified multimodal large language model that addresses these challenges by incorporating reasoning generation through reinforcement learning. MindOmni is trained with a three-phase strategy: i) design of a unified vision-language model with a decoder-only diffusion module, ii) supervised fine-tuning on Chain-of-Thought (CoT) instruction data, and iii) our proposed Reasoning Generation Policy Optimization (RGPO) algorithm, which uses multimodal feedback to guide policy updates. Experimental results demonstrate that MindOmni outperforms existing models, achieving strong performance on both understanding and generation benchmarks while showcasing advanced fine-grained reasoning generation capabilities, especially on mathematical reasoning instructions. All code will be made public at https://github.com/EasonXiao-888/MindOmni.
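The abstract names RGPO only as a policy-optimization algorithm guided by multimodal feedback; the exact reward design and update rule are not given here. As context, group-relative policy-optimization methods typically normalize per-rollout rewards within a sampled group to form advantages. The sketch below illustrates that generic normalization step only; the function name, reward values, and epsilon are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a group-relative advantage computation, the style of
# policy optimization the abstract's RGPO appears to build on. Nothing here is
# taken from the paper's actual implementation.
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-6):
    """Normalize per-sample rewards within one rollout group so the policy
    update favors above-average samples and penalizes below-average ones."""
    mu = mean(rewards)
    sigma = pstdev(rewards)  # population std over the group
    return [(r - mu) / (sigma + eps) for r in rewards]

# Illustrative feedback scores for four rollouts of the same prompt
rewards = [0.2, 0.8, 0.5, 0.9]
advs = group_relative_advantages(rewards)
```

Normalizing within a group removes the need for a learned value baseline: advantages sum to (near) zero, so only relative quality within the group drives the gradient.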

Benchmark Results

Dataset  Model               Metric   Claimed  Verified  Status
WISE     MindOmni (w/ CoT)   Overall  0.71     —         Unverified
WISE     MindOmni (w/o CoT)  Overall  0.43     —         Unverified