
Multimodal Generation

Multimodal generation refers to producing outputs that span multiple modalities, such as images, text, and sound. It is typically achieved with deep learning models trained on data covering several modalities at once, so that the generated output is informed by more than one type of input.
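To make the idea concrete, here is a minimal sketch (in PyTorch) of one common pattern: project each modality's embedding into a shared space and fuse them into a single representation that a decoder could condition on. The class name SimpleMultimodalFusion and all dimensions are illustrative assumptions, not any specific published architecture.

```python
import torch
import torch.nn as nn

class SimpleMultimodalFusion(nn.Module):
    """Toy sketch: fuse an image embedding and a text embedding into one
    representation a downstream decoder could condition on. All names and
    dimensions here are illustrative assumptions."""

    def __init__(self, image_dim=512, text_dim=768, fused_dim=512):
        super().__init__()
        self.image_proj = nn.Linear(image_dim, fused_dim)
        self.text_proj = nn.Linear(text_dim, fused_dim)
        self.fuse = nn.Linear(2 * fused_dim, fused_dim)

    def forward(self, image_emb, text_emb):
        # Project each modality into a shared space, then concatenate and mix.
        img = torch.relu(self.image_proj(image_emb))
        txt = torch.relu(self.text_proj(text_emb))
        return self.fuse(torch.cat([img, txt], dim=-1))

fusion = SimpleMultimodalFusion()
fused = fusion(torch.randn(1, 512), torch.randn(1, 768))
print(fused.shape)  # torch.Size([1, 512])
```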

For example, a multimodal generation model can be trained to caption images by combining visual and linguistic information: it learns to identify the objects in an image and describe them in natural language, while also accounting for context and the relationships between those objects.
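As a hedged usage example, an off-the-shelf vision-language model such as BLIP, available through the Hugging Face transformers library, can caption an image in a few lines. The checkpoint name and test image URL below are assumptions chosen for illustration; this is not the method of any paper listed on this page.

```python
from PIL import Image
import requests
from transformers import BlipProcessor, BlipForConditionalGeneration

# Load a pretrained image-captioning model (assumed checkpoint).
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

# Fetch an example image (a COCO validation image, used only for illustration).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# Encode the image and generate a natural-language caption.
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```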

Multimodal generation also powers other applications, such as synthesizing realistic images from textual descriptions or producing audio descriptions of video content. By combining modalities in this way, multimodal models can produce more accurate and comprehensive output, making them useful across a wide range of applications.
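For the text-to-image direction, a minimal sketch using the diffusers library looks like the following. It assumes a CUDA-capable GPU and a Stable Diffusion checkpoint, and stands in for the general technique rather than the method of any specific paper below.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained text-to-image diffusion pipeline (assumed checkpoint).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU

# Generate an image from a textual description.
prompt = "a watercolor painting of a lighthouse at sunset"
image = pipe(prompt).images[0]
image.save("lighthouse.png")
```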

Papers

Showing 1–10 of 98 papers (page 1 of 10)

Title | Status | Hype
OmniGen2: Exploration to Advanced Multimodal Generation | Code | 7
PlanMoGPT: Flow-Enhanced Progressive Planning for Text to Motion Synthesis | - | 0
Pisces: An Auto-regressive Foundation Model for Image Understanding and Generation | - | 0
MADFormer: Mixed Autoregressive and Diffusion Transformers for Continuous Image Generation | - | 0
Muddit: Liberating Generation Beyond Text-to-Image with a Unified Discrete Diffusion Model | Code | 2
OmniGenBench: A Benchmark for Omnipotent Multimodal Generation across 50+ Tasks | Code | 1
MMMG: a Comprehensive and Reliable Evaluation Suite for Multitask Multimodal Generation | - | 0
Multimodal RAG-driven Anomaly Detection and Classification in Laser Powder Bed Fusion using Large Language Models | - | 0
Emerging Properties in Unified Multimodal Pretraining | Code | 9
Preliminary Explorations with GPT-4o(mni) Native Image Generation | - | 0

Leaderboard

No leaderboard results yet.