SOTAVerified

multimodal generation

Multimodal generation refers to the process of generating outputs that incorporate multiple modalities, such as images, text, and sound. It is typically done with deep learning models trained on data spanning several modalities, so that each modality is encoded into features the model can combine, and the generated output is informed by more than one type of input.
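As a minimal, hypothetical sketch of the idea (not taken from any paper listed below), each modality can be encoded into a fixed-length feature vector and the vectors fused before decoding. The encoders here are deliberately trivial stand-ins for learned networks:

```python
# Toy sketch of multimodal feature fusion (illustrative only).
# Each modality is encoded to a fixed-length vector; late fusion by
# concatenation gives a "decoder" access to both modalities at once.

def encode_text(tokens: list[str], dim: int = 4) -> list[float]:
    """Hash tokens into a fixed-length bag-of-words style vector."""
    vec = [0.0] * dim
    for tok in tokens:
        vec[hash(tok) % dim] += 1.0
    return vec

def encode_image(pixels: list[list[int]], dim: int = 4) -> list[float]:
    """Summarize a grayscale pixel grid into coarse intensity bins."""
    vec = [0.0] * dim
    for row in pixels:
        for p in row:
            vec[min(p * dim // 256, dim - 1)] += 1.0
    return vec

def fuse(text_vec: list[float], image_vec: list[float]) -> list[float]:
    """Late fusion by concatenation: downstream layers see both modalities."""
    return text_vec + image_vec

text_features = encode_text(["a", "cat", "on", "a", "mat"])
image_features = encode_image([[0, 128], [255, 64]])
fused = fuse(text_features, image_features)
print(len(fused))  # -> 8
```

In a real system the encoders would be learned (e.g. a vision transformer and a text embedding layer) and the fused representation would condition a generative decoder; only the fuse-then-decode shape is the point here.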

For example, a multimodal generation model could be trained to produce image captions by drawing on both visual and textual information. The model learns to identify the objects in an image and describe them in natural language, while also taking into account surrounding context and the spatial relationships between those objects.
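The detect-objects-then-describe pipeline above can be mimicked with a toy rule-based sketch (purely illustrative; real captioners learn this mapping end to end from paired image-text data). The symbolic grid stands in for detector output:

```python
# Toy "captioner" (illustrative only): find labeled objects in a symbolic
# image grid, then verbalize the spatial relationship between them.

def detect_objects(grid: list[list[str]]) -> dict[str, tuple[int, int]]:
    """Return {label: (row, col)} for every non-empty cell."""
    return {cell: (r, c)
            for r, row in enumerate(grid)
            for c, cell in enumerate(row) if cell}

def describe(grid: list[list[str]]) -> str:
    """Generate a one-line caption from the first two detected objects."""
    objects = detect_objects(grid)
    if len(objects) < 2:
        return "an image of " + next(iter(objects), "nothing")
    (a, (ra, ca)), (b, (rb, cb)) = sorted(objects.items())[:2]
    if ra != rb:
        relation = "above" if ra < rb else "below"
    else:
        relation = "left of" if ca < cb else "right of"
    return f"a {a} {relation} a {b}"

scene = [["", "bird"],
         ["dog", ""]]
print(describe(scene))  # -> "a bird above a dog"
```

A learned model replaces both stages with neural networks, but the structure is the same: visual recognition feeding a language generator.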

Multimodal generation also powers other applications, such as synthesizing realistic images from textual descriptions or generating audio descriptions of video content. By combining multiple modalities, these models can produce more accurate and comprehensive output than single-modality systems.

Papers

Showing 1-50 of 98 papers

Title | Status | Hype
OmniGen2: Exploration to Advanced Multimodal Generation | Code | 7
PlanMoGPT: Flow-Enhanced Progressive Planning for Text to Motion Synthesis | - | 0
Pisces: An Auto-regressive Foundation Model for Image Understanding and Generation | - | 0
MADFormer: Mixed Autoregressive and Diffusion Transformers for Continuous Image Generation | - | 0
Muddit: Liberating Generation Beyond Text-to-Image with a Unified Discrete Diffusion Model | Code | 2
OmniGenBench: A Benchmark for Omnipotent Multimodal Generation across 50+ Tasks | Code | 1
MMMG: a Comprehensive and Reliable Evaluation Suite for Multitask Multimodal Generation | - | 0
Multimodal RAG-driven Anomaly Detection and Classification in Laser Powder Bed Fusion using Large Language Models | - | 0
Emerging Properties in Unified Multimodal Pretraining | Code | 9
Preliminary Explorations with GPT-4o(mni) Native Image Generation | - | 0
Have we unified image generation and understanding yet? An empirical study of GPT-4o's image generation ability | - | 0
An Empirical Study of GPT-4o Image Generation Capabilities | Code | 1
Stance-Driven Multimodal Controlled Statement Generation: New Dataset and Task | - | 0
CrystalFormer-RL: Reinforcement Fine-Tuning for Materials Design | Code | 2
Vision-to-Music Generation: A Survey | Code | 3
Generating Multimodal Driving Scenes via Next-Scene Prediction | - | 0
FusDreamer: Label-efficient Remote Sensing World Model for Multimodal Data Classification | Code | 1
Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey | Code | 4
OmniMamba: Efficient and Unified Multimodal Understanding and Generation via State Space Models | Code | 2
ProtTeX: Structure-In-Context Reasoning and Editing of Proteins with Large Language Models | - | 0
ARMOR v0.1: Empowering Autoregressive Multimodal Understanding Model with Interleaved Multimodal Generation via Asymmetric Synergy | - | 0
Unlocking Pretrained LLMs for Motion-Related Multimodal Generation: A Fine-Tuning Approach to Unify Diffusion and Next-Token Prediction | - | 0
Unified Reward Model for Multimodal Understanding and Generation | Code | 4
WeGen: A Unified Model for Interactive Multimodal Generation as We Chat | Code | 1
From Principles to Applications: A Comprehensive Survey of Discrete Tokenizers in Generation, Comprehension, Recommendation, and Information Retrieval | - | 0
A Survey on Bridging EEG Signals and Generative AI: From Image and Text to Beyond | - | 0
Ask in Any Modality: A Comprehensive Survey on Multimodal Retrieval-Augmented Generation | Code | 3
UniCMs: A Unified Consistency Model For Efficient Multimodal Generation and Understanding | Code | 1
MRAMG-Bench: A Comprehensive Benchmark for Advancing Multimodal Retrieval-Augmented Multimodal Generation | Code | 1
Artificial Intelligence in Creative Industries: Advances Prior to 2025 | - | 0
RDPM: Solve Diffusion Probabilistic Models via Recurrent Token Prediction | - | 0
D-Judge: How Far Are We? Evaluating the Discrepancies Between AI-synthesized Images and Natural Images through Multimodal Guidance | Code | 0
LMFusion: Adapting Pretrained Language Models for Multimodal Generation | - | 0
Multimodal Latent Language Modeling with Next-Token Diffusion | Code | 0
OpenING: A Comprehensive Benchmark for Judging Open-ended Interleaved Image-Text Generation | Code | 1
Visatronic: A Multimodal Decoder-Only Model for Speech Synthesis | - | 0
Multi-modal Retrieval Augmented Multi-modal Generation: A Benchmark, Evaluate Metrics and Strong Baselines | Code | 1
Benchmarking Multimodal Models for Ukrainian Language Understanding Across Academic and Cultural Domains | - | 0
A Survey on Vision Autoregressive Model | - | 0
A Survey of Emerging Approaches and Advances in Video Generation | - | 0
Efficient Diffusion Models: A Comprehensive Survey from Principles to Practices | Code | 1
Both Ears Wide Open: Towards Language-Driven Spatial Audio Generation | - | 0
ACDC: Autoregressive Coherent Multimodal Generation using Diffusion Correction | - | 0
Characterizing and Efficiently Accelerating Multimodal Generation Model Inference | - | 0
MM2Latent: Text-to-facial image generation and editing in GANs with multimodal assistance | Code | 1
PixelBytes: Catching Unified Representation for Multimodal Generation | Code | 0
PixelBytes: Catching Unified Embedding for Multimodal Generation | Code | 0
Multimodal ELBO with Diffusion Decoders | - | 0
UniFashion: A Unified Vision-Language Model for Multimodal Fashion Retrieval and Generation | Code | 1
Learning Multimodal Latent Space with EBM Prior and MCMC Inference | - | 0
Page 1 of 2

No leaderboard results yet.