SOTAVerified

multimodal generation

Multimodal generation refers to the process of generating outputs that incorporate multiple modalities, such as images, text, and sound. Such models are typically deep networks trained on data spanning several modalities, so their output can be conditioned on more than one type of input.

For example, a multimodal generation model could be trained to caption images by combining visual and textual information. The model learns to identify objects in the image and describe them in natural language, while also accounting for context and the relationships between the objects.
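The fusion pattern described above can be sketched in a few lines: a visual feature vector is projected into the same space as word embeddings, and the decoder picks the next word from the fused state. This is a minimal toy illustration, not any specific model from the papers below; the weights, vocabulary, and dimensions are all made up for the example.

```python
import numpy as np

# Toy sketch of encoder-decoder captioning: project image features into
# the word-embedding space, fuse them with the current token embedding,
# and greedily decode the next word. All weights are random stand-ins.

rng = np.random.default_rng(0)

VOCAB = ["<start>", "a", "dog", "cat", "on", "grass", "<end>"]
EMB_DIM = 8

word_emb = rng.normal(size=(len(VOCAB), EMB_DIM))  # token embeddings
img_proj = rng.normal(size=(16, EMB_DIM))          # image -> text space
out_proj = rng.normal(size=(EMB_DIM, len(VOCAB)))  # state -> vocab logits

def caption(image_features, max_len=5):
    """Greedy decoding conditioned on projected image features."""
    visual = image_features @ img_proj                 # project into text space
    state = visual + word_emb[VOCAB.index("<start>")]  # fuse image + start token
    tokens = []
    for _ in range(max_len):
        logits = state @ out_proj
        next_id = int(np.argmax(logits))
        if VOCAB[next_id] == "<end>":
            break
        tokens.append(VOCAB[next_id])
        state = visual + word_emb[next_id]  # re-fuse image at every step
    return " ".join(tokens)

fake_image = rng.normal(size=16)  # stand-in for CNN/ViT image features
print(caption(fake_image))
```

Real captioning systems replace the random projections with a trained vision encoder and an autoregressive language decoder, but the conditioning structure is the same: the image representation participates in every decoding step.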

Multimodal generation can also be used in other applications, such as synthesizing realistic images from textual descriptions or generating audio descriptions of video content. By conditioning on multiple modalities, these models can produce output that is better grounded in the available information, making them useful for a wide range of applications.

Papers

Showing 81–90 of 98 papers

Title | Status | Hype
From Principles to Applications: A Comprehensive Survey of Discrete Tokenizers in Generation, Comprehension, Recommendation, and Information Retrieval | - | 0
Stance-Driven Multimodal Controlled Statement Generation: New Dataset and Task | - | 0
CoDi-2: In-Context, Interleaved, and Interactive Any-to-Any Generation | - | 0
Generating Multimodal Driving Scenes via Next-Scene Prediction | - | 0
Characterizing and Efficiently Accelerating Multimodal Generation Model Inference | - | 0
CLIP Model for Images to Textual Prompts Based on Top-k Neighbors | - | 0
Have we unified image generation and understanding yet? An empirical study of GPT-4o's image generation ability | - | 0
I Want This Product but Different: Multimodal Retrieval with Synthetic Query Expansion | - | 0
Latent Dirichlet Allocation in Generative Adversarial Networks | - | 0
Learning Multimodal Latent Space with EBM Prior and MCMC Inference | - | 0
Page 9 of 10

No leaderboard results yet.