SOTAVerified

multimodal generation

Multimodal generation is the task of producing outputs that span multiple modalities, such as text, images, and audio. It is typically performed with deep learning models trained on multimodal data, so that the generated output is informed by more than one type of input.

For example, a multimodal generation model can be trained to caption images by combining visual and textual information: it learns to identify objects in an image and describe them in natural language, while also accounting for context and the spatial relationships between those objects.
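As a toy illustration of the captioning pipeline described above (a vision stage feeding a language stage), the sketch below uses hard-coded stand-ins for both: the "detections" and the caption template are placeholders, not a real vision or language model.

```python
# Toy caption generator: hard-coded "detections" stand in for a vision
# model, and a string template stands in for a learned language decoder.

def detect_objects(image_id: str) -> list[dict]:
    """Pretend detector: returns labels with x-positions (placeholder data)."""
    fake_detections = {
        "beach.jpg": [
            {"label": "dog", "x": 0.2},
            {"label": "ball", "x": 0.6},
        ],
    }
    return fake_detections.get(image_id, [])

def describe_relation(a: dict, b: dict) -> str:
    """Use spatial context: relative x-position picks the preposition."""
    return "left of" if a["x"] < b["x"] else "right of"

def generate_caption(image_id: str) -> str:
    """Combine object identity with a spatial relationship, as in the text."""
    objs = detect_objects(image_id)
    if len(objs) < 2:
        return "an image"
    a, b = objs[0], objs[1]
    return f"a {a['label']} {describe_relation(a, b)} a {b['label']}"

print(generate_caption("beach.jpg"))  # a dog left of a ball
```

A trained model replaces both hand-written stages with learned ones, but the flow is the same: visual evidence plus relational context conditions the generated text.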

Multimodal generation is also used in other applications, such as generating realistic images from textual descriptions or producing audio descriptions of video content. By combining modalities in this way, multimodal generation models can produce more accurate and comprehensive output, making them useful for a wide range of applications.
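The common ingredient across these applications is mapping each modality into a shared embedding space and fusing the results. The following is a minimal NumPy sketch of that idea; all weights are random placeholders standing in for trained encoder networks, and the shapes are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 8  # size of the shared embedding space (arbitrary choice)

def encode_image(pixels: np.ndarray) -> np.ndarray:
    """Flatten a tiny 'image' and project it into the shared space."""
    w = rng.normal(size=(pixels.size, EMBED_DIM))  # placeholder weights
    return pixels.ravel() @ w

def encode_text(token_ids: list[int], vocab_size: int = 16) -> np.ndarray:
    """Sum per-token embeddings drawn from a placeholder embedding table."""
    w = rng.normal(size=(vocab_size, EMBED_DIM))
    return sum(w[t] for t in token_ids)

def fuse(image_vec: np.ndarray, text_vec: np.ndarray) -> np.ndarray:
    """Late fusion: concatenate modality embeddings, project, normalize."""
    joint = np.concatenate([image_vec, text_vec])
    w = rng.normal(size=(joint.size, EMBED_DIM))
    out = joint @ w
    return out / np.linalg.norm(out)

image = rng.random((4, 4))   # stand-in for pixel data
tokens = [3, 7, 1]           # stand-in for tokenized text
joint_embedding = fuse(encode_image(image), encode_text(tokens))
print(joint_embedding.shape)  # (8,)
```

In a real system the random projections would be learned jointly end to end, and the fused embedding would condition a decoder for the target modality (pixels, tokens, or audio samples).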

Papers

Showing 1–50 of 98 papers

Title | Status | Hype
Emerging Properties in Unified Multimodal Pretraining | Code | 9
OmniGen2: Exploration to Advanced Multimodal Generation | Code | 7
Retrieval-Augmented Generation for AI-Generated Content: A Survey | Code | 5
4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities | Code | 5
LLMs Meet Multimodal Generation and Editing: A Survey | Code | 4
Unified Reward Model for Multimodal Understanding and Generation | Code | 4
Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey | Code | 4
ANOLE: An Open, Autoregressive, Native Large Multimodal Models for Interleaved Image-Text Generation | Code | 4
Vision-to-Music Generation: A Survey | Code | 3
Ask in Any Modality: A Comprehensive Survey on Multimodal Retrieval-Augmented Generation | Code | 3
Grounding Language Models to Images for Multimodal Inputs and Outputs | Code | 2
GANs N' Roses: Stable, Controllable, Diverse Image to Image Translation (works for videos too!) | Code | 2
OmniMamba: Efficient and Unified Multimodal Understanding and Generation via State Space Models | Code | 2
CrystalFormer-RL: Reinforcement Fine-Tuning for Materials Design | Code | 2
MotionCraft: Crafting Whole-Body Motion with Plug-and-Play Multimodal Controls | Code | 2
Text2Poster: Laying out Stylized Texts on Retrieved Images | Code | 2
On Evaluating Adversarial Robustness of Large Vision-Language Models | Code | 2
DreamLLM: Synergistic Multimodal Comprehension and Creation | Code | 2
Muddit: Liberating Generation Beyond Text-to-Image with a Unified Discrete Diffusion Model | Code | 2
MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens | Code | 2
Making LLaMA SEE and Draw with SEED Tokenizer | Code | 2
Harmonizing Visual Text Comprehension and Generation | Code | 2
WeGen: A Unified Model for Interactive Multimodal Generation as We Chat | Code | 1
An Empirical Study of GPT-4o Image Generation Capabilities | Code | 1
DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models | Code | 1
Efficient Diffusion Models: A Comprehensive Survey from Principles to Practices | Code | 1
Finite Scalar Quantization: VQ-VAE Made Simple | Code | 1
FusDreamer: Label-efficient Remote Sensing World Model for Multimodal Data Classification | Code | 1
OpenING: A Comprehensive Benchmark for Judging Open-ended Interleaved Image-Text Generation | Code | 1
Learning to Generate Semantic Layouts for Higher Text-Image Correspondence in Text-to-Image Synthesis | Code | 1
EasyGen: Easing Multimodal Generation with BiDiffuser and LLMs | Code | 1
MM2Latent: Text-to-facial image generation and editing in GANs with multimodal assistance | Code | 1
MRAMG-Bench: A Comprehensive Benchmark for Advancing Multimodal Retrieval-Augmented Multimodal Generation | Code | 1
Multi-modal Retrieval Augmented Multi-modal Generation: A Benchmark, Evaluate Metrics and Strong Baselines | Code | 1
OmniGenBench: A Benchmark for Omnipotent Multimodal Generation across 50+ Tasks | Code | 1
PathAsst: A Generative Foundation AI Assistant Towards Artificial General Intelligence of Pathology | Code | 1
PMG: Personalized Multimodal Generation with Large Language Models | Code | 1
UniCMs: A Unified Consistency Model For Efficient Multimodal Generation and Understanding | Code | 1
UniFashion: A Unified Vision-Language Model for Multimodal Fashion Retrieval and Generation | Code | 1
Unified Discrete Diffusion for Simultaneous Vision-Language Generation | Code | 1
Unite and Conquer: Plug & Play Multi-Modal Synthesis using Diffusion Models | Code | 1
PixelBytes: Catching Unified Representation for Multimodal Generation | Code | 0
Consistent Multimodal Generation via A Unified GAN Framework | Code | 0
Empathic Grounding: Explorations using Multimodal Interaction and Large Language Models with Conversational Agents | Code | 0
Multimodal Latent Language Modeling with Next-Token Diffusion | Code | 0
Pre-trained Encoder Inference: Revealing Upstream Encoders In Downstream Machine Learning Services | Code | 0
Multimedia Generative Script Learning for Task Planning | Code | 0
Unconditional Image-Text Pair Generation with Multimodal Cross Quantizer | Code | 0
Multimodal Generation of Novel Action Appearances for Synthetic-to-Real Recognition of Activities of Daily Living | Code | 0
Accountable Textual-Visual Chat Learns to Reject Human Instructions in Image Re-creation | Code | 0
Page 1 of 2

No leaderboard results yet.