SOTAVerified

Multimodal Generation

Multimodal generation refers to the process of generating outputs that incorporate multiple modalities, such as images, text, and sound. This is typically done with deep learning models trained on data spanning several modalities, which allows the model to produce output informed by more than one type of data.

For example, a multimodal generation model could be trained to generate captions for images that incorporate both text and visual information. The model could learn to identify objects in the image and generate descriptions of them in natural language, while also taking into account contextual information and the relationships between the objects in the image.

Multimodal generation is also used in other applications, such as generating realistic images from textual descriptions or generating audio descriptions of video content. By combining multiple modalities in this way, multimodal generation models can produce more accurate and comprehensive output, making them useful for a wide range of applications.
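The fusion idea described above — encoding each modality separately and combining the embeddings into one joint representation — can be sketched in a few lines. This is a minimal illustration, not a real model: random weights stand in for trained encoders, and the function names (`encode_image`, `encode_text`, `fuse`) are hypothetical, not from any library.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_image(pixels: np.ndarray) -> np.ndarray:
    """Stand-in image encoder: mean-pool pixels, then a random linear projection."""
    w = rng.normal(size=(pixels.shape[-1], 16))
    return pixels.mean(axis=(0, 1)) @ w

def encode_text(token_ids: list, vocab: int = 100) -> np.ndarray:
    """Stand-in text encoder: average of random token embeddings."""
    table = rng.normal(size=(vocab, 16))
    return table[token_ids].mean(axis=0)

def fuse(img_emb: np.ndarray, txt_emb: np.ndarray) -> np.ndarray:
    """Late fusion: concatenate modality embeddings, project to a shared space."""
    joint = np.concatenate([img_emb, txt_emb])
    w = rng.normal(size=(joint.shape[0], 16))
    return joint @ w

image = rng.random((8, 8, 3))   # toy 8x8 RGB image
tokens = [1, 5, 42]             # toy token ids
joint = fuse(encode_image(image), encode_text(tokens))
print(joint.shape)  # (16,)
```

In a trained captioning model the same joint vector would condition a text decoder; the point here is only that both modalities contribute to a single representation before generation.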

Papers

Showing 51–98 of 98 papers

Title | Status | Hype
Pre-trained Encoder Inference: Revealing Upstream Encoders In Downstream Machine Learning Services | Code | 0
MotionCraft: Crafting Whole-Body Motion with Plug-and-Play Multimodal Controls | Code | 2
Diffusion Models For Multi-Modal Generative Modeling | – | 0
Harmonizing Visual Text Comprehension and Generation | Code | 2
ANOLE: An Open, Autoregressive, Native Large Multimodal Models for Interleaved Image-Text Generation | Code | 4
Empathic Grounding: Explorations using Multimodal Interaction and Large Language Models with Conversational Agents | Code | 0
4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities | Code | 5
LLMs Meet Multimodal Generation and Editing: A Survey | Code | 4
The Evolution of Multimodal Model Architectures | – | 0
C3LLM: Conditional Multimodal Content Generation Using Large Language Models | – | 0
Multimodal Pretraining and Generation for Recommendation: A Tutorial | – | 0
PCQA: A Strong Baseline for AIGC Quality Assessment Based on Prompt Condition | – | 0
Food Development through Co-creation with AI: bread with a "taste of love" | – | 0
PMG : Personalized Multimodal Generation with Large Language Models | Code | 1
3D-VLA: A 3D Vision-Language-Action Generative World Model | – | 0
Retrieval-Augmented Generation for AI-Generated Content: A Survey | Code | 5
CLIP Model for Images to Textual Prompts Based on Top-k Neighbors | – | 0
CoDi-2: In-Context, Interleaved, and Interactive Any-to-Any Generation | – | 0
C3Net: Compound Conditioned ControlNet for Multimodal Content Generation | – | 0
Towards Vision Enhancing LLMs: Empowering Multimodal Knowledge Storage and Sharing in LLMs | – | 0
EasyGen: Easing Multimodal Generation with BiDiffuser and LLMs | Code | 1
MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens | Code | 2
Making LLaMA SEE and Draw with SEED Tokenizer | Code | 2
LiveChat: Video Comment Generation from Audio-Visual Multimodal Contexts | – | 0
Finite Scalar Quantization: VQ-VAE Made Simple | Code | 1
DreamLLM: Synergistic Multimodal Comprehension and Creation | Code | 2
Learning to Generate Semantic Layouts for Higher Text-Image Correspondence in Text-to-Image Synthesis | Code | 1
Consistent Multimodal Generation via A Unified GAN Framework | Code | 0
SPAE: Semantic Pyramid AutoEncoder for Multimodal Generation with Frozen LLMs | – | 0
Multi-modal Latent Diffusion | Code | 0
On Evaluating Adversarial Robustness of Large Vision-Language Models | Code | 2
PathAsst: A Generative Foundation AI Assistant Towards Artificial General Intelligence of Pathology | Code | 1
DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models | Code | 1
DiffuSIA: A Spiral Interaction Architecture for Encoder-Decoder Text Diffusion | – | 0
Accountable Textual-Visual Chat Learns to Reject Human Instructions in Image Re-creation | Code | 0
Grounding Language Models to Images for Multimodal Inputs and Outputs | Code | 2
Text2Poster: Laying out Stylized Texts on Retrieved Images | Code | 2
Unite and Conquer: Plug & Play Multi-Modal Synthesis using Diffusion Models | Code | 1
Unified Discrete Diffusion for Simultaneous Vision-Language Generation | Code | 1
Multimedia Generative Script Learning for Task Planning | Code | 0
Multimodal Generation of Novel Action Appearances for Synthetic-to-Real Recognition of Activities of Daily Living | Code | 0
Unconditional Image-Text Pair Generation with Multimodal Cross Quantizer | Code | 0
Enabling Multimodal Generation on CLIP via Vision-Language Knowledge Distillation | – | 0
GANs N' Roses: Stable, Controllable, Diverse Image to Image Translation (works for videos too!) | Code | 2
I Want This Product but Different : Multimodal Retrieval with Synthetic Query Expansion | – | 0
Continual and Multi-Task Architecture Search | Code | 0
Latent Dirichlet Allocation in Generative Adversarial Networks | – | 0
Page 2 of 2

No leaderboard results yet.