Semantic-aware Data Augmentation for Text-to-image Synthesis

2023-12-13

Zhaorui Tan, Xi Yang, Kaizhu Huang

Abstract

Data augmentation has recently been leveraged as an effective regularizer in various vision-language deep neural networks. However, in text-to-image synthesis (T2Isyn), current augmentation approaches still suffer from semantic mismatch between augmented paired data. Even worse, semantic collapse may occur when generated images are less semantically constrained. In this paper, we develop a novel Semantic-aware Data Augmentation (SADA) framework dedicated to T2Isyn. In particular, we propose to augment texts in the semantic space via an Implicit Textual Semantic Preserving Augmentation (ITA), in conjunction with a specifically designed Image Semantic Regularization Loss (L_r) serving as Generated Image Semantic Conservation, to cope with both semantic mismatch and semantic collapse. As one major contribution, we theoretically show that ITA can certify better text-image consistency, while L_r, by regularizing the semantics of generated images, avoids semantic collapse and enhances image quality. Extensive experiments validate that SADA significantly enhances text-image consistency and improves image quality in T2Isyn models across various backbones. Notably, incorporating SADA during the tuning process of Stable Diffusion models also yields performance improvements.
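To make the core idea of semantic-space text augmentation concrete, the sketch below perturbs a text embedding with small Gaussian noise and checks that the augmented embedding stays semantically close to the original (via cosine similarity). This is an illustrative assumption, not the actual ITA transform from the paper, whose perturbation is derived theoretically to certify text-image consistency; the function names and the noise scale `sigma` here are hypothetical.

```python
import numpy as np

def semantic_augment(text_emb, sigma=0.01, seed=None):
    """Augment a text embedding in the semantic space.

    Illustrative sketch only: adds small isotropic Gaussian noise to
    the embedding, standing in for a semantics-preserving perturbation.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=text_emb.shape)
    return text_emb + noise

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

if __name__ == "__main__":
    # A unit-norm stand-in for a 512-dimensional text embedding.
    emb = np.ones(512) / np.sqrt(512)
    aug = semantic_augment(emb, sigma=0.01, seed=0)

    # With a small noise scale, the augmented embedding remains close
    # to the original, which is the semantics-preserving intent.
    print(cosine_similarity(emb, aug))
```

The key design point this illustrates is that augmenting in the embedding space, rather than editing raw text (e.g., word swaps), avoids producing captions whose meaning drifts away from the paired image.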
