
VITAL: A Visual Interpretation on Text with Adversarial Learning for Image Labeling

2019-07-26

Tao Hu, Chengjiang Long, Leheng Zhang, Chunxia Xiao


Abstract

In this paper, we propose a novel way to interpret text by extracting visual feature representations from multiple high-resolution, photo-realistic synthetic images generated by a text-to-image Generative Adversarial Network (GAN), with the goal of improving image labeling performance. First, we design a stacked Generative Multi-Adversarial Network, StackGMAN++, a modified version of the current state-of-the-art text-to-image GAN, StackGAN++, to generate multiple synthetic images conditioned on a text with various prior noises. We then extract deep visual features from the generated synthetic images to explore the underlying visual concepts of the text. Finally, we combine the image-level visual features, the text-level features, and the visual features derived from the synthetic images to predict labels for images. Experiments on two benchmark datasets clearly demonstrate the efficacy of the proposed approach.
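The abstract describes a three-branch pipeline: image-level features, text-level features, and features pooled from multiple GAN-synthesized images are fused before label prediction. The sketch below illustrates that fusion step only; the function names, the averaging of synthetic-image features, and the per-label sigmoid classifier are assumptions for illustration, not the paper's exact architecture.

```python
import math

def fuse_features(img_feat, text_feat, synth_feats):
    """Fuse the three feature branches by concatenation.

    img_feat    -- feature vector from the real image
    text_feat   -- feature vector from the text
    synth_feats -- list of feature vectors, one per synthetic image
                   (averaging them is an assumed pooling choice)
    """
    n = len(synth_feats)
    avg_synth = [sum(col) / n for col in zip(*synth_feats)]
    return img_feat + text_feat + avg_synth

def predict_labels(fused, weights, biases, threshold=0.5):
    """Multi-label prediction: one linear score + sigmoid per label."""
    labels = []
    for w, b in zip(weights, biases):
        score = sum(x * wi for x, wi in zip(fused, w)) + b
        prob = 1.0 / (1.0 + math.exp(-score))  # sigmoid
        labels.append(prob >= threshold)
    return labels
```

In practice each branch would come from a deep encoder (e.g. a CNN for images, a text encoder for captions) and the classifier would be learned end-to-end; the sketch only shows how the three branches combine into a single prediction input.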
