AffectGAN: Affect-Based Generative Art Driven by Semantics

2021-09-30

Theodoros Galanos, Antonios Liapis, Georgios N. Yannakakis


Abstract

This paper introduces a novel method for generating artistic images that express particular affective states. Leveraging state-of-the-art deep learning methods for visual generation (through generative adversarial networks), semantic models from OpenAI, and the annotated dataset of the visual art encyclopedia WikiArt, our AffectGAN model is able to generate images based on specific or broad semantic prompts and intended affective outcomes. A small dataset of 32 images generated by AffectGAN is annotated by 50 participants in terms of the particular emotion they elicit, as well as their quality and novelty. Results show that for most instances the intended emotion used as a prompt for image generation matches the participants' responses. This small-scale study brings forth a new vision towards blending affective computing with computational creativity, enabling generative systems with intentionality in terms of the emotions they wish their output to elicit.
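The pipeline the abstract describes — generating an image so that a semantic model scores it as matching an affective prompt — can be sketched as latent optimization against a similarity objective. The following toy example is an assumed illustration only, not the authors' code: the "generator" and "semantic encoder" are stand-in random linear maps (in AffectGAN these would be a trained GAN generator and an OpenAI semantic model such as CLIP), and gradient ascent pushes the latent so the generated output's embedding moves toward the prompt embedding.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((64, 16))   # stand-in generator: latent (16) -> "image" (64)
E = rng.standard_normal((8, 64))    # stand-in encoder: "image" (64) -> embedding (8)
target = rng.standard_normal(8)     # stand-in embedding of an affect prompt, e.g. "a joyful painting"
target /= np.linalg.norm(target)

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def score(z):
    """How well the generated 'image' matches the prompt embedding."""
    return cosine(E @ (G @ z), target)

z = rng.standard_normal(16)
before = score(z)
lr, eps = 0.1, 1e-4
for _ in range(200):
    # Numerical gradient of the similarity w.r.t. the latent; a real
    # system would backpropagate through the generator and encoder.
    grad = np.array([
        (score(z + eps * np.eye(16)[i]) - score(z - eps * np.eye(16)[i])) / (2 * eps)
        for i in range(16)
    ])
    z += lr * grad
after = score(z)
print(before, after)  # similarity to the prompt embedding should increase
```

In a full system the same loop operates on the GAN's latent space with a differentiable encoder, so the optimized latent yields an image the semantic model judges close to the intended affective prompt.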
