
Analysing Gender Bias in Text-to-Image Models using Object Detection

2023-07-16 · Code Available

Harvey Mannering


Abstract

This work presents a novel strategy to measure bias in text-to-image models. Using paired prompts that specify gender and vaguely reference an object (e.g. "a man/woman holding an item") we can examine whether certain objects are associated with a certain gender. In analysing results from Stable Diffusion, we observed that male prompts generated objects such as ties, knives, trucks, baseball bats, and bicycles more frequently. On the other hand, female prompts were more likely to generate objects such as handbags, umbrellas, bowls, bottles, and cups. We hope that the method outlined here will be a useful tool for examining bias in text-to-image models.
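The comparison step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes object detections (e.g. COCO class labels from a pretrained detector) have already been collected for images generated from each paired prompt, and computes the per-class difference in detection frequency between male- and female-specified prompts.

```python
from collections import Counter

def object_bias(male_detections, female_detections):
    """Compare how often each object class appears in images generated
    from male- vs. female-specified prompts.

    Each argument is a list of per-image detection lists, e.g.
    [["person", "tie"], ["truck"], ...]. Returns a dict mapping each
    object class to (male frequency - female frequency), where
    frequency is the fraction of images containing that class.
    """
    def freqs(detections):
        counts = Counter()
        for labels in detections:
            for label in set(labels):  # count each class at most once per image
                counts[label] += 1
        n = len(detections)
        return {label: c / n for label, c in counts.items()}

    m, f = freqs(male_detections), freqs(female_detections)
    classes = set(m) | set(f)
    return {c: m.get(c, 0.0) - f.get(c, 0.0) for c in classes}

# Toy example with hypothetical detections (not real results)
male = [["person", "tie"], ["person", "truck"], ["person", "tie"]]
female = [["person", "handbag"], ["person", "cup"], ["person", "handbag"]]
bias = object_bias(male, female)
# Positive values lean male, negative lean female
print(sorted(bias.items(), key=lambda kv: kv[1], reverse=True))
```

Counting each class at most once per image keeps the metric a per-image occurrence rate, so a single image containing many instances of one object does not dominate the comparison.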
