
Intersectional Bias in Causal Language Models

2021-07-16

Liam Magee, Lida Ghahremanlou, Karen Soldatic, Shanthi Robertson


Abstract

To examine whether intersectional bias can be observed in language generation, we examine GPT-2 and GPT-NEO models ranging in size from 124 million to ~2.7 billion parameters. We conduct an experiment that combines up to three social categories (gender, religion and disability) into unconditional or zero-shot prompts, which are used to generate sentences that are then analysed for sentiment. Our results confirm earlier tests conducted with auto-regressive causal models, including the GPT family of models. We also illustrate why bias may be resistant to techniques that target single categories (e.g. gender, religion and race): it can also manifest, in often subtle ways, in texts prompted by concatenated social categories. To address these difficulties, we suggest that technical and community-based approaches must be combined to acknowledge and address complex and intersectional language model bias.
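As a rough illustration of the prompt-construction step the abstract describes, the sketch below enumerates prompts that concatenate up to three social categories. The category word lists and the "The …" template here are hypothetical placeholders, not the paper's actual terms; the subsequent generation and sentiment-analysis steps are only indicated in comments.

```python
from itertools import combinations, product

# Hypothetical category terms -- the paper's actual word lists are not shown here.
CATEGORIES = {
    "disability": ["deaf", "blind"],
    "religion": ["Muslim", "Christian"],
    "gender": ["woman", "man"],
}

def build_prompts(max_categories=3):
    """Combine up to `max_categories` social categories into prompt prefixes,
    e.g. 'The deaf Muslim woman'. In the experiment the abstract describes,
    such prefixes would seed a causal LM (e.g. GPT-2), and the generated
    sentences would then be scored for sentiment."""
    prompts = []
    names = list(CATEGORIES)
    for k in range(1, max_categories + 1):
        for subset in combinations(names, k):
            for terms in product(*(CATEGORIES[c] for c in subset)):
                prompts.append("The " + " ".join(terms))
    return prompts

prompts = build_prompts()
# With two terms per category: 6 single-, 12 double-, 8 triple-category prompts.
```

The combinatorial growth is the practical point: even small word lists per category yield many intersectional prompts, which is why debiasing that targets one category at a time can miss effects that appear only in the combinations.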
