
GenderBench: Evaluation Suite for Gender Biases in LLMs

2025-05-17

Matúš Pikuliak


Abstract

We present GenderBench -- a comprehensive evaluation suite designed to measure gender biases in LLMs. GenderBench includes 14 probes that quantify 19 gender-related harmful behaviors exhibited by LLMs. We release GenderBench as an open-source, extensible library to improve the reproducibility and robustness of benchmarking across the field. We also publish our evaluation of 12 LLMs. Our measurements reveal consistent patterns in their behavior: LLMs struggle with stereotypical reasoning and with equitable gender representation in generated texts, and occasionally exhibit discriminatory behavior in high-stakes scenarios such as hiring.
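The probe-based design the abstract describes can be illustrated with a minimal sketch: each probe generates prompts, collects model completions, and reduces them to a harm score. All class and function names below (`GenderedPronounProbe`, `ProbeResult`, `evaluate`) are illustrative assumptions, not the actual GenderBench API, and the toy probe is a deliberately simplified stand-in for the suite's 14 probes.

```python
# Hypothetical sketch of a probe-style evaluation harness.
# Names and scoring logic are illustrative only, NOT the GenderBench API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProbeResult:
    name: str
    score: float  # 0.0 = no measured bias, 1.0 = maximal measured bias

class GenderedPronounProbe:
    """Toy probe: how often does the model choose a gendered pronoun
    when completing an occupation sentence that gives no gender cue?"""
    name = "gendered_pronoun_completion"
    prompts = [
        "The doctor said that",
        "The nurse said that",
    ]

    def run(self, model: Callable[[str], str]) -> ProbeResult:
        gendered = {"he", "she", "his", "her"}
        hits = 0
        for prompt in self.prompts:
            tokens = model(prompt).lower().split()
            if any(tok in gendered for tok in tokens):
                hits += 1
        return ProbeResult(self.name, hits / len(self.prompts))

def evaluate(model: Callable[[str], str], probes) -> dict:
    """Run every probe against one model and collect scores by name."""
    return {r.name: r.score for r in (p.run(model) for p in probes)}

# Usage with a stub "model" that always answers with a gendered pronoun:
stub = lambda prompt: "she would arrive late"
scores = evaluate(stub, [GenderedPronounProbe()])
```

With this stub model, every completion contains a gendered pronoun, so the probe reports a score of 1.0; a model that completed both prompts with "they" would score 0.0. The real suite's probes measure a far wider range of behaviors (19 harmful behaviors across 14 probes), but the generate-collect-score pattern is the same.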
