
Language Models That Walk the Talk: A Framework for Formal Fairness Certificates

2025-05-19

Danqing Chen, Tobias Ladner, Ahmed Rayen Mhadhbi, Matthias Althoff


Abstract

As large language models become integral to high-stakes applications, ensuring their robustness and fairness is critical. Despite their success, these models remain vulnerable to adversarial attacks: small perturbations such as synonym substitutions can alter model predictions, posing risks in fairness-critical areas such as gender-bias mitigation and safety-critical areas such as toxicity detection. While formal verification has been explored for neural networks, its application to large language models remains limited. This work presents a holistic verification framework for certifying the robustness of transformer-based language models, with a focus on ensuring gender fairness and consistent outputs across different gender-related terms. Furthermore, we extend this methodology to toxicity detection, offering formal guarantees that adversarially manipulated toxic inputs are consistently detected and appropriately censored, thereby ensuring the reliability of moderation systems. By formalizing robustness within the embedding space, this work strengthens the reliability of language models in ethical AI deployment and content moderation.
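
The abstract does not spell out the verifier itself, so the following is only a minimal sketch of the core idea it describes: formalizing robustness within the embedding space by checking that a classifier's prediction is invariant over a perturbation set that encloses the embeddings of interchangeable terms (e.g., gender-related words). The sketch uses interval bound propagation through a toy linear classifier as a stand-in for the paper's method; every embedding, weight, and function name here is an illustrative assumption, not the authors' implementation.

```python
# Hypothetical sketch: certify that a toy classifier's prediction is
# invariant when one token's embedding is replaced by ANY point in the
# axis-aligned box enclosing a set of interchangeable terms.
import numpy as np

rng = np.random.default_rng(0)

# Toy embedding table and substitution set (assumed, not from the paper).
emb = {w: rng.normal(size=8) for w in ["he", "she", "they", "runs", "fast"]}
substitutions = ["he", "she", "they"]  # terms the output must be invariant to

# Toy linear classifier over the mean-pooled sentence embedding.
W = rng.normal(size=(2, 8))  # 2 classes
b = rng.normal(size=2)

def interval_logits(lo, hi):
    """Propagate a box [lo, hi] exactly through logits = W x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    logit_lo = W_pos @ lo + W_neg @ hi + b
    logit_hi = W_pos @ hi + W_neg @ lo + b
    return logit_lo, logit_hi

def certify(sentence, slot):
    """Check the prediction is constant for every substitution at `slot`."""
    # Box enclosing all substitute embeddings at the perturbed position.
    subs = np.stack([emb[w] for w in substitutions])
    fixed = [emb[w] for i, w in enumerate(sentence) if i != slot]
    n = len(sentence)
    # Mean pooling: fixed tokens contribute a constant, the slot an interval.
    const = sum(fixed) / n
    lo = const + subs.min(axis=0) / n
    hi = const + subs.max(axis=0) / n
    logit_lo, logit_hi = interval_logits(lo, hi)
    pred = int(np.argmax((logit_lo + logit_hi) / 2))
    # Certified iff the predicted class's lower bound beats every other
    # class's upper bound over the whole perturbation box.
    return all(logit_lo[pred] > logit_hi[c] for c in range(len(b)) if c != pred)

print(certify(["he", "runs", "fast"], slot=0))  # True => provably invariant
```

A returned True is a formal guarantee over the whole box, not just the listed substitutions; the same invariance check, with a toxicity classifier in place of the toy model, mirrors the paper's extension to certified toxicity detection.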
