
Fairness of Automatic Speech Recognition in Cleft Lip and Palate Speech

2025-05-06

Susmita Bhattacharjee, Jagabandhu Mishra, H. S. Shekhawat, S. R. Mahadeva Prasanna


Abstract

Speech produced by individuals with cleft lip and palate (CLP) is often highly nasalized and breathy due to structural anomalies, causing shifts in formant structure that affect automatic speech recognition (ASR) performance and fairness. This study hypothesizes that publicly available ASR systems exhibit reduced fairness for CLP speech and confirms this through experiments. Despite formant disruptions, mild and moderate CLP speech retains some spectro-temporal alignment with normal speech, motivating augmentation strategies to enhance fairness. The study systematically explores augmenting CLP speech with normal speech across severity levels and evaluates the impact on ASR fairness. Three ASR models (GMM-HMM, Whisper, and XLS-R) were tested on the AIISH and NMCPC datasets. Results indicate that training with normal speech and testing on mixed data reduces the word error rate (WER). Notably, WER decreased from 22.64% to 18.76% (GMM-HMM, AIISH) and from 28.45% to 18.89% (Whisper, NMCPC). The superior performance of GMM-HMM on AIISH may be due to its suitability for Kannada children's speech, which remains a challenge for foundation models such as XLS-R and Whisper. To assess fairness, a fairness score was introduced, revealing improvements of 17.89% (AIISH) and 47.50% (NMCPC) with augmentation.
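The abstract reports results as word error rates. As a reminder of how that metric is computed (this is the standard WER definition, not code from the paper), a minimal sketch using word-level Levenshtein distance:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # all deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # all insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution in a four-word reference gives WER = 0.25
print(wer("the cat sat down", "the bat sat down"))  # → 0.25
```

The paper's fairness score is defined in the full text; it is evaluated on top of per-group error rates such as the WER above.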
