Enhancing Whisper's Accuracy and Speed for Indian Languages through Prompt-Tuning and Tokenization

2024-12-27

Kumud Tripathi, Raj Gothi, Pankaj Wasnik

Abstract

Automatic speech recognition (ASR) has recently seen significant advances driven by large foundation models such as Whisper. However, these models often perform poorly in low-resource languages, including many Indian languages. This paper explores two novel approaches to improving Whisper's multilingual speech recognition performance for Indian languages. First, we propose prompt-tuning with language-family information, which improves Whisper's accuracy on linguistically similar languages. Second, we introduce a novel tokenizer that reduces the number of generated tokens, thereby accelerating Whisper's inference. Extensive experiments demonstrate that the tokenizer significantly reduces inference time, while prompt-tuning improves accuracy across Whisper model sizes, including Small, Medium, and Large. Together, these techniques strike a balance between word error rate (WER) and inference speed.
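
To make the prompt-tuning idea concrete, here is a minimal sketch that conditions Whisper on a language-family hint at decoding time. It assumes the openai-whisper package; the LANGUAGE_FAMILY mapping, the audio file name, and the prompt string are illustrative, and the paper itself tunes learned prompt embeddings rather than passing a plain-text prompt.

```python
# Sketch: conditioning Whisper on language-family information via the
# decoder prompt. Assumes the openai-whisper package is installed; the
# prompt format below is illustrative, not the paper's exact method
# (the paper tunes learned prompt embeddings, not a text prompt).
import whisper

model = whisper.load_model("small")

# Hypothetical language -> language-family mapping for a few
# Indian languages.
LANGUAGE_FAMILY = {
    "hi": "Indo-Aryan",  # Hindi
    "bn": "Indo-Aryan",  # Bengali
    "ta": "Dravidian",   # Tamil
    "te": "Dravidian",   # Telugu
}

def transcribe_with_family_prompt(audio_path: str, lang: str) -> str:
    """Transcribe audio, prepending the language family as a prompt."""
    family = LANGUAGE_FAMILY[lang]
    result = model.transcribe(
        audio_path,
        language=lang,
        initial_prompt=f"Language family: {family}.",
    )
    return result["text"]

# "sample_hi.wav" is a placeholder path for a Hindi audio clip.
print(transcribe_with_family_prompt("sample_hi.wav", "hi"))
```

The tokenizer claim can likewise be sanity-checked: Whisper's stock byte-level BPE splits Indic-script text into many tokens, so each sentence costs many autoregressive decoding steps. A minimal sketch, assuming the Hugging Face transformers library (exact token counts will vary):

```python
# Sketch: counting output tokens for Indic-script vs. Latin-script text
# with Whisper's stock tokenizer. Indic text typically expands into far
# more byte-level BPE tokens, which is the inference-speed bottleneck
# the paper's retrained tokenizer targets.
from transformers import WhisperTokenizer

tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small")

samples = {
    "Hindi":   "मौसम आज बहुत अच्छा है",           # "The weather is very nice today"
    "English": "The weather is very nice today",
}

for name, text in samples.items():
    n_tokens = len(tokenizer.encode(text, add_special_tokens=False))
    print(f"{name:8s} {n_tokens:3d} tokens: {text}")
```

Fewer tokens per sentence means fewer decoder forward passes, which is where the reported inference-time reduction comes from.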
