SOTAVerified

Unimodal Aggregation for CTC-based Speech Recognition

2023-09-15 · Code Available

Ying Fang, Xiaofei Li


Abstract

This paper addresses non-autoregressive automatic speech recognition. A unimodal aggregation (UMA) is proposed to segment and integrate the feature frames that belong to the same text token, and thus to learn better feature representations for text tokens. The frame-wise features and weights are both derived from an encoder. The feature frames are then integrated under unimodal weights and further processed by a decoder. Connectionist temporal classification (CTC) loss is applied for training. Compared to regular CTC, the proposed method learns better feature representations and shortens the sequence length, resulting in lower recognition error and computational complexity. Experiments on three Mandarin datasets show that UMA achieves superior or comparable performance to other advanced non-autoregressive methods, such as self-conditioned CTC. Moreover, by integrating self-conditioned CTC into the proposed framework, performance improves further.
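The aggregation step described in the abstract can be sketched as follows. This is a hypothetical NumPy illustration based only on the abstract's description, not the authors' released code: the exact boundary rule and how the scalar weights are learned are assumptions here. Frames are segmented at "valleys" of the weight curve (so each segment carries one unimodal rise-and-fall of weights) and each segment is collapsed into a single token-level feature by a weighted average:

```python
import numpy as np

def unimodal_aggregation(frames, weights, eps=1e-8):
    """Illustrative unimodal aggregation (UMA) sketch.

    frames:  (T, D) frame-wise encoder features
    weights: (T,)   scalar aggregation weights in (0, 1)

    A segment boundary is placed wherever the weight curve forms a
    valley (non-increasing into t, strictly increasing after t), so
    the weights inside each segment are unimodal. Frames within a
    segment are combined by a weighted average, shortening the
    sequence from T frames to one vector per segment.
    """
    T = len(weights)
    cuts = [0]
    for t in range(1, T - 1):
        if weights[t - 1] >= weights[t] < weights[t + 1]:
            cuts.append(t)
    cuts.append(T)

    out = []
    for b, e in zip(cuts[:-1], cuts[1:]):
        w = weights[b:e]
        out.append((w[:, None] * frames[b:e]).sum(axis=0) / (w.sum() + eps))
    return np.stack(out)
```

For example, a weight sequence with two peaks such as `[0.1, 0.5, 0.9, 0.5, 0.1, 0.6, 0.9, 0.4]` is split at the valley after the first peak, collapsing eight frames into two token-level vectors; a decoder and CTC loss would then operate on this shortened sequence.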

Benchmark Results

| Dataset   | Model | Metric                | Claimed | Verified | Status     |
|-----------|-------|-----------------------|---------|----------|------------|
| AISHELL-1 | UMA   | Word Error Rate (WER) | 4.7     |          | Unverified |
