Deep Audio-Visual Speech Recognition
Triantafyllos Afouras, Joon Son Chung, Andrew Senior, Oriol Vinyals, Andrew Zisserman
Code
- github.com/exgc/avmust-ted (framework: none) ★ 24
- github.com/lordmartian/deep_avsr (PyTorch) ★ 0
- github.com/smeetrs/deep_avsr (PyTorch) ★ 0
- github.com/amitai1992/AutomatedLipReading (framework: none) ★ 0
Abstract
The goal of this work is to recognise phrases and sentences being spoken by a talking face, with or without the audio. Unlike previous works that have focussed on recognising a limited number of words or phrases, we tackle lip reading as an open-world problem: unconstrained natural language sentences, and in-the-wild videos. Our key contributions are: (1) we compare two models for lip reading, one using a CTC loss, and the other using a sequence-to-sequence loss. Both models are built on top of the transformer self-attention architecture; (2) we investigate to what extent lip reading is complementary to audio speech recognition, especially when the audio signal is noisy; (3) we introduce and publicly release a new dataset for audio-visual speech recognition, LRS2-BBC, consisting of thousands of natural sentences from British television. The models that we train surpass the performance of all previous work on a lip reading benchmark dataset by a significant margin.
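The CTC loss mentioned in contribution (1) sums the probability of every frame-level alignment that collapses to the target transcript. As a minimal sketch (not the paper's implementation, which uses a framework-provided CTC loss), the forward algorithm over a blank-interleaved label sequence can be written in plain Python; the function name and toy probabilities below are our own:

```python
import math

def ctc_neg_log_likelihood(probs, target, blank=0):
    """probs: T x C list of per-frame label distributions;
    target: list of label ids. Returns -log P(target | probs) under CTC."""
    # Interleave blanks around the target: b, y1, b, y2, ..., b
    ext = [blank]
    for y in target:
        ext.extend([y, blank])
    S, T = len(ext), len(probs)
    # alpha[s] = total probability of alignments ending at ext[s] after frame t
    alpha = [0.0] * S
    alpha[0] = probs[0][ext[0]]
    if S > 1:
        alpha[1] = probs[0][ext[1]]
    for t in range(1, T):
        new = [0.0] * S
        for s in range(S):
            a = alpha[s]                      # stay on the same symbol
            if s > 0:
                a += alpha[s - 1]             # advance by one
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[s - 2]             # skip a blank between distinct labels
            new[s] = a * probs[t][ext[s]]
        alpha = new
    # Valid alignments end on the last label or the trailing blank
    total = alpha[S - 1] + (alpha[S - 2] if S > 1 else 0.0)
    return -math.log(total)
```

For two uniform frames over {blank, 1} and target [1], the three valid alignments (1,1), (0,1), (1,0) each have probability 0.25, so the loss is -log(0.75). In practice this recursion is done in log space for numerical stability (e.g. PyTorch's `torch.nn.CTCLoss`).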
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| LRS2 | TM-seq2seq | Test WER (%) | 8.5 | — | Unverified |
| LRS2 | TM-CTC | Test WER (%) | 8.2 | — | Unverified |
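The benchmark metric above is word error rate: the word-level edit distance (substitutions + insertions + deletions) divided by the number of reference words. A minimal reference implementation, assuming whitespace tokenisation (the function name `wer` is our own):

```python
def wer(ref, hyp):
    """Word error rate in percent: word-level Levenshtein distance
    between reference and hypothesis, divided by reference length."""
    r, h = ref.split(), hyp.split()
    # d[i][j] = edit distance between r[:i] and h[:j]
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                      # i deletions
    for j in range(len(h) + 1):
        d[0][j] = j                      # j insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            d[i][j] = min(d[i - 1][j] + 1,                       # deletion
                          d[i][j - 1] + 1,                       # insertion
                          d[i - 1][j - 1] + (r[i - 1] != h[j - 1]))  # substitution
    return 100.0 * d[len(r)][len(h)] / len(r)
```

For example, `wer("a b c d", "a x c")` is 50.0 (one substitution and one deletion over four reference words), so a claimed 8.5 means roughly one word error per twelve reference words.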