SOTAVerified

An Empirical Analysis of Deep Audio-Visual Models for Speech Recognition

2018-12-21

Devesh Walawalkar, Yihui He, Rohit Pillai



Abstract

In this project, we worked on speech recognition, specifically predicting individual words from both video frames and audio. Empowered by convolutional neural networks, recent speech recognition and lip-reading models achieve performance comparable to humans. We re-implemented the state-of-the-art model and derived several variants of it. We then conducted extensive experiments, examining the effectiveness of the attention mechanism, a more accurate residual network backbone with pre-trained weights, and the sensitivity of our model to audio input with and without noise.
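The noise-sensitivity experiment mentioned above amounts to corrupting the audio stream at a controlled signal-to-noise ratio before feeding it to the model. The abstract does not specify the noise model used, so the sketch below assumes a common setup: additive white Gaussian noise scaled to a target SNR in decibels. The function name `add_noise` and the fixed-SNR formulation are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def add_noise(audio, snr_db, rng=None):
    """Mix white Gaussian noise into a waveform at a target SNR (dB).

    Assumed setup for illustration: the paper does not state its
    noise model, so fixed-SNR additive Gaussian noise is used here.
    """
    rng = rng or np.random.default_rng(0)
    signal_power = np.mean(audio ** 2)
    # Noise power follows from SNR_dB = 10 * log10(P_signal / P_noise).
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=audio.shape)
    return audio + noise

# Example: corrupt a dummy 1-second, 16 kHz sine waveform at 10 dB SNR.
clean = np.sin(2 * np.pi * 440 * np.linspace(0, 1, 16000))
noisy = add_noise(clean, snr_db=10)
```

Sweeping `snr_db` over a range (e.g. clean, 20 dB, 10 dB, 0 dB) and measuring word accuracy at each level is the usual way to characterize how gracefully an audio-visual model degrades as the audio channel becomes unreliable.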
