
Listening With Your Eyes: Towards a Practical Visual Speech Recognition System Using Deep Boltzmann Machines

2015-12-01 · ICCV 2015

Chao Sui, Mohammed Bennamoun, Roberto Togneri


Abstract

This paper presents a novel feature learning method for visual speech recognition using Deep Boltzmann Machines (DBMs). Unlike existing visual feature extraction techniques, which extract features solely from video sequences, our method exploits both acoustic and visual information during training to learn a better visual feature representation. At test time, instead of using both audio and visual signals, only the videos are used: the missing audio features are generated from the visual input, and the given visual features together with the generated audio features are combined into a joint representation. We carried out experiments on a large-scale audio-visual corpus, and the results show that our proposed technique outperforms handcrafted features as well as features learned by other commonly used deep learning techniques.
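The test-time procedure the abstract describes, inferring the missing audio modality from video alone and then forming a joint representation, can be sketched with a toy multimodal RBM (a single-layer simplification of the paper's DBM). All dimensions, weights, and function names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy multimodal RBM: one hidden layer jointly connected to visual
# and audio visible units. Dimensions and random weights are purely
# illustrative; a trained model would supply W_v, W_a, and biases.
n_vis, n_aud, n_hid = 16, 8, 12
W_v = rng.normal(0, 0.1, (n_vis, n_hid))  # visual-to-hidden weights
W_a = rng.normal(0, 0.1, (n_aud, n_hid))  # audio-to-hidden weights
b_h = np.zeros(n_hid)                     # hidden biases
b_a = np.zeros(n_aud)                     # audio visible biases

def infer_missing_audio(v, n_gibbs=50):
    """Given only a visual vector v, run alternating Gibbs sampling
    to fill in the unobserved audio units (v stays clamped), then
    return the joint hidden representation from both modalities."""
    a = np.zeros(n_aud)  # initialize the missing modality at zero
    for _ in range(n_gibbs):
        # Sample hidden units from both modalities
        p_h = sigmoid(v @ W_v + a @ W_a + b_h)
        h = (rng.random(n_hid) < p_h).astype(float)
        # Resample only the missing audio units given the hiddens
        p_a = sigmoid(h @ W_a.T + b_a)
        a = (rng.random(n_aud) < p_a).astype(float)
    # Joint representation uses the given visual and generated audio
    return sigmoid(v @ W_v + a @ W_a + b_h)

v = rng.integers(0, 2, n_vis).astype(float)  # a dummy binary visual feature
joint = infer_missing_audio(v)
print(joint.shape)
```

In the paper's deeper DBM this clamped-Gibbs-sampling idea operates over multiple hidden layers, but the core mechanism is the same: the observed visual units drive the sampling that reconstructs the absent audio units.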
