
Lip reading using external viseme decoding

2021-04-10

Javad Peymanfard, Mohammad Reza Mohammadi, Hossein Zeinali, Nasser Mozayani


Abstract

Lip reading is the task of recognizing speech from lip movements. It is difficult because many words produce similar lip movements when pronounced. A viseme describes the visual appearance of the lips during speech. This paper shows how external text data can be exploited (for viseme-to-character mapping) by splitting video-to-character conversion into two stages handled by separate models: converting video to visemes, and then converting visemes to characters. Our proposed method improves word error rate by 4% compared to a standard sequence-to-sequence lip-reading model on the BBC-Oxford Lip Reading Sentences 2 (LRS2) dataset.
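The core difficulty the abstract describes is that several phonemes collapse to the same viseme, so lip shapes alone are ambiguous; a second, text-trained stage can resolve the ambiguity. The toy sketch below illustrates this decomposition; all mappings, function names, and the tiny lexicon are illustrative stand-ins, not the paper's actual models.

```python
# Illustrative many-to-one phoneme-to-viseme mapping: distinct sounds
# that share the same lip shape map to the same viseme label.
PHONEME_TO_VISEME = {
    "b": "BMP", "m": "BMP", "p": "BMP",  # bilabials look identical on the lips
    "a": "A",
    "t": "TD", "d": "TD",
}

def stage1_video_to_visemes(spoken):
    """Stand-in for the video-to-viseme model: collapses sounds to visemes."""
    return [PHONEME_TO_VISEME[p] for p in spoken]

def stage2_candidates(visemes, lexicon):
    """Stand-in for the viseme-to-character model: lists words consistent
    with the viseme sequence. A real stage-2 seq2seq model, trained on
    large external text data, would rank these by language likelihood."""
    return [w for w in lexicon
            if [PHONEME_TO_VISEME[c] for c in w] == visemes]

lexicon = ["bat", "mad", "tab"]
visemes = stage1_video_to_visemes("bat")
print(stage2_candidates(visemes, lexicon))
# -> ['bat', 'mad']  (same lip shapes; external text data breaks the tie)
```

The ambiguity in the output is exactly why the second stage benefits from external text: viseme sequences underdetermine the character sequence, and text data supplies the statistics needed to choose among candidates.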
