
Low-Resource Machine Transliteration of Asian Languages Using Recurrent Neural Networks

WS 2018 · 2018-07-01

Ngoc Tan Le, Fatiha Sadat


Abstract

Grapheme-to-phoneme models are key components in automatic speech recognition and text-to-speech systems. They are particularly useful for low-resource language pairs that lack well-developed pronunciation lexicons. These models are built on initial alignments between grapheme source and phoneme target sequences. Inspired by sequence-to-sequence recurrent neural network-based translation methods, this work presents an approach that applies an alignment representation for input sequences, together with pre-trained source and target embeddings, to address transliteration for a low-resource language pair. We participated in the NEWS 2018 shared task on English-Vietnamese transliteration.
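The abstract describes a sequence-to-sequence RNN that encodes a grapheme sequence and decodes a phoneme sequence, using pre-trained embeddings on both sides. The following is a minimal NumPy sketch of that general architecture (an Elman-style encoder with one dot-product attention decoding step), not the authors' implementation: the vocabularies, dimensions, and random "pre-trained" embeddings here are illustrative assumptions.

```python
import numpy as np

# Toy grapheme (source) and phoneme (target) vocabularies -- hypothetical,
# not taken from the paper's English-Vietnamese shared-task data.
GRAPHEMES = ["<pad>", "<s>", "</s>", "a", "n", "h"]
PHONEMES = ["<pad>", "<s>", "</s>", "a1", "n", "h"]

rng = np.random.default_rng(0)
EMB, HID = 8, 16

# Stand-ins for pre-trained source/target embeddings (random here).
src_emb = rng.normal(0, 0.1, (len(GRAPHEMES), EMB))
tgt_emb = rng.normal(0, 0.1, (len(PHONEMES), EMB))

# Elman RNN encoder parameters.
W_xh = rng.normal(0, 0.1, (EMB, HID))
W_hh = rng.normal(0, 0.1, (HID, HID))
W_out = rng.normal(0, 0.1, (HID, len(PHONEMES)))

def encode(token_ids):
    """Run the encoder RNN over a grapheme sequence; return all hidden states."""
    h = np.zeros(HID)
    states = []
    for t in token_ids:
        h = np.tanh(src_emb[t] @ W_xh + h @ W_hh)
        states.append(h)
    return np.stack(states)

def decode_step(prev_phoneme_id, enc_states):
    """One greedy decoding step: attend over encoder states, predict a phoneme id."""
    query = np.tanh(tgt_emb[prev_phoneme_id] @ W_xh)
    scores = enc_states @ query            # dot-product attention scores
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()                     # softmax over source positions
    context = attn @ enc_states            # attention-weighted encoder summary
    logits = context @ W_out
    return int(np.argmax(logits))

enc = encode([GRAPHEMES.index(c) for c in ["<s>", "a", "n", "h", "</s>"]])
next_ph = decode_step(PHONEMES.index("<s>"), enc)
```

In the paper's setting, the alignment representation would additionally inform how source graphemes map to target phonemes before training; the sketch above omits that step and shows only the untrained encoder-decoder forward pass.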
