Japanese Text Normalization with Encoder-Decoder Model

WS 2016 · 2016-12-01

Taishi Ikeda, Hiroyuki Shindo, Yuji Matsumoto

Abstract

Text normalization is the task of transforming lexical variants into their canonical forms. We model text normalization as a character-level sequence-to-sequence learning problem and present a neural encoder-decoder model for solving it. Training an encoder-decoder model generally requires many sentence pairs; however, parallel corpora pairing Japanese non-standard forms with their canonical forms are scarce. To address this issue, we propose a data augmentation method that increases the data size by converting existing resources into synthesized non-standard forms using handcrafted rules. Our experiments demonstrate that the synthesized corpus contributes to stable training of the encoder-decoder model and improves the performance of Japanese text normalization.
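The data augmentation idea can be sketched as follows. This is a minimal illustration, not the paper's actual rule set: the substitution rules below (e.g. inserting a long-vowel mark) are hypothetical examples of how canonical sentences could be converted into synthesized non-standard variants to produce (non-standard, canonical) training pairs for a character-level encoder-decoder.

```python
# Illustrative handcrafted rules mapping canonical substrings to
# synthesized non-standard variants. These specific rules are
# assumptions for demonstration, not those from the paper.
RULES = [
    ("です", "でーす"),  # vowel lengthening with a long-vowel mark
    ("ない", "なーい"),
]

def synthesize_nonstandard(sentence: str) -> str:
    """Apply each substitution rule to a canonical sentence,
    producing a synthesized non-standard variant."""
    out = sentence
    for canonical, variant in RULES:
        out = out.replace(canonical, variant)
    return out

def build_pairs(corpus: list[str]) -> list[tuple[str, str]]:
    """Turn canonical sentences into (non-standard, canonical)
    pairs usable as character-level seq2seq training data."""
    return [(synthesize_nonstandard(s), s) for s in corpus]

pairs = build_pairs(["これはすごいです", "時間がない"])
# Each pair holds the synthesized input and its canonical target.
```

In a real pipeline the rules would be crafted to mimic the lexical variation actually observed in non-standard Japanese text (e.g. on social media), and each pair would then be fed to the encoder-decoder as character sequences.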
