
Neural-based Natural Language Generation in Dialogue using RNN Encoder-Decoder with Semantic Aggregation

2017-06-21 · WS 2017

Van-Khanh Tran, Le-Minh Nguyen


Abstract

Natural language generation (NLG) is an important component in spoken dialogue systems. This paper presents a model called Encoder-Aggregator-Decoder, an extension of a Recurrent Neural Network based Encoder-Decoder architecture. The proposed Semantic Aggregator consists of two components: an Aligner and a Refiner. The Aligner is a conventional attention mechanism computed over the encoded input information, while the Refiner is a further attention or gating mechanism stacked on top of the attentive Aligner to select and aggregate the semantic elements. The proposed model jointly trains sentence planning and surface realization to produce natural language utterances. The model was extensively assessed on four NLG domains, and the experimental results showed that the proposed generator consistently outperforms previous methods on all of them.
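The Aligner-then-Refiner idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's exact formulation; the dot-product scoring, the element-wise gate, and the function names `aligner` and `refiner` are simplifying assumptions chosen for clarity.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def aligner(query, encoded):
    # Aligner: conventional attention over the encoded semantic
    # elements (here, simple dot-product scoring -- an assumption).
    scores = [sum(q * e for q, e in zip(query, enc)) for enc in encoded]
    return softmax(scores)

def refiner(weights, encoded, gate):
    # Refiner: a gating mechanism stacked over the attentive Aligner,
    # further selecting and aggregating the weighted semantic elements.
    # `gate` is a hypothetical learned element-wise gate vector.
    dim = len(encoded[0])
    context = [0.0] * dim
    for w, enc in zip(weights, encoded):
        for i in range(dim):
            context[i] += w * enc[i] * gate[i]
    return context

# Toy usage: two encoded semantic elements in a 2-d space.
encoded = [[1.0, 0.0], [0.0, 1.0]]
weights = aligner([1.0, 0.0], encoded)
context = refiner(weights, encoded, gate=[1.0, 1.0])
```

In the full model, `context` would feed the RNN decoder at each step, so sentence planning (which semantic elements to express) and surface realization (how to word them) are learned jointly.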
