
A Context-aware Convolutional Natural Language Generation model for Dialogue Systems

2018-07-01 · WS 2018

Sourab Mangrulkar, Suhani Shrivastava, Veena Thenkanidiyoor, Dileep Aroor Dinesh


Abstract

Natural language generation (NLG) is an important component of spoken dialogue systems (SDSs). NLG can be framed as a sequence-to-sequence learning problem, and state-of-the-art NLG models are built with recurrent neural network (RNN) based sequence-to-sequence architectures (Ondřej Dušek and Filip Jurčíček, 2016a). Convolutional sequence-to-sequence models have been applied to machine translation, but their use as natural language generators in dialogue systems remains unexplored. In this work, we propose a novel approach to NLG using convolutional neural network (CNN) based sequence-to-sequence learning. Unlike RNNs, the CNN-based approach builds a hierarchical model that captures dependencies between words via shorter paths. In contrast to recurrent models, the convolutional approach makes efficient use of computational resources by parallelizing computation over all sequence elements, and it eases learning by applying a fixed number of nonlinearities independent of sequence length. We also propose a CNN-based reranker that selects responses semantically consistent with the input dialogue act. The proposed model is capable of entrainment. Experiments on a standard dataset show the effectiveness of the proposed CNN-based approach to NLG.
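The abstract's core contrast — fixed-depth, parallelizable convolutions versus sequential RNN steps — can be illustrated with a minimal NumPy sketch of a gated convolutional encoder layer in the style of convolutional sequence-to-sequence models. All names, shapes, and the use of GLU gating with residual connections here are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def glu(x):
    # Gated Linear Unit: split channels in half; one half gates the other
    # via a sigmoid (an assumed gating choice, common in conv seq2seq work).
    a, b = np.split(x, 2, axis=-1)
    return a * (1.0 / (1.0 + np.exp(-b)))

def conv1d(x, W, b):
    # x: (seq_len, d_in), W: (k, d_in, d_out), b: (d_out,).
    # Symmetric zero-padding keeps the output length equal to the input length.
    k = W.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    # Every output position t is computed independently, so all positions
    # could be evaluated in parallel -- unlike an RNN's sequential recurrence.
    return np.stack([
        sum(xp[t + j] @ W[j] for j in range(k)) + b
        for t in range(x.shape[0])
    ])

def conv_encoder(embeddings, layers):
    # layers: list of (W, b) pairs. Each conv doubles the channel count so
    # that GLU halves it back, letting a residual connection be added.
    # With kernel size k, each layer widens the receptive field by k - 1,
    # so distant words are connected through a short, fixed-depth path.
    h = embeddings
    for W, b in layers:
        h = glu(conv1d(h, W, b)) + h
    return h

# Hypothetical toy usage: 7 tokens, 8-dim embeddings, two conv layers (k = 3).
rng = np.random.default_rng(0)
emb = rng.normal(size=(7, 8))
layers = [(rng.normal(scale=0.1, size=(3, 8, 16)), np.zeros(16))
          for _ in range(2)]
out = conv_encoder(emb, layers)
```

After two layers with kernel size 3, each output position depends on a 5-token window, and the number of nonlinearities applied is the layer count — constant in the sequence length, which is the efficiency argument the abstract makes.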
