
Translation vs. Dialogue: A Comparative Analysis of Sequence-to-Sequence Modeling

2020-12-01 · COLING 2020

Wenpeng Hu, Ran Le, Bing Liu, Jinwen Ma, Dongyan Zhao, Rui Yan


Abstract

Understanding neural models is a major topic of interest in the deep learning community. In this paper, we propose to interpret a general neural model comparatively. Specifically, we study the sequence-to-sequence (Seq2Seq) model in the context of two mainstream NLP tasks, machine translation and dialogue response generation, both of which use the Seq2Seq architecture. We investigate how the two tasks differ and how those task differences lead to major differences in the behavior of the resulting translation and dialogue generation systems. This study allows us to make several interesting observations and gain valuable insights that can help develop better translation and dialogue generation models. To our knowledge, no such comparative study has been done before.
