
Pretraining Methods for Dialog Context Representation Learning

2019-06-02 · ACL 2019

Shikib Mehri, Evgeniia Razumovskaia, Tiancheng Zhao, Maxine Eskenazi


Abstract

This paper examines various unsupervised pretraining objectives for learning dialog context representations. Two novel methods of pretraining dialog context encoders are proposed, and a total of four methods are examined. Each pretrained model is fine-tuned and evaluated on a set of downstream dialog tasks using the MultiWOZ dataset, and strong performance improvements are observed. Further evaluation shows that our pretraining objectives result in not only better performance, but also better convergence, models that are less data-hungry, and better domain generalizability.
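To make the setup concrete, below is a minimal sketch of one pretraining objective of the kind the abstract describes: next-utterance retrieval, where a dialog context encoder is trained to score the true next utterance against in-batch negatives. The hierarchical GRU architecture, all names, shapes, and hyperparameters here are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DialogContextEncoder(nn.Module):
    """Hierarchical encoder: a GRU over tokens within each utterance,
    then a GRU over utterance vectors, yielding one context vector."""
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.utt_rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.ctx_rnn = nn.GRU(hid_dim, hid_dim, batch_first=True)

    def encode_utterances(self, tokens):
        # tokens: (batch, n_utts, n_toks) -> (batch, n_utts, hid_dim)
        b, u, t = tokens.shape
        emb = self.embed(tokens.view(b * u, t))
        _, h = self.utt_rnn(emb)              # h: (1, b*u, hid_dim)
        return h.squeeze(0).view(b, u, -1)

    def forward(self, context_tokens):
        utt_vecs = self.encode_utterances(context_tokens)
        _, h = self.ctx_rnn(utt_vecs)         # h: (1, b, hid_dim)
        return h.squeeze(0)                   # (b, hid_dim)

def next_utterance_retrieval_loss(encoder, context, response):
    """Score each context against every response in the batch;
    the true (context, response) pairs lie on the diagonal."""
    ctx_vec = encoder(context)                                 # (b, hid)
    resp_vec = encoder.encode_utterances(response).squeeze(1)  # (b, hid)
    logits = ctx_vec @ resp_vec.t()                            # (b, b)
    targets = torch.arange(logits.size(0))
    return F.cross_entropy(logits, targets)

# Toy usage with random token ids: 4 dialogs, 3 context utterances
# of 10 tokens each, and one candidate response per dialog.
enc = DialogContextEncoder(vocab_size=1000)
context = torch.randint(1, 1000, (4, 3, 10))
response = torch.randint(1, 1000, (4, 1, 10))
loss = next_utterance_retrieval_loss(enc, context, response)
loss.backward()
print(float(loss))
```

After pretraining with an objective like this, the encoder's context representation would be fine-tuned and evaluated on the downstream dialog tasks, as the abstract describes.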
