
Adversarial Domain Adaptation Using Artificial Titles for Abstractive Title Generation

2019-07-01 · ACL 2019

Francine Chen, Yan-Ying Chen


Abstract

A common issue in training a deep-learning abstractive summarization model is the lack of a large set of training summaries. This paper examines techniques for adapting from a labeled source domain to an unlabeled target domain in the context of an encoder-decoder model for text generation. In addition to adversarial domain adaptation (ADA), we introduce the use of artificial titles and sequential training to capture the grammatical style of the unlabeled target domain. Evaluation on adapting to/from news articles and Stack Exchange posts indicates that the use of these techniques can boost performance for both unsupervised adaptation and fine-tuning with limited target data.
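The adversarial component of ADA is commonly implemented with a gradient-reversal layer: a domain classifier is trained to tell source from target representations, while reversed gradients push the encoder toward domain-invariant features. The sketch below is a minimal, hedged illustration of that general mechanism in PyTorch, not the authors' exact implementation; the feature dimension, domain classifier, and `lambd` weight are all illustrative assumptions.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; scales gradients by -lambd on the
    backward pass. This is the standard gradient-reversal trick used in
    adversarial domain adaptation (assumed setup, not the paper's code)."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back to the encoder;
        # lambd itself receives no gradient.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Hypothetical domain classifier over encoder states (dimensions are
# illustrative): predicts source (0) vs. target (1) domain.
feature_dim = 8
domain_clf = nn.Linear(feature_dim, 2)

# Stand-in for encoder outputs on a mixed source/target batch.
enc_states = torch.randn(4, feature_dim, requires_grad=True)
domain_labels = torch.tensor([0, 0, 1, 1])

logits = domain_clf(grad_reverse(enc_states, lambd=0.5))
adv_loss = nn.functional.cross_entropy(logits, domain_labels)
adv_loss.backward()
# enc_states.grad now points the encoder *away* from features that help
# the domain classifier, encouraging domain-invariant representations.
```

In a full training loop this adversarial loss would be added to the usual sequence-to-sequence generation loss, so the encoder is jointly optimized for title generation and domain confusion.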
