
Enhance Incomplete Utterance Restoration by Joint Learning Token Extraction and Text Generation

2022-04-08 · NAACL 2022

Shumpei Inoue, Tsungwei Liu, Nguyen Hong Son, Minh-Tien Nguyen


Abstract

This paper introduces JET (Joint learning token Extraction and Text generation), a model for incomplete utterance restoration (IUR). Unlike prior studies that work only on extraction or only on abstraction datasets, we design a simple but effective model that handles both IUR scenarios. Our design follows the nature of IUR, in which tokens omitted from the context contribute to restoration. We therefore construct a Picker that identifies the omitted tokens. To support the Picker, we design two label-creation methods (soft and hard labels), which work even when no annotated data for the omitted tokens is available. Restoration is performed by a Generator with the help of the Picker via joint learning. Promising results on four benchmark datasets in extraction and abstraction scenarios show that our model outperforms the pretrained T5 and non-generative language-model methods in both rich and limited training-data settings. The code is available at https://github.com/shumpei19/JET
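The abstract does not spell out how the hard labels for the Picker are created, but a minimal sketch is possible under one assumption: a context token counts as "omitted" when it appears in the restored utterance but not in the incomplete one. The function name, the tokenized example dialogue, and the labeling rule below are all illustrative, not the paper's exact procedure.

```python
def make_hard_labels(context_tokens, incomplete_tokens, restored_tokens):
    """Sketch of hard-label creation for the Picker.

    Assumption: a context token is 'omitted' iff it occurs in the
    restored utterance but not in the incomplete utterance.
    """
    omitted = set(restored_tokens) - set(incomplete_tokens)
    return [1 if tok in omitted else 0 for tok in context_tokens]


# Hypothetical dialogue where the restoration recovers "red bike"
# from the context.
context = ["I", "bought", "a", "red", "bike", "yesterday"]
incomplete = ["it", "was", "cheap"]
restored = ["the", "red", "bike", "was", "cheap"]

print(make_hard_labels(context, incomplete, restored))
# [0, 0, 0, 1, 1, 0]  -> "red" and "bike" are marked as omitted tokens
```

A soft-label variant would presumably replace the binary membership test with a continuous score per context token (e.g., a similarity to the restored utterance), which is what makes the scheme usable when no gold annotations of omitted tokens exist.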
