Causal Language Model for Zero-shot Constrained Keyphrase Generation

2022-01-16 · ACL ARR January 2022

Anonymous

Abstract

Most recent state-of-the-art keyphrase prediction models are supervised generative models. Although they show noticeable improvements over statistical methods, they still perform poorly on out-of-domain and low-resource data. To overcome these limitations, unsupervised methods have also been actively studied. However, unsupervised methods have a drawback: they must extract candidate phrases before selecting keyphrases. Because the candidate set does not cover all phrase forms, we note that unsupervised methods cannot guarantee reaching the oracle keyphrases. In this paper, we present zero-shot constrained keyphrase generation that leverages a large-scale causal language model. To generate diverse keyphrases, we explore controlling phrases during generation. Finally, we evaluate on benchmark datasets from the scholarly domain; our method outperforms unsupervised methods on several datasets without requiring a candidate extraction stage. To assess domain robustness, we evaluate on the out-of-domain DUC dataset and compare against NUS. Since our method is not fine-tuned on a corpus from a specific domain, it also outperforms supervised methods based on sequence-to-sequence models.
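The core idea sketched in the abstract, generating keyphrases with a causal language model while constraining decoding rather than pre-extracting candidates, can be illustrated with a toy example. The sketch below is not the paper's implementation: the "language model" is a hypothetical bigram score table standing in for a large pretrained causal LM, and the constraint applied (restricting each decoded token to words present in the source document) is one common choice of constraint, assumed here for illustration only.

```python
# Toy sketch of constrained greedy decoding for keyphrase generation.
# TOY_BIGRAM_SCORES is a hypothetical stand-in for a large causal LM's
# next-token distribution; it is NOT from the paper.
TOY_BIGRAM_SCORES = {
    ("<bos>", "neural"): 0.6, ("<bos>", "language"): 0.4,
    ("neural", "network"): 0.7, ("neural", "model"): 0.3,
    ("language", "model"): 0.8, ("language", "network"): 0.2,
    ("network", "<eos>"): 1.0, ("model", "<eos>"): 1.0,
}

def constrained_greedy_keyphrase(doc_tokens, max_len=4):
    """Greedily decode a keyphrase, restricting each step to tokens
    that appear in the source document (plus end-of-sequence)."""
    allowed = set(doc_tokens) | {"<eos>"}
    phrase, prev = [], "<bos>"
    for _ in range(max_len):
        # Score only candidates permitted by the constraint.
        candidates = {tok: s for (p, tok), s in TOY_BIGRAM_SCORES.items()
                      if p == prev and tok in allowed}
        if not candidates:
            break
        prev = max(candidates, key=candidates.get)
        if prev == "<eos>":
            break
        phrase.append(prev)
    return " ".join(phrase)

# The same model yields different keyphrases under different documents:
print(constrained_greedy_keyphrase(["language", "model"]))   # → language model
print(constrained_greedy_keyphrase(["neural", "network"]))   # → neural network
```

The point of the sketch is that the constraint steers an otherwise generic decoder toward document-grounded phrases at generation time, so no separate candidate-extraction stage is needed, which is the property the abstract contrasts against unsupervised pipelines.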
