Causal Language Model for Zero-shot Constrained Keyphrase Generation
Anonymous
Abstract
Recently, most state-of-the-art keyphrase prediction models have been supervised generative models. Although they show noticeable improvements over statistical methods, they still perform poorly on out-of-domain and low-resource data. To overcome these limitations, unsupervised methods have also been actively studied. However, unsupervised methods suffer from a drawback of their own: they must extract candidate phrases before selecting keyphrases, and because the candidate set cannot cover all possible phrase forms, they cannot guarantee that the oracle keyphrases are recoverable. In this paper, we present zero-shot constrained keyphrase generation that leverages a large-scale causal language model. To generate diverse keyphrases, we explore controlling the phrase being produced during generation. Finally, we evaluate our method on benchmark datasets from the scholarly domain; it outperforms unsupervised methods on several datasets without requiring a candidate-extraction stage. To assess domain robustness, we evaluate on the out-of-domain DUC dataset and compare against results on NUS. Since our method is not fine-tuned on a corpus from any specific domain, it also outperforms supervised sequence-to-sequence methods in this setting.
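The core idea of constrained generation without a separate candidate-extraction stage can be illustrated with a minimal sketch. The following is not the paper's implementation (which uses a large pretrained causal language model); it is a toy example under assumed names, where a trie over allowed phrase continuations restricts which tokens a stand-in "language model" may emit at each step, so only well-formed phrases are generated:

```python
# Hypothetical sketch of trie-constrained decoding for keyphrase generation.
# All names (build_trie, toy_lm_scores, constrained_generate) are illustrative,
# and toy_lm_scores is a stand-in for a real causal LM's next-token scores.

def build_trie(phrases):
    """Build a trie over allowed phrases; None marks the end of a phrase."""
    trie = {}
    for phrase in phrases:
        node = trie
        for tok in phrase:
            node = node.setdefault(tok, {})
        node[None] = {}
    return trie

def toy_lm_scores(prefix, vocab):
    # Stand-in for a causal LM's next-token distribution:
    # favour tokens not already emitted in the prefix.
    return {tok: (2.0 if tok not in prefix else 0.5) for tok in vocab}

def constrained_generate(trie, vocab):
    """Greedily pick the best-scoring token among trie-allowed continuations."""
    out, node = [], trie
    while None not in node:  # stop once a complete allowed phrase is formed
        allowed = [t for t in node if t is not None]
        scores = toy_lm_scores(out, vocab)
        best = max(allowed, key=lambda t: scores.get(t, 0.0))
        out.append(best)
        node = node[best]
    return out

phrases = [("neural", "keyphrase"), ("neural", "network", "model"), ("topic",)]
vocab = {"neural", "keyphrase", "network", "model", "topic"}
print(constrained_generate(build_trie(phrases), vocab))
```

In a real system the trie constraint would instead mask the logits of a pretrained causal LM at each decoding step (e.g. via a prefix-allowed-tokens hook), which is what lets the model score arbitrary phrase forms without a prior candidate-extraction pass.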