SOTAVerified

UCPhrase: Unsupervised Context-aware Quality Phrase Tagging

2021-05-28 · Code Available

Xiaotao Gu, Zihan Wang, Zhenyu Bi, Yu Meng, Liyuan Liu, Jiawei Han, Jingbo Shang


Abstract

Identifying and understanding quality phrases from context is a fundamental task in text mining. The most challenging part of this task arguably lies in uncommon, emerging, and domain-specific phrases. The infrequent nature of these phrases significantly hurts the performance of phrase mining methods that rely on sufficient phrase occurrences in the input corpus. Context-aware tagging models, though not restricted by frequency, heavily rely on domain experts for either massive sentence-level gold labels or handcrafted gazetteers. In this work, we propose UCPhrase, a novel unsupervised context-aware quality phrase tagger. Specifically, we induce high-quality phrase spans as silver labels from consistently co-occurring word sequences within each document. Compared with typical context-agnostic distant supervision based on existing knowledge bases (KBs), our silver labels are deeply rooted in the input domain and context, and thus have unique advantages in preserving contextual completeness and capturing emerging, out-of-KB phrases. Training a conventional neural tagger on silver labels usually faces the risk of overfitting to phrase surface names. Alternatively, we observe that the contextualized attention maps generated by a transformer-based neural language model effectively reveal the connections between words in a surface-agnostic way. Therefore, we pair such attention maps with the silver labels to train a lightweight span prediction model, which can be applied to new input to recognize (unseen) quality phrases regardless of their surface names or frequency. Thorough experiments on various tasks and datasets, including corpus-level phrase ranking, document-level keyphrase extraction, and sentence-level phrase tagging, demonstrate the superiority of our design over state-of-the-art pre-trained, unsupervised, and distantly supervised methods.
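The silver-label idea above can be illustrated with a toy sketch: collect word n-grams that repeat within a single document and treat those repeated spans as candidate phrase labels. This is a simplified illustration, not the paper's actual mining algorithm; the function and parameter names (`mine_silver_labels`, `max_len`, `min_count`) are invented for this example.

```python
from collections import Counter

def mine_silver_labels(doc_sentences, max_len=3, min_count=2):
    """Toy sketch: word n-grams (length 2..max_len) that repeat within
    one document serve as silver phrase labels for that document."""
    counts = Counter()
    for sent in doc_sentences:
        words = sent.lower().split()
        for n in range(2, max_len + 1):
            for i in range(len(words) - n + 1):
                counts[tuple(words[i:i + n])] += 1
    # Keep only spans that co-occur consistently within the document.
    return {gram for gram, c in counts.items() if c >= min_count}

doc = [
    "quality phrase tagging relies on quality phrase labels",
    "we mine quality phrase spans as silver labels",
]
print(mine_silver_labels(doc))  # {('quality', 'phrase')}
```

Because the labels are mined per document rather than matched against a knowledge base, an emerging term that repeats in a new document can be labeled even if it appears nowhere else in the corpus.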

Benchmark Results

| Dataset | Model        | Metric | Claimed | Verified | Status     |
|---------|--------------|--------|---------|----------|------------|
| KP20k   | PKE          | Recall | 57.1    | —        | Unverified |
| KP20k   | TopMine      | Recall | 53.3    | —        | Unverified |
| KP20k   | StanfordNLP  | Recall | 51.7    | —        | Unverified |
| KP20k   | Wiki+RoBERTa | Recall | 73      | —        | Unverified |
| KP20k   | UCPhrase     | Recall | 72.9    | —        | Unverified |
| KP20k   | AutoPhrase   | Recall | 62.9    | —        | Unverified |
| KP20k   | Spacy        | Recall | 59.5    | —        | Unverified |
| KPTimes | TopMine      | Recall | 63.4    | —        | Unverified |
| KPTimes | Wiki+RoBERTa | Recall | 64.5    | —        | Unverified |
| KPTimes | AutoPhrase   | Recall | 77.8    | —        | Unverified |
| KPTimes | UCPhrase     | Recall | 83.4    | —        | Unverified |

Reproductions