SOTAVerified

A Cueing Strategy for Prompt Tuning in Relation Extraction

2021-11-16 · ACL ARR November 2021

Anonymous


Abstract

Traditional relation extraction models predict confidence scores for each relation type from a condensed sentence representation. In prompt tuning, prompt templates are used to tune pretrained language models (PLMs), which output relation types as verbalized type tokens. This strategy shows great potential for relation extraction because it makes full use of the rich knowledge in PLMs. However, current prompt tuning models operate directly on the raw input, which is weak at encoding the contextual features and semantic dependencies of a relation instance. In this paper, we design a cueing strategy that implants task-specific cues into the input. It controls the attention of prompt tuning, enabling PLMs to learn task-specific contextual features and semantic dependencies of a relation instance. We evaluate our method on two public datasets, and experiments show substantial improvement: it exceeds state-of-the-art performance by more than 4.8% and 1.4% in F1-score on the SemEval and ReTACRED corpora, respectively.
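The cueing idea described above can be illustrated with a minimal sketch: implant cue tokens around the entity mentions of a relation instance, then append a prompt template whose `[MASK]` slot the PLM fills with a verbalized relation type. The marker names (`[E1]`, `[/E1]`, etc.) and the template wording are illustrative assumptions, not details taken from the paper.

```python
def build_cued_prompt(sentence: str, head: str, tail: str) -> str:
    """Implant cue markers around the head/tail entities and append a
    prompt template ending in a [MASK] token for the PLM to verbalize.

    The specific marker tokens and template text are assumptions for
    illustration; the paper's actual cues may differ.
    """
    # Wrap the entity mentions with task-specific cue tokens.
    cued = sentence.replace(head, f"[E1] {head} [/E1]")
    cued = cued.replace(tail, f"[E2] {tail} [/E2]")
    # Append a cloze-style template; a PLM would predict the token at [MASK].
    return f"{cued} The relation between {head} and {tail} is [MASK]."


example = build_cued_prompt(
    "The microphone converts sound into an electrical signal.",
    "microphone",
    "signal",
)
print(example)
```

The cued input would then be fed to a masked language model, whose prediction at the `[MASK]` position is mapped back to a relation label by a verbalizer.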
