
A Cueing Strategy with Prompt Tuning for Relation Extraction

2022-01-16 · ACL ARR January 2022

Anonymous


Abstract

Prompt tuning shows great potential for relation extraction because it can make full use of the rich knowledge in pretrained language models (PLMs). However, current prompt tuning models are applied directly to the raw input, which is weak at encoding the semantic dependencies of a relation instance. In this paper, we design a cueing strategy that implants task-specific cues into the input, enabling PLMs to learn task-specific contextual features and semantic dependencies in a relation instance. Experiments on the ReTACRED corpus and the ACE 2005 corpus show state-of-the-art performance in terms of F1-score.
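The abstract's idea of "implanting task-specific cues into the input" can be illustrated with a minimal sketch. The code below is a hypothetical reconstruction, not the paper's actual method: it wraps the subject and object entity mentions with cue marker tokens (assumed to be extra special tokens added to the PLM vocabulary) and appends a cloze-style prompt containing a mask token, so a masked language model could be tuned to fill in the relation label.

```python
# Hypothetical sketch of a cueing strategy for prompt tuning in relation
# extraction. Cue tokens ([SUBJ], [/SUBJ], [OBJ], [/OBJ]) mark the entity
# mentions in the raw sentence, and a cloze-style prompt is appended so a
# masked language model can predict the relation at the [MASK] position.

def implant_cues(tokens, subj_span, obj_span, mask_token="[MASK]"):
    """Wrap entity mentions with cue markers and append a cloze prompt.

    subj_span / obj_span are (start, end) token indices, end exclusive.
    The cue tokens are assumed to be registered as special tokens in the
    PLM tokenizer before fine-tuning.
    """
    out = []
    for i, tok in enumerate(tokens):
        if i == subj_span[0]:
            out.append("[SUBJ]")
        if i == obj_span[0]:
            out.append("[OBJ]")
        out.append(tok)
        if i == subj_span[1] - 1:
            out.append("[/SUBJ]")
        if i == obj_span[1] - 1:
            out.append("[/OBJ]")
    subj = " ".join(tokens[subj_span[0]:subj_span[1]])
    obj = " ".join(tokens[obj_span[0]:obj_span[1]])
    # Cloze-style prompt: the PLM fills [MASK] with a relation verbalizer.
    prompt = f"{subj} is the {mask_token} of {obj} ."
    return " ".join(out) + " " + prompt

sentence = "Steve Jobs founded Apple in 1976 .".split()
print(implant_cues(sentence, subj_span=(0, 2), obj_span=(3, 4)))
# → [SUBJ] Steve Jobs [/SUBJ] founded [OBJ] Apple [/OBJ] in 1976 .
#   Steve Jobs is the [MASK] of Apple .
```

The intuition is that the cue tokens give the PLM explicit anchors for the two arguments of the relation, so the attention layers can learn task-specific dependencies between the mentions and their context rather than having to recover the argument positions from the raw sentence alone.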
