
SPE: Symmetrical Prompt Enhancement for Factual Knowledge Retrieval

2021-10-16 · ACL ARR October 2021

Anonymous


Abstract

Pretrained language models (PLMs) have been shown to accumulate factual knowledge during their unsupervised pretraining (Petroni et al., 2019). Prompting is an effective way to query such knowledge from PLMs. Recently, continuous prompt methods have been shown to have greater potential than discrete prompt methods for generating effective queries (Liu et al., 2021a). However, these methods do not consider the symmetry of the task. In this work, we propose Symmetrical Prompt Enhancement (SPE), a continuous prompt-based method for fact retrieval that leverages the symmetry of the task. Our results on LAMA, a popular fact retrieval dataset, show that SPE significantly improves over previous prompt-based methods.
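To make the querying setup concrete: in LAMA-style fact retrieval, each relation has a cloze template whose subject slot is filled and whose object slot becomes the language model's mask token; the PLM then ranks vocabulary items for that position. The sketch below illustrates this baseline discrete-prompting step only (not SPE itself); the template strings, `TEMPLATES` dictionary, and `build_cloze_query` helper are illustrative assumptions, not code from the paper.

```python
# Illustrative LAMA-style relation templates (hypothetical examples,
# mirroring the T-REx "[X] ... [Y]" template convention).
TEMPLATES = {
    "place_of_birth": "[X] was born in [Y].",
    "capital_of": "The capital of [X] is [Y].",
}

def build_cloze_query(subject: str, relation: str, mask_token: str = "[MASK]") -> str:
    """Fill the subject slot [X] and replace the object slot [Y] with the LM's mask token."""
    template = TEMPLATES[relation]
    return template.replace("[X]", subject).replace("[Y]", mask_token)

# A masked LM (e.g. BERT) would then score candidate tokens for the mask position.
print(build_cloze_query("Dante", "place_of_birth"))        # → Dante was born in [MASK].
print(build_cloze_query("France", "capital_of", "<mask>"))  # → The capital of France is <mask>.
```

Continuous prompt methods replace these hand-written templates with learned embeddings at the template positions, which is the setting SPE operates in.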
