
An Empirical Study on Few-shot Knowledge Probing for Pretrained Language Models

2021-09-06

Tianxing He, Kyunghyun Cho, James Glass


Abstract

Prompt-based knowledge probing for 1-hop relations has been used to measure how much world knowledge is stored in pretrained language models. Existing work uses considerable amounts of data to tune the prompts for better performance. In this work, we compare a variety of approaches under a few-shot knowledge probing setting, where only a small number (e.g., 10 or 20) of example triples are available. In addition, we create a new dataset named TREx-2p, which contains 2-hop relations. We report that few-shot examples can strongly boost the probing performance for both 1-hop and 2-hop relations. In particular, we find that a simple-yet-effective approach of finetuning the bias vectors in the model outperforms existing prompt-engineering methods. Our dataset and code are available at https://github.com/cloudygoose/fewshot_lama.
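The paper's headline finding is that, with only a handful of example triples, finetuning just the bias vectors of the pretrained model beats prompt-engineering baselines. The sketch below shows what bias-only finetuning on cloze-style probes can look like, assuming a HuggingFace masked LM; the model choice, prompt template, example triples, and hyperparameters are illustrative assumptions, not the authors' exact setup (see the linked repository for their implementation).

```python
# Sketch: few-shot bias-only finetuning for cloze-style knowledge probing.
# Assumes a HuggingFace masked LM; prompts/hyperparameters are illustrative.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")

# Freeze every parameter except the bias vectors.
for name, param in model.named_parameters():
    param.requires_grad = name.endswith(".bias")

# A few example triples for one 1-hop relation ("capital-of"),
# rendered as cloze prompts. Hypothetical examples, not from TREx-2p.
few_shot = [
    ("Paris is the capital of [MASK].", "France"),
    ("Tokyo is the capital of [MASK].", "Japan"),
]

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

model.train()
for epoch in range(10):
    for prompt, answer in few_shot:
        enc = tokenizer(prompt, return_tensors="pt")
        labels = enc["input_ids"].clone()
        # Compute the MLM loss only at the [MASK] position;
        # -100 tells the loss to ignore all other positions.
        mask_pos = enc["input_ids"] == tokenizer.mask_token_id
        labels[~mask_pos] = -100
        labels[mask_pos] = tokenizer.convert_tokens_to_ids(answer)
        loss = model(**enc, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

At probe time the tuned model is queried the same way as in standard LAMA-style probing: encode a held-out prompt and rank the vocabulary logits at the [MASK] position.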
