SOTAVerified

Towards LLM-based Fact Verification on News Claims with a Hierarchical Step-by-Step Prompting Method

2023-09-30 · Code Available

Xuan Zhang, Wei Gao


Abstract

While large pre-trained language models (LLMs) have shown impressive capabilities across various NLP tasks, they remain under-explored in the misinformation domain. In this paper, we examine LLMs with in-context learning (ICL) for news claim verification, and find that with only 4-shot demonstration examples, several prompting methods can perform comparably to previous supervised models. To further boost performance, we introduce a Hierarchical Step-by-Step (HiSS) prompting method that directs LLMs to separate a claim into several subclaims and then verify each of them progressively via multiple question-answering steps. Experimental results on two public misinformation datasets show that HiSS prompting outperforms the state-of-the-art fully-supervised approach and strong few-shot ICL-enabled baselines.
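The decompose-then-verify flow the abstract describes can be sketched as a small pipeline. This is a minimal illustration, not the paper's implementation: `ask_llm` is a hypothetical placeholder for a real chat-model call (here stubbed with canned replies so the control flow runs standalone), and the prompts and aggregation rule are assumptions.

```python
# Hedged sketch of Hierarchical Step-by-Step (HiSS) prompting:
# split a claim into subclaims, then verify each via iterative QA steps.

def ask_llm(prompt: str) -> str:
    """Placeholder for an LLM call; replace with a real chat-completion API.
    The canned replies below exist only so the sketch is runnable."""
    if prompt.startswith("Split"):
        return "1. Subclaim A\n2. Subclaim B"
    return "supported"

def verify_claim(claim: str, max_qa_steps: int = 3) -> str:
    # Step 1: decompose the claim into numbered subclaims.
    raw = ask_llm(f"Split the claim into subclaims:\n{claim}")
    subclaims = [line.split(". ", 1)[1] for line in raw.splitlines() if ". " in line]

    # Step 2: verify each subclaim progressively with question-answering steps,
    # accumulating the answers as context for the next step.
    verdicts = []
    for sub in subclaims:
        context = sub
        for _ in range(max_qa_steps):
            answer = ask_llm(f"Ask and answer one question to check: {context}")
            context += "\n" + answer
            if "supported" in answer or "refuted" in answer:
                break
        verdicts.append("refuted" not in context)

    # Step 3: aggregate — here, the claim holds only if every subclaim holds
    # (an assumed rule; the paper's actual label aggregation may differ).
    return "true" if all(verdicts) else "false"
```

With the stubbed `ask_llm`, `verify_claim("Some news claim")` walks through decomposition, per-subclaim QA, and aggregation; swapping the stub for a real model call yields the full prompting loop.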

Tasks

Benchmark Results

Dataset   Model                              Metric   Claimed   Verified   Status
RAWFC     HiSS                               F1       53.9                 Unverified
RAWFC     ReAct                              F1       49.8                 Unverified
RAWFC     Standard prompting with articles   F1       47.9                 Unverified
RAWFC     CoT                                F1       44.4                 Unverified

Reproductions