Towards LLM-based Fact Verification on News Claims with a Hierarchical Step-by-Step Prompting Method
Xuan Zhang, Wei Gao
Code
- github.com/jadecurl/hiss (official)
Abstract
While large pre-trained language models (LLMs) have shown impressive capabilities in various NLP tasks, they remain under-explored in the misinformation domain. In this paper, we examine LLMs with in-context learning (ICL) for news claim verification, and find that with only 4-shot demonstration examples, several prompting methods can perform comparably with previous supervised models. To further boost performance, we introduce a Hierarchical Step-by-Step (HiSS) prompting method that directs LLMs to separate a claim into several subclaims and then verify each of them progressively via multiple question-answering steps. Experimental results on two public misinformation datasets show that HiSS prompting outperforms the state-of-the-art fully supervised approach and strong few-shot ICL-enabled baselines.
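The following is a minimal sketch of the HiSS pipeline the abstract describes: decompose a claim into subclaims, verify each subclaim through step-by-step question answering, then aggregate the verdicts. The prompt wordings, the `ask` and `hiss_verify` helpers, and the model name are illustrative assumptions, not the authors' released implementation; see github.com/jadecurl/hiss for the official code.

```python
# Hedged sketch of Hierarchical Step-by-Step (HiSS) prompting.
# Assumes the openai>=1.0 client and OPENAI_API_KEY in the environment;
# all prompt texts below are placeholders, not the paper's prompts.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-3.5-turbo"  # placeholder; the paper evaluates ChatGPT-style LLMs

def ask(prompt: str) -> str:
    """Single LLM call; few-shot demonstrations would be prepended here."""
    resp = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content.strip()

def hiss_verify(claim: str) -> str:
    # Step 1: decompose the claim into subclaims.
    subclaims = ask(
        f"Split the following claim into independent subclaims, one per line:\n{claim}"
    ).splitlines()

    # Step 2: verify each subclaim via step-by-step question answering.
    verdicts = []
    for sub in subclaims:
        questions = ask(
            f"List the questions needed to verify this subclaim, one per line:\n{sub}"
        ).splitlines()
        qa_trace = []
        for q in questions:
            # The paper falls back to external search when the model cannot
            # answer a question; for brevity this sketch answers directly.
            qa_trace.append(f"Q: {q}\nA: {ask(q)}")
        verdicts.append(
            ask(
                f"Subclaim: {sub}\n" + "\n".join(qa_trace)
                + "\nBased on the Q&A above, is the subclaim true or false?"
            )
        )

    # Step 3: aggregate the subclaim verdicts into a final veracity label.
    return ask(
        f"Claim: {claim}\nSubclaim verdicts:\n" + "\n".join(verdicts)
        + "\nGive the overall veracity label (e.g., true / half-true / false)."
    )

if __name__ == "__main__":
    print(hiss_verify("The senator voted against the bill and later took credit for it."))
```

The hierarchical structure is the point of the sketch: decomposition keeps each verification step focused on a single checkable statement, and the per-subclaim Q&A traces give the final aggregation step explicit evidence to reason over.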
Benchmark Results
| Dataset | Model | F1 (claimed) |
|---|---|---|
| RAWFC | HiSS | 53.9 |
| RAWFC | ReAct | 49.8 |
| RAWFC | Standard prompting with articles | 47.9 |
| RAWFC | CoT | 44.4 |