Self-Consistent Narrative Prompts on Abductive Natural Language Inference

2023-09-15

Chunkit Chan, Xin Liu, Tsz Ho Chan, Jiayang Cheng, Yangqiu Song, Ginny Wong, Simon See

Abstract

Abduction has long been seen as crucial for narrative comprehension and reasoning about everyday situations. The abductive natural language inference (NLI) task, a narrative text-based task, aims to infer the most plausible hypothesis from a set of candidates given two observations. However, inter-sentential coherence and model consistency have not been well exploited in previous work on this task. In this work, we propose a prompt tuning model, α-PACE, which takes self-consistency and inter-sentential coherence into consideration. In addition, we propose a general self-consistent framework that considers various narrative sequences (e.g., linear narrative and reverse chronology) to guide the pre-trained language model in understanding the narrative context of the input. We conduct extensive experiments and thorough ablation studies to illustrate the necessity and effectiveness of α-PACE. Our method shows significant improvement over a wide range of competitive baselines.
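The core idea of the framework can be illustrated with a minimal sketch: compose the two observations and each candidate hypothesis under several narrative orderings (e.g., linear and reverse-chronological), score each composed narrative, and pick the hypothesis that is most plausible across all orderings. This is not the paper's implementation; `score_fn` stands in for a pre-trained language model's plausibility scorer, and `overlap_score` is a toy lexical-cohesion heuristic used only to keep the sketch self-contained.

```python
# Hypothetical sketch of self-consistent hypothesis selection across
# narrative orderings (not the authors' code).

def build_narratives(obs1, hyp, obs2):
    """Compose the story under different narrative sequences."""
    return {
        "linear": f"{obs1} {hyp} {obs2}",    # chronological order
        "reverse": f"{obs2} {hyp} {obs1}",   # reverse chronology
    }

def select_hypothesis(obs1, obs2, candidates, score_fn):
    """Pick the candidate whose summed score over all orderings is highest."""
    best, best_score = None, float("-inf")
    for hyp in candidates:
        total = sum(score_fn(text)
                    for text in build_narratives(obs1, hyp, obs2).values())
        if total > best_score:
            best, best_score = hyp, total
    return best

def overlap_score(text):
    # Toy stand-in scorer: reward lexical cohesion (repeated tokens).
    words = text.lower().split()
    return len(words) - len(set(words))
```

For example, `select_hypothesis("Tom left his ice cream outside.", "The ice cream was a puddle.", ["The ice cream melted.", "He bought a car."], overlap_score)` selects the melting hypothesis, since it coheres with both observations under either ordering. In the actual model, the scorer would be a prompted PLM rather than this word-overlap heuristic.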
