SOTAVerified

Contextualizing Argument Quality Assessment with Relevant Knowledge

2023-05-20 · Code Available

Darshan Deshpande, Zhivar Sourati, Filip Ilievski, Fred Morstatter

Abstract

Automatic assessment of the quality of arguments has been recognized as a challenging task with significant implications for misinformation and targeted speech. While real-world arguments are tightly anchored in context, existing computational methods analyze their quality in isolation, which affects their accuracy and generalizability. We propose SPARK: a novel method for scoring argument quality based on contextualization via relevant knowledge. We devise four augmentations that leverage large language models to provide feedback, infer hidden assumptions, supply a similar-quality argument, or give a counter-argument. SPARK uses a dual-encoder Transformer architecture to enable the original argument and its augmentation to be considered jointly. Our experiments in both in-domain and zero-shot setups show that SPARK consistently outperforms existing techniques across multiple metrics.
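The abstract gives no implementation details, but the described pipeline (generate one of four knowledge augmentations for an argument, then score the argument and augmentation jointly with a dual-encoder) can be sketched at toy scale. Everything below is an assumption for illustration: the prompt wordings, the function names, and the bag-of-words stand-in encoder are not the authors' code, and the linear scoring head is untrained.

```python
# Toy sketch of a SPARK-style dual-encoder scorer. All names, prompts, and the
# stand-in encoder are illustrative assumptions, not the paper's implementation.

# The four augmentation types described in the abstract, phrased as hypothetical
# LLM prompts (exact wordings are not given in the abstract).
AUGMENTATION_PROMPTS = {
    "feedback": "Provide feedback on this argument: {arg}",
    "assumptions": "Infer the hidden assumptions behind this argument: {arg}",
    "similar": "Write an argument of similar quality to: {arg}",
    "counter": "Write a counter-argument to: {arg}",
}

def toy_encode(text, dim=8):
    """Stand-in for a Transformer encoder: hashes tokens into a fixed-size
    bag-of-words vector and L2-normalizes it."""
    vec = [0.0] * dim
    for tok in text.lower().split():
        vec[hash(tok) % dim] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

def spark_score(argument, augmentation, weights=None):
    """Score an argument and its augmentation jointly: encode each with its
    own encoder, concatenate, and apply a linear head (untrained placeholder),
    mimicking the dual-encoder design at toy scale."""
    joint = toy_encode(argument) + toy_encode(augmentation)
    if weights is None:
        weights = [0.1] * len(joint)  # placeholder head weights
    return sum(w * x for w, x in zip(weights, joint))

argument = "School uniforms reduce bullying."
augmentation = AUGMENTATION_PROMPTS["counter"].format(arg=argument)
print(round(spark_score(argument, augmentation), 3))
```

In the real system each encoder would be a pretrained Transformer and the augmentation text would come from an LLM; the point of the sketch is only the data flow, i.e. that the original argument and its augmentation are embedded separately and scored as a pair rather than in isolation.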
