
Modeling Disclosive Transparency in NLP Application Descriptions

2021-01-02 · EMNLP 2021 · Code Available

Michael Saxon, Sharon Levy, Xinyi Wang, Alon Albalak, William Yang Wang

Abstract

Broader disclosive transparency (truth and clarity in communication regarding the function of AI systems) is widely considered desirable. Unfortunately, it is a nebulous concept, difficult both to define and to quantify. This is problematic, as previous work has demonstrated possible trade-offs and negative consequences of disclosive transparency, such as a confusion effect, where "too much information" clouds a reader's understanding of what a system description means. Disclosive transparency's subjective nature has made deep study of these problems and their remedies difficult. To improve this state of affairs, we introduce neural language model-based probabilistic metrics to directly model disclosive transparency, and demonstrate that they correlate with user and expert opinions of system transparency, making them a valid objective proxy. Finally, we demonstrate the use of these metrics in a pilot study quantifying the relationships between transparency, confusion, and user perceptions in a corpus of real NLP system descriptions.
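The abstract names "neural language model-based probabilistic metrics" without detailing them. As a purely illustrative sketch (not the paper's actual method), the underlying idea of scoring a system description by its average per-token surprisal under a language model can be shown with a toy add-alpha-smoothed unigram model; all function names and the tiny corpus below are assumptions for demonstration:

```python
import math
from collections import Counter

def train_unigram(corpus_tokens, alpha=1.0):
    """Fit add-alpha smoothed unigram probabilities; returns (probs, unseen_prob)."""
    counts = Counter(corpus_tokens)
    # Reserve one extra smoothed slot for out-of-vocabulary tokens.
    total = sum(counts.values()) + alpha * (len(counts) + 1)
    probs = {w: (c + alpha) / total for w, c in counts.items()}
    return probs, alpha / total

def avg_surprisal(description, probs, unseen_p):
    """Mean negative log2-probability (bits) per token: a crude proxy for
    how 'surprising' (and thus potentially confusing) a description is."""
    tokens = description.lower().split()
    return sum(-math.log2(probs.get(t, unseen_p)) for t in tokens) / len(tokens)

# Hypothetical reference corpus of plain-language system descriptions.
corpus = "the system translates text from one language to another".split()
probs, unseen_p = train_unigram(corpus)

plain = "the system translates text"
jargony = "the transformer leverages attention"
# Jargon-heavy phrasing is less probable under the reference model,
# so its average surprisal is higher.
assert avg_surprisal(plain, probs, unseen_p) < avg_surprisal(jargony, probs, unseen_p)
```

In the paper's setting a neural language model would replace the unigram model, but the shape of the metric (a probability assigned to a description, aggregated per token) is the same.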
