A Methodology for the Comparison of Human Judgments With Metrics for Coreference Resolution
2022-05-01 · HumEval (ACL) 2022
Mariya Borovikova, Loïc Grobol, Anaïs Halftermeyer, Sylvie Billot
Abstract
We propose a method for investigating the interpretability of metrics used for the coreference resolution task through comparisons with human judgments. We provide a corpus annotated with different error types, along with human evaluations of their gravity. Our preliminary analysis shows that, compared to humans, the metrics considerably underestimate several error types and overlook errors in general. This study is conducted on French texts, but the methodology is language-independent.