Revisiting the Effects of Leakage on Dependency Parsing
2022-03-24 · Findings of ACL 2022
Nathaniel Krasner, Miriam Wanner, Antonios Anastasopoulos
- Code (official, in paper): github.com/miriamwanner/reu-nlp-project
Abstract
Recent work by Søgaard (2020) showed that, treebank size aside, overlap between training and test graphs (termed leakage) explains more of the observed variation in dependency parsing performance than other explanations. In this work we revisit this claim, testing it on more models and languages. We find that it only holds for zero-shot cross-lingual settings. We then propose a more fine-grained measure of such leakage which, unlike the original measure, not only explains but also correlates with observed performance variation. Code and data are available here: https://github.com/miriamwanner/reu-nlp-project
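For intuition, here is a minimal sketch of a coarse leakage computation in the spirit of Søgaard (2020): the fraction of test trees whose unlabeled graph structure also occurs in the training set. All names are illustrative, and the head-index representation of trees is an assumption; the fine-grained measure proposed in the paper differs from this.

```python
def tree_signature(heads):
    """Canonical signature of an unlabeled dependency tree,
    given as a sequence of head indices (0 = root).
    This delexicalized representation is an assumption."""
    return tuple(heads)

def leakage(train_trees, test_trees):
    """Fraction of test trees whose unlabeled structure also
    appears in training (a sketch of the coarse overlap measure,
    not the paper's fine-grained variant)."""
    train_sigs = {tree_signature(t) for t in train_trees}
    hits = sum(1 for t in test_trees if tree_signature(t) in train_sigs)
    return hits / len(test_trees) if test_trees else 0.0

# Toy example with three-token sentences as head-index tuples.
train = [(2, 0, 2), (0, 1, 1)]
test  = [(2, 0, 2), (3, 3, 0)]
print(leakage(train, test))  # 0.5: one of the two test trees leaks
```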