A Challenge Set and Methods for Noun-Verb Ambiguity
Ali Elkahky, Kellie Webster, Daniel Andor, Emily Pitler
Abstract
English part-of-speech taggers regularly make egregious errors related to noun-verb ambiguity, despite having achieved 97%+ accuracy on the WSJ Penn Treebank since 2002. These mistakes have been difficult to quantify and make taggers less useful to downstream tasks such as translation and text-to-speech synthesis. This paper creates a new dataset of over 30,000 naturally-occurring non-trivial examples of noun-verb ambiguity. Taggers within 1% of each other when measured on the WSJ have accuracies ranging from 57% to 75% on this challenge set. Enhancing the strongest existing tagger with contextual word embeddings and targeted training data improves its accuracy to 89%, a 14% absolute (52% relative) improvement. Downstream, using just this enhanced tagger yields a 28% reduction in error over the prior best learned model for homograph disambiguation for text-to-speech synthesis.
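To make the task concrete, the sketch below illustrates, under stated assumptions, the kind of model the abstract describes: a classifier that decides whether an ambiguous token such as "record" is a noun or a verb by combining a static word embedding with a contextual representation of the whole sentence. This is not the paper's implementation; the BiLSTM standing in for pretrained contextual embeddings, the layer sizes, and the toy vocabulary are all illustrative assumptions.

```python
# Minimal sketch (not the paper's model) of noun-verb disambiguation using
# contextual token representations. Sizes, names, and data are illustrative.
import torch
import torch.nn as nn

class NounVerbTagger(nn.Module):
    def __init__(self, vocab_size, word_dim=64, ctx_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, word_dim)
        # Stand-in for pretrained contextual embeddings: a BiLSTM over the
        # sentence yields a context-sensitive vector for each token.
        self.context = nn.LSTM(word_dim, ctx_dim, batch_first=True,
                               bidirectional=True)
        # The classifier sees the static embedding concatenated with the
        # contextual one for the target token only.
        self.classify = nn.Linear(word_dim + 2 * ctx_dim, 2)  # 0 = NOUN, 1 = VERB

    def forward(self, token_ids, target_index):
        emb = self.embed(token_ids)             # (batch, seq, word_dim)
        ctx, _ = self.context(emb)              # (batch, seq, 2 * ctx_dim)
        both = torch.cat([emb, ctx], dim=-1)    # (batch, seq, word_dim + 2*ctx_dim)
        target = both[torch.arange(token_ids.size(0)), target_index]
        return self.classify(target)            # logits over {NOUN, VERB}

# Toy usage: "they record the record" -- one surface form, two different tags.
vocab = {"<pad>": 0, "they": 1, "record": 2, "the": 3}
sentence = torch.tensor([[1, 2, 3, 2]])
model = NounVerbTagger(vocab_size=len(vocab))
verb_logits = model(sentence, torch.tensor([1]))  # "record" at position 1 (verb)
noun_logits = model(sentence, torch.tensor([3]))  # "record" at position 3 (noun)
print(verb_logits.shape, noun_logits.shape)       # torch.Size([1, 2]) each
```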