Robust Active Learning Strategies for Model Variability
2020-10-15 · NeurIPS Workshop HAMLETS 2020
Anonymous
Abstract
Active learning methods are useful when only a limited budget for data labelling is available. However, the most widely used method -- uncertainty sampling -- may suffer from an excessive dependence on the model learned during data acquisition. This results in datasets that are suboptimal when used to train models very different from the one used during data creation. In this paper, we link this to the tendency of uncertainty sampling to select outliers, and show that methods favouring the selection of representative samples are more robust to changes in the model. We validate this experimentally on four NLP datasets.
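To make the contrast concrete, here is a minimal numpy sketch (illustrative only, not the paper's implementation) comparing least-confidence uncertainty sampling with a density-weighted variant that multiplies uncertainty by average similarity to the pool, down-weighting outliers. The toy pool, function names, and the outlier construction are assumptions for demonstration:

```python
import numpy as np

def uncertainty_scores(probs):
    """Least-confidence uncertainty: 1 minus the max predicted class probability."""
    return 1.0 - probs.max(axis=1)

def density_weighted_scores(probs, X):
    """Uncertainty weighted by mean cosine similarity to the pool.

    Points dissimilar to the rest of the pool (outliers) get low density,
    so representative points are favoured even at slightly lower uncertainty.
    """
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    density = (Xn @ Xn.T).mean(axis=1)
    return uncertainty_scores(probs) * density

# Toy unlabelled pool: a tight cluster plus one outlier (index 0).
rng = np.random.default_rng(0)
X = rng.normal(loc=[3.0, 0.0], scale=0.5, size=(100, 2))
X[0] = [-3.0, 3.0]                      # outlier, far from the cluster

# Hypothetical model probabilities; the model is maximally unsure on the outlier.
logits = rng.normal(size=(100, 3))
logits[0] = 0.0                          # uniform probabilities -> highest uncertainty
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

picked_unc = int(np.argmax(uncertainty_scores(probs)))      # picks the outlier
picked_den = int(np.argmax(density_weighted_scores(probs, X)))  # picks a cluster point
print(picked_unc, picked_den)
```

Pure uncertainty sampling selects the outlier (the point the model is most unsure about), while the density-weighted score steers acquisition toward a representative cluster point, which is the behaviour the abstract argues transfers better across models.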