Avoiding Imposters and Delinquents: Adversarial Crowdsourcing and Peer Prediction
Jacob Steinhardt, Gregory Valiant, Moses Charikar
Abstract
We consider a crowdsourcing model in which n workers are asked to rate the quality of n items previously generated by other workers. An unknown set of αn workers generate reliable ratings, while the remaining workers may behave arbitrarily and possibly adversarially. The manager of the experiment can also manually evaluate the quality of a small number of items, and wishes to curate together almost all of the high-quality items with at most an ε fraction of low-quality items. Perhaps surprisingly, we show that this is possible with an amount of work required of the manager, and each worker, that does not scale with n: the dataset can be curated with Õ(1/βε³α⁴) ratings per worker, and Õ(1/βε²) ratings by the manager, where β is the fraction of high-quality items. Our results extend to the more general setting of peer prediction, including peer grading in online classrooms.