
Strategyproof Learning: Building Trustworthy User-Generated Datasets

2021-06-04 · Code Available

Sadegh Farhadkhani, Rachid Guerraoui, Lê-Nguyên Hoang


Abstract

We prove in this paper that, perhaps surprisingly, incentivizing data misreporting is not inevitable. By carefully designing the loss function, we propose Licchavi, a global and personalized learning framework with provable strategyproofness guarantees. Essentially, we prove that no user can gain much by replying to Licchavi's queries with answers that deviate from their true preferences. Interestingly, Licchavi also promotes the desirable "one person, one unit-force vote" fairness principle. Furthermore, our empirical evaluation showcases Licchavi's real-world applicability. We believe these results are critical for the safety of any learning scheme that leverages user-generated data.
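To give intuition for the "one person, one unit-force vote" principle, here is a minimal sketch. It assumes (as an illustration, not as the paper's exact construction) that each user's influence on the global model enters through a non-squared ℓ2 penalty `lam * ||rho - theta_n||`, whose gradient with respect to the global model `rho` always has norm at most `lam`. The function name `user_force` and the weight `lam` are hypothetical names chosen for this sketch; consult the paper and its code for Licchavi's actual loss.

```python
import numpy as np

def user_force(rho, theta_n, lam=1.0):
    """Gradient of the illustrative penalty lam * ||rho - theta_n||_2
    with respect to the global model rho. Its Euclidean norm is capped
    at lam, so no single user can pull the global model with unbounded
    force, no matter how extreme their reported preferences are."""
    diff = rho - theta_n
    norm = np.linalg.norm(diff)
    if norm == 0.0:
        return np.zeros_like(rho)  # user already agrees with the global model
    return lam * diff / norm

rho = np.zeros(3)
honest = user_force(rho, np.array([1.0, 0.0, 0.0]))
extreme = user_force(rho, np.array([1e6, 0.0, 0.0]))
# both forces point the same way with the same unit norm:
# exaggerating one's report buys no extra pull on the global model
```

Under this assumption, the bounded gradient is what removes the incentive to exaggerate: reporting a preference a million times further away exerts exactly the same force as an honest report in the same direction.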
