Survey Bandits with Regret Guarantees

2020-02-23

Sanath Kumar Krishnamurthy, Susan Athey

Abstract

We consider a variant of the contextual bandit problem. In standard contextual bandits, when a user arrives we get the user's complete feature vector and then assign a treatment (arm) to that user. In a number of applications (like healthcare), collecting features from users can be costly. To address this issue, we propose algorithms that avoid needless feature collection while maintaining strong regret guarantees.
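To make the setting concrete, here is a minimal toy sketch of a contextual bandit loop in which the learner may skip feature collection once features appear needless. Everything below is illustrative and hypothetical: the reward probabilities, the epsilon-greedy learner, and the `should_collect` stopping rule (stop once one arm dominates in every observed context by a margin) are stand-ins, not the paper's algorithms or their regret-guaranteed collection criterion.

```python
import random

random.seed(0)

# Toy setting: one binary user feature, two arms, Bernoulli rewards.
# Arm 0's payoff depends on the feature; arm 1 dominates everywhere,
# so collecting the feature is ultimately needless.
P = {  # (feature, arm) -> reward probability (made-up numbers)
    (0, 0): 0.2, (1, 0): 0.6,
    (0, 1): 0.9, (1, 1): 0.9,
}

counts = {}  # (context_key, arm) -> number of pulls
sums = {}    # (context_key, arm) -> cumulative reward

def estimate(ctx, arm):
    n = counts.get((ctx, arm), 0)
    return sums.get((ctx, arm), 0.0) / n if n else 0.5

def choose(ctx, eps=0.1):
    # Plain epsilon-greedy per context (illustrative, not the paper's method).
    if random.random() < eps:
        return random.randrange(2)
    return max(range(2), key=lambda a: estimate(ctx, a))

def should_collect():
    # Hypothetical stopping rule: keep collecting features until one arm's
    # estimate dominates in every context seen so far by a clear margin.
    # The paper instead derives collection rules with regret guarantees.
    for ctx in (0, 1):
        if min(counts.get((ctx, a), 0) for a in range(2)) < 20:
            return True  # still too little data in this context
    best = [max(range(2), key=lambda a: estimate(c, a)) for c in (0, 1)]
    margin = min(abs(estimate(c, 0) - estimate(c, 1)) for c in (0, 1))
    return not (best[0] == best[1] and margin > 0.1)

T, total, collected = 5000, 0.0, 0
for t in range(T):
    feature = int(random.random() < 0.5)
    if should_collect():
        collected += 1
        ctx = feature   # pay the cost, act on the observed feature
    else:
        ctx = "none"    # feature-free policy: no collection cost
    arm = choose(ctx)
    r = 1.0 if random.random() < P[(feature, arm)] else 0.0
    counts[(ctx, arm)] = counts.get((ctx, arm), 0) + 1
    sums[(ctx, arm)] = sums.get((ctx, arm), 0.0) + r
    total += r

avg = total / T
print(f"average reward: {avg:.3f}, features collected: {collected}/{T}")
```

In this toy run the learner stops paying for features once the data show the same arm winning in every context, while reward stays near the dominant arm's rate; the paper's contribution is doing this with provable regret guarantees rather than an ad hoc threshold.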