Thompson Sampling for Gaussian Entropic Risk Bandits
2021-05-14
Ming Liang Ang, Eloise Y. Y. Lim, Joel Q. L. Chang
Abstract
The multi-armed bandit (MAB) problem is a ubiquitous decision-making problem that exemplifies the exploration-exploitation tradeoff. Standard formulations exclude risk from decision making. Risk notably complicates the basic reward-maximising objective, in part because there is no universally agreed definition of it. In this paper, we consider an entropic risk (ER) measure and explore the performance of a Thompson sampling-based algorithm, ERTS, under this risk measure by providing regret bounds for ERTS and corresponding instance-dependent lower bounds.
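To make the setting concrete, the entropic risk of a random reward $X$ with risk parameter $\gamma \neq 0$ is $\rho_\gamma(X) = \frac{1}{\gamma}\log \mathbb{E}[e^{\gamma X}]$, which for a Gaussian $\mathcal{N}(\mu, \sigma^2)$ reward evaluates in closed form to $\mu + \gamma\sigma^2/2$. The sketch below is a minimal, illustrative Thompson-sampling loop that ranks arms by the ER of a posterior sample rather than by a sampled mean. It is not the paper's ERTS algorithm: for simplicity it assumes the arm variances are known and places a Gaussian prior on each mean only, and all parameter values (prior variance, $\gamma$, horizon) are arbitrary choices for illustration.

```python
import numpy as np

def entropic_risk(mu, sigma2, gamma):
    # Closed-form entropic risk of N(mu, sigma2):
    # (1/gamma) * log E[exp(gamma * X)] = mu + gamma * sigma2 / 2.
    return mu + gamma * sigma2 / 2.0

rng = np.random.default_rng(0)
true_means = np.array([0.0, 0.5, 1.0])   # illustrative Gaussian arms
true_vars = np.array([1.0, 2.0, 0.5])
gamma = 0.5        # risk parameter (sign convention is an assumption)
n_rounds = 2000
prior_var = 100.0  # diffuse N(0, prior_var) prior on each arm mean

counts = np.zeros(len(true_means))
sums = np.zeros(len(true_means))

for t in range(n_rounds):
    # Known-variance Gaussian posterior over each arm's mean.
    post_prec = 1.0 / prior_var + counts / true_vars
    post_mean = (sums / true_vars) / post_prec
    sampled_mu = rng.normal(post_mean, np.sqrt(1.0 / post_prec))
    # Thompson step: pull the arm whose *sampled* entropic risk is largest.
    arm = int(np.argmax(entropic_risk(sampled_mu, true_vars, gamma)))
    reward = rng.normal(true_means[arm], np.sqrt(true_vars[arm]))
    counts[arm] += 1
    sums[arm] += reward

best = int(np.argmax(entropic_risk(true_means, true_vars, gamma)))
print("ER-optimal arm:", best, "pull counts:", counts)
```

Note how the risk parameter changes the target: with $\gamma = 0.5$ the high-variance arm 1 ($\mu=0.5$, $\sigma^2=2$) has ER $1.0$, so the ER-optimal arm need not be the mean-optimal one in general.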