SOTAVerified

Approximate information for efficient exploration-exploitation strategies

2023-07-04

Alex Barbier-Chebbah, Christian L. Vestergaard, Jean-Baptiste Masson


Abstract

This paper addresses the exploration-exploitation dilemma inherent in decision-making, focusing on multi-armed bandit problems. These problems involve an agent deciding whether to exploit current knowledge for immediate gains or explore new avenues for potential long-term rewards. Here we introduce a novel algorithm, approximate information maximization (AIM), which employs an analytical approximation of the entropy gradient to choose which arm to pull at each point in time. AIM matches the performance of Infomax and Thompson sampling while also offering enhanced computational speed, determinism, and tractability. Empirical evaluation of AIM indicates its compliance with the Lai-Robbins asymptotic bound and demonstrates its robustness across a range of priors. Its expression is tunable, which allows for specific optimization in various settings.
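For context, the multi-armed bandit setting and one of the baselines the abstract compares against (Thompson sampling) can be sketched as follows. This is a generic Bernoulli-bandit illustration, not the paper's AIM algorithm: the environment, arm means, and horizon are invented for the example.

```python
import random

def thompson_sampling(true_means, horizon, seed=0):
    """Bernoulli bandit with Beta(1, 1) priors per arm.

    At each step: sample a plausible mean for every arm from its Beta
    posterior, pull the arm with the highest sample, then update that
    arm's posterior with the observed 0/1 reward. Sampling from the
    posterior balances exploration and exploitation automatically.
    """
    rng = random.Random(seed)
    k = len(true_means)
    successes = [0] * k  # observed rewards of 1 per arm
    failures = [0] * k   # observed rewards of 0 per arm
    total_reward = 0
    for _ in range(horizon):
        # Posterior for arm i is Beta(successes[i] + 1, failures[i] + 1).
        samples = [rng.betavariate(successes[i] + 1, failures[i] + 1)
                   for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_means[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward
        total_reward += reward
    return total_reward, successes, failures

# Illustrative run: three arms with unknown (to the agent) success rates.
total, s, f = thompson_sampling([0.3, 0.5, 0.7], horizon=2000)
```

Over a long horizon, pulls concentrate on the best arm while suboptimal arms are sampled only often enough to rule them out; AIM targets the same regime via a deterministic entropy-gradient criterion instead of posterior sampling.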
