
Multi-Armed Bandits

Multi-armed bandits refer to the problem of allocating a fixed amount of resources among competing choices so as to maximize expected gain, when each choice's properties are only partially known at the time of allocation. These problems typically involve an exploration/exploitation trade-off: the learner must balance trying under-sampled choices against repeatedly playing the choice that currently looks best.
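To make the exploration/exploitation trade-off concrete, here is a minimal sketch of an epsilon-greedy policy on a Bernoulli bandit. The arm probabilities, epsilon value, and function name are illustrative assumptions, not drawn from any paper listed below: with probability epsilon the agent explores a random arm, and otherwise it exploits the arm with the highest estimated mean reward.

```python
import random

def epsilon_greedy_bandit(arm_means, n_rounds=10_000, epsilon=0.1):
    """Play a K-armed Bernoulli bandit with an epsilon-greedy policy."""
    k = len(arm_means)
    counts = [0] * k      # number of pulls per arm
    values = [0.0] * k    # running mean reward per arm
    total_reward = 0.0
    for _ in range(n_rounds):
        if random.random() < epsilon:
            arm = random.randrange(k)  # explore: pick a random arm
        else:
            # exploit: pick the arm with the highest estimated mean
            arm = max(range(k), key=lambda a: values[a])
        reward = 1.0 if random.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        total_reward += reward
    return total_reward, values

# Illustrative arm probabilities; any values in [0, 1] work.
reward, estimates = epsilon_greedy_bandit([0.2, 0.5, 0.7])
print(f"total reward: {reward:.0f}")
print("estimated means:", [round(v, 3) for v in estimates])
```

With epsilon fixed at 0.1, a constant fraction of rounds is spent exploring forever, so regret grows linearly; the papers listed below study policies (UCB variants, Thompson sampling, and others) that achieve sublinear regret by adapting how much they explore.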

(Image credit: Microsoft Research)

Papers

Showing 1241–1250 of 1262 papers

Title | Status | Hype
Towards Distribution-Free Multi-Armed Bandits with Combinatorial Strategies | - | 0
From Bandits to Experts: A Tale of Domination and Independence | - | 0
On Finding the Largest Mean Among Many | - | 0
Concentration bounds for temporal difference learning with linear function approximation: The case of batch data and uniform sampling | - | 0
A Gang of Bandits | - | 0
Dynamic Ad Allocation: Bandits with Budgets | - | 0
Exponentiated Gradient LINUCB for Contextual Multi-Armed Bandits | - | 0
Hierarchical Optimistic Region Selection driven by Curiosity | - | 0
Risk-Aversion in Multi-armed Bandits | - | 0
Thompson Sampling for Contextual Bandits with Linear Payoffs | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | - | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | - | Unverified
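For reference, the cumulative regret metric reported above is conventionally defined as the gap between the expected reward of always playing the best arm and the expected reward the policy actually collects over T rounds, where mu* is the mean of the best arm and a_t is the arm played at round t; lower values indicate a better policy:

```latex
R_T \;=\; T\,\mu^{*} \;-\; \mathbb{E}\!\left[\sum_{t=1}^{T} \mu_{a_t}\right]
```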