SOTAVerified

Multi-Armed Bandits

A multi-armed bandit is a problem in which a fixed amount of resources must be allocated among competing choices so as to maximize expected gain. These problems typically involve an exploration/exploitation trade-off.
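The exploration/exploitation trade-off can be illustrated with a minimal epsilon-greedy strategy: with probability epsilon pull a random arm (explore), otherwise pull the arm with the best estimated mean reward (exploit). This is a generic illustrative sketch, not the method of any paper listed below; the arm means in the example are made up.

```python
import random

def epsilon_greedy(true_means, steps=10_000, epsilon=0.1, seed=0):
    """Run epsilon-greedy on Bernoulli arms with the given (unknown to the agent) means."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms      # number of pulls per arm
    values = [0.0] * n_arms    # running mean reward estimate per arm
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                       # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a]) # exploit
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean update
        total_reward += reward
    return values, counts, total_reward

# Hypothetical three-arm instance; the agent should concentrate pulls on the 0.8 arm.
values, counts, total = epsilon_greedy([0.2, 0.5, 0.8])
```

After enough rounds the best arm receives most of the pulls, while the epsilon fraction of random pulls keeps refining the estimates of the other arms.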

(Image credit: Microsoft Research)

Papers

Showing 571–580 of 1,262 papers

Title | Status | Hype
Contextual Pandora's Box | — | 0
Information-Directed Selection for Top-Two Algorithms | Code | 0
Neural Contextual Bandits Based Dynamic Sensor Selection for Low-Power Body-Area Networks | — | 0
Computationally Efficient Horizon-Free Reinforcement Learning for Linear Mixture MDPs | — | 0
Falsification of Multiple Requirements for Cyber-Physical Systems Using Online Generative Adversarial Networks and Multi-Armed Bandits | — | 0
Contextual Information-Directed Sampling | — | 0
Pessimism for Offline Linear Contextual Bandits using ℓ_p Confidence Sets | — | 0
Stability Enforced Bandit Algorithms for Channel Selection in Remote State Estimation of Gauss-Markov Processes | — | 0
Multi-Armed Bandits in Brain-Computer Interfaces | Code | 0
Breaking the √T Barrier: Instance-Independent Logarithmic Regret in Stochastic Contextual Linear Bandits | — | 0
Page 58 of 127

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | — | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | — | Unverified
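Cumulative regret, the metric reported above, is the gap between the reward an oracle that always pulls the best arm would collect and the reward actually collected. A minimal sketch of the computation, using made-up arm means and a made-up pull sequence (unrelated to the benchmark values):

```python
def cumulative_regret(true_means, choices):
    """Sum over rounds of (best arm's mean reward - chosen arm's mean reward)."""
    best = max(true_means)
    return sum(best - true_means[arm] for arm in choices)

# Hypothetical example: two arms with means 0.2 and 0.8, four pulls.
# Only the single pull of arm 0 incurs regret: 0.8 - 0.2 = 0.6.
regret = cumulative_regret([0.2, 0.8], [0, 1, 1, 1])
```

Lower cumulative regret is better; an algorithm that always pulled the best arm would score 0.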