Beyond O(√T) Regret: Decoupling Learning and Decision-making in Online Linear Programming
Wenzhi Gao, Dongdong Ge, Chenyu Xue, Chunlin Sun, Yinyu Ye
Abstract
Online linear programming plays an important role in both revenue management and resource allocation, and recent research has focused on developing efficient first-order online learning algorithms. Despite the empirical success of first-order methods, they typically achieve a regret no better than O(√T), which is suboptimal compared to the O(log T) bound guaranteed by the state-of-the-art linear programming (LP)-based online algorithms. This paper establishes a general framework that improves upon the O(√T) result when the LP dual problem exhibits certain error bound conditions. For the first time, we show that first-order learning algorithms achieve o(√T) regret in the continuous support setting and O(log T) regret in the finite support setting beyond the non-degeneracy assumption. Our results significantly improve the state-of-the-art regret results and provide new insights for sequential decision-making.
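To make the setting concrete, the sketch below implements a standard first-order baseline for online LP from the literature the abstract refers to: a dual subgradient (price-update) scheme with an O(1/√T) step size, which is the kind of method that typically attains O(√T) regret. This is an illustrative sketch, not the paper's algorithm; the function name `online_lp_dual_subgradient` and the problem instance are hypothetical choices for demonstration.

```python
import numpy as np

def online_lp_dual_subgradient(c, A, b, eta=None):
    """First-order baseline for online LP (illustrative sketch):
        maximize  sum_t c[t] * x[t]
        s.t.      sum_t A[:, t] * x[t] <= b,   x[t] in {0, 1},
    where column t of A arrives online. A dual price vector p is
    learned on the fly and each decision is priced out against it."""
    m, T = A.shape
    rho = b / T                              # average per-round budget
    if eta is None:
        eta = 1.0 / np.sqrt(T)               # O(1/sqrt(T)) step size
    p = np.zeros(m)                          # dual prices, kept >= 0
    remaining = b.astype(float)              # unspent resources
    x = np.zeros(T)
    for t in range(T):
        a_t = A[:, t]
        # Accept request t if its reward beats the priced resource cost
        # and enough resources remain to serve it.
        if c[t] > p @ a_t and np.all(remaining >= a_t):
            x[t] = 1.0
            remaining -= a_t
        # Projected subgradient step on the dual: raise the price of
        # resources consumed faster than budget, project onto p >= 0.
        p = np.maximum(0.0, p + eta * (a_t * x[t] - rho))
    return x, p
```

The dual update is where the "learning" happens: prices rise for scarce resources and decay toward zero for slack ones, so the accept/reject rule adapts to the arrival stream. The paper's framework keeps this first-order learning component but decouples it from decision-making to go below the O(√T) regret of such plain schemes.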