
Fast rates with high probability in exp-concave statistical learning

2016-05-04

Nishant A. Mehta


Abstract

We present an algorithm for the statistical learning setting with a bounded exp-concave loss in d dimensions that obtains excess risk O(d log(1/δ)/n) with probability at least 1 - δ. The core technique is to boost the confidence of recent in-expectation O(d/n) excess risk bounds for empirical risk minimization (ERM), without sacrificing the rate, by leveraging a Bernstein condition which holds due to exp-concavity. We also show that with probability 1 - δ the standard ERM method obtains excess risk O(d (log(n) + log(1/δ))/n). We further show that a regret bound for any online learner in this setting translates to a high probability excess risk bound for the corresponding online-to-batch conversion of the online learner. Lastly, we present two high probability bounds for the exp-concave model selection aggregation problem that are quantile-adaptive in a certain sense. The first bound is for a purely exponential weights type algorithm, obtains a nearly optimal rate, and has no explicit dependence on the Lipschitz continuity of the loss. The second bound requires Lipschitz continuity but obtains the optimal rate.
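As a rough illustration of the confidence-boosting device the abstract alludes to, the sketch below splits the sample into disjoint chunks, fits an ERM candidate on each chunk, and then selects the candidate with the smallest empirical risk on a held-out validation split. This is a generic scheme under assumed choices (least-squares ERM, a 50/50 validation split, the function names), not the paper's exact algorithm or constants.

```python
import numpy as np

def erm_least_squares(X, y):
    """Illustrative ERM step: least-squares fit (a stand-in for ERM under a
    bounded exp-concave loss)."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def boost_confidence(X, y, n_splits=5, val_frac=0.5, seed=None):
    """Generic confidence-boosting scheme: run ERM on several disjoint chunks
    of the training data, then pick the candidate with the smallest empirical
    risk on a held-out validation set."""
    rng = np.random.default_rng(seed)
    n = len(y)
    idx = rng.permutation(n)
    n_val = int(val_frac * n)
    val_idx, train_idx = idx[:n_val], idx[n_val:]

    # Fit one ERM candidate per disjoint chunk of the training portion.
    chunks = np.array_split(train_idx, n_splits)
    candidates = [erm_least_squares(X[c], y[c]) for c in chunks]

    # Empirical validation risk (squared loss) of a candidate.
    def val_risk(w):
        residual = X[val_idx] @ w - y[val_idx]
        return np.mean(residual ** 2)

    return min(candidates, key=val_risk)
```

The code only demonstrates the split-and-select mechanics; the paper's point is that, thanks to the Bernstein condition implied by exp-concavity, such a selection step can convert an in-expectation O(d/n) bound into a high probability bound without sacrificing the rate.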
