
Optimality of Maximum Likelihood for Log-Concave Density Estimation and Bounded Convex Regression

2019-03-13

Gil Kur, Yuval Dagan, Alexander Rakhlin

Abstract

In this paper, we study two problems: (1) estimation of a d-dimensional log-concave distribution and (2) bounded multivariate convex regression with random design, with an underlying log-concave density or a compactly supported distribution with a continuous density. First, we show that for all d ≥ 4 the maximum likelihood estimators of both problems achieve an optimal risk of Θ_d(n^(-2/(d+1))) (up to a logarithmic factor) in terms of squared Hellinger distance and squared L_2 distance, respectively. Previously, the optimality of both these estimators was known only for d ≤ 3. We also prove that the ε-entropy numbers of the two aforementioned families are equal up to logarithmic factors. We complement these results by proving a sharp bound Θ_d(n^(-2/(d+4))) on the minimax rate (up to logarithmic factors) with respect to the total variation distance. Finally, we prove that estimating a log-concave density (even a uniform distribution on a convex set) up to a fixed accuracy requires a number of samples at least exponential in the dimension. We do that by improving the dimensional constant in the best known lower bound for the minimax rate from 2^(-d) · n^(-2/(d+1)) to c · n^(-2/(d+1)) (when d ≥ 2).
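As a rough numerical illustration of the abstract's two rates, the sketch below (my own, not from the paper; constants and logarithmic factors are omitted) compares the squared-Hellinger MLE risk rate n^(-2/(d+1)) with the total-variation minimax rate n^(-2/(d+4)) for a few dimensions, showing how both degrade as d grows and how the TV rate is the slower of the two:

```python
def hellinger_rate(n: int, d: int) -> float:
    """MLE risk rate in squared Hellinger distance, up to log factors
    and dimension-dependent constants (valid for d >= 4 per the paper)."""
    return n ** (-2 / (d + 1))

def tv_rate(n: int, d: int) -> float:
    """Minimax rate in total variation distance, up to log factors
    and dimension-dependent constants."""
    return n ** (-2 / (d + 4))

n = 10**6  # hypothetical sample size, chosen only for illustration
for d in (2, 4, 8, 16):
    print(f"d={d:2d}  Hellinger^2 ~ {hellinger_rate(n, d):.2e}  "
          f"TV ~ {tv_rate(n, d):.2e}")
```

Note how for fixed accuracy the required n scales like ε^(-(d+1)/2), which is the exponential-in-dimension sample complexity the abstract's lower bound formalizes.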
