
Myersonian Regression

2020-12-01 · NeurIPS 2020

Allen Liu, Renato Leme, Jon Schneider


Abstract

Motivated by pricing applications in online advertising, we study a variant of linear regression with a discontinuous loss function that we term Myersonian regression. In this variant, we wish to find a linear function f : R^d → R that well approximates a set of points (x_i, v_i) ∈ R^d × [0, 1] in the following sense: we receive a loss of v_i when f(x_i) > v_i and a loss of v_i − f(x_i) when f(x_i) ≤ v_i. This arises naturally in the economic application of designing a pricing policy for differentiated items (where the loss is the gap between the performance of our policy and the optimal Myerson prices). We show that Myersonian regression is NP-hard to solve exactly and furthermore that no fully polynomial-time approximation scheme exists for Myersonian regression conditioned on the Exponential Time Hypothesis being true. In contrast to this, we demonstrate a polynomial-time approximation scheme for Myersonian regression that obtains an εm additive approximation to the optimal possible revenue and can be computed in time O(2^{poly(1/ε)} · poly(m, n)). We show that this algorithm is stable and generalizes well over distributions of samples.
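The discontinuous loss in the abstract can be sketched directly: for a linear pricing rule f(x) = ⟨w, x⟩, an overpriced item (f(x_i) > v_i) goes unsold and the full value v_i is lost, while an accepted price loses only the gap v_i − f(x_i). The following NumPy sketch is illustrative only; the function name and vectorized formulation are my own, not from the paper.

```python
import numpy as np

def myersonian_loss(w, X, v):
    """Total Myersonian regression loss of the linear pricing rule f(x) = <w, x>.

    For each point (x_i, v_i) with v_i in [0, 1]:
      - if f(x_i) > v_i, the buyer rejects the price and the loss is v_i;
      - if f(x_i) <= v_i, the loss is the forgone revenue v_i - f(x_i).
    """
    prices = X @ w                                   # f(x_i) for every sample
    per_point = np.where(prices > v, v, v - prices)  # discontinuity at f(x_i) = v_i
    return float(per_point.sum())
```

For example, with w = [0.5] and two one-dimensional points of values 0.4 and 0.6, both are priced at 0.5: the first is overpriced (loss 0.4) and the second sells (loss 0.1). Note the loss is discontinuous in w, which is what rules out standard convex regression machinery.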
