
Agnostic Learning of Halfspaces with Gradient Descent via Soft Margins

2020-10-01

Spencer Frei, Yuan Cao, Quanquan Gu


Abstract

We analyze the properties of gradient descent on convex surrogates for the zero-one loss for the agnostic learning of linear halfspaces. If OPT is the best classification error achieved by a halfspace, then by appealing to the notion of soft margins we are able to show that gradient descent finds halfspaces with classification error Õ(OPT^{1/2}) + ε in poly(d, 1/ε) time and sample complexity, for a broad class of distributions that includes log-concave isotropic distributions as a subclass. Along the way we answer a question recently posed by Ji et al. (2020) on how the tail behavior of a loss function can affect sample complexity and runtime guarantees for gradient descent.
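As a rough illustration of the setting (not the paper's algorithm or its guarantees), the sketch below runs plain gradient descent on the logistic loss, a standard convex surrogate for the zero-one loss, over a synthetic agnostic dataset in which the labels of a ground-truth halfspace are flipped at rate OPT. The choice of surrogate, step size, iteration count, and Gaussian feature distribution are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic agnostic data: isotropic Gaussian features, labels from a
# ground-truth halfspace w_star, then a fraction opt_noise of labels
# flipped to play the role of OPT. (Illustrative assumption, not the
# paper's exact noise model.)
d, n, opt_noise = 20, 5000, 0.05
w_star = rng.standard_normal(d)
w_star /= np.linalg.norm(w_star)
X = rng.standard_normal((n, d))
y = np.sign(X @ w_star)
flip = rng.random(n) < opt_noise
y[flip] *= -1

def logistic_loss_grad(w, X, y):
    # Gradient of the empirical logistic loss (1/n) * sum log(1 + exp(-y <w, x>)).
    margins = np.clip(y * (X @ w), -50, 50)   # clip to avoid overflow in exp
    coef = -y / (1.0 + np.exp(margins))       # d/dm log(1 + e^{-m}) times y
    return (X * coef[:, None]).mean(axis=0)

# Plain gradient descent on the convex surrogate; eta and steps are
# illustrative choices.
w = np.zeros(d)
eta, steps = 0.5, 500
for _ in range(steps):
    w -= eta * logistic_loss_grad(w, X, y)

# Evaluate the zero-one loss of the learned halfspace sign(<w, x>).
err = np.mean(np.sign(X @ w) != y)
print(f"injected noise rate OPT = {opt_noise:.2f}, empirical zero-one error = {err:.3f}")
```

In this toy run the zero-one error of the learned halfspace should track the injected noise rate, reflecting the qualitative point that minimizing a convex surrogate with gradient descent can yield classification error comparable to OPT under suitable distributional assumptions.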
