
Sample complexity of learning Mahalanobis distance metrics

2015-05-11 · NeurIPS 2015

Nakul Verma, Kristin Branson


Abstract

Metric learning seeks a transformation of the feature space that enhances prediction quality for the task at hand. In this work we provide PAC-style sample complexity rates for supervised metric learning. We give matching lower and upper bounds showing that the sample complexity scales with the representation dimension when no assumptions are made about the underlying data distribution. However, by leveraging the structure of the data distribution, we show that one can achieve rates fine-tuned to a specific notion of intrinsic complexity for a given dataset. Our analysis reveals that augmenting the metric learning optimization criterion with a simple norm-based regularization can help adapt to a dataset's intrinsic complexity, yielding better generalization. Experiments on benchmark datasets validate our analysis and show that regularizing the metric helps discern the signal even when the data contains large amounts of noise.
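
The abstract mentions augmenting the metric learning objective with a norm-based regularizer but does not give the criterion here. The sketch below is only a minimal illustration of that idea, assuming a pairwise hinge loss over similar/dissimilar pairs plus a Frobenius-norm penalty lam * ||M||_F^2 on the Mahalanobis matrix M, optimized by projected gradient descent; the function names, loss, and hyperparameters are hypothetical and not taken from the paper.

```python
import numpy as np

def mahalanobis_sq(M, x, y):
    """Squared Mahalanobis distance (x - y)^T M (x - y)."""
    d = x - y
    return d @ M @ d

def learn_metric(pairs, labels, dim, lam=0.5, lr=0.01, epochs=200, margin=1.0):
    """Hypothetical sketch: pairwise hinge loss + lam * ||M||_F^2 regularizer,
    with M kept positive semi-definite by eigenvalue clipping."""
    M = np.eye(dim)
    for _ in range(epochs):
        grad = 2.0 * lam * M  # gradient of the Frobenius-norm regularizer
        for (x, y), s in zip(pairs, labels):
            d = x - y
            dist_sq = d @ M @ d
            # s = +1: similar pair, penalized when dist_sq exceeds the margin;
            # s = -1: dissimilar pair, penalized when dist_sq falls below it.
            if s * (dist_sq - margin) > 0:   # hinge is active
                grad += s * np.outer(d, d)
        M -= lr * grad / len(pairs)
        # Project back onto the PSD cone.
        w, V = np.linalg.eigh(M)
        M = (V * np.clip(w, 0.0, None)) @ V.T
    return M

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    cls = (X[:, 0] > 0).astype(int)        # only the first coordinate carries signal
    idx = rng.integers(0, 100, size=(300, 2))
    pairs = [(X[i], X[j]) for i, j in idx]
    labels = [1 if cls[i] == cls[j] else -1 for i, j in idx]
    M = learn_metric(pairs, labels, dim=5)
    print(np.round(M, 2))
```

In this toy setup, increasing lam tends to shrink the learned weights on the noise coordinates relative to the informative one, which is the kind of adaptation to intrinsic complexity the abstract describes; the specific regularizer and guarantees are given in the paper itself.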
