
Learning Deep Embeddings with Histogram Loss

2016-11-02 · NeurIPS 2016 · Code Available

Evgeniya Ustinova, Victor Lempitsky


Abstract

We suggest a loss for learning deep embeddings. The new loss does not introduce parameters that need to be tuned and results in very good embeddings across a range of datasets and problems. The loss is computed by estimating two distributions of similarities, for positive (matching) and negative (non-matching) sample pairs, and then computing the probability that a positive pair has a lower similarity score than a negative pair based on the estimated similarity distributions. We show that such operations can be performed in a simple and piecewise-differentiable manner using 1D histograms with soft assignment operations. This makes the proposed loss suitable for learning deep embeddings using stochastic optimization. In the experiments, the new loss performs favourably compared to recently proposed alternatives.
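The abstract's construction can be sketched in a few lines: build soft-assigned 1D histograms of positive- and negative-pair similarities, then estimate the probability of a reversal (a positive pair scoring below a negative pair) as the sum over bins of negative-histogram mass times the positive cumulative histogram. The sketch below is a minimal NumPy illustration of this idea, not the authors' reference implementation; the function name, bin count, and triangular soft-assignment kernel are assumptions for illustration.

```python
import numpy as np

def histogram_loss(pos_sims, neg_sims, num_bins=51):
    """Sketch of the histogram loss: estimated probability that a random
    positive pair is less similar than a random negative pair, computed
    from soft-assigned 1D histograms over similarities in [-1, 1].
    (Illustrative NumPy version; the paper's loss is used with
    differentiable soft assignment inside a deep network.)"""
    nodes = np.linspace(-1.0, 1.0, num_bins)   # histogram bin centres
    delta = 2.0 / (num_bins - 1)               # bin width

    def soft_histogram(sims):
        # Linear (triangular) soft assignment: each similarity contributes
        # to its two neighbouring bins, then the histogram is normalised.
        sims = np.asarray(sims, dtype=float)
        w = np.maximum(0.0, 1.0 - np.abs(sims[:, None] - nodes[None, :]) / delta)
        h = w.sum(axis=0)
        return h / h.sum()

    h_pos = soft_histogram(pos_sims)
    h_neg = soft_histogram(neg_sims)
    # P(s_pos < s_neg) ~= sum_r h_neg[r] * CDF_pos[r]
    return float(np.sum(h_neg * np.cumsum(h_pos)))
```

Well-separated similarity distributions (positives near +1, negatives near -1) give a loss near 0, while heavily overlapping distributions push it toward 0.5 and above, which is what makes the quantity a sensible training objective.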
