GradML: A Gradient-based Loss for Deep Metric Learning

2021-09-22 · NeurIPS Workshop ICBINB 2021

Bhavya Vasudeva, Puneesh Deora, Saumik Bhattacharya, Umapada Pal, Sukalpa Chanda

Abstract

Deep metric learning (ML) uses a carefully designed loss function to learn distance metrics that improve discriminability in tasks like clustering and retrieval. Most loss functions are designed around the distances between embeddings to induce certain properties, without examining how such losses move those embeddings via their gradients during optimization. In this work, we analyze the gradients of various ML loss functions and propose a gradient-based loss for ML (GradML). Instead of formulating the loss directly, we first formulate its gradients and use them to derive the loss to be optimized. The resulting loss has a simple formulation and a lower computational cost than other methods. We evaluate our approach on three datasets and find that its performance depends on dataset properties such as inter-class variance.
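The abstract's gradient-first recipe — design how the embeddings should move, then descend along those hand-specified gradients — can be sketched minimally. The pull/push gradient below is a hypothetical illustration of the pattern, not the paper's actual GradML formulation:

```python
import numpy as np

def designed_grad(anchor, positive, negative, eps=1e-8):
    """Hand-designed gradient w.r.t. the anchor embedding (illustrative only):
    a unit-norm pull toward the positive plus a unit-norm push away from the
    negative, so the step size neither blows up nor vanishes with distance."""
    d_ap = positive - anchor  # vector from anchor to positive
    d_an = negative - anchor  # vector from anchor to negative
    return (-d_ap / (np.linalg.norm(d_ap) + eps)
            + d_an / (np.linalg.norm(d_an) + eps))

# Toy 2-D embeddings: the anchor starts equidistant from both.
a = np.array([0.0, 0.0])
p = np.array([1.0, 0.0])
n = np.array([0.0, 1.0])

lr = 0.05
for _ in range(50):
    a = a - lr * designed_grad(a, p, n)  # descend the designed gradient
```

After these steps the anchor has moved closer to the positive and farther from the negative; in the paper's setting, the loss whose gradients match the designed field is then derived and optimized with a standard framework.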
