Multi-Task Learning with Multi-Annotation Triplet Loss for Improved Object Detection

2025-04-10

Meilun Zhou, Aditya Dutt, Alina Zare


Abstract

Triplet loss traditionally relies only on class labels, ignoring the additional supervision available in multi-task scenarios where multiple types of annotations exist. This paper introduces a Multi-Annotation Triplet Loss (MATL) framework that extends triplet loss by incorporating additional annotations, such as bounding box information, alongside class labels in the loss formulation. By exploiting these complementary annotations, MATL improves multi-task learning for tasks requiring both classification and localization. Experiments on an aerial wildlife imagery dataset demonstrate that MATL outperforms conventional triplet loss in both classification and localization. These findings highlight the benefit of using all available annotations for triplet loss in multi-task learning frameworks.
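The abstract does not give MATL's exact formulation, but the core idea (augmenting the standard embedding-space triplet loss with a term driven by bounding-box annotations) can be sketched as follows. This is a hypothetical, minimal interpretation: the function names, the additive combination, the IoU-based localization term, and the weight `alpha` are all assumptions for illustration, not the authors' published loss.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def multi_annotation_triplet_loss(anchor, positive, negative,
                                  box_anchor, box_positive, box_negative,
                                  margin=1.0, alpha=0.5):
    """Hypothetical MATL-style loss: a standard triplet term on the
    embeddings plus a bounding-box term that penalizes triplets whose
    anchor box overlaps the negative's box more than the positive's."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    cls_term = max(0.0, d_pos - d_neg + margin)
    box_term = max(0.0, iou(box_anchor, box_negative)
                        - iou(box_anchor, box_positive))
    return cls_term + alpha * box_term
```

In this reading, `alpha` balances the classification-driven and localization-driven terms, so both annotation types shape the learned embedding rather than class labels alone.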
