In Defense of the Triplet Loss for Person Re-Identification
Alexander Hermans, Lucas Beyer, Bastian Leibe
Code
- github.com/layumi/Person_reID_baseline_pytorch (PyTorch, ★ 4,415)
- github.com/OML-Team/open-metric-learning (PyTorch, ★ 985)
- github.com/agongt408/vbranch (TensorFlow, ★ 3)
- github.com/bastiennNB/Pair_ReID (PyTorch, ★ 0)
- github.com/immuno121/audio_source_classification (PyTorch, ★ 0)
- github.com/cftang0827/pedestrian_recognition (TensorFlow, ★ 0)
- github.com/kilsenp/triplet-reid-pytorch (PyTorch, ★ 0)
- github.com/zhengziqiang/ReshapeGAN (TensorFlow, ★ 0)
- github.com/AdrianUng/keras-triplet-loss-mnist (TensorFlow, ★ 0)
- github.com/thomas-liao/diva_tracking_reid (TensorFlow, ★ 0)
Abstract
In the past few years, the field of computer vision has gone through a revolution fueled mainly by the advent of large datasets and the adoption of deep convolutional neural networks for end-to-end learning. The person re-identification subfield is no exception to this. Unfortunately, a prevailing belief in the community seems to be that the triplet loss is inferior to using surrogate losses (classification, verification) followed by a separate metric learning step. We show that, for models trained from scratch as well as pretrained ones, using a variant of the triplet loss to perform end-to-end deep metric learning outperforms most other published methods by a large margin.
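The "variant of the triplet loss" the abstract refers to is the paper's batch-hard formulation: within a mini-batch, each sample acts as an anchor, and the loss is computed against its farthest same-identity sample (hardest positive) and its closest different-identity sample (hardest negative). The sketch below is a minimal, framework-free illustration in plain Python (no autograd); the function name and margin default are illustrative, not taken from the paper's code.

```python
import math

def batch_hard_triplet_loss(embeddings, labels, margin=0.2):
    """Batch-hard triplet loss: for each anchor, use its hardest
    (farthest) positive and hardest (closest) negative in the batch.

    embeddings: list of equal-length float vectors
    labels:     list of identity labels, one per embedding
    """
    n = len(embeddings)
    # Pairwise Euclidean distances within the batch.
    dist = [[math.dist(embeddings[i], embeddings[j]) for j in range(n)]
            for i in range(n)]
    losses = []
    for a in range(n):
        pos = [dist[a][p] for p in range(n) if p != a and labels[p] == labels[a]]
        neg = [dist[a][q] for q in range(n) if labels[q] != labels[a]]
        if not pos or not neg:
            continue  # anchor has no valid triplet in this batch
        # Hinge on (margin + hardest positive - hardest negative).
        losses.append(max(margin + max(pos) - min(neg), 0.0))
    return sum(losses) / len(losses) if losses else 0.0
```

In practice the paper samples batches so every identity has several images (so positives always exist); a well-separated batch yields zero loss, while a batch where identities overlap in embedding space yields a positive penalty.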
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| CUHK03 | TriNet | Rank-1 | 89.63 | — | Unverified |
| DukeMTMC-reID | TriNet | mAP | 53.5 | — | Unverified |
| Market-1501 | TriNet | Rank-1 | 84.92 | — | Unverified |
| Market-1501 | TriNet (RK) | Rank-1 | 86.67 | — | Unverified |
| Market-1501 | LuNet | Rank-1 | 81.38 | — | Unverified |
| Market-1501 | LuNet (RK) | Rank-1 | 84.59 | — | Unverified |
| MARS | TriNet | mAP | 67.7 | — | Unverified |
| MARS | TriNet (RK) | mAP | 77.43 | — | Unverified |
| MARS | LuNet | mAP | 60.48 | — | Unverified |
| MARS | LuNet (RK) | mAP | 73.68 | — | Unverified |

(RK) denotes results with re-ranking applied at test time.