SuperAnimal pretrained pose estimation models for behavioral analysis
Shaokai Ye, Anastasiia Filippova, Jessy Lauer, Steffen Schneider, Maxime Vidal, Tian Qiu, Alexander Mathis, Mackenzie Weygandt Mathis
Code
- github.com/DeepLabCut/DeepLabCut (official, in paper; TensorFlow; ★ 5,559)
- github.com/adaptivemotorcontrollab/modelzoo-figures (official, in paper; PyTorch; ★ 18)
- github.com/AlexEMG/DeepLabCut (TensorFlow; ★ 5,558)
Abstract
Quantification of behavior is critical in applications ranging from neuroscience to veterinary medicine and animal conservation. A key first step in behavioral analysis is extracting relevant keypoints on animals, known as pose estimation. However, reliable inference of poses currently requires domain knowledge and manual labeling effort to build supervised models. We present a series of technical innovations, collectively called SuperAnimal, that enable the development of unified foundation models usable on over 45 species without additional human labels. Concretely, we introduce a method to unify the keypoint space across differently labeled datasets (via our generalized data converter) and to train on these diverse datasets without catastrophically forgetting keypoints given the unbalanced inputs (via our keypoint gradient masking and memory replay approaches). These models show excellent performance across six pose benchmarks. To ensure maximal usability for end users, we demonstrate how to fine-tune the models on differently labeled data and provide tooling for unsupervised video adaptation that boosts performance and decreases jitter across frames. When fine-tuned, SuperAnimal models are 10-100x more data efficient than prior transfer-learning-based approaches. We illustrate the utility of our models in behavioral classification in mice and gait analysis in horses. Collectively, this presents a data-efficient solution for animal pose estimation.
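The keypoint gradient masking mentioned in the abstract can be illustrated with a minimal sketch: when training one model on datasets with different annotation schemes, the loss for keypoints that a given source dataset does not define is zeroed out, so no gradient flows into those output channels. The function below is a hypothetical PyTorch example assuming a heatmap-based MSE loss; the function name and exact normalization are illustrative assumptions, not the paper's implementation.

```python
import torch


def masked_keypoint_loss(pred, target, keypoint_mask):
    """Heatmap MSE loss with per-keypoint gradient masking (illustrative).

    pred, target: (batch, num_keypoints, H, W) predicted / target heatmaps.
    keypoint_mask: (batch, num_keypoints) with 1.0 where the source dataset
    defines the keypoint and 0.0 where that keypoint is absent from the
    dataset's annotation scheme.
    """
    # Per-keypoint mean squared error over the spatial dimensions.
    per_kpt = ((pred - target) ** 2).mean(dim=(2, 3))  # (batch, num_keypoints)
    # Zero the loss (and therefore the gradient) for undefined keypoints.
    masked = per_kpt * keypoint_mask
    # Normalize by the number of annotated keypoints so datasets with
    # sparse keypoint coverage are not systematically down-weighted.
    return masked.sum() / keypoint_mask.sum().clamp(min=1.0)
```

In this sketch, a dataset that labels only a subset of the unified keypoint space simply supplies zeros in `keypoint_mask` for the missing channels; the model's predictions for those channels are untouched by the update, which is what prevents catastrophic forgetting of keypoints learned from other datasets.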
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| iRodent | HRNet-w32 pretrained on SuperAnimal, fine-tuned (1.0 fraction of data) | Average mAP | 72.97 | — | Unverified |
| iRodent | HRNet-w32 pretrained on AP-10K, fine-tuned (1.0 fraction of data) | Average mAP | 61.64 | — | Unverified |
| iRodent | HRNet-w32 pretrained on SuperAnimal, fine-tuned (0.01 fraction of data) | Average mAP | 60.85 | — | Unverified |
| iRodent | HRNet-w32 pretrained on ImageNet, fine-tuned | Average mAP | 58.86 | — | Unverified |
| iRodent | HRNet-w32 pretrained on SuperAnimal-Quadruped, zero-shot | Average mAP | 58.56 | — | Unverified |
| iRodent | AnimalTokenPose pretrained on AP-10K, zero-shot | Average mAP | 55.42 | — | Unverified |
| iRodent | HRNet-w32 pretrained on AP-10K, fine-tuned (0.01 fraction of data) | Average mAP | 43.14 | — | Unverified |
| iRodent | HRNet-w32 pretrained on AP-10K, zero-shot | Average mAP | 40.39 | — | Unverified |