General Feature Extraction In SAR Target Classification: A Contrastive Learning Approach Across Sensor Types
M. Muzeau, J. Frontera-Pons, Chengfang Ren, J. -P. Ovarlez
Code (official, PyTorch): github.com/muzmax/mstar_feature_extraction
Abstract
The increased availability of SAR data has raised growing interest in applying deep learning algorithms. However, the limited availability of labeled data poses a significant challenge for supervised training. This article introduces a new method for classifying SAR data with minimal labeled images. The method is based on a ViT feature extractor trained with contrastive learning on a dataset entirely different from the one on which classification is performed. Its effectiveness is assessed qualitatively through 2D t-SNE visualizations and quantitatively through k-NN classification using a small number of labeled samples. Notably, our results outperform both a k-NN applied to PCA-processed data and a ResNet-34 trained specifically for the task, achieving 95.9% accuracy on the MSTAR dataset with just ten labeled images per class.
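The evaluation protocol described in the abstract — fitting a k-NN classifier on frozen extracted features with only a handful of labeled examples per class — can be sketched as follows. This is a minimal illustration using synthetic clustered features as a stand-in for the encoder's output, not the paper's code; the class count, feature dimension, and number of neighbors are assumed values:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Assumed sizes: 10 target classes (as in MSTAR), 128-d features,
# 10 labeled images per class, 50 test images per class.
n_classes, dim, n_labeled, n_test = 10, 128, 10, 50

# Synthetic stand-in for features from a frozen contrastive encoder:
# each class forms a cluster around a random center in feature space.
centers = rng.normal(size=(n_classes, dim))

def sample(n_per_class):
    X = np.concatenate(
        [c + 0.1 * rng.normal(size=(n_per_class, dim)) for c in centers]
    )
    y = np.repeat(np.arange(n_classes), n_per_class)
    return X, y

X_train, y_train = sample(n_labeled)  # few-shot labeled set
X_test, y_test = sample(n_test)

# k-NN on the feature space: no task-specific training beyond the encoder.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
acc = knn.score(X_test, y_test)
print(f"k-NN accuracy with {n_labeled} labels per class: {acc:.3f}")
```

In the paper's setting, `X_train` and `X_test` would be the ViT embeddings of MSTAR images; the point of the sketch is that the only supervision the classifier ever sees is the ten labeled examples per class.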