SOTAVerified

Feature Guided Masked Autoencoder for Self-supervised Learning in Remote Sensing

2023-10-28 · Code Available

Yi Wang, Hugo Hernández Hernández, Conrad M Albrecht, Xiao Xiang Zhu


Abstract

Self-supervised learning guided by masked image modelling, such as the Masked AutoEncoder (MAE), has attracted wide attention for pretraining vision transformers in remote sensing. However, MAE tends to focus excessively on pixel details, limiting the model's capacity for semantic understanding, in particular for noisy SAR images. In this paper, we explore spectral and spatial remote sensing image features as improved MAE reconstruction targets. We first conduct a study on reconstructing various image features, all performing comparably to or better than raw pixels. Based on these observations, we propose the Feature Guided Masked Autoencoder (FG-MAE): reconstructing a combination of Histograms of Oriented Gradients (HOG) and Normalized Difference Indices (NDI) for multispectral images, and reconstructing HOG for SAR images. Experimental results on three downstream tasks illustrate the effectiveness of FG-MAE, with a particular boost for SAR imagery. Furthermore, we demonstrate the well-inherited scalability of FG-MAE and release a first series of pretrained vision transformers for medium-resolution SAR and multispectral images.
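To make the NDI reconstruction targets mentioned in the abstract concrete, here is a minimal sketch of how a normalized difference index is computed from two spectral bands (e.g. NDVI from NIR and red). This is an illustrative example, not the paper's implementation; the function name and the toy band values are assumptions.

```python
import numpy as np

def normalized_difference_index(band_a, band_b, eps=1e-6):
    """NDI = (A - B) / (A + B), e.g. NDVI when A = NIR and B = red.

    Values fall in [-1, 1]; eps guards against division by zero.
    """
    a = np.asarray(band_a, dtype=np.float64)
    b = np.asarray(band_b, dtype=np.float64)
    return (a - b) / (a + b + eps)

# Toy 2x2 multispectral patch (hypothetical reflectance values).
nir = np.array([[0.8, 0.6], [0.7, 0.9]])
red = np.array([[0.1, 0.2], [0.3, 0.1]])
ndvi = normalized_difference_index(nir, red)
```

In an FG-MAE-style setup, a map like `ndvi` (rather than raw pixels) would serve as the regression target for the decoder on masked patches.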

Tasks

Benchmark Results

| Dataset     | Model             | Metric           | Claimed | Verified | Status     |
|-------------|-------------------|------------------|---------|----------|------------|
| EuroSAT-SAR | ViT-S/16          | Overall Accuracy | 78.4    | —        | Unverified |
| EuroSAT-SAR | FG-MAE (ViT-S/16) | Overall Accuracy | 85.9    | —        | Unverified |
| EuroSAT-SAR | MAE (ViT-S/16)    | Overall Accuracy | 81      | —        | Unverified |

Reproductions