
Geography-Aware Self-Supervised Learning

2020-11-19 · ICCV 2021 · Code Available

Kumar Ayush, Burak Uzkent, Chenlin Meng, Kumar Tanmay, Marshall Burke, David Lobell, Stefano Ermon


Abstract

Contrastive learning methods have significantly narrowed the gap between supervised and unsupervised learning on computer vision tasks. In this paper, we explore their application to geo-located datasets, e.g. remote sensing, where unlabeled data is often abundant but labeled data is scarce. We first show that, due to their different characteristics, a non-trivial gap persists between contrastive and supervised learning on standard benchmarks. To close the gap, we propose novel training methods that exploit the spatio-temporal structure of remote sensing data. We leverage spatially aligned images over time to construct temporal positive pairs in contrastive learning, and geo-location to design pre-text tasks. Our experiments show that our proposed method closes the gap between contrastive and supervised learning on image classification, object detection and semantic segmentation for remote sensing. Moreover, we demonstrate that the proposed method can also be applied to geo-tagged ImageNet images, improving downstream performance on various tasks. The project webpage can be found at geography-aware-ssl.github.io.
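The temporal-positives idea from the abstract can be sketched as follows: instead of forming a positive pair from two augmentations of one image, two spatially aligned images of the same location taken at different times serve as the query/key pair. This is a minimal illustrative sketch, not the paper's implementation; the location index and filenames are hypothetical placeholders.

```python
import random

# Hypothetical index: each location id maps to spatially aligned images of the
# same place captured at different times (filenames are illustrative only).
images_by_location = {
    "loc_0": ["loc_0_t2015.png", "loc_0_t2017.png", "loc_0_t2019.png"],
    "loc_1": ["loc_1_t2016.png"],
}

def sample_temporal_positive_pair(location_id, rng=random):
    """Return two images of the same location taken at different times.

    In a MoCo-v2-style setup these would be used as the query/key positive
    pair (each still passed through the usual augmentation pipeline).
    """
    views = images_by_location[location_id]
    if len(views) < 2:
        # Only one timestamp available: fall back to the standard
        # augmentation-only positive pair (same image twice).
        return views[0], views[0]
    # Two distinct timestamps of the same location form the positive pair.
    return tuple(rng.sample(views, 2))

query_img, key_img = sample_temporal_positive_pair("loc_0")
```

When a location has a single capture, the sampler degrades gracefully to the ordinary single-image positive pair, so the same loader works for datasets with uneven temporal coverage.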

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| SpaceNet 1 | PSANet w/ ResNet50 backbone - FMoW self-supervised pre-training w/ MoCo-V2 + Temporal Positives | Mean IoU | 78.48 | | Unverified |
| SpaceNet 1 | PSANet w/ ResNet50 backbone - FMoW self-supervised pre-training w/ MoCo-V2 | Mean IoU | 78.05 | | Unverified |
| SpaceNet 1 | PSANet w/ ResNet50 backbone - FMoW pretrained | Mean IoU | 75.57 | | Unverified |
| SpaceNet 1 | PSANet w/ ResNet50 backbone - ImageNet pretrained | Mean IoU | 75.23 | | Unverified |
| SpaceNet 1 | PSANet w/ ResNet50 backbone | Mean IoU | 74.93 | | Unverified |
