
Adversarial Shape Learning for Building Extraction in VHR Remote Sensing Images

2021-02-22 · Code Available

Lei Ding, Hao Tang, Yahui Liu, Yilei Shi, Xiao Xiang Zhu, Lorenzo Bruzzone


Abstract

Building extraction in very high resolution (VHR) remote sensing images (RSIs) remains a challenging task due to occlusion and boundary ambiguity. Although conventional convolutional neural network (CNN) based methods can exploit local texture and context information, they fail to capture the shape patterns of buildings, which are a necessary constraint in human recognition. To address this issue, we propose an adversarial shape learning network (ASLNet) to model building shape patterns and thereby improve the accuracy of building segmentation. In the proposed ASLNet, we introduce an adversarial learning strategy to explicitly model shape constraints, as well as a CNN shape regularizer to strengthen the embedding of shape features. To assess the geometric accuracy of building segmentation results, we introduce several object-based quality assessment metrics. Experiments on two open benchmark datasets show that the proposed ASLNet improves both the pixel-based accuracy and the object-based quality measurements by a large margin. The code is available at: https://github.com/ggsDing/ASLNet
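The adversarial shape-learning idea described above can be sketched as a standard GAN-style objective: a discriminator is trained to tell ground-truth building masks from predicted ones, while the segmentation network is trained with a segmentation loss plus an adversarial term that rewards shape-realistic predictions. This is a minimal illustrative sketch under those assumptions, not the paper's exact losses; the function names and the weight `lam` are hypothetical.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy between predicted probabilities and targets."""
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def discriminator_loss(d_real, d_fake):
    """Train D to score ground-truth shapes as 1 and predicted shapes as 0.

    d_real, d_fake: discriminator probabilities on real / predicted masks.
    """
    return bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

def generator_loss(seg_pred, seg_target, d_fake, lam=0.1):
    """Segmentation loss plus an adversarial term pushing D(fake) toward 1.

    lam is an illustrative trade-off weight between the two terms.
    """
    return bce(seg_pred, seg_target) + lam * bce(d_fake, np.ones_like(d_fake))
```

In this formulation the adversarial term drops as the predicted masks become indistinguishable from real building shapes, which is what implicitly enforces the shape constraint on the segmentation network.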
