Segmentations-Leak: Membership Inference Attacks and Defenses in Semantic Image Segmentation

2019-12-20 · ECCV 2020 · Code Available

Yang He, Shadi Rahimian, Bernt Schiele, Mario Fritz


Abstract

Today's success of state-of-the-art methods for semantic segmentation is driven by large datasets. Data is considered an important asset that needs to be protected, since collecting and annotating such datasets comes at significant effort and cost. In addition, visual data may contain private or sensitive information, which makes it equally unsuited for public release. Unfortunately, recent work on membership inference in the broader area of adversarial machine learning and inference attacks on machine learning models has shown that even black-box classifiers leak information about the dataset they were trained on. We show that such membership inference attacks can be successfully carried out on complex, state-of-the-art models for semantic segmentation. To mitigate the associated risks, we also study a series of defenses against such membership inference attacks and find effective countermeasures against the existing risks with little effect on the utility of the segmentation method. Finally, we extensively evaluate our attacks and defenses on a range of relevant real-world datasets: Cityscapes, BDD100K, and Mapillary Vistas.
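To illustrate the general idea behind membership inference (not the paper's specific attack), a common baseline exploits the fact that models are typically more confident on training examples than on unseen ones. The sketch below, assuming a segmentation model that outputs per-pixel softmax scores, thresholds the mean per-pixel cross-entropy loss to decide membership; the function names and the threshold value are illustrative, not from the paper.

```python
import numpy as np

def mean_pixel_loss(probs, labels):
    """Average per-pixel cross-entropy of a segmentation output.

    probs:  (H, W, C) per-pixel softmax scores from the segmentation model
    labels: (H, W)    ground-truth class indices
    """
    h, w = labels.shape
    # Pick the predicted probability of the true class at every pixel.
    p_true = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return float(-np.log(p_true + 1e-12).mean())

def is_member(probs, labels, threshold=0.5):
    """Flag an example as a likely training-set member if its loss is low.

    `threshold` is a hypothetical value; in practice it would be calibrated
    on data the attacker knows to be in or out of the training set.
    """
    return mean_pixel_loss(probs, labels) < threshold

# Toy demonstration: a "member" image gets confident (low-loss) predictions,
# a "non-member" image gets diffuse (high-loss) predictions.
rng = np.random.default_rng(0)
H, W, C = 8, 8, 3
labels = rng.integers(0, C, size=(H, W))

confident = np.full((H, W, C), 0.05)
confident[np.arange(H)[:, None], np.arange(W)[None, :], labels] = 0.9

diffuse = np.full((H, W, C), 1.0 / C)

print(is_member(confident, labels))  # True  (loss ~0.11 < 0.5)
print(is_member(diffuse, labels))    # False (loss ~1.10 > 0.5)
```

The paper's contribution goes beyond such a per-example loss threshold — it studies attacks and defenses tailored to the structured, dense outputs of segmentation models — but the confidence gap shown here is the signal that membership inference attacks exploit.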
