Revealing Unintentional Information Leakage in Low-Dimensional Facial Portrait Representations

2025-03-12

Kathleen Anderson, Thomas Martinetz


Abstract

We evaluate the information that can unintentionally leak into the low-dimensional output of a neural network by reconstructing an input image from a 40- or 32-element feature vector intended to describe only abstract attributes of a facial portrait. The reconstruction uses black-box access to the image encoder that generates the feature vector. Unlike previous work, we leverage recent advances in image generation and facial similarity, implementing a method that outperforms the current state of the art. Our strategy uses a pretrained StyleGAN and a new loss function that compares the perceptual similarity of portraits by mapping them into the latent space of a FaceNet embedding. Additionally, we present a new technique that fuses the outputs of an ensemble to deliberately generate specific aspects of the recreated image.
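The core of the described loss is to compare two portraits not pixel-wise but by the distance between their face-recognition embeddings. The following is a minimal sketch of such a perceptual loss, assuming an `embed` function that maps an image to a feature vector (here a random projection stands in for a real FaceNet network; the names and shapes are illustrative, not the authors' implementation):

```python
import numpy as np

def facenet_perceptual_loss(embed, img_a, img_b):
    """Cosine distance between the embeddings of two portrait images.

    `embed` is assumed to map an image array to a feature vector,
    playing the role of the FaceNet encoder in the paper's loss.
    """
    ea, eb = embed(img_a), embed(img_b)
    # L2-normalize so the dot product is a cosine similarity
    ea = ea / np.linalg.norm(ea)
    eb = eb / np.linalg.norm(eb)
    # distance in [0, 2]; 0 means identical embeddings
    return 1.0 - float(ea @ eb)

# Toy stand-in for the FaceNet encoder: a fixed random projection
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 64 * 64 * 3))
embed = lambda img: W @ img.reshape(-1)

# Example portraits (random images in place of real data)
img1 = rng.random((64, 64, 3))
img2 = rng.random((64, 64, 3))
```

In the paper's setting, a reconstruction would be optimized in StyleGAN's latent space to minimize this distance against the target portrait, driving the generated face toward the same identity rather than the same pixels.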
