
Adversarial Quantum Machine Learning: An Information-Theoretic Generalization Analysis

2024-01-31

Petros Georgiou, Sharu Theresa Jose, Osvaldo Simeone

Abstract

In a manner analogous to their classical counterparts, quantum classifiers are vulnerable to adversarial attacks that perturb their inputs. A promising countermeasure is to train the quantum classifier by adopting an attack-aware, or adversarial, loss function. This paper studies the generalization properties of quantum classifiers that are adversarially trained against bounded-norm white-box attacks. Specifically, a quantum adversary maximizes the classifier's loss by transforming an input state ρ(x) into a state that is ε-close to the original state ρ(x) in p-Schatten distance. Under suitable assumptions on the quantum embedding ρ(x), we derive novel information-theoretic upper bounds on the generalization error of adversarially trained quantum classifiers for p = 1 and p = ∞. The derived upper bounds consist of two terms: the first is an exponential function of the 2-Rényi mutual information between classical data and quantum embedding, while the second term scales linearly with the adversarial perturbation size ε. Both terms are shown to decrease as 1/√T over the training set size T. An extension is also considered in which the adversary assumed during training has different parameters p and ε as compared to the adversary affecting the test inputs. Finally, we validate our theoretical findings with numerical experiments for a synthetic setting.
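As an illustration of the adversary model described in the abstract, the minimal sketch below checks whether a perturbed density matrix lies within the ε-ball around the embedding ρ(x) in p-Schatten distance, for p = 1 (trace norm) and p = ∞ (operator norm). This is not code from the paper; the function names, the example states, and the NumPy-based implementation are assumptions made for illustration only.

```python
# Minimal sketch (not the authors' code) of the epsilon-closeness constraint on the
# adversary: a perturbed state sigma is admissible if ||rho - sigma||_p <= epsilon
# in Schatten p-norm, with p = 1 (trace norm) or p = inf (operator norm).
import numpy as np

def schatten_distance(rho: np.ndarray, sigma: np.ndarray, p: float) -> float:
    """Schatten p-norm of (rho - sigma), i.e. the l_p norm of its singular values."""
    s = np.linalg.svd(rho - sigma, compute_uv=False)
    if np.isinf(p):
        return float(s.max())          # operator (spectral) norm
    return float((s ** p).sum() ** (1.0 / p))

def is_feasible_attack(rho: np.ndarray, sigma: np.ndarray, eps: float, p: float) -> bool:
    """Check whether sigma lies in the epsilon-ball around rho in p-Schatten distance."""
    return schatten_distance(rho, sigma, p) <= eps

# Hypothetical example: a qubit embedding rho(x) and a slightly depolarized perturbation.
rho = np.array([[0.9, 0.1], [0.1, 0.1]], dtype=complex)
sigma = 0.95 * rho + 0.05 * np.eye(2) / 2
print(is_feasible_attack(rho, sigma, eps=0.1, p=1))       # trace-norm ball
print(is_feasible_attack(rho, sigma, eps=0.1, p=np.inf))  # operator-norm ball
```

A white-box adversary of the kind studied in the paper would search within this feasible set for the state that maximizes the classifier's loss; the snippet only illustrates the constraint defining that set.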
