SOTAVerified

Towards Strong Certified Defense with Universal Asymmetric Randomization

2025-10-22 · Code Available

Hanbin Hong, Ashish Kundu, Ali Payani, Binghui Wang, Yuan Hong


Abstract

Randomized smoothing has become essential for achieving certified adversarial robustness in machine learning models. However, current methods primarily use isotropic noise distributions that are uniform across all data dimensions, such as image pixels, which limits the effectiveness of robustness certification by ignoring the heterogeneity of inputs and data dimensions. To address this limitation, we propose UCAN: a novel technique that Universally Certifies adversarial robustness with Anisotropic Noise. UCAN is designed to enhance any existing randomized smoothing method, transforming it from symmetric (isotropic) to asymmetric (anisotropic) noise distributions, thereby offering a more tailored defense against adversarial attacks. Our theoretical framework is versatile: it supports a wide array of noise distributions for certified robustness in different ℓ_p-norms and applies to any arbitrary classifier, guaranteeing the classifier's prediction over perturbed inputs with provable robustness bounds through tailored noise injection. Additionally, we develop a novel framework equipped with three exemplary noise parameter generators (NPGs) that optimally fine-tune the anisotropic noise parameters for different data dimensions, allowing practitioners to pursue different levels of robustness enhancement in practice. Empirical evaluations underscore the significant leap in UCAN's performance over existing state-of-the-art methods, demonstrating up to 182.6% improvement in certified accuracy at large certified radii on the MNIST, CIFAR10, and ImageNet datasets. Code is anonymously available at https://github.com/youbin2014/UCAN/
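To make the isotropic-vs-anisotropic distinction concrete, here is a minimal sketch (not the authors' implementation, which is at the repository linked above) of Monte-Carlo prediction for a smoothed classifier where the scalar noise scale of standard randomized smoothing is replaced by a per-dimension scale vector. The `toy_classifier`, the input `x`, and the `sigma` values are all hypothetical, chosen only for illustration:

```python
import numpy as np

def smoothed_predict(classifier, x, sigma, n_samples=1000, seed=None):
    """Monte-Carlo prediction of a Gaussian-smoothed classifier.

    `sigma` is a per-dimension noise-scale vector (anisotropic noise);
    passing a scalar recovers standard isotropic randomized smoothing.
    Returns the majority-vote class over noisy copies of `x`.
    """
    rng = np.random.default_rng(seed)
    # Draw standard normal noise, then rescale each dimension independently.
    noise = rng.normal(0.0, 1.0, size=(n_samples, x.size)) * sigma
    preds = np.array([classifier(x + eps) for eps in noise])
    return int(np.argmax(np.bincount(preds)))

# Hypothetical stand-in classifier: class 1 iff the coordinates sum to > 0.
def toy_classifier(z):
    return int(z.sum() > 0.0)

x = np.array([0.5, -0.1, 0.2])
sigma = np.array([0.25, 0.5, 0.1])  # illustrative per-dimension scales
label = smoothed_predict(toy_classifier, x, sigma, n_samples=500, seed=0)
```

In a full pipeline, the per-dimension `sigma` would be produced by a noise parameter generator rather than fixed by hand, and the vote counts would additionally be used to derive a certified radius.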
