SOTAVerified

Adversarial Examples in Environment Perception for Automated Driving (Review)

2025-04-11

Jun Yan, Huilin Yin

Abstract

The renaissance of deep learning has driven the rapid development of automated driving. However, deep neural networks are vulnerable to adversarial examples: perturbations that are imperceptible to the human eye yet cause neural networks to make false predictions. This poses a severe risk to artificial intelligence (AI) applications in automated driving. This survey systematically reviews the development of adversarial robustness research over the past decade, covering attack and defense methods and their applications in automated driving. The growth of automated driving pushes forward the realization of trustworthy AI, and this review collects the significant references in the research history of adversarial examples.
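To make the attack family the survey covers concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the earliest and simplest adversarial attacks. This is an illustrative toy on a linear binary classifier, not code from the paper; the model, weights, and epsilon value are all hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM step: perturb x by eps in the sign of the loss gradient."""
    p = sigmoid(w @ x + b)            # model's predicted probability of class 1
    grad_x = (p - y) * w              # gradient of binary cross-entropy w.r.t. x
    return x + eps * np.sign(grad_x)  # signed-gradient perturbation

# Toy linear classifier (hypothetical weights, not from the paper)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, -0.5])             # input correctly classified as class 1
y = 1.0

x_adv = fgsm(x, y, w, b, eps=0.8)

print(sigmoid(w @ x + b) > 0.5)       # True: original input classified as class 1
print(sigmoid(w @ x_adv + b) > 0.5)   # False: small perturbation flips the prediction
```

The same signed-gradient idea scales to deep networks, where the gradient with respect to the input pixels is computed by backpropagation; the per-pixel perturbation stays bounded by eps, which is why the adversarial image can look unchanged to a human.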
