Exploring Secure Machine Learning Through Payload Injection and FGSM Attacks on ResNet-50

2025-01-04

Umesh Yadav, Suman Niroula, Gaurav Kumar Gupta, Bicky Yadav

Abstract

This paper investigates the resilience of a ResNet-50 image classification model under two prominent security threats: Fast Gradient Sign Method (FGSM) adversarial attacks and malicious payload injection. The model attains a baseline accuracy of 53.33% on clean images. When subjected to FGSM perturbations, its overall accuracy remains unchanged; however, its confidence in incorrect predictions notably increases. Concurrently, a payload injection scheme succeeds in 93.33% of the tested samples, revealing how stealthy attacks can manipulate model predictions without degrading visual quality. These findings underscore the vulnerability of even high-performing neural networks and highlight the urgency of developing more robust defense mechanisms for security-critical applications.
