SOTAVerified

Backdoor Attack

Backdoor attacks inject maliciously constructed data into a training set so that, at test time, the trained model misclassifies inputs patched with a backdoor trigger as an adversarially-desired target class.
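The injection step described above can be sketched in a few lines. This is a minimal, hypothetical example (helper names `apply_trigger` and `poison_dataset` are illustrative, not from any paper on this page), assuming a classic BadNets-style patch trigger: a fraction of the training images is stamped with a small pixel pattern and relabeled to the attacker's target class.

```python
import numpy as np

def apply_trigger(image, trigger, x=0, y=0):
    """Stamp a small trigger patch onto an image of shape (H, W, C)."""
    patched = image.copy()
    th, tw = trigger.shape[:2]
    patched[y:y + th, x:x + tw] = trigger
    return patched

def poison_dataset(images, labels, trigger, target_class, rate=0.1, seed=0):
    """Poison a random fraction of the training set: patch each chosen
    image with the trigger and relabel it to the target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = apply_trigger(images[i], trigger)
        labels[i] = target_class
    return images, labels, idx
```

A model trained on the poisoned set behaves normally on clean inputs, but at test time any input run through `apply_trigger` tends to be classified as `target_class`.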

Papers

Showing 461–470 of 523 papers

Title — Hype
Erased but Not Forgotten: How Backdoors Compromise Concept Erasure — 0
Everyone Can Attack: Repurpose Lossy Compression as a Natural Backdoor Attack — 0
Evil from Within: Machine Learning Backdoors through Hardware Trojans — 0
Evolutionary Trigger Detection and Lightweight Model Repair Based Backdoor Defense — 0
Explainability-based Backdoor Attacks Against Graph Neural Networks — 0
Exploring Backdoor Attack and Defense for LLM-empowered Recommendations — 0
EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry — 0
Fake the Real: Backdoor Attack on Deep Speech Classification via Voice Conversion — 0
False Memory Formation in Continual Learners Through Imperceptible Backdoor Trigger — 0
Feature Grinding: Efficient Backdoor Sanitation in Deep Neural Networks — 0
Page 47 of 53

No leaderboard results yet.