
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, such that the model labels malicious examples as a desired class (e.g., labeling spam emails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
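The definition above can be made concrete with a toy label-flipping attack, one of the simplest poisoning strategies. This is an illustrative sketch, not from the cited source; the classifier, feature, and data are all hypothetical.

```python
# Minimal sketch of label-flipping data poisoning against a 1-D
# threshold classifier. All data and names here are hypothetical.

def train_threshold(data):
    """Learn a midpoint threshold between the two class means."""
    safe = [x for x, y in data if y == "safe"]
    spam = [x for x, y in data if y == "spam"]
    return (sum(safe) / len(safe) + sum(spam) / len(spam)) / 2

def classify(threshold, x):
    return "spam" if x > threshold else "safe"

# Clean training set: feature = suspiciousness score of an email.
clean = [(0.1, "safe"), (0.2, "safe"), (0.3, "safe"),
         (0.7, "spam"), (0.8, "spam"), (0.9, "spam")]

# The attacker flips the labels of the least suspicious spam examples,
# dragging the learned boundary upward so real spam slips through.
poisoned = [(x, "safe" if y == "spam" and x < 0.85 else y)
            for x, y in clean]

t_clean = train_threshold(clean)      # boundary near 0.5
t_poison = train_threshold(poisoned)  # boundary shifted higher

print(classify(t_clean, 0.6))   # "spam" under the clean model
print(classify(t_poison, 0.6))  # "safe" under the poisoned model
```

Flipping just two labels moves the decision boundary enough that a borderline spam example (score 0.6) is now classified as safe, which is exactly the "label malicious examples as a desired class" behavior the definition describes.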

Papers

Showing 476–492 of 492 papers

- Generative AI in Cybersecurity: A Comprehensive Review of LLM Applications and Vulnerabilities
- Get a Model! Model Hijacking Attack Against Machine Learning Models
- GFCL: A GRU-based Federated Continual Learning Framework against Data Poisoning Attacks in IoV
- GFL: A Decentralized Federated Learning Framework Based On Blockchain
- Gradient-based Data Subversion Attack Against Binary Classifiers
- Hard Work Does Not Always Pay Off: Poisoning Attacks on Neural Architecture Search
- Have You Poisoned My Data? Defending Neural Networks against Data Poisoning
- Histopathological Image Classification and Vulnerability Analysis using Federated Learning
- How Robust are Randomized Smoothing based Defenses to Data Poisoning?
- Humpty Dumpty: Controlling Word Meanings via Corpus Poisoning
- WW-FL: Secure and Private Large-Scale Federated Learning
- Hyperparameter Learning under Data Poisoning: Analysis of the Influence of Regularization via Multiobjective Bilevel Optimization
- If You Don't Understand It, Don't Use It: Eliminating Trojans with Filters Between Layers
- Imperceptible Rhythm Backdoor Attacks: Exploring Rhythm Transformation for Embedding Undetectable Vulnerabilities on Speech Recognition
- Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving
- Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors
- Influence Based Defense Against Data Poisoning Attacks in Online Learning
