
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to an attacker-chosen class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
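As a toy illustration of the definition above, the following sketch (hypothetical data and a deliberately simple 1-nearest-neighbour "spam filter"; not drawn from any paper listed below) shows how a single injected, mislabeled training point can flip a model's prediction:

```python
# Toy data-poisoning sketch (hypothetical data): a 1-nearest-neighbour
# "spam filter" over 2-D feature vectors. The attacker injects one
# spam-like training example mislabeled as "safe".

def dist2(a, b):
    """Squared Euclidean distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def predict(dataset, x):
    """1-NN: return the label of the training example closest to x."""
    return min(dataset, key=lambda item: dist2(item[0], x))[1]

# Clean training data: "safe" mail clusters near (1, 1), spam near (8, 8).
clean = [((1, 1), "safe"), ((2, 2), "safe"),
         ((8, 8), "spam"), ((9, 9), "spam")]

query = (8.2, 8.1)  # a new, clearly spam-like e-mail
print(predict(clean, query))  # prints "spam" on the clean model

# Poisoning: one injected point near the spam cluster, labeled "safe".
poisoned = clean + [((8.2, 8.0), "safe")]
print(predict(poisoned, query))  # the same query is now labeled "safe"
```

With 1-NN the effect is local to the injected point's neighbourhood; in the settings studied by the papers below, a small fraction of flipped or injected labels can instead shift a learned decision boundary globally.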

Papers

Showing 451-492 of 492 papers

Title | Status | Hype
Evaluating Impact of User-Cluster Targeted Attacks in Matrix Factorisation Recommenders | | 0
Execute Order 66: Targeted Data Poisoning for Reinforcement Learning | | 0
Explainable Label-flipping Attacks on Human Emotion Assessment System | | 0
Exploring Vulnerabilities and Protections in Large Language Models: A Survey | | 0
Face Recognition in the age of CLIP & Billion image datasets | | 0
Fairness-aware Summarization for Justified Decision-Making | | 0
FedBayes: A Zero-Trust Federated Learning Aggregation to Defend Against Adversarial Attacks | | 0
FedCom: A Byzantine-Robust Local Model Aggregation Rule Using Data Commitment for Federated Learning | | 0
Fed-Credit: Robust Federated Learning with Credibility Management | | 0
Federated Learning: Balancing the Thin Line Between Data Intelligence and Privacy | | 0
Federated Learning with Dual Attention for Robust Modulation Classification under Attacks | | 0
Federated Multi-Armed Bandits Under Byzantine Attacks | | 0
Federated Transfer-Ordered-Personalized Learning for Driver Monitoring Application | | 0
Federated Unlearning | | 0
FedNIA: Noise-Induced Activation Analysis for Mitigating Data Poisoning in FL | | 0
FheFL: Fully Homomorphic Encryption Friendly Privacy-Preserving Federated Learning with Byzantine Users | | 0
Filter, Obstruct and Dilute: Defending Against Backdoor Attacks on Semi-Supervised Learning | | 0
FLock: Defending Malicious Behaviors in Federated Learning with Blockchain | | 0
FocusedCleaner: Sanitizing Poisoned Graphs for Robust GNN-based Node Classification | | 0
Forcing Generative Models to Degenerate Ones: The Power of Data Poisoning Attacks | | 0
Fortifying Federated Learning Towards Trustworthiness via Auditable Data Valuation and Verifiable Client Contribution | | 0
Fragile Giants: Understanding the Susceptibility of Models to Subpopulation Attacks | | 0
FR-GAN: Fair and Robust Training | | 0
Generalization under Byzantine & Poisoning Attacks: Tight Stability Bounds in Robust Distributed Learning | | 0
Generating Fake Cyber Threat Intelligence Using Transformer-Based Models | | 0
Generative AI in Cybersecurity: A Comprehensive Review of LLM Applications and Vulnerabilities | | 0
Get a Model! Model Hijacking Attack Against Machine Learning Models | | 0
GFCL: A GRU-based Federated Continual Learning Framework against Data Poisoning Attacks in IoV | | 0
GFL: A Decentralized Federated Learning Framework Based On Blockchain | | 0
Gradient-based Data Subversion Attack Against Binary Classifiers | | 0
Hard Work Does Not Always Pay Off: Poisoning Attacks on Neural Architecture Search | | 0
Have You Poisoned My Data? Defending Neural Networks against Data Poisoning | | 0
Histopathological Image Classification and Vulnerability Analysis using Federated Learning | | 0
How Robust are Randomized Smoothing based Defenses to Data Poisoning? | | 0
Humpty Dumpty: Controlling Word Meanings via Corpus Poisoning | | 0
WW-FL: Secure and Private Large-Scale Federated Learning | | 0
Hyperparameter Learning under Data Poisoning: Analysis of the Influence of Regularization via Multiobjective Bilevel Optimization | | 0
If You Don't Understand It, Don't Use It: Eliminating Trojans with Filters Between Layers | | 0
Imperceptible Rhythm Backdoor Attacks: Exploring Rhythm Transformation for Embedding Undetectable Vulnerabilities on Speech Recognition | | 0
Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving | | 0
Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors | | 0
Influence Based Defense Against Data Poisoning Attacks in Online Learning | | 0
Page 10 of 10

No leaderboard results yet.