
Data Poisoning

Data poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
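To make the definition above concrete, here is a minimal, hypothetical sketch of a poisoning attack by label manipulation. The toy "spam filter" (a nearest-centroid classifier), the feature vectors, and all names are illustrative assumptions, not taken from any paper listed below: the attacker injects spam-like training points mislabeled "ham", which drags the "ham" class centroid toward the malicious example until the model misclassifies it as safe.

```python
# Toy data-poisoning sketch (all data and names are hypothetical).
# Model: nearest-centroid classifier standing in for a spam filter.

def fit(data):
    """Compute one centroid per label from (features, label) pairs."""
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    return {
        y: tuple(sum(p[i] for p in xs) / len(xs) for i in range(len(xs[0])))
        for y, xs in by_label.items()
    }

def predict(x, centroids):
    """Return the label of the nearest centroid (squared Euclidean distance)."""
    return min(
        centroids,
        key=lambda c: sum((xi - ci) ** 2 for xi, ci in zip(x, centroids[c])),
    )

# Clean training set: "ham" near the origin, "spam" near (1, 1).
clean = [((0.0, 0.0), "ham"), ((1.0, 1.0), "spam"), ((1.1, 0.9), "spam")]

# A spam-like example the attacker wants classified as "ham".
malicious = (0.9, 0.9)
print(predict(malicious, fit(clean)))     # -> spam (clean model is correct)

# Attack: inject copies of the malicious point mislabeled "ham",
# pulling the "ham" centroid toward it.
poisoned = clean + [((0.9, 0.9), "ham")] * 9
print(predict(malicious, fit(poisoned)))  # -> ham (attacker's desired label)
```

The same mechanism underlies the label-flipping attacks several papers below defend against; certified defenses (e.g., via randomized smoothing) bound how many such poisoned points a model can tolerate before its prediction on a given input can change.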

Papers

Showing 201–225 of 492 papers

Title | Status | Hype
FedNIA: Noise-Induced Activation Analysis for Mitigating Data Poisoning in FL | - | 0
Federated Transfer-Ordered-Personalized Learning for Driver Monitoring Application | - | 0
Class Machine Unlearning for Complex Data via Concepts Inference and Data Poisoning | - | 0
Attacks against Abstractive Text Summarization Models through Lead Bias and Influence Functions | - | 0
Adversarial Data Poisoning Attacks on Quantum Machine Learning in the NISQ Era | - | 0
Federated Multi-Armed Bandits Under Byzantine Attacks | - | 0
Federated Learning with Dual Attention for Robust Modulation Classification under Attacks | - | 0
Federated Unlearning | - | 0
Federated Learning: Balancing the Thin Line Between Data Intelligence and Privacy | - | 0
FheFL: Fully Homomorphic Encryption Friendly Privacy-Preserving Federated Learning with Byzantine Users | - | 0
Chameleon: Increasing Label-Only Membership Leakage with Adaptive Poisoning | - | 0
FLock: Defending Malicious Behaviors in Federated Learning with Blockchain | - | 0
Fed-Credit: Robust Federated Learning with Credibility Management | - | 0
FocusedCleaner: Sanitizing Poisoned Graphs for Robust GNN-based Node Classification | - | 0
FedCom: A Byzantine-Robust Local Model Aggregation Rule Using Data Commitment for Federated Learning | - | 0
FedBayes: A Zero-Trust Federated Learning Aggregation to Defend Against Adversarial Attacks | - | 0
Fortifying Federated Learning Towards Trustworthiness via Auditable Data Valuation and Verifiable Client Contribution | - | 0
Fragile Giants: Understanding the Susceptibility of Models to Subpopulation Attacks | - | 0
Certified Robustness to Label-Flipping Attacks via Randomized Smoothing | - | 0
Atlas: A Framework for ML Lifecycle Provenance & Transparency | - | 0
Fairness-aware Summarization for Justified Decision-Making | - | 0
Certified Robustness to Adversarial Label-Flipping Attacks via Randomized Smoothing | - | 0
Face Recognition in the age of CLIP & Billion image datasets | - | 0
Exploring Vulnerabilities and Protections in Large Language Models: A Survey | - | 0
Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks | - | 0
Page 9 of 20

No leaderboard results yet.