SOTAVerified

Model Poisoning

Papers

Showing 81–90 of 108 papers

Title | Status | Hype
SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification | Code | 0
FedRAD: Federated Robust Adaptive Distillation | — | 0
ARFED: Attack-Resistant Federated averaging based on outlier elimination | Code | 1
FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective | Code | 1
PRECAD: Privacy-Preserving and Robust Federated Learning via Crypto-Aided Differential Privacy | — | 0
PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion | — | 0
TESSERACT: Gradient Flip Score to Secure Federated Learning Against Model Poisoning Attacks | — | 0
On the Security Risks of AutoML | Code | 0
Byzantine-robust Federated Learning through Collaborative Malicious Gradient Filtering | Code | 1
A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples | — | 0

No leaderboard results yet.