SOTAVerified

Model Poisoning

Papers

Showing 51–100 of 108 papers

| Title | Status | Hype |
| --- | --- | --- |
| Manipulating Visually-aware Federated Recommender Systems and Its Countermeasures | | 0 |
| A Data-Driven Defense against Edge-case Model Poisoning Attacks on Federated Learning | | 0 |
| Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning | Code | 1 |
| Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning | | 0 |
| Protecting Federated Learning from Extreme Model Poisoning Attacks via Multidimensional Time Series Anomaly Detection | | 0 |
| Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks | Code | 0 |
| CADeSH: Collaborative Anomaly Detection for Smart Homes | | 0 |
| Poster: Sponge ML Model Attacks of Mobile Apps | | 0 |
| WW-FL: Secure and Private Large-Scale Federated Learning | | 0 |
| CATFL: Certificateless Authentication-based Trustworthy Federated Learning for 6G Semantic Communications | | 0 |
| How Potent are Evasion Attacks for Poisoning Federated Learning-Based Signal Classifiers? | | 0 |
| FedCC: Robust Federated Learning against Model Poisoning Attacks | | 0 |
| Security Analysis of SplitFed Learning | | 0 |
| SPIN: Simulated Poisoning and Inversion Network for Federated Learning-Based 6G Vehicular Networks | | 0 |
| Resilience of Wireless Ad Hoc Federated Learning against Model Poisoning Attacks | | 0 |
| A Streamlit-based Artificial Intelligence Trust Platform for Next-Generation Wireless Networks | | 0 |
| Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning | Code | 0 |
| FedPerm: Private and Robust Federated Learning by Parameter Permutation | | 0 |
| FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients | Code | 1 |
| Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications | | 0 |
| Performance Weighting for Robust Federated Learning Against Corrupted Sources | | 0 |
| Backdoor Attacks in Federated Learning by Rare Embeddings and Gradient Ensembling | | 0 |
| Federated Learning: Balancing the Thin Line Between Data Intelligence and Privacy | | 0 |
| FedRecAttack: Model Poisoning Attack to Federated Recommendation | Code | 1 |
| Semi-Targeted Model Poisoning Attack on Federated Learning via Backward Error Analysis | Code | 0 |
| Latency Optimization for Blockchain-Empowered Federated Learning in Multi-Server Edge Computing | | 0 |
| MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients | Code | 0 |
| BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine Learning | Code | 1 |
| Studying the Robustness of Anti-adversarial Federated Learning Models Detecting Cyberattacks in IoT Spectrum Sensors | | 0 |
| Towards Understanding Quality Challenges of the Federated Learning for Neural Networks: A First Look from the Lens of Robustness | Code | 0 |
| SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification | Code | 0 |
| FedRAD: Federated Robust Adaptive Distillation | | 0 |
| ARFED: Attack-Resistant Federated averaging based on outlier elimination | Code | 1 |
| FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective | Code | 1 |
| PRECAD: Privacy-Preserving and Robust Federated Learning via Crypto-Aided Differential Privacy | | 0 |
| PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion | | 0 |
| TESSERACT: Gradient Flip Score to Secure Federated Learning Against Model Poisoning Attacks | | 0 |
| On the Security Risks of AutoML | Code | 0 |
| Byzantine-robust Federated Learning through Collaborative Malicious Gradient Filtering | Code | 1 |
| A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples | | 0 |
| Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Production Federated Learning | Code | 1 |
| Turning Federated Learning Systems Into Covert Channels | | 0 |
| FedCom: A Byzantine-Robust Local Model Aggregation Rule Using Data Commitment for Federated Learning | | 0 |
| Robust Federated Learning with Attack-Adaptive Aggregation | Code | 1 |
| SAFELearning: Enable Backdoor Detectability In Federated Learning With Secure Aggregation | | 0 |
| Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization | | 0 |
| Untargeted Poisoning Attack Detection in Federated Learning via Behavior Attestation | | 0 |
| Ditto: Fair and Robust Federated Learning Through Personalization | Code | 1 |
| 2CP: Decentralized Protocols to Transparently Evaluate Contributivity in Blockchain Federated Learning Environments | | 0 |
| BaFFLe: Backdoor detection via Feedback-based Federated Learning | | 0 |
Page 2 of 3

No leaderboard results yet.