
Data Poisoning

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, such that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
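The definition above can be illustrated with a minimal label-flipping sketch: an attacker who controls part of the training set relabels malicious examples ("spam") as benign ("safe"), shifting the learned decision boundary so that spam-like inputs are classified as safe at test time. The classifier, dataset, and all names below are illustrative assumptions, not from the source; a simple nearest-centroid model is used so the example stays self-contained.

```python
# Illustrative sketch of label-flipping data poisoning (all names/data assumed).
import random

random.seed(0)

def make_data(n, shift=2.0):
    # 1-D synthetic features: "safe" mail near -shift, "spam" near +shift.
    safe = [(random.gauss(-shift, 1.0), 0) for _ in range(n)]
    spam = [(random.gauss(+shift, 1.0), 1) for _ in range(n)]
    return safe + spam

def train_centroids(data):
    # Nearest-centroid classifier: store the mean feature per class.
    means = {}
    for label in (0, 1):
        xs = [x for x, y in data if y == label]
        means[label] = sum(xs) / len(xs)
    return means

def predict(means, x):
    return min(means, key=lambda lbl: abs(x - means[lbl]))

def accuracy(means, data):
    return sum(predict(means, x) == y for x, y in data) / len(data)

train = make_data(500)
test = make_data(500)

clean = train_centroids(train)

# Poison: relabel 300 of the 500 spam training points as "safe" (0),
# pulling the safe-class centroid toward spam territory.
poisoned_train, flipped = [], 0
for x, y in train:
    if y == 1 and flipped < 300:
        poisoned_train.append((x, 0))
        flipped += 1
    else:
        poisoned_train.append((x, y))

poisoned = train_centroids(poisoned_train)

print(f"clean accuracy:    {accuracy(clean, test):.2f}")
print(f"poisoned accuracy: {accuracy(poisoned, test):.2f}")
```

The poisoned model's "safe" centroid moves toward the spam region, so borderline spam falls on the safe side of the boundary, which is exactly the attacker's goal in the spam-filter example above. Real attacks target far more capable models, but the mechanism (corrupting training labels or features) is the same.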

Papers

Showing 471–480 of 492 papers

Title | Status | Hype
Fortifying Federated Learning Towards Trustworthiness via Auditable Data Valuation and Verifiable Client Contribution | | 0
Fragile Giants: Understanding the Susceptibility of Models to Subpopulation Attacks | | 0
FR-GAN: Fair and Robust Training | | 0
Generalization under Byzantine & Poisoning Attacks: Tight Stability Bounds in Robust Distributed Learning | | 0
Generating Fake Cyber Threat Intelligence Using Transformer-Based Models | | 0
Generative AI in Cybersecurity: A Comprehensive Review of LLM Applications and Vulnerabilities | | 0
Get a Model! Model Hijacking Attack Against Machine Learning Models | | 0
GFCL: A GRU-based Federated Continual Learning Framework against Data Poisoning Attacks in IoV | | 0
GFL: A Decentralized Federated Learning Framework Based On Blockchain | | 0
Gradient-based Data Subversion Attack Against Binary Classifiers | | 0
Page 48 of 50
