Model extraction

Model extraction attacks, also known as model stealing attacks, aim to extract the parameters of a target model. Ideally, the adversary is able to replicate a model whose performance closely matches that of the target model.
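
As a rough illustration of the common query-based variant of this attack, the sketch below labels adversary-chosen inputs with a black-box target's predictions, trains a local surrogate on those pairs, and measures how often the two models agree. Everything here (the random-forest stand-in for the target, the query budget, the logistic-regression surrogate) is an illustrative assumption, not the method of any specific paper listed below.

```python
# Minimal sketch of a query-based model extraction attack.
# Assumption: the adversary has black-box prediction access only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for the victim model; in a real attack this is a remote API
# whose parameters and training data are unknown to the adversary.
X_secret = rng.normal(size=(1000, 10))
y_secret = (X_secret[:, 0] + X_secret[:, 1] > 0).astype(int)
target_model = RandomForestClassifier(random_state=0).fit(X_secret, y_secret)

# Attack: query the target on adversary-chosen inputs and train a
# local surrogate on the (input, predicted label) pairs.
X_queries = rng.normal(size=(2000, 10))        # illustrative query budget
y_stolen = target_model.predict(X_queries)     # black-box responses
surrogate = LogisticRegression(max_iter=1000).fit(X_queries, y_stolen)

# Fidelity: how often the surrogate agrees with the target on fresh inputs.
X_test = rng.normal(size=(500, 10))
fidelity = (surrogate.predict(X_test) == target_model.predict(X_test)).mean()
print(f"surrogate/target agreement: {fidelity:.2%}")
```

Agreement with the target on held-out inputs is usually called fidelity, and it is the natural success metric for extraction, as distinct from the surrogate's accuracy on the underlying task.
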

Papers

Showing 101–125 of 176 papers

Title | Status | Hype
On the Difficulty of Defending Self-Supervised Learning against Model Extraction | Code | 0
DualCF: Efficient Model Extraction Attack from Counterfactual Explanations | - | 0
Stealing and Evading Malware Classifiers and Antivirus at Low False Positive Conditions | Code | 0
Split HE: Fast Secure Inference Combining Split Learning and Homomorphic Encryption | - | 0
On the Effectiveness of Dataset Watermarking in Adversarial Settings | Code | 0
Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations | - | 0
Increasing the Cost of Model Extraction with Calibrated Proof of Work | - | 0
Protecting Intellectual Property of Language Generation APIs with Lexical Watermark | Code | 0
Efficiently Learning One Hidden Layer ReLU Networks From Queries | - | 0
Efficiently Learning Any One Hidden Layer ReLU Network From Queries | - | 0
DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories | - | 0
Watermarking Graph Neural Networks based on Backdoor Attacks | - | 0
Process Extraction from Text: Benchmarking the State of the Art and Paving the Way for Future Challenges | Code | 0
First to Possess His Statistics: Data-Free Model Extraction Attack on Tabular Data | - | 0
HODA: Protecting DNNs Against Model Extraction Attacks via Hardness of Samples | - | 0
A Novel Watermarking Framework for Ownership Verification of DNN Architectures | - | 0
NASPY: Automated Extraction of Automated Machine Learning Models | - | 0
Was my Model Stolen? Feature Sharing for Robust and Transferable Watermarks | - | 0
Emerging AI Security Threats for Autonomous Cars -- Case Studies | - | 0
Black-Box Attacks on Sequential Recommenders via Data-Free Model Extraction | Code | 1
Student Surpasses Teacher: Imitation Attack for Black-Box NLP APIs | - | 0
Power-Based Attacks on Spatial DNN Accelerators | - | 0
MEGEX: Data-Free Model Extraction Attack against Gradient-Based Explainable AI | - | 0
Stateful Detection of Model Extraction Attacks | Code | 0
HODA: Hardness-Oriented Detection of Model Extraction Attacks | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | three-step-original | Exact Match | 0.17 | - | Unverified