SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to extract the parameters of a target model, typically through black-box queries. Ideally, the adversary replicates a model whose performance closely matches that of the target model.
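The basic query-based attack can be sketched as follows. This is a minimal illustration, not any specific paper's method: it assumes a linear black-box target and uses scikit-learn's `LogisticRegression` as the surrogate; all names (`target_predict`, `w_true`) are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical black-box target: the adversary sees only its predictions,
# never the weight vector w_true.
w_true = np.array([2.0, -1.0, 0.5])
def target_predict(X):
    return (X @ w_true > 0).astype(int)

# Step 1: query the target on adversary-chosen inputs.
X_query = rng.normal(size=(2000, 3))
y_query = target_predict(X_query)

# Step 2: train a surrogate model on the stolen input/label pairs.
surrogate = LogisticRegression().fit(X_query, y_query)

# Step 3: measure fidelity -- agreement between surrogate and target
# on fresh held-out inputs.
X_test = rng.normal(size=(1000, 3))
fidelity = (surrogate.predict(X_test) == target_predict(X_test)).mean()
print(f"fidelity: {fidelity:.2f}")
```

With enough queries relative to the target's complexity, the surrogate's decision boundary converges to the target's, which is why defenses in the papers below focus on limiting, pricing, or watermarking query access.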

Papers

Showing 101–110 of 176 papers

| Title | Status | Hype |
| --- | --- | --- |
| On the Difficulty of Defending Self-Supervised Learning against Model Extraction | Code | 0 |
| DualCF: Efficient Model Extraction Attack from Counterfactual Explanations | | 0 |
| Stealing and Evading Malware Classifiers and Antivirus at Low False Positive Conditions | Code | 0 |
| Split HE: Fast Secure Inference Combining Split Learning and Homomorphic Encryption | | 0 |
| On the Effectiveness of Dataset Watermarking in Adversarial Settings | Code | 0 |
| Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations | | 0 |
| Increasing the Cost of Model Extraction with Calibrated Proof of Work | | 0 |
| Protecting Intellectual Property of Language Generation APIs with Lexical Watermark | Code | 0 |
| Efficiently Learning One Hidden Layer ReLU Networks From Queries | | 0 |
| Efficiently Learning Any One Hidden Layer ReLU Network From Queries | | 0 |
Page 11 of 18

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | three-step-original | Exact Match | 0.17 | | Unverified |