SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to recover the parameters or functionality of a target model through queries to it. Ideally, the adversary obtains a replica whose performance closely matches that of the target model.
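As a minimal, self-contained sketch of the general attack loop (not any specific paper's method): the adversary queries a black-box oracle for labels, then trains a surrogate on the query/label pairs. Here `query_target`, its secret weights, and the perceptron surrogate are all hypothetical illustrations.

```python
import random

# Hypothetical black-box "target": a secret linear classifier the
# adversary can query for labels but cannot inspect directly.
SECRET_W = [1.5, -2.0]
SECRET_B = 0.3

def query_target(x):
    """Black-box oracle: returns only the predicted label."""
    return 1 if SECRET_W[0] * x[0] + SECRET_W[1] * x[1] + SECRET_B > 0 else 0

def extract_model(n_queries=2000, epochs=20, lr=0.1, seed=0):
    """Collect query/label pairs, then fit a surrogate perceptron on them."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_queries):
        x = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
        data.append((x, query_target(x)))  # each query leaks one label
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred  # perceptron update on misclassified points
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def agreement(w, b, n=1000, seed=1):
    """Fraction of fresh inputs on which surrogate and target agree."""
    rng = random.Random(seed)
    match = 0
    for _ in range(n):
        x = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
        sur = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        if sur == query_target(x):
            match += 1
    return match / n
```

On this toy linearly separable target, the surrogate's agreement with the oracle on held-out inputs is typically well above 90%, which is the "very similar performance" criterion the definition describes; real attacks replace the perceptron with a neural network and the oracle with an ML API.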

Papers

Showing 121–130 of 176 papers

Title | Status | Hype
Student Surpasses Teacher: Imitation Attack for Black-Box NLP APIs | | 0
Power-Based Attacks on Spatial DNN Accelerators | | 0
MEGEX: Data-Free Model Extraction Attack against Gradient-Based Explainable AI | | 0
Stateful Detection of Model Extraction Attacks | Code | 0
HODA: Hardness-Oriented Detection of Model Extraction Attacks | | 0
Model Extraction and Adversarial Attacks on Neural Networks using Switching Power Information | | 0
Killing One Bird with Two Stones: Model Extraction and Attribute Inference Attacks against BERT-based APIs | | 0
An Exact Poly-Time Membership-Queries Algorithm for Extraction a three-Layer ReLU Network | | 0
A Review of Confidentiality Threats Against Embedded Neural Network Models | | 0
Good Artists Copy, Great Artists Steal: Model Extraction Attacks Against Image Translation Models | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | three-step-original | Exact Match | 0.17 | | Unverified