SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to extract the parameters of a target model. Ideally, the adversary can steal and replicate a model whose performance closely matches that of the target.
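The basic attack loop can be sketched as follows: query the black-box target on attacker-chosen inputs, record its predictions, and train a surrogate on the resulting input/output pairs. This is a minimal illustrative sketch, not any specific paper's method; the "target" here is a locally trained stand-in for a remote API.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Stand-in for the victim: in a real attack this would be a remote API
# the adversary can only query via predict().
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
target = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                       random_state=0).fit(X, y)

# Attack step 1: query the target on synthetic inputs.
rng = np.random.default_rng(0)
queries = rng.normal(size=(5000, 10))
stolen_labels = target.predict(queries)

# Attack step 2: train a surrogate on the query/response pairs.
surrogate = DecisionTreeClassifier(random_state=0).fit(queries, stolen_labels)

# Fidelity: how often the surrogate agrees with the target on fresh inputs.
probe = rng.normal(size=(1000, 10))
fidelity = (surrogate.predict(probe) == target.predict(probe)).mean()
print(f"surrogate-target agreement: {fidelity:.2f}")
```

Fidelity (agreement with the target) is the usual success measure for such attacks, as opposed to accuracy on the original task.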

Papers

Showing 131–140 of 176 papers

| Title | Status | Hype |
|---|---|---|
| Thief, Beware of What Get You There: Towards Understanding Model Extraction Attack | — | 0 |
| Using Python for Model Inference in Deep Learning | — | 0 |
| Model Extraction and Adversarial Transferability, Your BERT is Vulnerable! | Code | 1 |
| BODAME: Bilevel Optimization for Defense Against Model Extraction | — | 0 |
| Model Extraction and Defenses on Generative Adversarial Networks | — | 0 |
| EXPLORING VULNERABILITIES OF BERT-BASED APIS | — | 0 |
| Grey-box Extraction of Natural Language Models | — | 0 |
| MEME: Generating RNN Model Explanations via Model Extraction | Code | 1 |
| Sparsity-driven Digital Terrain Model Extraction | — | 0 |
| Data-Free Model Extraction | Code | 1 |
Page 14 of 18

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | three-step-original | Exact Match | 0.17 | — | Unverified |