
Model extraction

Model extraction attacks, also known as model stealing attacks, aim to recover the parameters of a target model. Ideally, the adversary is able to steal and replicate a model whose performance closely matches that of the target model.
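As a rough illustration of the attack flow (not tied to any specific paper listed below), the following sketch trains a surrogate model on a black-box target's query responses. The local stand-in victim model, the random query distribution, and the surrogate architecture are all placeholder assumptions for the example.

```python
# Minimal sketch of a query-based model extraction attack, assuming the
# adversary only observes label predictions from the target. The "victim"
# here is a local stand-in for what would normally be a remote prediction API.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Stand-in for the target model (unknown to the adversary in a real attack).
X_priv, y_priv = make_classification(n_samples=2000, n_features=20, random_state=0)
victim = DecisionTreeClassifier(max_depth=8).fit(X_priv, y_priv)

def query_target(x):
    # The adversary only sees predicted labels, never the parameters.
    return victim.predict(x)

# 1. Synthesize (or collect) unlabeled query inputs.
rng = np.random.default_rng(1)
X_query = rng.normal(size=(5000, 20))

# 2. Label the queries with the target's responses.
y_query = query_target(X_query)

# 3. Train a surrogate ("stolen") model on the query/response pairs.
surrogate = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=1)
surrogate.fit(X_query, y_query)

# 4. Measure agreement (fidelity) between surrogate and target on fresh inputs.
X_test = rng.normal(size=(1000, 20))
fidelity = (surrogate.predict(X_test) == query_target(X_test)).mean()
print(f"surrogate/target agreement: {fidelity:.2%}")
```

In practice the query budget, the similarity of the query distribution to the target's training data, and the choice of surrogate architecture largely determine how closely the stolen model matches the target.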

Papers

Showing 151–160 of 176 papers

Title | Status | Hype
Entangled Watermarks as a Defense against Model Extraction | Code | 1
ACTIVETHIEF: Model Extraction Using Active Learning and Unannotated Public Data | Code | 0
Mitigating Query-Flooding Parameter Duplication Attack on Regression Models with High-Dimensional Gaussian Mechanism | - | 0
Model Extraction Attacks against Recurrent Neural Networks | - | 0
Adversarial Model Extraction on Graph Neural Networks | - | 0
Deep Neural Network Fingerprinting by Conferrable Adversarial Examples | Code | 0
Towards Security Threats of Deep Learning Systems: A Survey | - | 0
Quantifying (Hyper) Parameter Leakage in Machine Learning | - | 0
MaskedNet: The First Hardware Inference Engine Aiming Power Side-Channel Protection | - | 0
Thieves on Sesame Street! Model Extraction of BERT-based APIs | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | three-step-original | Exact Match | 0.17 | - | Unverified