SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, attempt to recover the parameters of a target model, typically by querying it and observing its outputs. In the ideal case, the adversary obtains a replica whose performance closely matches that of the target model.
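The basic attack loop can be sketched in a few lines. This is a minimal illustration, not any specific paper's method: it assumes a hypothetical black-box `query_target` API (here a stand-in linear model) and fits a surrogate on the recorded query/response pairs.

```python
import numpy as np

# Hypothetical black-box target: the adversary can only call query_target(),
# never read true_w directly. A linear model stands in for the victim.
rng = np.random.default_rng(0)
true_w = rng.normal(size=5)

def query_target(x):
    """Black-box prediction API exposed by the victim model."""
    return x @ true_w

# Step 1: sample inputs and record the target's responses.
queries = rng.normal(size=(200, 5))
responses = query_target(queries)

# Step 2: fit a surrogate ("stolen") model on the (query, response) pairs.
stolen_w, *_ = np.linalg.lstsq(queries, responses, rcond=None)

# Step 3: check agreement between surrogate and target on held-out inputs.
test_x = rng.normal(size=(50, 5))
disagreement = np.max(np.abs(test_x @ stolen_w - query_target(test_x)))
print(f"max disagreement on held-out inputs: {disagreement:.2e}")
```

Real targets are of course nonlinear and often return only labels or truncated probabilities, which is what makes the attacks catalogued below non-trivial, but the query-then-fit structure is the same.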

Papers

Showing 81–90 of 176 papers

Title | Status | Hype
Model Extraction and Defenses on Generative Adversarial Networks | | 0
Model Extraction Attack against Self-supervised Speech Models | | 0
Model Extraction Attacks Against Reinforcement Learning Based Controllers | | 0
Model Extraction Attacks against Recurrent Neural Networks | | 0
Model Extraction Attacks on Split Federated Learning | | 0
Model Extraction Attacks Revisited | | 0
Model Extraction Warning in MLaaS Paradigm | | 0
Monitoring-based Differential Privacy Mechanism Against Query-Flooding Parameter Duplication Attack | | 0
NASPY: Automated Extraction of Automated Machine Learning Models | | 0
NaturalFinger: Generating Natural Fingerprint with Generative Adversarial Networks | | 0
Page 9 of 18

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | three-step-original | Exact Match | 0.17 | | Unverified