SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, are used to extract the parameters of a target model. Ideally, the adversary is able to steal and replicate a model whose performance closely matches that of the target model.
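As a rough illustration of the idea, the common black-box variant of such an attack queries the target as an oracle and fits a surrogate on the returned labels. The sketch below is hypothetical (the victim model, query data, and helper names are all assumptions, not taken from any paper above):

```python
# Minimal sketch of a black-box model extraction attack (hypothetical setup):
# the adversary never sees the target's parameters, only its predictions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# The "victim" model, trained on data the adversary does not have.
target = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[:1000], y[:1000])

def query_target(inputs):
    """Black-box oracle: the only access the adversary is assumed to have."""
    return target.predict(inputs)

# The adversary queries the oracle on data it controls (here, a held-out split)...
X_query = X[1000:]
stolen_labels = query_target(X_query)

# ...and trains a surrogate on the resulting (input, prediction) pairs.
surrogate = LogisticRegression(max_iter=1000).fit(X_query, stolen_labels)

# Fidelity: how often the surrogate agrees with the target on the query set.
fidelity = accuracy_score(stolen_labels, surrogate.predict(X_query))
```

In practice the attack's success is measured both by accuracy on the target's task and by fidelity (agreement with the target), a distinction several of the papers below make explicit.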

Papers

Showing 161–170 of 176 papers

| Title | Status | Hype |
| --- | --- | --- |
| Extraction of Complex DNN Models: Real Threat or Boogeyman? | | 0 |
| High Accuracy and High Fidelity Extraction of Neural Networks | | 0 |
| Automating Agential Reasoning: Proof-Calculi and Syntactic Decidability for STIT Logics | | 0 |
| Adversarial Exploitation of Policy Imitation | | 0 |
| DAWN: Dynamic Adversarial Watermarking of Neural Networks | Code | 0 |
| A framework for the extraction of Deep Neural Networks by leveraging public data | | 0 |
| An Approach for Process Model Extraction By Multi-Grained Text Classification | Code | 0 |
| Exploring Connections Between Active Learning and Model Extraction | | 0 |
| Don't encrypt the data; just approximate the model \ Towards Secure Transaction and Fair Pricing of Training Data | | 0 |
| Model Extraction Warning in MLaaS Paradigm | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | three-step-original | Exact Match | 0.17 | | Unverified |