SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to recover the parameters or functionality of a target model, typically using only query access. Ideally, the adversary obtains a replica whose performance closely matches that of the target model.
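As a toy illustration of the threat, the sketch below "steals" a hypothetical black-box linear model from query access alone: the attacker samples inputs, records the model's outputs, and fits a surrogate by least squares. The target model, query budget, and surrogate form are all illustrative assumptions, not the method of any specific paper listed here.

```python
import random

# Hypothetical black-box target: the attacker can query it but cannot
# see its parameters (here, slope 3.0 and intercept 2.0).
def target_model(x):
    return 3.0 * x + 2.0

def extract_linear(query, n_queries=100):
    """Fit a surrogate y = a*x + b using only input/output queries."""
    xs = [random.uniform(-10.0, 10.0) for _ in range(n_queries)]
    ys = [query(x) for x in xs]  # the attacker's only access to the target
    mx = sum(xs) / n_queries
    my = sum(ys) / n_queries
    # Closed-form least-squares fit for a one-dimensional linear model.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

a, b = extract_linear(target_model)
```

With a noiseless linear target, the surrogate recovers the hidden parameters almost exactly; real attacks face noisy, high-dimensional models and limited query budgets, which is what the papers below address.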

Papers

Showing 11–20 of 176 papers

| Title | Status | Hype |
| --- | --- | --- |
| Model Extraction and Adversarial Transferability, Your BERT is Vulnerable! | Code | 1 |
| MEME: Generating RNN Model Explanations via Model Extraction | Code | 1 |
| Data-Free Model Extraction | Code | 1 |
| Now You See Me (CME): Concept-based Model Extraction | Code | 1 |
| MARLeME: A Multi-Agent Reinforcement Learning Model Extraction Library | Code | 1 |
| Cryptanalytic Extraction of Neural Network Models | Code | 1 |
| Entangled Watermarks as a Defense against Model Extraction | Code | 1 |
| Entangled Threats: A Unified Kill Chain Model for Quantum Machine Learning Security | | 0 |
| CEGA: A Cost-Effective Approach for Graph-Based Model Extraction and Acquisition | Code | 0 |
Page 2 of 18

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | three-step-original | Exact Match | 0.17 | | Unverified |