SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to recover the parameters or functional behavior of a target model through query access alone. Ideally, the adversary obtains a replica whose performance closely matches that of the target model.
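The core idea can be sketched in a few lines: the adversary queries the target as a black box, records the (input, prediction) pairs, and fits a surrogate to them. The toy below is an illustrative assumption, not taken from any listed paper; the target is a hypothetical secret linear model, and the surrogate is recovered by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target: a secret linear model f(x) = w.x + b that the
# adversary can only query, never inspect directly.
secret_w = np.array([2.0, -1.0, 0.5])
secret_b = 0.3

def query_target(X):
    """Black-box access: the adversary sees predictions only."""
    return X @ secret_w + secret_b

# Adversary: sample random inputs, record the target's outputs, and fit
# a surrogate by ordinary least squares on the (input, output) pairs.
X = rng.normal(size=(200, 3))
y = query_target(X)
A = np.hstack([X, np.ones((200, 1))])        # append a bias column
theta, *_ = np.linalg.lstsq(A, y, rcond=None)
stolen_w, stolen_b = theta[:3], theta[3]

# With noiseless query access, the surrogate recovers the secret
# parameters up to numerical precision.
print(np.allclose(stolen_w, secret_w, atol=1e-6))  # True
```

Real targets are nonlinear and return only labels or truncated confidences, so practical attacks (as in the papers below) need far more queries and a richer surrogate class, but the query-then-fit loop is the same.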

Papers

Showing 71–80 of 176 papers

| Title | Status | Hype |
| --- | --- | --- |
| Extraction of Complex DNN Models: Real Threat or Boogeyman? | | 0 |
| Few-shot Model Extraction Attacks against Sequential Recommender Systems | | 0 |
| Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations | | 0 |
| First to Possess His Statistics: Data-Free Model Extraction Attack on Tabular Data | | 0 |
| A Review of Confidentiality Threats Against Embedded Neural Network Models | | 0 |
| FRAME: Forward Recursive Adaptive Model Extraction -- A Technique for Advance Feature Selection | | 0 |
| Fraternal Twins: Unifying Attacks on Machine Learning and Digital Watermarking | | 0 |
| CopyQNN: Quantum Neural Network Extraction Attack under Varying Quantum Noise | | 0 |
| Bounding-box Watermarking: Defense against Model Extraction Attacks on Object Detectors | | 0 |
| A Novel Watermarking Framework for Ownership Verification of DNN Architectures | | 0 |
Page 8 of 18

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | three-step-original | Exact Match | 0.17 | | Unverified |