
Model extraction

Model extraction attacks, also known as model stealing attacks, aim to extract the parameters of a target model. Ideally, the adversary can replicate a model whose performance closely matches that of the target model.
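A common instantiation is a query-based attack: the adversary samples inputs, labels them with the victim's predictions, and trains a surrogate ("stolen") model on those input/label pairs. The following is a minimal sketch of that idea using scikit-learn; the victim model, the Gaussian query distribution, and the query budget are illustrative assumptions, not the setup of any specific paper listed below.

```python
# Minimal sketch of a query-based model extraction attack, assuming
# black-box access to a victim classifier's predictions. The victim,
# query distribution, and budget are illustrative stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in victim: in a real attack this is an opaque prediction API.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
victim = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                       random_state=0).fit(X_train, y_train)

# Attacker: draw synthetic queries from an assumed input distribution,
# label them with the victim's outputs, and fit a surrogate model.
QUERY_BUDGET = 5000
queries = rng.normal(size=(QUERY_BUDGET, X.shape[1]))
stolen_labels = victim.predict(queries)
surrogate = LogisticRegression(max_iter=1000).fit(queries, stolen_labels)

# Fidelity: how often the surrogate agrees with the victim on held-out data.
agreement = np.mean(surrogate.predict(X_test) == victim.predict(X_test))
print(f"surrogate/victim agreement: {agreement:.2%}")
```

Extraction success is usually measured by fidelity (agreement with the victim's predictions) rather than raw accuracy, since the attacker's goal is to replicate the target's behavior.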

Papers

Showing 101–125 of 176 papers

Title | Status | Hype
Stealing Deep Reinforcement Learning Models for Fun and Profit | - | 0
Thief, Beware of What Get You There: Towards Understanding Model Extraction Attack | - | 0
Three-dimensional planar model estimation using multi-constraint knowledge based on k-means and RANSAC | - | 0
Towards dialogue based, computer aided software requirements elicitation | - | 0
Towards Few-Call Model Stealing via Active Self-Paced Knowledge Distillation and Diffusion-Based Image Generation | - | 0
Towards Security Threats of Deep Learning Systems: A Survey | - | 0
Unraveling Attacks in Machine Learning-based IoT Ecosystems: A Survey and the Open Libraries Behind Them | - | 0
Using Python for Model Inference in Deep Learning | - | 0
Was my Model Stolen? Feature Sharing for Robust and Transferable Watermarks | - | 0
Watermarking Graph Neural Networks based on Backdoor Attacks | - | 0
Few-shot Model Extraction Attacks against Sequential Recommender Systems | - | 0
Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations | - | 0
First to Possess His Statistics: Data-Free Model Extraction Attack on Tabular Data | - | 0
FRAME: Forward Recursive Adaptive Model Extraction -- A Technique for Advance Feature Selection | - | 0
Fraternal Twins: Unifying Attacks on Machine Learning and Digital Watermarking | - | 0
GENIE: Watermarking Graph Neural Networks for Link Prediction | - | 0
Good Artists Copy, Great Artists Steal: Model Extraction Attacks Against Image Translation Models | - | 0
Grey-box Extraction of Natural Language Models | - | 0
GrOVe: Ownership Verification of Graph Neural Networks using Embeddings | - | 0
HODA: Hardness-Oriented Detection of Model Extraction Attacks | - | 0
High Accuracy and High Fidelity Extraction of Neural Networks | - | 0
HODA: Protecting DNNs Against Model Extraction Attacks via Hardness of Samples | - | 0
HoneypotNet: Backdoor Attacks Against Model Extraction | - | 0
Increasing the Cost of Model Extraction with Calibrated Proof of Work | - | 0
Interpretability via Model Extraction | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | three-step-original | Exact Match | 0.17 | - | Unverified