SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to recover the parameters or functionality of a target model using only query access to it. Ideally, the adversary obtains a replica whose performance closely matches that of the target model.
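The basic query-then-fit loop behind these attacks can be sketched on a toy example. This is a minimal illustration, not any specific paper's method: the "target" is assumed to be a hidden linear classifier exposed only through input/output queries, and the adversary fits a surrogate by least squares on the collected query/label pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Black-box" target: a hidden linear classifier the adversary cannot inspect.
w_secret = rng.normal(size=5)

def target(x):
    # The adversary only gets labels back, never w_secret itself.
    return (x @ w_secret > 0).astype(float)

# Attack step 1: query the target on random inputs and record its labels.
X = rng.normal(size=(2000, 5))
y = target(X)

# Attack step 2: fit a surrogate to the query/label pairs
# (least squares on +/-1 labels as a cheap stand-in for real training).
w_hat, *_ = np.linalg.lstsq(X, 2 * y - 1, rcond=None)

# Evaluate how often the stolen surrogate agrees with the target on fresh data.
X_test = rng.normal(size=(1000, 5))
agreement = ((X_test @ w_hat > 0).astype(float) == target(X_test)).mean()
print(f"surrogate/target agreement: {agreement:.2%}")
```

Real attacks replace the random queries with adaptively chosen ones and the least-squares fit with full surrogate training, but the extraction objective, high agreement with the target, is the same.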

Papers

Showing 151-175 of 176 papers

Title | Status | Hype
DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories | | 0
Defending against Data-Free Model Extraction by Distributionally Robust Defensive Training | | 0
Differentially private fine-tuned NF-Net to predict GI cancer type | | 0
Don't encrypt the data; just approximate the model: Towards Secure Transaction and Fair Pricing of Training Data | | 0
DualCF: Efficient Model Extraction Attack from Counterfactual Explanations | | 0
DynaMarks: Defending Against Deep Learning Model Extraction Using Dynamic Watermarking | | 0
Efficiently Learning Any One Hidden Layer ReLU Network From Queries | | 0
Efficiently Learning One Hidden Layer ReLU Networks From Queries | | 0
Efficient Model Extraction via Boundary Sampling | | 0
Emerging AI Security Threats for Autonomous Cars -- Case Studies | | 0
Enhancing TinyML Security: Study of Adversarial Attack Transferability | | 0
Entangled Threats: A Unified Kill Chain Model for Quantum Machine Learning Security | | 0
Evaluating Query Efficiency and Accuracy of Transfer Learning-based Model Extraction Attack in Federated Learning | | 0
EVE: Environmental Adaptive Neural Network Models for Low-power Energy Harvesting System | | 0
Explore the vulnerability of black-box models via diffusion models | | 0
Exploring Connections Between Active Learning and Model Extraction | | 0
Exploring Vulnerabilities of BERT-based APIs | | 0
Extraction of Complex DNN Models: Real Threat or Boogeyman? | | 0
EZClone: Improving DNN Model Extraction Attack via Shape Distillation from GPU Execution Profiles | | 0
Fault Injection and Safe-Error Attack for Extraction of Embedded Neural Network Models | | 0
FDINet: Protecting against DNN Model Extraction via Feature Distortion Index | | 0
Few-shot Model Extraction Attacks against Sequential Recommender Systems | | 0
Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations | | 0
First to Possess His Statistics: Data-Free Model Extraction Attack on Tabular Data | | 0
Page 7 of 8

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | three-step-original | Exact Match | 0.17 | | Unverified