SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to extract the parameters of a target model. Ideally, the adversary can replicate a model whose performance closely matches that of the target.
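The basic query-based flavor of this attack can be sketched as follows: the adversary repeatedly queries the black-box target, records its outputs, and trains a surrogate on the (input, output) pairs. This is a minimal illustrative sketch, not any specific published attack; the secret linear target and the least-squares surrogate are stand-ins chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box target: a secret linear classifier the
# adversary can only query for labels, never inspect directly.
w_secret = rng.normal(size=3)

def target_predict(X):
    return (X @ w_secret > 0).astype(int)

# Step 1: sample query inputs and record the target's responses.
X_query = rng.normal(size=(5000, 3))
y_query = target_predict(X_query)

# Step 2: train a surrogate on the collected (input, output) pairs.
# A least-squares fit on +/-1 labels stands in for surrogate training.
w_surrogate, *_ = np.linalg.lstsq(X_query, y_query * 2.0 - 1.0, rcond=None)

def surrogate_predict(X):
    return (X @ w_surrogate > 0).astype(int)

# Step 3: measure agreement with the target on inputs never queried --
# a common proxy for how faithfully the model was "stolen".
X_test = rng.normal(size=(2000, 3))
agreement = (surrogate_predict(X_test) == target_predict(X_test)).mean()
print(f"surrogate/target agreement: {agreement:.2%}")
```

Real attacks differ mainly in step 2 (the surrogate architecture and training procedure) and in how query inputs are chosen, e.g. actively selecting inputs near the decision boundary to reduce the query budget.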

Papers

Showing 161–170 of 176 papers

Title | Status | Hype
Emerging AI Security Threats for Autonomous Cars -- Case Studies | | 0
Enhancing TinyML Security: Study of Adversarial Attack Transferability | | 0
Entangled Threats: A Unified Kill Chain Model for Quantum Machine Learning Security | | 0
Evaluating Query Efficiency and Accuracy of Transfer Learning-based Model Extraction Attack in Federated Learning | | 0
EVE: Environmental Adaptive Neural Network Models for Low-power Energy Harvesting System | | 0
Explore the vulnerability of black-box models via diffusion models | | 0
Exploring Connections Between Active Learning and Model Extraction | | 0
EXPLORING VULNERABILITIES OF BERT-BASED APIS | | 0
Extraction of Complex DNN Models: Real Threat or Boogeyman? | | 0
EZClone: Improving DNN Model Extraction Attack via Shape Distillation from GPU Execution Profiles | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | three-step-original | Exact Match | 0.17 | | Unverified