SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to extract the parameters of a target model, typically by querying it and fitting a substitute model to the responses. Ideally, the adversary obtains a replica whose performance closely matches that of the target model.
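Many of the attacks catalogued below follow this generic query-based recipe: probe the target's prediction API with attacker-chosen inputs, record its answers, and train a surrogate on the query/response pairs. The sketch below illustrates that recipe under stated assumptions (a scikit-learn MLP standing in for the black-box target, random Gaussian inputs as the query set, and a logistic-regression surrogate); it is a minimal illustration, not the method of any particular paper in the list.

```python
# Minimal sketch of a query-based model extraction attack (illustrative only;
# not the method of any specific paper below). The adversary never sees the
# target's parameters or training data, only its prediction API.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Victim side: a "black-box" target model (MLP chosen arbitrarily here).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
target = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
target.fit(X_train, y_train)

# Adversary side: send attacker-chosen queries, keep only the API's answers.
rng = np.random.default_rng(0)
queries = rng.normal(size=(5000, X.shape[1]))  # synthetic query inputs
stolen_labels = target.predict(queries)        # black-box responses

# Train a surrogate ("stolen") model on the query/response pairs.
surrogate = LogisticRegression(max_iter=1000)
surrogate.fit(queries, stolen_labels)

# Fidelity: how often the surrogate agrees with the target on unseen data.
agreement = np.mean(surrogate.predict(X_test) == target.predict(X_test))
print(f"surrogate/target agreement: {agreement:.2%}")
print(f"target accuracy:            {target.score(X_test, y_test):.2%}")
print(f"surrogate accuracy:         {surrogate.score(X_test, y_test):.2%}")
```

Note that the success metric here is fidelity (agreement with the target's predictions) rather than raw accuracy; a high-fidelity surrogate replicates the target's behavior even where the target itself is wrong.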

Papers

Showing 26–50 of 176 papers

Title | Status | Hype
Robust and Minimally Invasive Watermarking for EaaS | Code | 0
Efficient Model Extraction via Boundary Sampling | — | 0
Efficient and Effective Model Extraction | Code | 0
CaBaGe: Data-Free Model Extraction using ClAss BAlanced Generator Ensemble | — | 0
Protecting Copyright of Medical Pre-trained Language Models: Training-Free Backdoor Model Watermarking | — | 0
"Yes, My LoRD." Guiding Language Model Extraction with Locality Reinforced Distillation | Code | 1
VidModEx: Interpretable and Efficient Black Box Model Extraction for High-Dimensional Spaces | Code | 0
Enhancing TinyML Security: Study of Adversarial Attack Transferability | — | 0
QUEEN: Query Unlearning against Model Extraction | — | 0
Privacy Implications of Explainable AI in Data-Driven Systems | — | 0
Beyond Slow Signs in High-fidelity Model Extraction | Code | 0
GENIE: Watermarking Graph Neural Networks for Link Prediction | — | 0
Watermarking Counterfactual Explanations | Code | 0
Noisy Data Meets Privacy: Training Local Models with Post-Processed Remote Queries | — | 0
DeepNcode: Encoding-Based Protection against Bit-Flip Attacks on Neural Networks | — | 0
Model Reconstruction Using Counterfactual Explanations: A Perspective From Polytope Theory | Code | 0
Learnable Linguistic Watermarks for Tracing Model Extraction Attacks on Large Language Models | — | 0
Knowledge Distillation-Based Model Extraction Attack using GAN-based Private Counterfactual Explanations | Code | 0
QuantumLeak: Stealing Quantum Neural Networks from Cloud-based NISQ Machines | — | 0
Not Just Change the Labels, Learn the Features: Watermarking Deep Neural Networks with Multi-View Data | Code | 0
Precise Extraction of Deep Learning Models via Side-Channel Attacks on Edge/Endpoint Devices | — | 0
WARDEN: Multi-Directional Backdoor Watermarks for Embedding-as-a-Service Copyright Protection | Code | 0
MEA-Defender: A Robust Watermark against Model Extraction Attack | Code | 1
Unraveling Attacks in Machine Learning-based IoT Ecosystems: A Survey and the Open Libraries Behind Them | — | 0
MEAOD: Model Extraction Attack against Object Detectors | — | 0
Page 2 of 8

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | three-step-original | Exact Match | 0.17 | — | Unverified