SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to extract the parameters or replicate the functionality of a target model. Ideally, the adversary obtains a copy whose performance closely matches that of the target model.
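A minimal sketch of the idea, assuming a scikit-learn environment: the adversary queries a black-box target model, records its predictions, and trains a local surrogate on the query/response pairs. All model choices and variable names here are illustrative, not taken from any of the papers below.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# The "target" model: the adversary can only query it, not inspect it.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
target = LogisticRegression(max_iter=1000).fit(X, y)

# The adversary draws its own query points and records the target's labels.
queries = rng.normal(size=(5000, 10))
stolen_labels = target.predict(queries)

# Surrogate trained purely on the observed (query, prediction) pairs.
surrogate = LogisticRegression(max_iter=1000).fit(queries, stolen_labels)

# Fidelity: how often the surrogate agrees with the target on fresh inputs.
test_points = rng.normal(size=(1000, 10))
fidelity = (surrogate.predict(test_points) == target.predict(test_points)).mean()
print(f"surrogate/target agreement: {fidelity:.2f}")
```

With matching model classes, as here, the surrogate recovers the target's decision boundary almost exactly; real attacks must also contend with mismatched architectures, limited query budgets, and defenses such as the watermarking schemes listed below.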

Papers

Showing 51–75 of 176 papers

Title | Status | Hype
GENIE: Watermarking Graph Neural Networks for Link Prediction | | 0
Watermarking Counterfactual Explanations | Code | 0
Noisy Data Meets Privacy: Training Local Models with Post-Processed Remote Queries | | 0
DeepNcode: Encoding-Based Protection against Bit-Flip Attacks on Neural Networks | | 0
Model Reconstruction Using Counterfactual Explanations: A Perspective From Polytope Theory | Code | 0
Learnable Linguistic Watermarks for Tracing Model Extraction Attacks on Large Language Models | | 0
Knowledge Distillation-Based Model Extraction Attack using GAN-based Private Counterfactual Explanations | Code | 0
QuantumLeak: Stealing Quantum Neural Networks from Cloud-based NISQ Machines | | 0
Not Just Change the Labels, Learn the Features: Watermarking Deep Neural Networks with Multi-View Data | Code | 0
Precise Extraction of Deep Learning Models via Side-Channel Attacks on Edge/Endpoint Devices | | 0
WARDEN: Multi-Directional Backdoor Watermarks for Embedding-as-a-Service Copyright Protection | Code | 0
Unraveling Attacks in Machine Learning-based IoT Ecosystems: A Survey and the Open Libraries Behind Them | | 0
MEAOD: Model Extraction Attack against Object Detectors | | 0
SAME: Sample Reconstruction against Model Extraction Attacks | Code | 0
Model Extraction Attacks Revisited | | 0
Security and Privacy Challenges in Deep Learning Models | | 0
Army of Thieves: Enhancing Black-Box Model Extraction via Ensemble based sample selection | Code | 0
Like an Open Book? Read Neural Network Architecture with Simple Power Analysis on 32-bit Microcontrollers | | 0
Defense Against Model Extraction Attacks on Recommender Systems | Code | 0
MeaeQ: Mount Model Extraction Attacks with Efficient Queries | Code | 0
Towards dialogue based, computer aided software requirements elicitation | | 0
SCME: A Self-Contrastive Method for Data-free and Query-Limited Model Extraction Attack | | 0
Beyond Labeling Oracles: What does it mean to steal ML models? | | 0
Towards Few-Call Model Stealing via Active Self-Paced Knowledge Distillation and Diffusion-Based Image Generation | | 0
Defending against Data-Free Model Extraction by Distributionally Robust Defensive Training | | 0
Page 3 of 8

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | three-step-original | Exact Match | 0.17 | | Unverified