
Model extraction

Model extraction attacks, also known as model stealing attacks, attempt to extract the parameters of a target model. Ideally, the adversary obtains a stolen replica whose performance closely matches that of the target model. A minimal sketch of this query-and-replicate loop is shown below.
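
As a rough illustration (not the method of any specific paper listed below), the following Python sketch shows a basic black-box extraction attack: the attacker queries a target classifier, records its predicted labels, and trains a surrogate on the query/response pairs. The target model, query distribution, and surrogate architecture are all assumptions made for the example.

```python
# Minimal black-box model extraction sketch (illustrative assumptions throughout).
# The "target" is a locally trained classifier standing in for a remote API;
# the attacker only observes predicted labels for inputs it chooses.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Victim trains a private model on data the attacker never sees.
X_private, y_private = make_classification(n_samples=2000, n_features=20, random_state=0)
target = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
target.fit(X_private, y_private)

# Attacker crafts queries (here: random points from a guessed input range)
# and records only the target's predicted labels.
X_query = rng.normal(size=(5000, 20))
y_query = target.predict(X_query)  # the attacker's only access to the target

# Attacker fits a surrogate ("stolen") model on the query/response pairs.
surrogate = LogisticRegression(max_iter=1000)
surrogate.fit(X_query, y_query)

# Fidelity: how often the surrogate agrees with the target on fresh inputs.
X_test = rng.normal(size=(2000, 20))
agreement = accuracy_score(target.predict(X_test), surrogate.predict(X_test))
print("agreement with target:", agreement)
```

In practice the attacks catalogued on this page differ mainly in how they choose queries (active learning, public unlabeled data, counterfactual explanations) and in what the surrogate is trained to match (labels, probabilities, or exact parameters).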

Papers

Showing 26–50 of 176 papers (page 2 of 8)

Title | Status | Hype
Process Extraction from Text: Benchmarking the State of the Art and Paving the Way for Future Challenges | Code | 0
On the Effectiveness of Dataset Watermarking in Adversarial Settings | Code | 0
Stealing and Evading Malware Classifiers and Antivirus at Low False Positive Conditions | Code | 0
SAME: Sample Reconstruction against Model Extraction Attacks | Code | 0
Beyond Slow Signs in High-fidelity Model Extraction | Code | 0
Model extraction from counterfactual explanations | Code | 0
MISLEADER: Defending against Model Extraction with Ensembles of Distilled Models | Code | 0
Model Reconstruction Using Counterfactual Explanations: A Perspective From Polytope Theory | Code | 0
ACTIVETHIEF: Model Extraction Using Active Learning and Unannotated Public Data | Code | 0
Army of Thieves: Enhancing Black-Box Model Extraction via Ensemble based sample selection | Code | 0
MeaeQ: Mount Model Extraction Attacks with Efficient Queries | Code | 0
Knowledge Distillation-Based Model Extraction Attack using GAN-based Private Counterfactual Explanations | Code | 0
CEGA: A Cost-Effective Approach for Graph-Based Model Extraction and Acquisition | Code | 0
Marich: A Query-efficient Distributionally Equivalent Model Extraction Attack using Public Data | Code | 0
From Counterfactuals to Trees: Competitive Analysis of Model Extraction Attacks | Code | 0
DAWN: Dynamic Adversarial Watermarking of Neural Networks | Code | 0
GUIDO: A Hybrid Approach to Guideline Discovery & Ordering from Natural Language Texts | Code | 0
Robust and Minimally Invasive Watermarking for EaaS | Code | 0
Deep Neural Network Fingerprinting by Conferrable Adversarial Examples | Code | 0
An Approach for Process Model Extraction By Multi-Grained Text Classification | Code | 0
Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization | Code | 0
Efficient and Effective Model Extraction | Code | 0
FLuID: Mitigating Stragglers in Federated Learning using Invariant Dropout | Code | 0
A Hard-Label Cryptanalytic Extraction of Non-Fully Connected Deep Neural Networks using Side-Channel Attacks | Code | 0
Not Just Change the Labels, Learn the Features: Watermarking Deep Neural Networks with Multi-View Data | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | three-step-original | Exact Match | 0.17 | - | Unverified