SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to recover the parameters of a target model, typically through query access alone. Ideally, the adversary obtains a replica whose performance closely matches that of the target model.
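The basic query-then-fit loop can be sketched as follows. This is a minimal illustration, not any specific attack from the papers below: the "target" is a hypothetical linear model the adversary can only query, and the surrogate is fit by least squares on the query/response pairs.

```python
import numpy as np

# Hypothetical black-box target: the adversary can query it but not read its weights.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])  # hidden parameters (assumption for this sketch)

def query_target(X):
    """Simulated prediction API of the target model."""
    return X @ true_w

# Step 1: sample inputs and query the target to collect labeled pairs.
X = rng.normal(size=(200, 3))
y = query_target(X)

# Step 2: fit a surrogate ("stolen") model on the collected pairs.
stolen_w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Step 3: the surrogate now agrees with the target on fresh inputs.
X_test = rng.normal(size=(50, 3))
gap = np.max(np.abs(X_test @ stolen_w - query_target(X_test)))
print(gap < 1e-6)
```

Real attacks replace the linear fit with training a neural network on the query transcript, and much of the literature concerns choosing queries efficiently when the API budget is limited.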

Papers

Showing 21-30 of 176 papers

| Title | Status | Hype |
| --- | --- | --- |
| From Counterfactuals to Trees: Competitive Analysis of Model Extraction Attacks | Code | 0 |
| A Hard-Label Cryptanalytic Extraction of Non-Fully Connected Deep Neural Networks using Side-Channel Attacks | Code | 0 |
| GUIDO: A Hybrid Approach to Guideline Discovery & Ordering from Natural Language Texts | Code | 0 |
| Defense Against Model Extraction Attacks on Recommender Systems | Code | 0 |
| VidModEx: Interpretable and Efficient Black Box Model Extraction for High-Dimensional Spaces | Code | 0 |
| FLuID: Mitigating Stragglers in Federated Learning using Invariant Dropout | Code | 0 |
| Deep Neural Network Fingerprinting by Conferrable Adversarial Examples | Code | 0 |
| Knowledge Distillation-Based Model Extraction Attack using GAN-based Private Counterfactual Explanations | Code | 0 |
| Efficient and Effective Model Extraction | Code | 0 |
| Army of Thieves: Enhancing Black-Box Model Extraction via Ensemble based sample selection | Code | 0 |
Page 3 of 18

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | three-step-original | Exact Match | 0.17 | | Unverified |