SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to extract the parameters or functionality of a target model through query access. Ideally, the adversary replicates a model whose performance closely matches that of the target.
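The attack loop described above can be sketched in a few lines. This is a minimal illustration, not any specific paper's method: the target model, the query distribution, and the surrogate architecture are all hypothetical stand-ins, and the adversary is assumed to have only black-box query access.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def target_model(X):
    # Hidden victim model the adversary cannot inspect, only query
    # (here: an illustrative linear classifier).
    return (X @ np.array([2.0, -1.0]) > 0.5).astype(int)

# 1. The adversary samples query inputs and records the target's responses.
X_query = rng.normal(size=(2000, 2))
y_query = target_model(X_query)

# 2. The adversary trains a surrogate on the collected query/response pairs.
surrogate = LogisticRegression().fit(X_query, y_query)

# 3. Fidelity: how often the surrogate agrees with the target on fresh inputs.
X_test = rng.normal(size=(500, 2))
fidelity = (surrogate.predict(X_test) == target_model(X_test)).mean()
print(f"fidelity: {fidelity:.2f}")
```

Real attacks differ mainly in how queries are chosen (random, adaptive, or synthesized) and in what the target returns (labels, probabilities, or explanations), but the query-then-train structure is the same.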

Papers

Showing 11–20 of 176 papers

| Title | Status | Hype |
|---|---|---|
| ATOM: A Framework of Detecting Query-Based Model Extraction Attacks for Graph Neural Networks | Code | 1 |
| ProDiF: Protecting Domain-Invariant Features to Secure Pre-Trained Models Against Extraction | | 0 |
| A Survey of Model Extraction Attacks and Defenses in Distributed Computing Environments | | 0 |
| Differentially private fine-tuned NF-Net to predict GI cancer type | | 0 |
| From Counterfactuals to Trees: Competitive Analysis of Model Extraction Attacks | Code | 0 |
| A Framework for Double-Blind Federated Adaptation of Foundation Models | | 0 |
| Safety at Scale: A Comprehensive Survey of Large Model Safety | Code | 3 |
| Data-Free Model-Related Attacks: Unleashing the Potential of Generative AI | | 0 |
| "FRAME: Forward Recursive Adaptive Model Extraction -- A Technique for Advance Feature Selection" | | 0 |
| Neural Honeytrace: A Robust Plug-and-Play Watermarking Framework against Model Extraction Attacks | Code | 1 |
Page 2 of 18

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | three-step-original | Exact Match | 0.17 | | Unverified |