SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to recover the parameters or functionality of a target model, typically by querying it as a black box and training a surrogate on the input-output pairs. Ideally, the adversary ends up with a replica whose performance closely matches that of the target model.
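The query-based setting described above can be sketched in a few lines. This is a minimal illustrative example, not any specific paper's method: the "target" is a hypothetical secret linear classifier exposed only through a hard-label query API, and the adversary fits a logistic-regression surrogate on the collected query responses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box "target": a secret linear classifier.
# The adversary never sees SECRET_W or SECRET_B, only query outputs.
SECRET_W = rng.normal(size=3)
SECRET_B = 0.5

def query_target(x):
    """Black-box API: returns only hard labels, never parameters."""
    return (x @ SECRET_W + SECRET_B > 0).astype(int)

# 1) The adversary samples inputs and records the target's answers.
X = rng.normal(size=(2000, 3))
y = query_target(X)

# 2) Train a surrogate (logistic regression via gradient descent)
#    on the harvested (input, label) pairs.
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # surrogate probabilities
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# 3) Measure surrogate/target agreement on fresh queries.
X_test = rng.normal(size=(1000, 3))
agreement = np.mean((X_test @ w + b > 0).astype(int) == query_target(X_test))
print(f"surrogate/target agreement: {agreement:.2%}")
```

Against a linear target this surrogate agrees with the target on nearly all fresh inputs; real attacks face nonlinear models, query budgets, and defenses such as the watermarking schemes listed below.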

Papers

Showing 41-50 of 176 papers

| Title | Code | Hype |
| --- | --- | --- |
| Robust and Minimally Invasive Watermarking for EaaS | Code | 0 |
| Efficient Model Extraction via Boundary Sampling | — | 0 |
| Efficient and Effective Model Extraction | Code | 0 |
| CaBaGe: Data-Free Model Extraction using ClAss BAlanced Generator Ensemble | — | 0 |
| Protecting Copyright of Medical Pre-trained Language Models: Training-Free Backdoor Model Watermarking | — | 0 |
| VidModEx: Interpretable and Efficient Black Box Model Extraction for High-Dimensional Spaces | Code | 0 |
| Enhancing TinyML Security: Study of Adversarial Attack Transferability | — | 0 |
| QUEEN: Query Unlearning against Model Extraction | — | 0 |
| Privacy Implications of Explainable AI in Data-Driven Systems | — | 0 |
| Beyond Slow Signs in High-fidelity Model Extraction | Code | 0 |
Page 5 of 18

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | three-step-original | Exact Match | 0.17 | — | Unverified |