SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to extract the parameters of a target model. Ideally, the adversary can replicate a model whose performance closely matches that of the target.
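The idea can be illustrated with a minimal sketch (not from any of the listed papers): the attacker queries a black-box model on chosen inputs and fits a surrogate on the query–response pairs. Here the hypothetical target is a secret linear model, so ordinary least squares recovers its parameters exactly; real attacks target neural networks and only approximate them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "target": a secret linear model the attacker
# can only query as a black box (names are illustrative).
secret_w = rng.normal(size=5)

def target_predict(x):
    # Black-box API: returns scores for a batch of inputs.
    return x @ secret_w

# Extraction: query the target on attacker-chosen inputs...
queries = rng.normal(size=(100, 5))
responses = target_predict(queries)

# ...and fit a surrogate model on the (query, response) pairs.
stolen_w, *_ = np.linalg.lstsq(queries, responses, rcond=None)

# The surrogate's parameters match the target's almost exactly,
# so its predictions are near-identical to the target's.
print(np.allclose(stolen_w, secret_w, atol=1e-6))
```

With 100 well-spread queries against a 5-parameter linear target, recovery is exact up to numerical precision; for deeper models the same query-and-fit loop yields an approximate replica rather than the exact parameters.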

Papers

Showing 111–120 of 176 papers

| Title | Status | Hype |
| Thief, Beware of What Get You There: Towards Understanding Model Extraction Attack | | 0 |
| Three-dimensional planar model estimation using multi-constraint knowledge based on k-means and RANSAC | | 0 |
| Towards dialogue based, computer aided software requirements elicitation | | 0 |
| Towards Few-Call Model Stealing via Active Self-Paced Knowledge Distillation and Diffusion-Based Image Generation | | 0 |
| Towards Security Threats of Deep Learning Systems: A Survey | | 0 |
| Unraveling Attacks in Machine Learning-based IoT Ecosystems: A Survey and the Open Libraries Behind Them | | 0 |
| Using Python for Model Inference in Deep Learning | | 0 |
| Was my Model Stolen? Feature Sharing for Robust and Transferable Watermarks | | 0 |
| Watermarking Graph Neural Networks based on Backdoor Attacks | | 0 |
| Sparsity-driven Digital Terrain Model Extraction | | 0 |
Page 12 of 18

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| 1 | three-step-original | Exact Match | 0.17 | | Unverified |