SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to extract the parameters of a target model. Ideally, the adversary steals and replicates a model whose performance closely matches that of the target model.
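The idea above can be sketched with a minimal toy attack: the adversary has only black-box query access to a victim classifier, labels random inputs with it, and fits a surrogate on those query-label pairs. Everything here (the `target_predict` function, the perceptron surrogate, the input dimensionality) is an illustrative assumption, not any specific paper's method.

```python
import random

# Hypothetical black-box target: a secret linear classifier that the
# adversary can only query for hard labels (illustrative, not a real API).
def target_predict(x):
    w = [0.7, -1.3, 2.1]  # secret parameters the attacker wants to replicate
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

def extract_surrogate(query_fn, n_queries=5000, lr=0.1, epochs=20):
    """Train a surrogate perceptron purely from query access to the victim."""
    random.seed(0)
    # Step 1: label random probe inputs with the victim's outputs.
    data = []
    for _ in range(n_queries):
        x = [random.uniform(-1, 1) for _ in range(3)]
        data.append((x, query_fn(x)))
    # Step 2: fit the surrogate on the stolen labels (perceptron updates).
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            err = y - pred  # update only on disagreement with the victim
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

w = extract_surrogate(target_predict)
# Measure how often the surrogate agrees with the target on fresh inputs.
test = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(1000)]
agree = sum(
    target_predict(x) == (1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0)
    for x in test
) / len(test)
print(f"surrogate/target agreement: {agree:.2%}")
```

Agreement on held-out inputs is the standard success metric for this attack family: a high agreement rate means the stolen surrogate is functionally close to the target even though its raw parameters may differ.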

Papers

Showing 101-110 of 176 papers

Title | Status | Hype
Seeds Don't Lie: An Adaptive Watermarking Framework for Computer Vision Models | | 0
A Practical Introduction to Side-Channel Extraction of Deep Neural Network Parameters | | 0
Towards Automatically Extracting UML Class Diagrams from Natural Language Specifications | Code | 0
SEEK: model extraction attack against hybrid secure inference protocols | | 0
DynaMarks: Defending Against Deep Learning Model Extraction Using Dynamic Watermarking | | 0
Revealing Secrets From Pre-trained Models | | 0
EVE: Environmental Adaptive Neural Network Models for Low-power Energy Harvesting System | | 0
On the amplification of security and privacy risks by post-hoc explanations in machine learning models | | 0
A Framework for Understanding Model Extraction Attack and Defense | | 0
On the Difficulty of Defending Self-Supervised Learning against Model Extraction | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | three-step-original | Exact Match | 0.17 | | Unverified