SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to replicate a target model — or recover its parameters — by querying it as a black box. Ideally, the adversary obtains a stolen copy whose performance closely matches that of the target model.
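A minimal sketch of the basic attack loop described above: the adversary queries a black-box target on chosen inputs, collects the predicted labels, and trains a surrogate on those (input, output) pairs. The target model, data shapes, and use of logistic regression here are all illustrative assumptions, not taken from any specific paper in this list.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical "target" model; the adversary can only query it, not inspect it.
X_priv = rng.normal(size=(500, 4))
y_priv = (X_priv[:, 0] + X_priv[:, 1] > 0).astype(int)
target = LogisticRegression().fit(X_priv, y_priv)

# Attack step 1: query the target on adversary-chosen inputs.
X_query = rng.normal(size=(2000, 4))
y_query = target.predict(X_query)  # only black-box outputs are observed

# Attack step 2: train a surrogate ("stolen") model on the query transcript.
surrogate = LogisticRegression().fit(X_query, y_query)

# Fidelity: how often the surrogate agrees with the target on fresh inputs.
X_test = rng.normal(size=(1000, 4))
fidelity = (surrogate.predict(X_test) == target.predict(X_test)).mean()
print(f"fidelity: {fidelity:.2f}")
```

With more queries, or with access to predicted probabilities rather than hard labels, the surrogate's agreement with the target typically improves further.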

Papers

Showing 171–176 of 176 papers

Title | Status | Hype
Deep Neural Network Fingerprinting by Conferrable Adversarial Examples | Code | 0
CEGA: A Cost-Effective Approach for Graph-Based Model Extraction and Acquisition | Code | 0
An Approach for Process Model Extraction By Multi-Grained Text Classification | Code | 0
MISLEADER: Defending against Model Extraction with Ensembles of Distilled Models | Code | 0
Towards Automatically Extracting UML Class Diagrams from Natural Language Specifications | Code | 0
Beyond Slow Signs in High-fidelity Model Extraction | Code | 0
Page 18 of 18

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | three-step-original | Exact Match | 0.17 | | Unverified