SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to recover the parameters of a target model, typically by querying it through its prediction interface. Ideally, the adversary obtains a replica whose performance closely matches that of the target model.
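As a rough illustration of the query-based variant of this attack, the sketch below uses a toy setup (all names and the linear target are assumptions for illustration, not any specific paper's method): the adversary can only call the target's prediction API, collects input/label pairs, and fits a surrogate model on them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box target: the attacker can call query_target()
# but cannot inspect the secret weights (here, a linear classifier).
_secret_w = rng.normal(size=4)

def query_target(x):
    """Black-box prediction API: returns the target's label for x."""
    return (x @ _secret_w > 0).astype(int)

# Step 1: the adversary sends chosen queries to the target model.
queries = rng.normal(size=(2000, 4))
labels = query_target(queries)

# Step 2: train a surrogate ("stolen") model on the query/label pairs.
# Least squares on +/-1 targets is a simple stand-in for real training.
y = 2.0 * labels - 1.0
stolen_w, *_ = np.linalg.lstsq(queries, y, rcond=None)

# Step 3: measure how often the surrogate agrees with the target
# on fresh inputs -- the usual fidelity metric for extraction.
test = rng.normal(size=(1000, 4))
agreement = np.mean((test @ stolen_w > 0) == query_target(test))
print(f"surrogate/target agreement: {agreement:.2%}")
```

With a linear target and enough queries, the surrogate's decision boundary converges to the target's, so agreement approaches 100%; real attacks replace the least-squares step with training a neural surrogate on the harvested labels.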

Papers

Showing 121-130 of 176 papers

Title | Status | Hype
GradEscape: A Gradient-Based Evader Against AI-Generated Text Detectors | | 0
A Desynchronization-Based Countermeasure Against Side-Channel Analysis of Neural Networks | | 0
Adversarial Exploitation of Policy Imitation | | 0
Adversarial Model Extraction on Graph Neural Networks | | 0
A Framework for Double-Blind Federated Adaptation of Foundation Models | | 0
A framework for the extraction of Deep Neural Networks by leveraging public data | | 0
A Framework for Understanding Model Extraction Attack and Defense | | 0
A Knowledge Representation Approach to Automated Mathematical Modelling | | 0
An anatomy-based V1 model: Extraction of Low-level Features, Reduction of distortion and a V1-inspired SOM | | 0
An Exact Poly-Time Membership-Queries Algorithm for Extraction a three-Layer ReLU Network | | 0
Page 13 of 18

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | three-step-original | Exact Match | 0.17 | | Unverified