SOTA Verified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to recover the parameters of a target model through query access alone. Ideally, the adversary replicates a model whose performance closely matches that of the target model.
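The query-then-replicate idea can be sketched in a few lines. This is a toy illustration, not the method of any paper listed below: the victim is a hypothetical linear model exposed only through a black-box prediction endpoint, and the adversary fits a surrogate to its query responses by least squares.

```python
# Toy sketch of a black-box model extraction attack.
# victim_api and secret_w are illustrative assumptions, not a real API.
import numpy as np

rng = np.random.default_rng(0)

# Victim: a secret linear model; the adversary never sees secret_w.
secret_w = rng.normal(size=3)

def victim_api(x):
    """Black-box prediction endpoint; parameters stay hidden."""
    return x @ secret_w

# Adversary: query the endpoint on chosen inputs, record responses.
queries = rng.normal(size=(100, 3))
responses = np.array([victim_api(x) for x in queries])

# Fit a surrogate to the (query, response) pairs via least squares.
stolen_w, *_ = np.linalg.lstsq(queries, responses, rcond=None)

# The surrogate now mimics the victim on fresh inputs.
test = rng.normal(size=(10, 3))
print(np.allclose(test @ stolen_w, victim_api(test), atol=1e-6))
```

For a noiseless linear victim the fit is exact; real attacks on nonlinear models instead train a surrogate network on the query responses and only approximate the target.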

Papers

Showing 51–60 of 176 papers

Title | Status | Hype
VidModEx: Interpretable and Efficient Black Box Model Extraction for High-Dimensional Spaces | Code | 0
FLuID: Mitigating Stragglers in Federated Learning using Invariant Dropout | Code | 0
Efficient and Effective Model Extraction | Code | 0
An Approach for Process Model Extraction By Multi-Grained Text Classification | Code | 0
From Counterfactuals to Trees: Competitive Analysis of Model Extraction Attacks | Code | 0
MeaeQ: Mount Model Extraction Attacks with Efficient Queries | Code | 0
Thieves on Sesame Street! Model Extraction of BERT-based APIs | Code | 0
Entangled Threats: A Unified Kill Chain Model for Quantum Machine Learning Security | — | 0
Student Surpasses Teacher: Imitation Attack for Black-Box NLP APIs | — | 0
An anatomy-based V1 model: Extraction of Low-level Features, Reduction of distortion and a V1-inspired SOM | — | 0
Page 6 of 18

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | three-step-original | Exact Match | 0.17 | — | Unverified