SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to recover the parameters of a target model, typically through black-box queries. Ideally, the adversary obtains a replica whose performance closely matches that of the target model.
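The query-based attack described above can be sketched as follows. This is a minimal illustration, not any specific paper's method: the secret linear target, the random queries, and the perceptron substitute are all hypothetical stand-ins for a real black-box API and a real surrogate model.

```python
import random

# Hypothetical black-box target: a secret linear classifier that the
# adversary can only query for labels (no access to SECRET_W).
SECRET_W = [2.0, -1.0, 0.5]

def query_target(x):
    return 1 if sum(w * xi for w, xi in zip(SECRET_W, x)) > 0 else 0

# Step 1: label random queries with the target's outputs.
random.seed(0)
queries = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(500)]
labels = [query_target(x) for x in queries]

# Step 2: train a substitute (here a simple perceptron) on those pairs.
w = [0.0, 0.0, 0.0]
for _ in range(20):  # perceptron epochs
    for x, y in zip(queries, labels):
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
        for i in range(3):
            w[i] += (y - pred) * x[i]

def query_substitute(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

# Step 3: measure agreement with the target on fresh inputs; a
# successful extraction means the substitute mimics the target.
test = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(1000)]
agreement = sum(query_substitute(x) == query_target(x) for x in test) / len(test)
print(f"agreement: {agreement:.2f}")
```

Real attacks differ mainly in how queries are chosen (to minimize the query budget) and in the substitute architecture, but the query-label-train loop is the same.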

Papers

Showing 81–90 of 176 papers

| Title | Status | Hype |
|---|---|---|
| GrOVe: Ownership Verification of Graph Neural Networks using Embeddings | — | 0 |
| EZClone: Improving DNN Model Extraction Attack via Shape Distillation from GPU Execution Profiles | — | 0 |
| A Desynchronization-Based Countermeasure Against Side-Channel Analysis of Neural Networks | — | 0 |
| Model Extraction Attacks on Split Federated Learning | — | 0 |
| An anatomy-based V1 model: Extraction of Low-level Features, Reduction of distortion and a V1-inspired SOM | — | 0 |
| Marich: A Query-efficient Distributionally Equivalent Model Extraction Attack using Public Data | Code | 0 |
| A Survey on Event-based News Narrative Extraction | — | 0 |
| Protecting Language Generation Models via Invisible Watermarking | Code | 1 |
| AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models | — | 0 |
| FedRolex: Model-Heterogeneous Federated Learning with Rolling Sub-Model Extraction | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | three-step-original | Exact Match | 0.17 | — | Unverified |