SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to extract the parameters of a target model. Ideally, the adversary can steal and replicate a model whose performance closely matches that of the target model.
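The basic attack loop can be illustrated with a minimal sketch: the adversary queries a black-box model, records the responses, and fits a surrogate to the query/response pairs. Everything below is a hypothetical toy setup (a hidden linear "target" and a least-squares surrogate), not any specific attack from the papers listed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box target: the adversary can only call it, not read its weights.
# (A stand-in for a deployed prediction API; real targets are far more complex.)
_secret_w = rng.normal(size=5)
def target_predict(X):
    return X @ _secret_w

# Step 1: send queries to the target and record its responses.
X_queries = rng.normal(size=(200, 5))
y_responses = target_predict(X_queries)

# Step 2: train a surrogate model on the collected query/response pairs.
stolen_w, *_ = np.linalg.lstsq(X_queries, y_responses, rcond=None)

# Step 3: check how closely the surrogate replicates the target on fresh inputs.
X_test = rng.normal(size=(50, 5))
max_gap = np.max(np.abs(X_test @ stolen_w - target_predict(X_test)))
print(max_gap < 1e-8)  # → True: the surrogate matches the target almost exactly
```

For a linear target with enough independent queries the surrogate recovers the parameters exactly; for neural networks the same query-then-fit loop only approximates the target, which is why query budgets, query synthesis, and side channels are central themes in the papers below.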

Papers

Showing 51–100 of 176 papers

EZClone: Improving DNN Model Extraction Attack via Shape Distillation from GPU Execution Profiles
Fault Injection and Safe-Error Attack for Extraction of Embedded Neural Network Models
FDINet: Protecting against DNN Model Extraction via Feature Distortion Index
On the interplay of Explainability, Privacy and Predictive Performance with Explanation-assisted Model Extraction
GradEscape: A Gradient-Based Evader Against AI-Generated Text Detectors
A Desynchronization-Based Countermeasure Against Side-Channel Analysis of Neural Networks
Adversarial Exploitation of Policy Imitation
Adversarial Model Extraction on Graph Neural Networks
A Framework for Double-Blind Federated Adaptation of Foundation Models
A framework for the extraction of Deep Neural Networks by leveraging public data
A Framework for Understanding Model Extraction Attack and Defense
A Knowledge Representation Approach to Automated Mathematical Modelling
An anatomy-based V1 model: Extraction of Low-level Features, Reduction of distortion and a V1-inspired SOM
An Exact Poly-Time Membership-Queries Algorithm for Extraction a three-Layer ReLU Network
A Novel Watermarking Framework for Ownership Verification of DNN Architectures
A Practical Introduction to Side-Channel Extraction of Deep Neural Network Parameters
A Review of Confidentiality Threats Against Embedded Neural Network Models
A Survey of Model Extraction Attacks and Defenses in Distributed Computing Environments
A Survey on Event-based News Narrative Extraction
AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models
Automated Data-Driven Model Extraction and Validation of Inverter Dynamics with Grid Support Function
Model Extraction and Defenses on Generative Adversarial Networks
Model Extraction Attack against Self-supervised Speech Models
Model Extraction Attacks Against Reinforcement Learning Based Controllers
Model Extraction Attacks against Recurrent Neural Networks
Model Extraction Attacks on Split Federated Learning
Model Extraction Attacks Revisited
Model Extraction Warning in MLaaS Paradigm
Monitoring-based Differential Privacy Mechanism Against Query-Flooding Parameter Duplication Attack
NASPY: Automated Extraction of Automated Machine Learning Models
NaturalFinger: Generating Natural Fingerprint with Generative Adversarial Networks
Navigating the Deep: Signature Extraction on Deep Neural Networks
On the amplification of security and privacy risks by post-hoc explanations in machine learning models
Ownership Protection of Generative Adversarial Networks
Pareto-Secure Machine Learning (PSML): Fingerprinting and Securing Inference Serving Systems
Power-Based Attacks on Spatial DNN Accelerators
Precise Extraction of Deep Learning Models via Side-Channel Attacks on Edge/Endpoint Devices
Privacy Implications of Explainable AI in Data-Driven Systems
ProDiF: Protecting Domain-Invariant Features to Secure Pre-Trained Models Against Extraction
Protecting Copyright of Medical Pre-trained Language Models: Training-Free Backdoor Model Watermarking
Quantifying (Hyper) Parameter Leakage in Machine Learning
QuantumLeak: Stealing Quantum Neural Networks from Cloud-based NISQ Machines
QUEEN: Query Unlearning against Model Extraction
Revealing Secrets From Pre-trained Models
SCME: A Self-Contrastive Method for Data-free and Query-Limited Model Extraction Attack
Security and Privacy Challenges in Deep Learning Models
Seeds Don't Lie: An Adaptive Watermarking Framework for Computer Vision Models
SEEK: model extraction attack against hybrid secure inference protocols
Sparsity-driven Digital Terrain Model Extraction
Split HE: Fast Secure Inference Combining Split Learning and Homomorphic Encryption
Page 2 of 4

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | three-step-original | Exact Match | 0.17 | — | Unverified |