SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to recover the parameters or functionality of a target model by querying it. Ideally, the adversary obtains a replica whose performance closely matches that of the target model.
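The basic query-then-imitate loop can be sketched as follows. This is a minimal illustrative example, not the method of any paper listed below: the black-box "target" is a secret linear classifier, the adversary queries it with random inputs, and trains a simple perceptron surrogate on the returned labels. All names (`target_api`, `SECRET_W`, etc.) are assumptions for this sketch.

```python
import random

random.seed(0)

# Secret target model: the adversary never sees these weights,
# only the labels returned by the query API below.
SECRET_W = [1.5, -2.0]
SECRET_B = 0.3

def target_api(x):
    """Black-box API: returns only the predicted label (0 or 1)."""
    score = sum(w * xi for w, xi in zip(SECRET_W, x)) + SECRET_B
    return 1 if score >= 0 else 0

# Step 1: the adversary issues queries (here, random inputs).
queries = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(2000)]
labels = [target_api(x) for x in queries]

# Step 2: train a surrogate (a simple perceptron) on the stolen labels.
w, b = [0.0, 0.0], 0.0
for _ in range(20):                      # a few passes over the query set
    for x, y in zip(queries, labels):
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
        err = y - pred                   # -1, 0, or +1
        if err:
            w = [wi + 0.1 * err * xi for wi, xi in zip(w, x)]
            b += 0.1 * err

# Step 3: measure agreement (fidelity) between surrogate and target
# on fresh inputs -- the usual success metric for extraction attacks.
test = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(1000)]
agree = sum(
    (1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0) == target_api(x)
    for x in test
)
fidelity = agree / len(test)
print(f"surrogate/target agreement: {fidelity:.2f}")
```

Real attacks replace the random queries with carefully chosen or public data (query efficiency is a core theme of several papers below), and the surrogate with a neural network, but the structure is the same.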

Papers

Showing 51–100 of 176 papers

Title | Status | Hype
Marich: A Query-efficient Distributionally Equivalent Model Extraction Attack using Public Data | Code | 0
Process Extraction from Text: Benchmarking the State of the Art and Paving the Way for Future Challenges | Code | 0
Protecting Intellectual Property of Language Generation APIs with Lexical Watermark | Code | 0
MeaeQ: Mount Model Extraction Attacks with Efficient Queries | Code | 0
FLuID: Mitigating Stragglers in Federated Learning using Invariant Dropout | Code | 0
Safe and Robust Watermark Injection with a Single OoD Image | Code | 0
SAME: Sample Reconstruction against Model Extraction Attacks | Code | 0
Fraternal Twins: Unifying Attacks on Machine Learning and Digital Watermarking | | 0
GENIE: Watermarking Graph Neural Networks for Link Prediction | | 0
Good Artists Copy, Great Artists Steal: Model Extraction Attacks Against Image Translation Models | | 0
Grey-box Extraction of Natural Language Models | | 0
GrOVe: Ownership Verification of Graph Neural Networks using Embeddings | | 0
HODA: Hardness-Oriented Detection of Model Extraction Attacks | | 0
High Accuracy and High Fidelity Extraction of Neural Networks | | 0
HODA: Protecting DNNs Against Model Extraction Attacks via Hardness of Samples | | 0
HoneypotNet: Backdoor Attacks Against Model Extraction | | 0
Increasing the Cost of Model Extraction with Calibrated Proof of Work | | 0
Interpretability via Model Extraction | | 0
Interpreting Blackbox Models via Model Extraction | | 0
Killing One Bird with Two Stones: Model Extraction and Attribute Inference Attacks against BERT-based APIs | | 0
Noisy Data Meets Privacy: Training Local Models with Post-Processed Remote Queries | | 0
Learnable Linguistic Watermarks for Tracing Model Extraction Attacks on Large Language Models | | 0
Leveraging Extracted Model Adversaries for Improved Black Box Attacks | | 0
Like an Open Book? Read Neural Network Architecture with Simple Power Analysis on 32-bit Microcontrollers | | 0
MaskedNet: The First Hardware Inference Engine Aiming Power Side-Channel Protection | | 0
MEAOD: Model Extraction Attack against Object Detectors | | 0
MEGEX: Data-Free Model Extraction Attack against Gradient-Based Explainable AI | | 0
Mercury: An Automated Remote Side-channel Attack to Nvidia Deep Learning Accelerator | | 0
Mitigating Query-Flooding Parameter Duplication Attack on Regression Models with High-Dimensional Gaussian Mechanism | | 0
Model Extraction and Adversarial Attacks on Neural Networks using Switching Power Information | | 0
Model Extraction and Defenses on Generative Adversarial Networks | | 0
Model Extraction Attack against Self-supervised Speech Models | | 0
Model Extraction Attacks Against Reinforcement Learning Based Controllers | | 0
Model Extraction Attacks against Recurrent Neural Networks | | 0
Model Extraction Attacks on Split Federated Learning | | 0
Model Extraction Attacks Revisited | | 0
Model Extraction Warning in MLaaS Paradigm | | 0
Monitoring-based Differential Privacy Mechanism Against Query-Flooding Parameter Duplication Attack | | 0
NASPY: Automated Extraction of Automated Machine Learning Models | | 0
NaturalFinger: Generating Natural Fingerprint with Generative Adversarial Networks | | 0
Navigating the Deep: Signature Extraction on Deep Neural Networks | | 0
On the amplification of security and privacy risks by post-hoc explanations in machine learning models | | 0
On the interplay of Explainability, Privacy and Predictive Performance with Explanation-assisted Model Extraction | | 0
Ownership Protection of Generative Adversarial Networks | | 0
Pareto-Secure Machine Learning (PSML): Fingerprinting and Securing Inference Serving Systems | | 0
Power-Based Attacks on Spatial DNN Accelerators | | 0
Precise Extraction of Deep Learning Models via Side-Channel Attacks on Edge/Endpoint Devices | | 0
Privacy Implications of Explainable AI in Data-Driven Systems | | 0
ProDiF: Protecting Domain-Invariant Features to Secure Pre-Trained Models Against Extraction | | 0
Protecting Copyright of Medical Pre-trained Language Models: Training-Free Backdoor Model Watermarking | | 0
Page 2 of 4

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | three-step-original | Exact Match | 0.17 | | Unverified