SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, attempt to recover the parameters or functionality of a target model, typically through query access alone. In the ideal case, the adversary obtains a replica whose performance closely matches that of the target model.
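The attack described above can be sketched end-to-end. This is a minimal, illustrative example (all names and the toy models are assumptions, not any specific paper's method): the victim holds a secret linear classifier exposed only through a label-query API, and the adversary trains a surrogate on the queried labels, then measures fidelity as the agreement rate with the target on fresh inputs.

```python
# Minimal sketch of a model extraction ("model stealing") attack.
# The target model, query API, and surrogate here are illustrative
# assumptions; real attacks target deployed ML-as-a-service models.
import numpy as np

rng = np.random.default_rng(0)

# --- Victim side: a secret linear classifier behind a query API ---
w_secret = np.array([2.0, -1.0, 0.5])  # hidden parameters

def query_target(X):
    """Black-box access: the adversary only sees predicted labels."""
    return (X @ w_secret > 0).astype(float)

# --- Adversary side: issue queries, train a surrogate on the answers ---
X_query = rng.normal(size=(2000, 3))
y_query = query_target(X_query)

# Fit a logistic-regression surrogate by plain gradient descent.
w_sur = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X_query @ w_sur)))
    grad = X_query.T @ (p - y_query) / len(X_query)
    w_sur -= 1.0 * grad

# Fidelity: agreement between surrogate and target on fresh inputs.
X_test = rng.normal(size=(2000, 3))
fidelity = ((X_test @ w_sur > 0).astype(float) == query_target(X_test)).mean()
print(f"surrogate-target agreement: {fidelity:.2f}")
```

Note that fidelity is measured against the target's outputs, not ground truth: a stolen model is judged by how closely it mimics the victim, which is why label-only query access can suffice.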

Papers

Showing 51–100 of 176 papers

Titles marked [Code] have code available.

- GENIE: Watermarking Graph Neural Networks for Link Prediction
- Watermarking Counterfactual Explanations [Code]
- Noisy Data Meets Privacy: Training Local Models with Post-Processed Remote Queries
- DeepNcode: Encoding-Based Protection against Bit-Flip Attacks on Neural Networks
- Model Reconstruction Using Counterfactual Explanations: A Perspective From Polytope Theory [Code]
- Learnable Linguistic Watermarks for Tracing Model Extraction Attacks on Large Language Models
- Knowledge Distillation-Based Model Extraction Attack using GAN-based Private Counterfactual Explanations [Code]
- QuantumLeak: Stealing Quantum Neural Networks from Cloud-based NISQ Machines
- Not Just Change the Labels, Learn the Features: Watermarking Deep Neural Networks with Multi-View Data [Code]
- Precise Extraction of Deep Learning Models via Side-Channel Attacks on Edge/Endpoint Devices
- WARDEN: Multi-Directional Backdoor Watermarks for Embedding-as-a-Service Copyright Protection [Code]
- Unraveling Attacks in Machine Learning-based IoT Ecosystems: A Survey and the Open Libraries Behind Them
- MEAOD: Model Extraction Attack against Object Detectors
- SAME: Sample Reconstruction against Model Extraction Attacks [Code]
- Model Extraction Attacks Revisited
- Security and Privacy Challenges in Deep Learning Models
- Army of Thieves: Enhancing Black-Box Model Extraction via Ensemble based sample selection [Code]
- Like an Open Book? Read Neural Network Architecture with Simple Power Analysis on 32-bit Microcontrollers
- Defense Against Model Extraction Attacks on Recommender Systems [Code]
- Towards dialogue based, computer aided software requirements elicitation
- MeaeQ: Mount Model Extraction Attacks with Efficient Queries [Code]
- SCME: A Self-Contrastive Method for Data-free and Query-Limited Model Extraction Attack
- Beyond Labeling Oracles: What does it mean to steal ML models?
- Towards Few-Call Model Stealing via Active Self-Paced Knowledge Distillation and Diffusion-Based Image Generation
- Defending against Data-Free Model Extraction by Distributionally Robust Defensive Training
- Safe and Robust Watermark Injection with a Single OoD Image [Code]
- Business Process Text Sketch Automation Generation Using Large Language Model
- The Power of MEME: Adversarial Malware Creation with Model-Based Reinforcement Learning [Code]
- Fault Injection and Safe-Error Attack for Extraction of Embedded Neural Network Models
- Data-Free Model Extraction Attacks in the Context of Object Detection
- Mercury: An Automated Remote Side-channel Attack to Nvidia Deep Learning Accelerator
- Automated Data-Driven Model Extraction and Validation of Inverter Dynamics with Grid Support Function
- GUIDO: A Hybrid Approach to Guideline Discovery & Ordering from Natural Language Texts [Code]
- FLuID: Mitigating Stragglers in Federated Learning using Invariant Dropout [Code]
- Pareto-Secure Machine Learning (PSML): Fingerprinting and Securing Inference Serving Systems
- Weighted Automata Extraction and Explanation of Recurrent Neural Networks for Natural Language Tasks [Code]
- FDINet: Protecting against DNN Model Extraction via Feature Distortion Index
- Ownership Protection of Generative Adversarial Networks
- NaturalFinger: Generating Natural Fingerprint with Generative Adversarial Networks
- Model Extraction Attacks Against Reinforcement Learning Based Controllers
- GrOVe: Ownership Verification of Graph Neural Networks using Embeddings
- EZClone: Improving DNN Model Extraction Attack via Shape Distillation from GPU Execution Profiles
- A Desynchronization-Based Countermeasure Against Side-Channel Analysis of Neural Networks
- Model Extraction Attacks on Split Federated Learning
- An anatomy-based V1 model: Extraction of Low-level Features, Reduction of distortion and a V1-inspired SOM
- A Survey on Event-based News Narrative Extraction
- Marich: A Query-efficient Distributionally Equivalent Model Extraction Attack using Public Data [Code]
- AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models
- Model Extraction Attack against Self-supervised Speech Models
Page 2 of 4

Benchmark Results

# | Model               | Metric      | Claimed | Verified | Status
1 | three-step-original | Exact Match | 0.17    |          | Unverified