SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to recover the parameters of a target model, typically through query access alone. Ideally, the adversary ends up with a replica whose performance closely matches that of the target model.
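A minimal sketch of the black-box setting described above: the adversary cannot see the victim's weights, only query it for labels, then trains a surrogate on those query/label pairs. The victim here is a hypothetical linear classifier chosen for illustration; real attacks target far more complex models.

```python
import numpy as np

# Hypothetical black-box "victim": a linear classifier whose weights
# the adversary cannot see, only query for hard labels.
rng = np.random.default_rng(0)
true_w = rng.normal(size=3)  # hidden from the adversary

def victim_predict(X):
    # Simulated prediction API: returns only 0/1 labels.
    return (X @ true_w > 0).astype(float)

# Step 1: the adversary samples query inputs and collects the labels.
X_query = rng.normal(size=(2000, 3))
y_query = victim_predict(X_query)

# Step 2: train a surrogate ("stolen") model on the query/label pairs
# via plain logistic-regression gradient descent.
w = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X_query @ w)))
    w -= 0.1 * X_query.T @ (p - y_query) / len(X_query)

# Step 3: measure how often the surrogate agrees with the victim
# on fresh inputs -- the usual fidelity metric for extraction.
X_test = rng.normal(size=(1000, 3))
agreement = np.mean((X_test @ w > 0) == victim_predict(X_test))
print(f"surrogate/victim agreement: {agreement:.2%}")
```

With enough queries the surrogate's decision boundary converges to the victim's, so agreement approaches 100% even though the true weights were never exposed.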

Papers

Showing 51–75 of 176 papers

Title | Status | Hype
SAME: Sample Reconstruction against Model Extraction Attacks | Code | 0
Model Extraction Attacks Revisited | | 0
Security and Privacy Challenges in Deep Learning Models | | 0
Watermarking Vision-Language Pre-trained Models for Multi-modal Embedding as a Service | Code | 1
Army of Thieves: Enhancing Black-Box Model Extraction via Ensemble based sample selection | Code | 0
Like an Open Book? Read Neural Network Architecture with Simple Power Analysis on 32-bit Microcontrollers | | 0
Defense Against Model Extraction Attacks on Recommender Systems | Code | 0
MeaeQ: Mount Model Extraction Attacks with Efficient Queries | Code | 0
Towards dialogue based, computer aided software requirements elicitation | | 0
SCME: A Self-Contrastive Method for Data-free and Query-Limited Model Extraction Attack | | 0
Beyond Labeling Oracles: What does it mean to steal ML models? | | 0
Towards Few-Call Model Stealing via Active Self-Paced Knowledge Distillation and Diffusion-Based Image Generation | | 0
Defending against Data-Free Model Extraction by Distributionally Robust Defensive Training | | 0
Safe and Robust Watermark Injection with a Single OoD Image | Code | 0
Business Process Text Sketch Automation Generation Using Large Language Model | | 0
The Power of MEME: Adversarial Malware Creation with Model-Based Reinforcement Learning | Code | 0
Fault Injection and Safe-Error Attack for Extraction of Embedded Neural Network Models | | 0
Data-Free Model Extraction Attacks in the Context of Object Detection | | 0
Mercury: An Automated Remote Side-channel Attack to Nvidia Deep Learning Accelerator | | 0
Automated Data-Driven Model Extraction and Validation of Inverter Dynamics with Grid Support Function | | 0
GUIDO: A Hybrid Approach to Guideline Discovery & Ordering from Natural Language Texts | Code | 0
FLuID: Mitigating Stragglers in Federated Learning using Invariant Dropout | Code | 0
Pareto-Secure Machine Learning (PSML): Fingerprinting and Securing Inference Serving Systems | | 0
Weighted Automata Extraction and Explanation of Recurrent Neural Networks for Natural Language Tasks | Code | 0
Page 3 of 8

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | three-step-original | Exact Match | 0.17 | | Unverified