SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, attempt to recover the parameters of a target model, or to build a functionally equivalent copy of it. Ideally, the adversary ends up with a replica whose performance closely matches that of the target model.
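A minimal sketch of the query-based variant of such an attack: the adversary has only black-box access to the victim's prediction API, queries it on inputs of their choosing, and trains a local surrogate on the returned labels. All names and model choices below are illustrative, not taken from any of the listed papers.

```python
# Hypothetical sketch of a query-based model extraction attack.
# The adversary can only call victim.predict(), never inspect its weights.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in "victim" model; in a real attack this lives behind an API.
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
victim = LogisticRegression().fit(X_train, y_train)

# Attack step 1: spend a query budget and harvest the victim's labels.
queries = rng.normal(size=(1000, 4))
stolen_labels = victim.predict(queries)

# Attack step 2: train a surrogate on the query/label pairs.
surrogate = DecisionTreeClassifier(max_depth=5).fit(queries, stolen_labels)

# Success metric: agreement between surrogate and victim on fresh inputs.
X_test = rng.normal(size=(1000, 4))
agreement = (surrogate.predict(X_test) == victim.predict(X_test)).mean()
print(f"surrogate/victim agreement: {agreement:.2f}")
```

Many of the papers below vary exactly these knobs: how queries are chosen (active selection, public data, data-free generation), how many queries are needed, and how defenders can watermark or perturb outputs to detect or degrade the surrogate.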

Papers

Showing 51–100 of 176 papers

| Title | Status | Hype |
|---|---|---|
| SAME: Sample Reconstruction against Model Extraction Attacks | Code | 0 |
| Model Extraction Attacks Revisited | | 0 |
| Security and Privacy Challenges in Deep Learning Models | | 0 |
| Watermarking Vision-Language Pre-trained Models for Multi-modal Embedding as a Service | Code | 1 |
| Army of Thieves: Enhancing Black-Box Model Extraction via Ensemble based sample selection | Code | 0 |
| Like an Open Book? Read Neural Network Architecture with Simple Power Analysis on 32-bit Microcontrollers | | 0 |
| Defense Against Model Extraction Attacks on Recommender Systems | Code | 0 |
| MeaeQ: Mount Model Extraction Attacks with Efficient Queries | Code | 0 |
| Towards dialogue based, computer aided software requirements elicitation | | 0 |
| SCME: A Self-Contrastive Method for Data-free and Query-Limited Model Extraction Attack | | 0 |
| Beyond Labeling Oracles: What does it mean to steal ML models? | | 0 |
| Towards Few-Call Model Stealing via Active Self-Paced Knowledge Distillation and Diffusion-Based Image Generation | | 0 |
| Defending against Data-Free Model Extraction by Distributionally Robust Defensive Training | | 0 |
| Safe and Robust Watermark Injection with a Single OoD Image | Code | 0 |
| Business Process Text Sketch Automation Generation Using Large Language Model | | 0 |
| The Power of MEME: Adversarial Malware Creation with Model-Based Reinforcement Learning | Code | 0 |
| Fault Injection and Safe-Error Attack for Extraction of Embedded Neural Network Models | | 0 |
| Data-Free Model Extraction Attacks in the Context of Object Detection | | 0 |
| Mercury: An Automated Remote Side-channel Attack to Nvidia Deep Learning Accelerator | | 0 |
| Automated Data-Driven Model Extraction and Validation of Inverter Dynamics with Grid Support Function | | 0 |
| GUIDO: A Hybrid Approach to Guideline Discovery & Ordering from Natural Language Texts | Code | 0 |
| FLuID: Mitigating Stragglers in Federated Learning using Invariant Dropout | Code | 0 |
| Pareto-Secure Machine Learning (PSML): Fingerprinting and Securing Inference Serving Systems | | 0 |
| Weighted Automata Extraction and Explanation of Recurrent Neural Networks for Natural Language Tasks | Code | 0 |
| FDINet: Protecting against DNN Model Extraction via Feature Distortion Index | | 0 |
| Ownership Protection of Generative Adversarial Networks | | 0 |
| NaturalFinger: Generating Natural Fingerprint with Generative Adversarial Networks | | 0 |
| Are You Copying My Model? Protecting the Copyright of Large Language Models for EaaS via Backdoor Watermark | Code | 1 |
| Model Extraction Attacks Against Reinforcement Learning Based Controllers | | 0 |
| GrOVe: Ownership Verification of Graph Neural Networks using Embeddings | | 0 |
| EZClone: Improving DNN Model Extraction Attack via Shape Distillation from GPU Execution Profiles | | 0 |
| A Desynchronization-Based Countermeasure Against Side-Channel Analysis of Neural Networks | | 0 |
| Model Extraction Attacks on Split Federated Learning | | 0 |
| An anatomy-based V1 model: Extraction of Low-level Features, Reduction of distortion and a V1-inspired SOM | | 0 |
| Marich: A Query-efficient Distributionally Equivalent Model Extraction Attack using Public Data | Code | 0 |
| A Survey on Event-based News Narrative Extraction | | 0 |
| Protecting Language Generation Models via Invisible Watermarking | Code | 1 |
| AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models | | 0 |
| FedRolex: Model-Heterogeneous Federated Learning with Rolling Sub-Model Extraction | Code | 1 |
| Model Extraction Attack against Self-supervised Speech Models | | 0 |
| Seeds Don't Lie: An Adaptive Watermarking Framework for Computer Vision Models | | 0 |
| A Practical Introduction to Side-Channel Extraction of Deep Neural Network Parameters | | 0 |
| Towards Automatically Extracting UML Class Diagrams from Natural Language Specifications | Code | 0 |
| SEEK: model extraction attack against hybrid secure inference protocols | | 0 |
| DynaMarks: Defending Against Deep Learning Model Extraction Using Dynamic Watermarking | | 0 |
| Revealing Secrets From Pre-trained Models | | 0 |
| EVE: Environmental Adaptive Neural Network Models for Low-power Energy Harvesting System | | 0 |
| On the amplification of security and privacy risks by post-hoc explanations in machine learning models | | 0 |
| A Framework for Understanding Model Extraction Attack and Defense | | 0 |
Page 2 of 4

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | three-step-original | Exact Match | 0.17 | | Unverified |