SOTAVerified

Model extraction

Model extraction attacks, also known as model stealing attacks, aim to recover the parameters or functionality of a target model through query access. Ideally, the adversary obtains a replica whose performance closely matches that of the target model.
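The basic attack loop can be sketched in a few lines: query the target model on attacker-chosen inputs, record its predictions, and train a surrogate on that query transcript. A minimal illustration using scikit-learn is below; the choice of dataset, target model, and surrogate architecture here are all assumptions for demonstration, not taken from any specific paper on this page.

```python
# Minimal sketch of a black-box model extraction attack, assuming the
# adversary can only query the target for predicted labels.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
# The victim keeps its training data private; the attacker only controls
# a disjoint pool of query inputs (y for that pool is never used).
X_victim, X_attacker, y_victim, _ = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Victim trains the private target model.
target = RandomForestClassifier(n_estimators=50, random_state=0)
target.fit(X_victim, y_victim)

# Attack step 1: query the target and record its output labels.
stolen_labels = target.predict(X_attacker)

# Attack step 2: train a surrogate ("stolen") model on the transcript.
surrogate = DecisionTreeClassifier(random_state=0)
surrogate.fit(X_attacker, stolen_labels)

# Fidelity: how often the surrogate agrees with the target on other inputs.
fidelity = np.mean(surrogate.predict(X_victim) == target.predict(X_victim))
print(f"fidelity: {fidelity:.2f}")
```

In practice, attacks like those listed below refine each step: choosing or synthesizing queries (e.g. with generative models in the data-free setting), exploiting richer outputs such as probability vectors, and matching the surrogate architecture to the target.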

Papers

Showing 41–50 of 176 papers

| Title | Status | Hype |
|-------|--------|------|
| Data-Free Model Extraction Attacks in the Context of Object Detection | — | 0 |
| Data-Free Model-Related Attacks: Unleashing the Potential of Generative AI | — | 0 |
| DeepNcode: Encoding-Based Protection against Bit-Flip Attacks on Neural Networks | — | 0 |
| A Survey of Model Extraction Attacks and Defenses in Distributed Computing Environments | — | 0 |
| DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories | — | 0 |
| Defending against Data-Free Model Extraction by Distributionally Robust Defensive Training | — | 0 |
| A Framework for Double-Blind Federated Adaptation of Foundation Models | — | 0 |
| Differentially private fine-tuned NF-Net to predict GI cancer type | — | 0 |
| CaBaGe: Data-Free Model Extraction using ClAss BAlanced Generator Ensemble | — | 0 |
Page 5 of 18

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | three-step-original | Exact Match | 0.17 | — | Unverified |