SOTAVerified

Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have a higher knowledge capacity than small models, that capacity is frequently not fully utilized, so a compact student trained to reproduce the large model's behavior can often retain much of its accuracy at a fraction of the inference cost.
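
For reference, the sketch below illustrates the classic soft-target formulation of knowledge distillation (Hinton et al., 2015), in which the student is trained to match the teacher's temperature-softened output distribution in addition to fitting the ground-truth labels. This is a minimal PyTorch sketch, not the method of any specific paper listed below; the temperature and weighting defaults are illustrative assumptions.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Soft-target distillation loss: a weighted blend of
    KL(teacher || student) at temperature T and ordinary cross-entropy
    on the hard labels. Defaults are illustrative, not taken from any
    paper listed on this page.
    """
    # Temperature-softened distributions; the student side is in
    # log-space because F.kl_div expects log-probabilities as input.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)

    # The T^2 factor keeps the soft-target gradients on the same scale
    # as the hard-label term, as in the original formulation.
    soft_loss = F.kl_div(log_p_student, p_teacher,
                         reduction="batchmean") * temperature ** 2
    hard_loss = F.cross_entropy(student_logits, labels)

    # alpha trades off imitating the teacher against fitting the labels.
    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```

In a typical setup the teacher runs in eval mode with gradients disabled, and only the student's parameters are updated with this loss.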

Papers

Showing 4051–4100 of 4240 papers

Title | Status | Hype
SCAN: A Scalable Neural Networks Framework Towards Compact and Efficient Models | Code | 0
On Exploring Pose Estimation as an Auxiliary Learning Task for Visible-Infrared Person Re-identification | Code | 0
Weakly Supervised Change Detection via Knowledge Distillation and Multiscale Sigmoid Inference | Code | 0
Low-Cost Self-Ensembles Based on Multi-Branch Transformation and Grouped Convolution | Code | 0
FedBrain-Distill: Communication-Efficient Federated Brain Tumor Classification Using Ensemble Knowledge Distillation on Non-IID Data | Code | 0
A Dual-Contrastive Framework for Low-Resource Cross-Lingual Named Entity Recognition | Code | 0
SynthDistill: Face Recognition with Knowledge Distillation from Synthetic Data | Code | 0
Synthetic data generation method for data-free knowledge distillation in regression neural networks | Code | 0
FedBKD: Distilled Federated Learning to Embrace Generalization and Personalization on Non-IID Data | Code | 0
Online Adversarial Knowledge Distillation for Graph Neural Networks | Code | 0
Towards Low-latency Event-based Visual Recognition with Hybrid Step-wise Distillation Spiking Neural Networks | Code | 0
Towards Low-Latency Event Stream-based Visual Object Tracking: A Slow-Fast Approach | Code | 0
Tackling Data Heterogeneity in Federated Learning through Knowledge Distillation with Inequitable Aggregation | Code | 0
SCJD: Sparse Correlation and Joint Distillation for Efficient 3D Human Pose Estimation | Code | 0
SCKD: Semi-Supervised Cross-Modality Knowledge Distillation for 4D Radar Object Detection | Code | 0
Online Ensemble Model Compression using Knowledge Distillation | Code | 0
Understanding the Effect of Model Compression on Social Bias in Large Language Models | Code | 0
Feature Representation Learning for Robust Retinal Disease Detection from Optical Coherence Tomography Images | Code | 0
Feature Normalized Knowledge Distillation for Image Classification | Code | 0
An Embarrassingly Simple Approach for Knowledge Distillation | Code | 0
Declarative Knowledge Distillation from Large Language Models for Visual Question Answering Datasets | Code | 0
Feature Fusion for Online Mutual Knowledge Distillation | Code | 0
Online Knowledge Distillation with Diverse Peers | Code | 0
Dealing With Heterogeneous 3D MR Knee Images: A Federated Few-Shot Learning Method With Dual Knowledge Distillation | Code | 0
Towards Mitigating Architecture Overfitting on Distilled Datasets | Code | 0
Bridging the Gap between Decision and Logits in Decision-based Knowledge Distillation for Pre-trained Language Models | Code | 0
Faster gaze prediction with dense networks and Fisher pruning | Code | 0
AMR-Evol: Adaptive Modular Response Evolution Elicits Better Knowledge Distillation for Large Language Models in Code Generation | Code | 0
FastAST: Accelerating Audio Spectrogram Transformer via Token Merging and Cross-Model Knowledge Distillation | Code | 0
On Membership Inference Attacks in Knowledge Distillation | Code | 0
TAKE: Topic-shift Aware Knowledge sElection for Dialogue Generation | Code | 0
Towards Multi-Morphology Controllers with Diversity and Knowledge Distillation | Code | 0
VECT-GAN: A variationally encoded generative model for overcoming data scarcity in pharmaceutical science | Code | 0
Fantastic Gains and Where to Find Them: On the Existence and Prospect of General Knowledge Transfer between Any Pretrained Model | Code | 0
DCA: Dividing and Conquering Amnesia in Incremental Object Detection | Code | 0
SecFormer: Fast and Accurate Privacy-Preserving Inference for Transformer Models via SMPC | Code | 0
Bridging Modalities: Knowledge Distillation and Masked Training for Translating Multi-Modal Emotion Recognition to Uni-Modal, Speech-Only Emotion Recognition | Code | 0
On the Byzantine-Resilience of Distillation-Based Federated Learning | Code | 0
Multi-Teacher Language-Aware Knowledge Distillation for Multilingual Speech Emotion Recognition | Code | 0
Understanding the Role of Mixup in Knowledge Distillation: An Empirical Study | Code | 0
Distilled Circuits: A Mechanistic Study of Internal Restructuring in Knowledge Distillation | Code | 0
FANFOLD: Graph Normalizing Flows-driven Asymmetric Network for Unsupervised Graph-Level Anomaly Detection | Code | 0
Data Upcycling Knowledge Distillation for Image Super-Resolution | Code | 0
On the Efficacy of Small Self-Supervised Contrastive Models without Distillation Signals | Code | 0
AMLNet: Adversarial Mutual Learning Neural Network for Non-AutoRegressive Multi-Horizon Time Series Forecasting | Code | 0
On the Generalization vs Fidelity Paradox in Knowledge Distillation | Code | 0
Segmenting the Future | Code | 0
SeizureNet: Multi-Spectral Deep Feature Learning for Seizure Type Classification | Code | 0
Attention-Based Depth Distillation with 3D-Aware Positional Encoding for Monocular 3D Object Detection | Code | 0
Bridging Dimensions: Confident Reachability for High-Dimensional Controllers | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T:BEiT-L S:ViT-B/14) | Top-1 accuracy (%) | 86.43 | – | Unverified
2 | ScaleKD (T:Swin-L S:ViT-B/16) | Top-1 accuracy (%) | 85.53 | – | Unverified
3 | ScaleKD (T:Swin-L S:ViT-S/16) | Top-1 accuracy (%) | 83.93 | – | Unverified
4 | ScaleKD (T:Swin-L S:Swin-T) | Top-1 accuracy (%) | 83.8 | – | Unverified
5 | KD++ (T: regnety-16GF S:ViT-B) | Top-1 accuracy (%) | 83.6 | – | Unverified
6 | VkD (T:RegNety 160 S:DeiT-S) | Top-1 accuracy (%) | 82.9 | – | Unverified
7 | SpectralKD (T:Swin-S S:Swin-T) | Top-1 accuracy (%) | 82.7 | – | Unverified
8 | ScaleKD (T:Swin-L S:ResNet-50) | Top-1 accuracy (%) | 82.55 | – | Unverified
9 | DiffKD (T:Swin-L S: Swin-T) | Top-1 accuracy (%) | 82.5 | – | Unverified
10 | DIST (T: Swin-L S: Swin-T) | Top-1 accuracy (%) | 82.3 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SRD (T:resnet-32x4, S:shufflenet-v2) | Top-1 Accuracy (%) | 79.86 | – | Unverified
2 | shufflenet-v2 (T:resnet-32x4, S:shufflenet-v2) | Top-1 Accuracy (%) | 78.76 | – | Unverified
3 | MV-MR (T: CLIP/ViT-B-16 S: resnet50) | Top-1 Accuracy (%) | 78.6 | – | Unverified
4 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 78.28 | – | Unverified
5 | resnet8x4 (T: resnet32x4 S: resnet8x4 [modified]) | Top-1 Accuracy (%) | 78.08 | – | Unverified
6 | ReviewKD++ (T:resnet-32x4, S:shufflenet-v2) | Top-1 Accuracy (%) | 77.93 | – | Unverified
7 | ReviewKD++ (T:resnet-32x4, S:shufflenet-v1) | Top-1 Accuracy (%) | 77.68 | – | Unverified
8 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 77.5 | – | Unverified
9 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 76.68 | – | Unverified
10 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 76.31 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101 S: ResNet50) | mAP | 93.17 | – | Unverified
2 | LSHFM (T: ResNet101 S: MobileNetV2) | mAP | 90.14 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins S: MobileNetV2) | RMSE | 2.43 | – | Unverified