
Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity might not be fully utilized.
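
In the most common recipe (Hinton et al., 2015), the student is trained to match the teacher's temperature-softened output distribution in addition to the ground-truth labels. Below is a minimal sketch of that loss, assuming PyTorch; the temperature and weighting defaults are illustrative and not tied to any paper listed below.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.9):
    """Blend hard-label cross-entropy with a KL term that pulls the
    student's softened distribution toward the teacher's."""
    # The teacher only provides targets; no gradients flow into it.
    teacher_logits = teacher_logits.detach()
    # Temperature > 1 exposes the teacher's relative class similarities.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The T^2 factor keeps the soft-target gradients on the same scale
    # as the hard-label term (Hinton et al., 2015).
    kd = F.kl_div(log_student, soft_targets,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Usage sketch; `teacher` and `student` are hypothetical classifiers:
# with torch.no_grad():
#     t_logits = teacher(images)
# loss = distillation_loss(student(images), t_logits, labels)
```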

Papers

Showing 2701–2750 of 4240 papers (page 55 of 85)

Title | Status | Hype
MiniDisc: Minimal Distillation Schedule for Language Model Compression | Code | 0
Divide to Adapt: Mitigating Confirmation Bias for Domain Adaptation of Black-Box Predictors | Code | 1
One Reference Is Not Enough: Diverse Distillation with Reference Selection for Non-Autoregressive Translation | Code | 0
Parameter-Efficient and Student-Friendly Knowledge Distillation | – | 0
Geometer: Graph Few-Shot Class-Incremental Learning via Prototype Representation | Code | 1
Continual evaluation for lifelong learning: Identifying the stability gap | Code | 1
Region-aware Knowledge Distillation for Efficient Image-to-Image Translation | – | 0
Do we need Label Regularization to Fine-tune Pre-trained Language Models? | – | 0
DFM: Dialogue Foundation Model for Universal Large-Scale Dialogue-Oriented Task Learning | – | 0
Compressing Deep Graph Neural Networks via Adversarial Knowledge Distillation | Code | 1
Optimizing Performance of Federated Person Re-identification: Benchmarking and Analysis | Code | 1
CDFKD-MFS: Collaborative Data-free Knowledge Distillation via Multi-level Feature Sharing | Code | 0
IDEAL: Query-Efficient Data-Free Learning from Black-box Models | Code | 1
Boosting Multi-Label Image Classification with Complementary Parallel Self-Distillation | Code | 1
PointDistiller: Structured Knowledge Distillation Towards Efficient and Compact 3D Detection | Code | 1
LILA-BOTI: Leveraging Isolated Letter Accumulations By Ordering Teacher Insights for Bangla Handwriting Recognition | Code | 0
Knowledge Distillation via the Target-aware Transformer | Code | 1
Aligning Logits Generatively for Principled Black-Box Knowledge Distillation | Code | 0
Knowledge Distillation from A Stronger Teacher | Code | 1
Exploring Extreme Parameter Compression for Pre-trained Language Models | Code | 1
InDistill: Information flow-preserving knowledge distillation for model compression | Code | 0
Simple Regularisation for Uncertainty-Aware Knowledge Distillation | – | 0
ERNIE-Search: Bridging Cross-Encoder with Dual-Encoder via Self On-the-fly Distillation for Dense Passage Retrieval | – | 0
Prompting to Distill: Boosting Data-Free Knowledge Distillation via Reinforced Prompt | – | 0
Chemical transformer compression for accelerating both training and inference of molecular modeling | Code | 0
Directed Acyclic Transformer for Non-Autoregressive Machine Translation | Code | 1
Not to Overfit or Underfit the Source Domains? An Empirical Study of Domain Generalization in Question Answering | – | 0
Knowledge Distillation Meets Open-Set Semi-Supervised Learning | Code | 1
"Teaching Independent Parts Separately" (TIPSy-GAN): Improving Accuracy and Stability in Unsupervised Adversarial 2D to 3D Pose Estimation | – | 0
D3T-GAN: Data-Dependent Domain Transfer GANs for Few-shot Image Generation | – | 0
Knowledge Distillation for Multi-Target Domain Adaptation in Real-Time Person Re-Identification | Code | 0
DistilProtBert: A distilled protein language model used to distinguish between real proteins and their randomly shuffled counterparts | Code | 1
Incremental-DETR: Incremental Few-Shot Object Detection via Self-Supervised Learning | – | 0
Data-Free Adversarial Knowledge Distillation for Graph Neural Networks | – | 0
ConceptDistil: Model-Agnostic Distillation of Concept Explanations | – | 0
Automatic Block-wise Pruning with Auxiliary Gating Structures for Deep Convolutional Neural Networks | – | 0
Distilling Inter-Class Distance for Semantic Segmentation | – | 0
Collective Relevance Labeling for Passage Retrieval | Code | 0
Alignahead: Online Cross-Layer Knowledge Extraction on Graph Neural Networks | Code | 0
Holistic Approach to Measure Sample-level Adversarial Vulnerability and its Utility in Building Trustworthy Systems | – | 0
A Deep Reinforcement Learning Framework for Rapid Diagnosis of Whole Slide Pathological Images | – | 0
Spot-adaptive Knowledge Distillation | Code | 1
FedSPLIT: One-Shot Federated Recommendation System Based on Non-negative Joint Matrix Factorization and Knowledge Distillation | – | 0
Attention-based Knowledge Distillation in Multi-attention Tasks: The Impact of a DCT-driven Loss | – | 0
Knowledge Distillation of Russian Language Models with Reduction of Vocabulary | Code | 0
Generalized Knowledge Distillation via Relationship Matching | Code | 0
Masked Generative Distillation | Code | 2
FedDKD: Federated Learning with Decentralized Knowledge Distillation | – | 0
Redistributing Low-Frequency Words: Making the Most of Monolingual Data in Non-Autoregressive Translation | Code | 0
Knowledge Distillation Meets Few-Shot Learning: An Approach for Few-Shot Intent Classification Within and Across Domains | – | 0

Benchmark Results
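
In the tables below, "T:" denotes the teacher model and "S:" the student; the Verified column is empty ("–") because none of these claimed results has yet been independently reproduced.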

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | – | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | – | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | – | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | – | Unverified
5 | KD++ (T: RegNetY-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | – | Unverified
6 | VkD (T: RegNetY-160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | – | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | – | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | – | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | – | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 79.86 | – | Unverified
2 | shufflenet-v2 (T: resnet32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 78.76 | – | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 accuracy (%) | 78.6 | – | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 78.28 | – | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 accuracy (%) | 78.08 | – | Unverified
6 | ReviewKD++ (T: resnet32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 77.93 | – | Unverified
7 | ReviewKD++ (T: resnet32x4, S: shufflenet-v1) | Top-1 accuracy (%) | 77.68 | – | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 77.5 | – | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.68 | – | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.31 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | – | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins, S: MobileNetV2) | RMSE | 2.43 | – | Unverified
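
Moving a row from Unverified to Verified amounts to re-running the released checkpoint on the benchmark's validation split and comparing the measured metric with the claimed one. Here is a minimal sketch of such a top-1 accuracy check, assuming PyTorch; `student` and `val_loader` are hypothetical placeholders, not artifacts released with these papers.

```python
import torch

@torch.no_grad()
def top1_accuracy(model, loader, device="cuda"):
    """Percentage of samples whose argmax prediction matches the label."""
    model.eval().to(device)
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        preds = model(images).argmax(dim=-1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return 100.0 * correct / total

# Hypothetical check against a claimed number from the tables above:
# measured = top1_accuracy(student, val_loader)
# print(f"measured {measured:.2f} vs claimed 82.3")
```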