
Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity might not be fully utilized.
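
As a concrete illustration of the idea, below is a minimal sketch of the classic soft-target distillation loss (Hinton et al., 2015), assuming PyTorch; the temperature `T` and mixing weight `alpha` are illustrative defaults, not values taken from any paper listed on this page.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Soft-target knowledge distillation (Hinton et al., 2015).

    Mixes a KL term between temperature-softened teacher and student
    distributions with the usual cross-entropy on ground-truth labels.
    """
    # Soften both distributions with temperature T; the T^2 factor keeps
    # the gradient magnitude of the soft term comparable across choices of T.
    soft_term = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard_term = F.cross_entropy(student_logits, labels)
    return alpha * soft_term + (1.0 - alpha) * hard_term

# Typical usage: the teacher is frozen and only the student is updated.
# with torch.no_grad():
#     t_logits = teacher(x)
# loss = distillation_loss(student(x), t_logits, y)
```

In practice the teacher runs in eval mode with gradients disabled, so only the student's parameters receive updates.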

Papers

Showing 1101–1150 of 4240 papers

Title | Status | Hype
Retrieval-Oriented Knowledge for Click-Through Rate Prediction | Code | 1
A Novel Spike Transformer Network for Depth Estimation from Event Cameras via Cross-modality Knowledge Distillation | - | 0
Correlation-Decoupled Knowledge Distillation for Multimodal Sentiment Analysis with Incomplete Modalities | - | 0
Promoting CNNs with Cross-Architecture Knowledge Distillation for Efficient Monocular Depth Estimation | - | 0
BeSound: Bluetooth-Based Position Estimation Enhancing with Cross-Modality Distillation | - | 0
Compressed Meta-Optical Encoder for Image Classification | - | 0
Sentence-Level or Token-Level? A Comprehensive Study on Knowledge Distillation | - | 0
Distributed Learning for Wi-Fi AP Load Prediction | - | 0
Towards Multi-Morphology Controllers with Diversity and Knowledge Distillation | Code | 0
DynaMMo: Dynamic Model Merging for Efficient Class Incremental Learning for Medical Images | Code | 0
From LLM to NMT: Advancing Low-Resource Machine Translation with Claude | - | 0
CKD: Contrastive Knowledge Distillation from A Sample-wise Perspective | Code | 0
FedTAD: Topology-aware Data-free Knowledge Distillation for Subgraph Federated Learning | - | 0
EncodeNet: A Framework for Boosting DNN Accuracy with Entropy-driven Generalized Converting Autoencoder | - | 0
MergeNet: Knowledge Migration across Heterogeneous Models, Tasks, and Modalities | - | 0
Dynamic Temperature Knowledge Distillation | Code | 1
Parameter Efficient Diverse Paraphrase Generation Using Sequence-Level Knowledge Distillation | - | 0
EdgeFusion: On-Device Text-to-Image Generation | - | 0
Data-free Knowledge Distillation for Fine-grained Visual Categorization | Code | 0
KDk: A Defense Mechanism Against Label Inference Attacks in Vertical Federated Learning | - | 0
LAPTOP-Diff: Layer Pruning and Normalized Distillation for Compressing Diffusion Models | - | 0
GhostNetV3: Exploring the Training Strategies for Compact Models | - | 0
A Progressive Framework of Vision-language Knowledge Distillation and Alignment for Multilingual Scene | - | 0
Comprehensive Survey of Model Compression and Speed up for Vision Transformers | - | 0
MK-SGN: A Spiking Graph Convolutional Network with Multimodal Fusion and Knowledge Distillation for Skeleton-based Action Recognition | - | 0
Camera clustering for scalable stream-based active distillation | Code | 1
Digging into contrastive learning for robust depth estimation with diffusion models | Code | 1
ReffAKD: Resource-efficient Autoencoder-based Knowledge Distillation | Code | 0
AI-KD: Towards Alignment Invariant Face Image Quality Assessment Using Knowledge Distillation | Code | 0
MTKD: Multi-Teacher Knowledge Distillation for Image Super-Resolution | - | 0
Weight Copy and Low-Rank Adaptation for Few-Shot Distillation of Vision Transformers | Code | 0
Navigating the Landscape of Large Language Models: A Comprehensive Review and Analysis of Paradigms and Fine-Tuning Strategies | Code | 0
Edge-Efficient Deep Learning Models for Automatic Modulation Classification: A Performance Analysis | - | 0
Adversarial Robustness of Distilled and Pruned Deep Learning-based Wireless Classifiers | - | 0
Boosting Self-Supervision for Single-View Scene Completion via Knowledge Distillation | - | 0
Rethinking Transformer-Based Blind-Spot Network for Self-Supervised Image Denoising | Code | 2
Remembering Transformer for Continual Learning | - | 0
A predictive machine learning force field framework for liquid electrolyte development | - | 0
Optimization Methods for Personalizing Large Language Models through Retrieval Augmentation | Code | 2
Improving Facial Landmark Detection Accuracy and Efficiency with Knowledge Distillation | - | 0
Robust feature knowledge distillation for enhanced performance of lightweight crack segmentation models | - | 0
CLIP-Embed-KD: Computationally Efficient Knowledge Distillation Using Embeddings as Teachers | Code | 1
GHOST: Grounded Human Motion Generation with Open Vocabulary Scene-and-Text Contexts | - | 0
Bootstrapping Chest CT Image Understanding by Distilling Knowledge from X-ray Expert Models | - | 0
MonoTAKD: Teaching Assistant Knowledge Distillation for Monocular 3D Object Detection | Code | 1
Diffusion Time-step Curriculum for One Image to 3D Generation | Code | 2
What Happens When Small Is Made Smaller? Exploring the Impact of Compression on Small Data Pretrained Language Models | - | 0
Do We Really Need a Complex Agent System? Distill Embodied Agent into a Single Model | - | 0
Knowledge Distillation-Based Model Extraction Attack using GAN-based Private Counterfactual Explanations | Code | 0
On the Surprising Efficacy of Distillation as an Alternative to Pre-Training Small Models | Code | 0
Page 23 of 85

Benchmark Results

In each entry below, T: denotes the teacher model and S: the student model of the distillation pair.

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy (%) | 86.43 | - | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy (%) | 85.53 | - | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy (%) | 83.93 | - | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 83.8 | - | Unverified
5 | KD++ (T: RegNetY-16GF, S: ViT-B) | Top-1 accuracy (%) | 83.6 | - | Unverified
6 | VkD (T: RegNetY-160, S: DeiT-S) | Top-1 accuracy (%) | 82.9 | - | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy (%) | 82.7 | - | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy (%) | 82.55 | - | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.5 | - | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy (%) | 82.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 79.86 | - | Unverified
2 | shufflenet-v2 (T: resnet32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 78.76 | - | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 accuracy (%) | 78.6 | - | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 78.28 | - | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 accuracy (%) | 78.08 | - | Unverified
6 | ReviewKD++ (T: resnet32x4, S: shufflenet-v2) | Top-1 accuracy (%) | 77.93 | - | Unverified
7 | ReviewKD++ (T: resnet32x4, S: shufflenet-v1) | Top-1 accuracy (%) | 77.68 | - | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 77.5 | - | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.68 | - | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 accuracy (%) | 76.31 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | - | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: AdaBins, S: MobileNetV2) | RMSE | 2.43 | - | Unverified
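
For reference, the metrics used in the tables above can be computed as in the following minimal sketch; PyTorch tensors and the shapes in the comments are illustrative assumptions, and the benchmark evaluation pipelines themselves are not shown on this page.

```python
import torch

def top1_accuracy(logits, labels):
    # logits: (N, num_classes) raw class scores; labels: (N,) ground-truth class ids.
    # Top-1 accuracy: fraction of samples whose highest-scoring class is correct.
    preds = logits.argmax(dim=-1)
    return 100.0 * (preds == labels).float().mean().item()

def rmse(pred_depth, gt_depth):
    # pred_depth, gt_depth: (N, H, W) depth maps in the same units (e.g. meters).
    # Root mean squared error, as reported for depth-estimation distillation.
    return torch.sqrt(torch.mean((pred_depth - gt_depth) ** 2)).item()
```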