SOTAVerified

Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity might not be fully utilized.
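
As a concrete illustration, the classic soft-target formulation (Hinton et al., 2015) trains the student on a weighted mix of the usual cross-entropy against ground-truth labels and a KL divergence between temperature-softened teacher and student outputs. Below is a minimal PyTorch sketch of that loss, assuming a standard classification setup; the temperature, loss weight, and the `teacher`/`student` names in the usage comment are illustrative, not taken from any paper listed here.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Soft-target knowledge distillation loss (Hinton et al., 2015 style).

    Mixes hard-label cross-entropy with a KL term that pushes the student's
    temperature-softened distribution toward the teacher's. T and alpha are
    illustrative defaults, not values from any specific result listed below.
    """
    # Hard-label term: standard cross-entropy against the true labels.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label term: KL(teacher_T || student_T); the T^2 factor keeps the
    # gradient magnitude comparable across temperatures.
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1.0 - alpha) * kd

# Usage sketch: `teacher` and `student` are any classifiers with matching
# output dimensionality; the teacher stays frozen during distillation.
# for x, y in loader:
#     with torch.no_grad():
#         teacher_logits = teacher(x)
#     loss = distillation_loss(student(x), teacher_logits, y)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```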

Papers

Showing papers 1901–1950 of 4240 (page 39 of 85)

Title | Status | Hype
IL-NeRF: Incremental Learning for Neural Radiance Fields with Camera Pose Alignment | - | 0
Densely Distilling Cumulative Knowledge for Continual Learning | - | 0
A Survey on Transformer Compression | - | 0
Image-to-Video Re-Identification via Mutual Discriminative Knowledge Transfer | - | 0
Attention-based Knowledge Distillation in Multi-attention Tasks: The Impact of a DCT-driven Loss | - | 0
Compact CNN Models for On-device Ocular-based User Recognition in Mobile Devices | - | 0
Implicit Word Reordering with Knowledge Distillation for Cross-Lingual Dependency Parsing | - | 0
Impossible Triangle: What's Next for Pre-trained Language Models? | - | 0
Inter-layer Knowledge Distillation for Neural Machine Translation (基于层间知识蒸馏的神经机器翻译) | - | 0
Efficient Technical Term Translation: A Knowledge Distillation Approach for Parenthetical Terminology Translation | - | 0
A Survey on Symbolic Knowledge Distillation of Large Language Models | - | 0
Improved Customer Transaction Classification using Semi-Supervised Knowledge Distillation | - | 0
A Flexible Multi-Task Model for BERT Serving | - | 0
Improved implicit diffusion model with knowledge distillation to estimate the spatial distribution density of carbon stock in remote sensing imagery | - | 0
Improved knowledge distillation by utilizing backward pass knowledge in neural networks | - | 0
Designing Parameter and Compute Efficient Diffusion Transformers using Distillation | - | 0
Improved Knowledge Distillation for Pre-trained Language Models via Knowledge Selection | - | 0
Improved Knowledge Distillation via Adversarial Collaboration | - | 0
Joint Architecture and Knowledge Distillation in CNN for Chinese Text Recognition | - | 0
Efficient speech detection in environmental audio using acoustic recognition and knowledge distillation | - | 0
A Survey on Recent Teacher-student Learning Studies | - | 0
Efficient Speech Command Recognition Leveraging Spiking Neural Network and Curriculum Learning-based Knowledge Distillation | - | 0
Batch Selection and Communication for Active Learning with Edge Labeling | - | 0
Improve Knowledge Distillation via Label Revision and Data Selection | - | 0
Active Large Language Model-based Knowledge Distillation for Session-based Recommendation | - | 0
Improving Acoustic Scene Classification in Low-Resource Conditions | - | 0
Efficient Point Cloud Classification via Offline Distillation Framework and Negative-Weight Self-Distillation Technique | - | 0
Improving Apple Object Detection with Occlusion-Enhanced Distillation | - | 0
Improving Autoregressive NMT with Non-Autoregressive Model | - | 0
Improving CLIP Robustness with Knowledge Distillation and Self-Training | - | 0
Efficient Open-world Reinforcement Learning via Knowledge Distillation and Autonomous Rule Discovery | - | 0
ComKD-CLIP: Comprehensive Knowledge Distillation for Contrastive Language-Image Pre-traning Model | - | 0
Improving Conversational Abilities of Quantized Large Language Models via Direct Preference Alignment | - | 0
DFMSD: Dual Feature Masking Stage-wise Knowledge Distillation for Object Detection | - | 0
Improving Defensive Distillation using Teacher Assistant | - | 0
Improving De-Raining Generalization via Neural Reorganization | - | 0
Efficient Object Detection in Optical Remote Sensing Imagery via Attention-based Feature Distillation | - | 0
CoMBO: Conflict Mitigation via Branched Optimization for Class Incremental Segmentation | - | 0
A Survey on Model Compression for Large Language Models | - | 0
Improving Facial Landmark Detection Accuracy and Efficiency with Knowledge Distillation | - | 0
Improving Feature Generalizability with Multitask Learning in Class Incremental Learning | - | 0
Improving Frame-level Classifier for Word Timings with Non-peaky CTC in End-to-End Automatic Speech Recognition | - | 0
Efficient Machine Translation with Model Pruning and Quantization | - | 0
Noise as a Resource for Learning in Knowledge Distillation | - | 0
Improving Generalization of Pre-trained Language Models via Stochastic Weight Averaging | - | 0
Improving Knowledge Distillation for BERT Models: Loss Functions, Mapping Methods, and Weight Tuning | - | 0
Combining Curriculum Learning and Knowledge Distillation for Dialogue Generation | - | 0
Combining Compressions for Multiplicative Size Scaling on Natural Language Tasks | - | 0
ABC-KD: Attention-Based-Compression Knowledge Distillation for Deep Learning-Based Noise Suppression | - | 0
JEP-KD: Joint-Embedding Predictive Architecture Based Knowledge Distillation for Visual Speech Recognition | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T:BEiT-L S:ViT-B/14) | Top-1 accuracy % | 86.43 | - | Unverified
2 | ScaleKD (T:Swin-L S:ViT-B/16) | Top-1 accuracy % | 85.53 | - | Unverified
3 | ScaleKD (T:Swin-L S:ViT-S/16) | Top-1 accuracy % | 83.93 | - | Unverified
4 | ScaleKD (T:Swin-L S:Swin-T) | Top-1 accuracy % | 83.8 | - | Unverified
5 | KD++ (T: regnety-16GF S:ViT-B) | Top-1 accuracy % | 83.6 | - | Unverified
6 | VkD (T:RegNety 160 S:DeiT-S) | Top-1 accuracy % | 82.9 | - | Unverified
7 | SpectralKD (T:Swin-S S:Swin-T) | Top-1 accuracy % | 82.7 | - | Unverified
8 | ScaleKD (T:Swin-L S:ResNet-50) | Top-1 accuracy % | 82.55 | - | Unverified
9 | DiffKD (T:Swin-L S: Swin-T) | Top-1 accuracy % | 82.5 | - | Unverified
10 | DIST (T: Swin-L S: Swin-T) | Top-1 accuracy % | 82.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | SRD (T:resnet-32x4, S:shufflenet-v2) | Top-1 Accuracy (%) | 79.86 | - | Unverified
2 | shufflenet-v2 (T:resnet-32x4, S:shufflenet-v2) | Top-1 Accuracy (%) | 78.76 | - | Unverified
3 | MV-MR (T: CLIP/ViT-B-16 S: resnet50) | Top-1 Accuracy (%) | 78.6 | - | Unverified
4 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 78.28 | - | Unverified
5 | resnet8x4 (T: resnet32x4 S: resnet8x4 [modified]) | Top-1 Accuracy (%) | 78.08 | - | Unverified
6 | ReviewKD++ (T:resnet-32x4, S:shufflenet-v2) | Top-1 Accuracy (%) | 77.93 | - | Unverified
7 | ReviewKD++ (T:resnet-32x4, S:shufflenet-v1) | Top-1 Accuracy (%) | 77.68 | - | Unverified
8 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 77.5 | - | Unverified
9 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 76.68 | - | Unverified
10 | resnet8x4 (T: resnet32x4 S: resnet8x4) | Top-1 Accuracy (%) | 76.31 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101 S: ResNet50) | mAP | 93.17 | - | Unverified
2 | LSHFM (T: ResNet101 S: MobileNetV2) | mAP | 90.14 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins S: MobileNetV2) | RMSE | 2.43 | - | Unverified