SOTAVerified

Knowledge Distillation

Knowledge distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, that capacity may not be fully utilized, so a compact student model can often be trained to reproduce much of the teacher's behavior at far lower inference cost.
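Many of the papers listed below build on the classic soft-label formulation of Hinton et al. (2015): the student is trained to match the teacher's temperature-softened output distribution in addition to the ground-truth labels. The PyTorch sketch below illustrates that loss only; the function name, temperature, and alpha weighting are illustrative defaults, not taken from any specific paper on this page.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=4.0, alpha=0.5):
    # Soften both output distributions with the temperature and match them via KL divergence.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd_term = F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2
    # Keep the usual supervised cross-entropy on the hard labels.
    ce_term = F.cross_entropy(student_logits, labels)
    # alpha controls how much weight the teacher signal gets relative to the labels.
    return alpha * kd_term + (1.0 - alpha) * ce_term

if __name__ == "__main__":
    # Toy usage with random logits: batch of 8 examples, 100 classes.
    loss = distillation_loss(torch.randn(8, 100), torch.randn(8, 100), torch.randint(0, 100, (8,)))
    print(float(loss))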

Papers

Showing 1201–1250 of 4240 papers

Title | Status | Hype
LookALike: Human Mimicry based collaborative decision making | – | 0
Group-Mix SAM: Lightweight Solution for Industrial Assembly Line Applications | – | 0
Histo-Genomic Knowledge Distillation For Cancer Prognosis From Histopathology Whole Slide Images | Code | 1
Recurrent Drafter for Fast Speculative Decoding in Large Language Models | Code | 3
Adapting OC20-trained EquiformerV2 Models for High-Entropy Materials | – | 0
MT-PATCHER: Selective and Extendable Knowledge Distillation from Large Language Models for Machine Translation | Code | 0
SpikeReveal: Unlocking Temporal Sequences from Real Blurry Inputs with Spike Streams | Code | 1
Open-Vocabulary Object Detection with Meta Prompt Representation and Instance Contrastive Optimization | – | 0
Knowledge Distillation in YOLOX-ViT for Side-Scan Sonar Object Detection | Code | 2
Select and Distill: Selective Dual-Teacher Knowledge Transfer for Continual Learning on Vision-Language Models | – | 0
Distilling Named Entity Recognition Models for Endangered Species from Large Language Models | – | 0
Training Self-localization Models for Unseen Unfamiliar Places via Teacher-to-Student Data-Free Knowledge Transfer | – | 0
An Efficient End-to-End Approach to Noise Invariant Speech Features via Multi-Task Learning | Code | 0
CoroNetGAN: Controlled Pruning of GANs via Hypernetworks | – | 0
LIX: Implicitly Infusing Spatial Geometric Prior Knowledge into Visual Semantic Segmentation for Autonomous Driving | – | 0
eDifFIQA: Towards Efficient Face Image Quality Assessment Based On Denoising Diffusion Probabilistic Models | Code | 1
Low-Energy On-Device Personalization for MCUs | Code | 0
Continual All-in-One Adverse Weather Removal with Knowledge Replay on a Unified Network Structure | Code | 1
Distilling the Knowledge in Data Pruning | – | 0
CALF: Aligning LLMs for Time Series Forecasting via Cross-modal Fine-Tuning | Code | 2
Enhancing Adversarial Training with Prior Knowledge Distillation for Robust Image Compression | – | 0
AuG-KD: Anchor-Based Mixup Generation for Out-of-Domain Knowledge Distillation | Code | 0
Evolving Knowledge Distillation with Large Language Models and Active Learning | – | 0
One Category One Prompt: Dataset Distillation using Diffusion Models | – | 0
MEND: Meta dEmonstratioN Distillation for Efficient and Effective In-Context Learning | Code | 0
Enhanced Sparsification via Stimulative Training | – | 0
Answering Diverse Questions via Text Attached with Key Audio-Visual Clues | Code | 0
Attention is all you need for boosting graph convolutional neural network | – | 0
Bit-mask Robust Contrastive Knowledge Distillation for Unsupervised Semantic Hashing | Code | 1
Knowledge Distillation of Convolutional Neural Networks through Feature Map Transformation using Decision Trees | – | 0
V_kD: Improving Knowledge Distillation using Orthogonal Projections | Code | 2
Cooperative Classification and Rationalization for Graph Generalization | Code | 0
Weakly Supervised Change Detection via Knowledge Distillation and Multiscale Sigmoid Inference | Code | 0
Frequency Attention for Knowledge Distillation | Code | 1
Scene Graph Aided Radiology Report Generation | – | 0
Fine-tuning a Multiple Instance Learning Feature Extractor with Masked Context Modelling and Knowledge Distillation | – | 0
Attention-guided Feature Distillation for Semantic Segmentation | – | 0
Adversarial Sparse Teacher: Defense Against Distillation-Based Model Stealing Attacks Using Adversarial Examples | – | 0
RadarDistill: Boosting Radar-based Object Detection Performance via Knowledge Distillation from LiDAR Features | Code | 1
Self-Adapting Large Visual-Language Models to Edge Devices across Visual Modalities | Code | 1
Privacy-preserving Fine-tuning of Large Language Models through Flatness | – | 0
A Study of Dropout-Induced Modality Bias on Robustness to Missing Video Frames for Audio-Visual Speech Recognition | Code | 0
MKF-ADS: Multi-Knowledge Fusion Based Self-supervised Anomaly Detection System for Control Area Network | – | 0
Can Small Language Models be Good Reasoners for Sequential Recommendation? | – | 0
A Teacher-Free Graph Knowledge Distillation Framework with Dual Self-Distillation | Code | 0
Learning to Maximize Mutual Information for Chain-of-Thought Distillation | Code | 0
PromptKD: Unsupervised Prompt Distillation for Vision-Language Models | Code | 3
JEP-KD: Joint-Embedding Predictive Architecture Based Knowledge Distillation for Visual Speech Recognition | – | 0
Distilled ChatGPT Topic & Sentiment Modeling with Applications in Finance | – | 0
UB-FineNet: Urban Building Fine-grained Classification Network for Open-access Satellite Images | – | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ScaleKD (T: BEiT-L, S: ViT-B/14) | Top-1 accuracy % | 86.43 | – | Unverified
2 | ScaleKD (T: Swin-L, S: ViT-B/16) | Top-1 accuracy % | 85.53 | – | Unverified
3 | ScaleKD (T: Swin-L, S: ViT-S/16) | Top-1 accuracy % | 83.93 | – | Unverified
4 | ScaleKD (T: Swin-L, S: Swin-T) | Top-1 accuracy % | 83.8 | – | Unverified
5 | KD++ (T: regnety-16GF, S: ViT-B) | Top-1 accuracy % | 83.6 | – | Unverified
6 | VkD (T: RegNety 160, S: DeiT-S) | Top-1 accuracy % | 82.9 | – | Unverified
7 | SpectralKD (T: Swin-S, S: Swin-T) | Top-1 accuracy % | 82.7 | – | Unverified
8 | ScaleKD (T: Swin-L, S: ResNet-50) | Top-1 accuracy % | 82.55 | – | Unverified
9 | DiffKD (T: Swin-L, S: Swin-T) | Top-1 accuracy % | 82.5 | – | Unverified
10 | DIST (T: Swin-L, S: Swin-T) | Top-1 accuracy % | 82.3 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SRD (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 79.86 | – | Unverified
2 | shufflenet-v2 (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 78.76 | – | Unverified
3 | MV-MR (T: CLIP/ViT-B-16, S: resnet50) | Top-1 Accuracy (%) | 78.6 | – | Unverified
4 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 78.28 | – | Unverified
5 | resnet8x4 (T: resnet32x4, S: resnet8x4 [modified]) | Top-1 Accuracy (%) | 78.08 | – | Unverified
6 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v2) | Top-1 Accuracy (%) | 77.93 | – | Unverified
7 | ReviewKD++ (T: resnet-32x4, S: shufflenet-v1) | Top-1 Accuracy (%) | 77.68 | – | Unverified
8 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 77.5 | – | Unverified
9 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.68 | – | Unverified
10 | resnet8x4 (T: resnet32x4, S: resnet8x4) | Top-1 Accuracy (%) | 76.31 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LSHFM (T: ResNet101, S: ResNet50) | mAP | 93.17 | – | Unverified
2 | LSHFM (T: ResNet101, S: MobileNetV2) | mAP | 90.14 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | TIE-KD (T: Adabins, S: MobileNetV2) | RMSE | 2.43 | – | Unverified