Learning Lightweight Lane Detection CNNs by Self Attention Distillation (Aug 2, 2019) [Knowledge Distillation, Lane Detection]
Leveraging knowledge distillation for partial multi-task learning from multiple remote sensing datasets (May 24, 2024) [Knowledge Distillation, Multi-Task Learning]
MST-KD: Multiple Specialized Teachers Knowledge Distillation for Fair Face Recognition (Aug 29, 2024) [Face Recognition, Knowledge Distillation]
Learning Compact and Representative Features for Cross-Modality Person Re-Identification (Mar 26, 2021) [Cross-Modality Person Re-identification, Knowledge Distillation]
Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation (May 17, 2019) [Knowledge Distillation]
Dealing With Heterogeneous 3D MR Knee Images: A Federated Few-Shot Learning Method With Dual Knowledge Distillation (Mar 25, 2023) [Federated Learning, Few-Shot Learning]
Language-Universal Adapter Learning with Knowledge Distillation for End-to-End Multilingual Speech Recognition (Feb 28, 2023) [Automatic Speech Recognition (ASR)]
Large-Scale Data-Free Knowledge Distillation for ImageNet via Multi-Resolution Data Generation (Nov 26, 2024) [Data-free Knowledge Distillation, Diversity]
Beyond the Limitation of Monocular 3D Detector via Knowledge Distillation (Jan 1, 2023) [Knowledge Distillation]
DCA: Dividing and Conquering Amnesia in Incremental Object Detection (Mar 19, 2025) [Knowledge Distillation, Object Detection]
Data Upcycling Knowledge Distillation for Image Super-Resolution (Sep 25, 2023) [Image Super-Resolution, Knowledge Distillation]
Dataset Distillation via Knowledge Distillation: Towards Efficient Self-Supervised Pre-Training of Deep Networks (Oct 3, 2024) [Dataset Distillation, Knowledge Distillation]
KS-DETR: Knowledge Sharing in Attention Learning for Detection Transformer (Feb 22, 2023) [Knowledge Distillation, Transfer Learning]
Language Model Knowledge Distillation for Efficient Question Answering in Spanish (Dec 7, 2023) [Knowledge Distillation, Language Modeling]
KnowledgeSG: Privacy-Preserving Synthetic Text Generation with Knowledge Distillation from Server (Oct 8, 2024) [Federated Learning, Knowledge Distillation]
Adaptive Mixing of Auxiliary Losses in Supervised Learning (Feb 7, 2022) [Denoising, Knowledge Distillation]
Data-Free Knowledge Distillation for Image Super-Resolution (Jun 19, 2021) [Data-free Knowledge Distillation, Image Super-Resolution]
Data-free Knowledge Distillation for Fine-grained Visual Categorization (Apr 18, 2024) [Data-free Knowledge Distillation, Fine-Grained Visual Categorization]
Beyond Conventional Transformers: The Medical X-ray Attention (MXA) Block for Improved Multi-Label Diagnosis Using Knowledge Distillation (Apr 3, 2025) [Anomaly Detection, Knowledge Distillation]
Knowledge Extraction with No Observable Data (Dec 1, 2019) [Data-free Knowledge Distillation, Knowledge Distillation]
Knowledge Grafting of Large Language Models (May 24, 2025) [Continual Learning, Knowledge Distillation]
Knowledge Transfer Graph for Deep Collaborative Learning (Sep 10, 2019) [Knowledge Distillation, Transfer Learning]
Data-free Knowledge Distillation for Segmentation using Data-Enriching GAN (Nov 2, 2020) [Data-free Knowledge Distillation, Diversity]
Data-Free Generative Replay for Class-Incremental Learning on Imbalanced Data (Jun 7, 2024) [Class-Incremental Learning]
Knowledge Distillation with Adversarial Samples Supporting Decision Boundary (May 15, 2018) [Adversarial Attack, Knowledge Distillation]
Knowledge Distillation with Reptile Meta-Learning for Pretrained Language Model Compression (Oct 1, 2022) [Knowledge Distillation, Language Modeling]
Knowledge Distillation via Instance Relationship Graph (Jun 1, 2019) [Knowledge Distillation]
Data-Free Adversarial Distillation (Dec 23, 2019) [Knowledge Distillation, Model Compression]
Data exploitation: multi-task learning of object detection and semantic segmentation on partially annotated data (Nov 7, 2023) [Knowledge Distillation, Multi-Task Learning]
Better Teacher Better Student: Dynamic Prior Knowledge for Knowledge Distillation (Jun 13, 2022) [Image Classification]
Better Supervisory Signals by Observing Learning Paths (Mar 4, 2022) [Knowledge Distillation]
Align-to-Distill: Trainable Attention Alignment for Knowledge Distillation in Neural Machine Translation (Mar 3, 2024) [Knowledge Distillation, Machine Translation]
Knowledge Distillation of Russian Language Models with Reduction of Vocabulary (May 4, 2022) [Knowledge Distillation]
Knowledge Distillation Layer that Lets the Student Decide (Sep 6, 2023) [Knowledge Distillation]
Knowledge Distillation Performs Partial Variance Reduction (May 27, 2023) [Knowledge Distillation]
DASK: Distribution Rehearsing via Adaptive Style Kernel Learning for Exemplar-Free Lifelong Person Re-Identification (Dec 12, 2024) [Exemplar-Free, Knowledge Distillation]
Knowledge Distillation from Single to Multi Labels: an Empirical Study (Mar 15, 2023) [Classification, Image Classification]
Knowledge Distillation from Cross Teaching Teachers for Efficient Semi-Supervised Abdominal Organ Segmentation in CT (Nov 11, 2022) [Image Segmentation, Knowledge Distillation]
Few Sample Knowledge Distillation for Efficient Network Compression (Dec 5, 2018) [Knowledge Distillation, Network Pruning]
Knowledge Distillation in RNN-Attention Models for Early Prediction of Student Performance (Dec 19, 2024) [Knowledge Distillation, Student Dropout]
DAD++: Improved Data-free Test Time Adversarial Defense (Sep 10, 2023) [Adversarial Defense, Adversarial Robustness]
DAdEE: Unsupervised Domain Adaptation in Early Exit PLMs (Oct 6, 2024) [Domain Adaptation, Knowledge Distillation]
BEiT v2: Masked Image Modeling with Vector-Quantized Visual Tokenizers (Aug 12, 2022) [Image Classification]
Being Strong Progressively! Enhancing Knowledge Distillation of Large Language Models through a Curriculum Learning Framework (Jun 6, 2025) [Instruction Following, Knowledge Distillation]
Knowledge Distillation for Quality Estimation (Jul 1, 2021) [Data Augmentation, Knowledge Distillation]
Knowledge Distillation for Singing Voice Detection (Nov 9, 2020) [Information Retrieval, Knowledge Distillation]
Aligning (Medical) LLMs for (Counterfactual) Fairness (Aug 22, 2024) [Counterfactual, Fairness]
D^2TV: Dual Knowledge Distillation and Target-oriented Vision Modeling for Many-to-Many Multimodal Summarization (May 22, 2023) [Knowledge Distillation]
cViL: Cross-Lingual Training of Vision-Language Models using Knowledge Distillation (Jun 7, 2022) [Knowledge Distillation, Question Answering]
BEBERT: Efficient and Robust Binary Ensemble BERT (Oct 28, 2022) [Binarization, Computational Efficiency]