- Collaborative Deep Reinforcement Learning (Feb 19, 2017). Tags: Deep Reinforcement Learning, Knowledge Distillation
- KDMOS: Knowledge Distillation for Motion Segmentation (Jun 17, 2025). Tags: Autonomous Driving, Knowledge Distillation
- Joint Progressive Knowledge Distillation and Unsupervised Domain Adaptation (May 16, 2020). Tags: Domain Adaptation, Knowledge Distillation
- Joint Pre-training and Local Re-training: Transferable Representation Learning on Multi-source Knowledge Graphs (Jun 5, 2023). Tags: Entity Alignment, Knowledge Distillation
- Few Sample Knowledge Distillation for Efficient Network Compression (Dec 5, 2018). Tags: Knowledge Distillation, Network Pruning
- Improved Knowledge Distillation via Full Kernel Matrix Transfer (Sep 30, 2020). Tags: Knowledge Distillation, Model Compression
- Leveraging Large Language Models for Active Merchant Non-player Characters (Dec 15, 2024). Tags: Knowledge Distillation
- Cogni-Net: Cognitive Feature Learning through Deep Visual Perception (Nov 1, 2018). Tags: Electroencephalogram (EEG)
- Invariant debiasing learning for recommendation via biased imputation (Dec 28, 2024). Tags: Imputation, Knowledge Distillation
- Knowledge Distillation For Wireless Edge Learning (Apr 3, 2021). Tags: Cloud Computing, Federated Learning
- Is Modularity Transferable? A Case Study through the Lens of Knowledge Distillation (Mar 27, 2024). Tags: Domain Adaptation, Knowledge Distillation
- Adversarial Teacher-Student Representation Learning for Domain Generalization (Dec 1, 2021). Tags: Data Augmentation, Domain Generalization
- Intra-class Patch Swap for Self-Distillation (May 20, 2025). Tags: Image Classification
- Interpretable Embedding Procedure Knowledge Transfer via Stacked Principal Component Analysis and Graph Neural Network (Apr 28, 2021). Tags: Graph Neural Network, Knowledge Distillation
- Interpreting and Disentangling Feature Components of Various Complexity from DNNs (Jun 29, 2020). Tags: Knowledge Distillation
- Efficient Multitask Dense Predictor via Binarization (May 23, 2024). Tags: Binarization, Knowledge Distillation
- A Study of Dropout-Induced Modality Bias on Robustness to Missing Video Frames for Audio-Visual Speech Recognition (Mar 7, 2024). Tags: Audio-Visual Speech Recognition, Knowledge Distillation
- Inter-Domain Alignment for Predicting High-Resolution Brain Networks Using Teacher-Student Learning (Oct 6, 2021). Tags: Decoder, Domain Adaptation
- Interpreting Microbiome Relative Abundance Data Using Symbolic Regression (Oct 18, 2024). Tags: Diagnostic, Knowledge Distillation
- Instance Temperature Knowledge Distillation (Jun 27, 2024). Tags: Decision Making, Efficient Exploration
- EaSyGuide: ESG Issue Identification Framework leveraging Abilities of Generative Large Language Models (Jun 11, 2023). Tags: Articles, Knowledge Distillation
- CL-XABSA: Contrastive Learning for Cross-lingual Aspect-based Sentiment Analysis (Apr 2, 2022). Tags: Aspect-Based Sentiment Analysis (ABSA)
- Assessor-Guided Learning for Continual Environments (Mar 21, 2023). Tags: Continual Learning, Incremental Learning
- A Flexible Multi-Task Model for BERT Serving (Jul 12, 2021). Tags: Knowledge Distillation, Model
- Infusing Sequential Information into Conditional Masked Translation Model with Self-Review Mechanism (Oct 19, 2020). Tags: Decoder, Knowledge Distillation
- DynaMMo: Dynamic Model Merging for Efficient Class Incremental Learning for Medical Images (Apr 22, 2024). Tags: Class Incremental Learning
- Cluster-aware Semi-supervised Learning: Relational Knowledge Distillation Provably Learns Clustering (Jul 20, 2023). Tags: Clustering, Data Augmentation
- Induced Model Matching: Restricted Models Help Train Full-Featured Models (Jan 15, 2025). Tags: Knowledge Distillation, Language Modeling
- InDistill: Information flow-preserving knowledge distillation for model compression (May 20, 2022). Tags: Knowledge Distillation, Model Compression
- Efficient Ternary Weight Embedding Model: Bridging Scalability and Performance (Nov 23, 2024). Tags: Computational Efficiency, Knowledge Distillation
- Distilling Knowledge by Mimicking Features (Nov 3, 2020). Tags: Knowledge Distillation, Object Detection
- Induced Model Matching: How Restricted Models Can Help Larger Ones (Feb 19, 2024). Tags: Knowledge Distillation, Language Modeling
- Dynamic Sub-graph Distillation for Robust Semi-supervised Continual Learning (Dec 27, 2023). Tags: Continual Learning, Graph Construction
- Knowledge Extraction with No Observable Data (Dec 1, 2019). Tags: Data-free Knowledge Distillation, Knowledge Distillation
- Incremental Meta-Learning via Episodic Replay Distillation for Few-Shot Image Recognition (Nov 9, 2021). Tags: Continual Learning, Knowledge Distillation
- PruMUX: Augmenting Data Multiplexing with Model Compression (May 24, 2023). Tags: Knowledge Distillation, Model
- Dynamic Rectification Knowledge Distillation (Jan 27, 2022). Tags: Edge Computing, Knowledge Distillation
- Incorporating Graph Information in Transformer-based AMR Parsing (Jun 23, 2023). Tags: Abstract Meaning Representation, AMR Parsing
- UNIKD: UNcertainty-filtered Incremental Knowledge Distillation for Neural Implicit Representation (Dec 21, 2022). Tags: 3D Reconstruction, Incremental Learning
- Improving Question Answering Performance Using Knowledge Distillation and Active Learning (Sep 26, 2021). Tags: Active Learning, Knowledge Distillation
- Improving Neural Topic Models with Wasserstein Knowledge Distillation (Mar 27, 2023). Tags: Knowledge Distillation, Topic Models
- Improving Respiratory Sound Classification with Architecture-Agnostic Knowledge Distillation from Ensembles (May 28, 2025). Tags: Knowledge Distillation, Sound Classification
- Closest Neighbors are Harmful for Lightweight Masked Auto-encoders (Jan 1, 2025). Tags: Knowledge Distillation
- Improving Neural Architecture Search Image Classifiers via Ensemble Learning (Mar 14, 2019). Tags: Ensemble Learning, Image Classification
- 3M-Health: Multimodal Multi-Teacher Knowledge Distillation for Mental Health Detection (Jul 12, 2024). Tags: Knowledge Distillation, Social Media Mental Health Detection
- DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (Jul 16, 2025). Tags: Benchmarking, Knowledge Distillation
- Improving Stance Detection with Multi-Dataset Learning and Knowledge Distillation (Nov 1, 2021). Tags: Knowledge Distillation, Stance Detection
- Improving generalizability of distilled self-supervised speech processing models under distorted settings (Oct 14, 2022). Tags: Knowledge Distillation
- Improving Robustness by Enhancing Weak Subnets (Jan 30, 2022). Tags: Adversarial Robustness, Data Augmentation
- Improving End-to-End Speech Translation by Imitation-Based Knowledge Distillation with Synthetic Transcripts (Jul 17, 2023). Tags: Automatic Speech Translation, Imitation Learning