Robust Knowledge Distillation in Federated Learning: Counteracting Backdoor Attacks | Feb 1, 2025 | Federated Learning, Knowledge Distillation
Students are the Best Teacher: Exit-Ensemble Distillation with Multi-Exits | Apr 1, 2021 | Classification, General Classification
Navigating the Landscape of Large Language Models: A Comprehensive Review and Analysis of Paradigms and Fine-Tuning Strategies | Apr 13, 2024 | Few-Shot Learning, Knowledge Distillation
NC-NCD: Novel Class Discovery for Node Classification | Jul 25, 2024 | Classification, Knowledge Distillation
Robust Model Compression Using Deep Hypotheses | Mar 13, 2021 | Binary Classification, Knowledge Distillation
Foldable SuperNets: Scalable Merging of Transformers with Different Initializations and Tasks | Oct 2, 2024 | Knowledge Distillation
Neighborhood Commonality-aware Evolution Network for Continuous Generalized Category Discovery | Dec 7, 2024 | Contrastive Learning, Incremental Learning
Robust Multimodal Segmentation with Representation Regularization and Hybrid Prototype Distillation | May 19, 2025 | Knowledge Distillation, Semantic Segmentation
Robustness and Diversity Seeking Data-Free Knowledge Distillation | Nov 7, 2020 | Data-free Knowledge Distillation, Diversity
FM2DS: Few-Shot Multimodal Multihop Data Synthesis with Knowledge Distillation for Question Answering | Dec 9, 2024 | Knowledge Distillation, Question Answering
Towards Disturbance-Free Visual Mobile Manipulation | Dec 17, 2021 | Collision Avoidance, Deep Reinforcement Learning
Towards Diverse Device Heterogeneous Federated Learning via Task Arithmetic Knowledge Integration | Sep 27, 2024 | Federated Learning, Knowledge Distillation
Adversarial Teacher-Student Representation Learning for Domain Generalization | Dec 1, 2021 | Data Augmentation, Domain Generalization
Network Pruning via Transformable Architecture Search | May 23, 2019 | Knowledge Distillation, Network Pruning
Towards Effective Data-Free Knowledge Distillation via Diverse Diffusion Augmentation | Oct 23, 2024 | Data-free Knowledge Distillation, Diversity
Differentially Private Knowledge Distillation via Synthetic Text Generation | Mar 1, 2024 | Knowledge Distillation, Model Compression
Detect, Distill and Update: Learned DB Systems Facing Out of Distribution Data | Oct 11, 2022 | Knowledge Distillation, Synthetic Data Generation
ROD: Reception-aware Online Distillation for Sparse Graphs | Jul 25, 2021 | Clustering, Graph Learning
Detect, Distill and Update: Learned DB Systems Facing Out of Distribution Data | May 1, 2023 | Knowledge Distillation, Synthetic Data Generation
Neural Network Pruning with Residual-Connections and Limited-Data | Nov 19, 2019 | Knowledge Distillation, Network Pruning
FlowDistill: Scalable Traffic Flow Prediction via Distillation from LLMs | Apr 2, 2025 | Knowledge Distillation, Prediction
Fine-Grained Knowledge Selection and Restoration for Non-Exemplar Class Incremental Learning | Dec 20, 2023 | class-incremental learning, Class Incremental Learning
AuG-KD: Anchor-Based Mixup Generation for Out-of-Domain Knowledge Distillation | Mar 11, 2024 | Data-free Knowledge Distillation, Knowledge Distillation
New Insights on Relieving Task-Recency Bias for Online Class Incremental Learning | Feb 16, 2023 | class-incremental learning, Class Incremental Learning
Dense 2D-3D Indoor Prediction with Sound via Aligned Cross-Modal Distillation | Sep 20, 2023 | 3D Scene Reconstruction, Depth Estimation
ROSAR: An Adversarial Re-Training Framework for Robust Side-Scan Sonar Object Detection | Oct 14, 2024 | Knowledge Distillation, object-detection
Adversarial Moment-Matching Distillation of Large Language Models | Jun 5, 2024 | Imitation Learning, Instruction Following
Delta Distillation for Efficient Video Processing | Mar 17, 2022 | Knowledge Distillation, object-detection
AttriPrompter: Auto-Prompting with Attribute Semantics for Zero-shot Nuclei Detection via Visual-Language Pre-trained Models | Oct 22, 2024 | Attribute, Knowledge Distillation
Few Shot Network Compression via Cross Distillation | Nov 21, 2019 | Knowledge Distillation, Model Compression
Sub-goal Distillation: A Method to Improve Small Language Agents | May 4, 2024 | Imitation Learning, Knowledge Distillation
Few-shot Class-Incremental Semantic Segmentation via Pseudo-Labeling and Knowledge Distillation | Aug 5, 2023 | Class-Incremental Semantic Segmentation, Knowledge Distillation
RUIE: Retrieval-based Unified Information Extraction using Large Language Model | Sep 18, 2024 | Contrastive Learning, In-Context Learning
deepQuest-py: Large and Distilled Models for Quality Estimation | Nov 1, 2021 | Knowledge Distillation, Sentence
WAVER: Writing-style Agnostic Text-Video Retrieval via Distilling Vision-Language Models Through Open-Vocabulary Knowledge | Dec 15, 2023 | Information Retrieval, Knowledge Distillation
Subspace Distillation for Continual Learning | Jul 31, 2023 | Continual Learning, Knowledge Distillation
A Forward and Backward Compatible Framework for Few-shot Class-incremental Pill Recognition | Apr 24, 2023 | class-incremental learning, Class Incremental Learning
"No Matter What You Do": Purifying GNN Models via Backdoor Unlearning | Oct 2, 2024 | Backdoor Attack, backdoor defense
Integrating Translation Memories into Non-Autoregressive Machine Translation | Oct 12, 2022 | Knowledge Distillation, Machine Translation
Non-Autoregressive Neural Machine Translation | Nov 7, 2017 | Knowledge Distillation, Machine Translation
Uncertainty-Guided Cross Attention Ensemble Mean Teacher for Semi-supervised Medical Image Segmentation | Dec 19, 2024 | Domain Generalization, Image Segmentation
CDFKD-MFS: Collaborative Data-free Knowledge Distillation via Multi-level Feature Sharing | May 24, 2022 | Data-free Knowledge Distillation, Knowledge Distillation
Knowledge Distillation with Deep Supervision | Feb 16, 2022 | Knowledge Distillation, Transfer Learning
Catch-Up Distillation: You Only Need to Train Once for Accelerating Sampling | May 18, 2023 | Knowledge Distillation
Few-Shot Class-Incremental Learning via Entropy-Regularized Data-Free Replay | Jul 22, 2022 | class-incremental learning, Class Incremental Learning
FedSiKD: Clients Similarity and Knowledge Distillation: Addressing Non-i.i.d. and Constraints in Federated Learning | Feb 14, 2024 | Federated Learning, Knowledge Distillation
Catastrophic Interference in Reinforcement Learning: A Solution Based on Context Division and Knowledge Distillation | Sep 1, 2021 | Deep Reinforcement Learning, General Reinforcement Learning
Exploring Vacant Classes in Label-Skewed Federated Learning | Jan 4, 2024 | Federated Learning, Knowledge Distillation
SaiT: Sparse Vision Transformers through Adaptive Token Pruning | Oct 11, 2022 | Knowledge Distillation
Not Far Away, Not So Close: Sample Efficient Nearest Neighbour Data Augmentation via MiniMax | May 28, 2021 | Data Augmentation, Knowledge Distillation