KS-DETR: Knowledge Sharing in Attention Learning for Detection Transformer | Feb 22, 2023 | Code Available | Knowledge Distillation, Transfer Learning
Debiased Distillation by Transplanting the Last Layer | Feb 22, 2023 | Code Unverified | Attribute, Knowledge Distillation
FrankenSplit: Efficient Neural Feature Compression with Shallow Variational Bottleneck Injection for Mobile Edge Computing | Feb 21, 2023 | Code Available | Data Compression, Edge-computing
Two-in-one Knowledge Distillation for Efficient Facial Forgery Detection | Feb 21, 2023 | Code Unverified | Knowledge Distillation, Vocal Bursts Valence Prediction
The Role of Masking for Efficient Supervised Knowledge Distillation of Vision Transformers | Feb 21, 2023 | Code Unverified | Knowledge Distillation
CADIS: Handling Cluster-skewed Non-IID Data in Federated Learning with Clustered Aggregation and Knowledge DIStilled Regularization | Feb 21, 2023 | Code Available | Federated Learning, Knowledge Distillation
Social4Rec: Distilling User Preference from Social Graph for Video Recommendation in Tencent | Feb 20, 2023 | Code Available | Knowledge Distillation, Recommendation Systems
HomoDistil: Homotopic Task-Agnostic Distillation of Pre-trained Transformers | Feb 19, 2023 | Code Unverified | Knowledge Distillation, Model Compression
RobustDistiller: Compressing Universal Speech Representations for Enhanced Environment Robustness | Feb 18, 2023 | Code Unverified | Knowledge Distillation, Multi-Task Learning
Fairly Predicting Graft Failure in Liver Transplant for Organ Assigning | Feb 18, 2023 | Code Unverified | Fairness, Knowledge Distillation
Explicit and Implicit Knowledge Distillation via Unlabeled Data | Feb 17, 2023 | Code Unverified | Data-free Knowledge Distillation, Knowledge Distillation
Few-shot 3D LiDAR Semantic Segmentation for Autonomous Driving | Feb 17, 2023 | Code Unverified | Autonomous Driving, Few-Shot Learning
ST-MFNet Mini: Knowledge Distillation-Driven Frame Interpolation | Feb 16, 2023 | Code Available | Knowledge Distillation, Network Pruning
Fuzzy Knowledge Distillation from High-Order TSK to Low-Order TSK | Feb 16, 2023 | Code Unverified | Benchmarking, Knowledge Distillation
Cross Modal Distillation for Flood Extent Mapping | Feb 16, 2023 | Code Unverified | Knowledge Distillation
LEALLA: Learning Lightweight Language-agnostic Sentence Embeddings with Knowledge Distillation | Feb 16, 2023 | Code Unverified | Knowledge Distillation, Sentence
Learning From Biased Soft Labels | Feb 16, 2023 | Code Unverified | Knowledge Distillation
New Insights on Relieving Task-Recency Bias for Online Class Incremental Learning | Feb 16, 2023 | Code Available | Class Incremental Learning
A lightweight network for photovoltaic cell defect detection in electroluminescence images based on neural architecture search and knowledge distillation | Feb 15, 2023 | Code Unverified | Data Augmentation, Defect Detection
Offline-to-Online Knowledge Distillation for Video Instance Segmentation | Feb 15, 2023 | Code Unverified | Data Augmentation, Instance Segmentation
Multi-teacher knowledge distillation as an effective method for compressing ensembles of neural networks | Feb 14, 2023 | Code Available | Ensemble Learning, Knowledge Distillation
Take a Prior from Other Tasks for Severe Blur Removal | Feb 14, 2023 | Code Unverified | Deblurring, Image Deblurring
PerAda: Parameter-Efficient Federated Learning Personalization with Generalization Guarantees | Feb 13, 2023 | Code Available | Federated Learning, Generalization Bounds
Learning from Noisy Crowd Labels with Logics | Feb 13, 2023 | Code Available | Knowledge Distillation, named-entity-recognition
Exploring Navigation Maps for Learning-Based Motion Prediction | Feb 13, 2023 | Code Available | Autonomous Driving, Knowledge Distillation
NYCU-TWO at Memotion 3: Good Foundation, Good Teacher, then you have Good Meme Analysis | Feb 13, 2023 | Code Unverified | Knowledge Distillation, Sentiment Analysis
SCLIFD: Supervised Contrastive Knowledge Distillation for Incremental Fault Diagnosis under Limited Fault Data | Feb 12, 2023 | Code Unverified | Class Incremental Learning
Jaccard Metric Losses: Optimizing the Jaccard Index with Soft Labels | Feb 11, 2023 | Code Available | Knowledge Distillation, Semantic Segmentation
Dual Relation Knowledge Distillation for Object Detection | Feb 11, 2023 | Code Available | Knowledge Distillation, Model Compression
Feature Affinity Assisted Knowledge Distillation and Quantization of Deep Neural Networks on Label-Free Data | Feb 10, 2023 | Code Unverified | Knowledge Distillation, Quantization
CEN-HDR: Computationally Efficient neural Network for real-time High Dynamic Range imaging | Feb 10, 2023 | Code Available | Efficient Neural Network, Knowledge Distillation
SOCRATES: Text-based Human Search and Approach using a Robot Dog | Feb 10, 2023 | Code Unverified | Knowledge Distillation
Toward Extremely Lightweight Distracted Driver Recognition With Distillation-Based Neural Architecture Search and Knowledge Transfer | Feb 9, 2023 | Code Available | Knowledge Distillation, Neural Architecture Search
Lightweight Transformers for Clinical Natural Language Processing | Feb 9, 2023 | Code Available | Continual Learning, Knowledge Distillation
Knowledge Distillation-based Information Sharing for Online Process Monitoring in Decentralized Manufacturing System | Feb 8, 2023 | Code Unverified | Knowledge Distillation
Enhancing Modality-Agnostic Representations via Meta-Learning for Brain Tumor Segmentation | Feb 8, 2023 | Code Unverified | Brain Tumor Segmentation, Image Generation
SLaM: Student-Label Mixing for Distillation with Unlabeled Examples | Feb 8, 2023 | Code Unverified | Knowledge Distillation
An Empirical Study of Uniform-Architecture Knowledge Distillation in Document Ranking | Feb 8, 2023 | Code Unverified | Document Ranking, Knowledge Distillation
Audio Representation Learning by Distilling Video as Privileged Information | Feb 6, 2023 | Code Unverified | Emotion Recognition, Knowledge Distillation
Heterogeneous Federated Knowledge Graph Embedding Learning and Unlearning | Feb 4, 2023 | Code Unverified | Federated Learning, Graph Embedding
Knowledge Distillation in Vision Transformers: A Critical Review | Feb 4, 2023 | Code Unverified | Decoder, image-classification
Revisiting Intermediate Layer Distillation for Compressing Language Models: An Overfitting Perspective | Feb 3, 2023 | Code Available | Knowledge Distillation
Enhancing Once-For-All: A Study on Parallel Blocks, Skip Connections and Early Exits | Feb 3, 2023 | Code Unverified | Knowledge Distillation
Generalized Uncertainty of Deep Neural Networks: Taxonomy and Applications | Feb 2, 2023 | Code Unverified | Knowledge Distillation, Model Compression
Distill-DBDGAN: Knowledge Distillation and Adversarial Learning Framework for Defocus Blur Detection | Feb 1, 2023 | Code Available | Defocus Blur Detection, Generative Adversarial Network
Adaptive Search-and-Training for Robust and Efficient Network Pruning | Feb 1, 2023 | Code Available | Knowledge Distillation, Network Pruning
Knowledge Distillation on Graphs: A Survey | Feb 1, 2023 | Code Unverified | Knowledge Distillation, Model Compression
Continual Segment: Towards a Single, Unified and Accessible Continual Segmentation Model of 143 Whole-body Organs in CT Scans | Feb 1, 2023 | Code Unverified | Continual Semantic Segmentation, Decoder
Improved Knowledge Distillation for Pre-trained Language Models via Knowledge Selection | Feb 1, 2023 | Code Unverified | Knowledge Distillation
AMD: Adaptive Masked Distillation for Object Detection | Jan 31, 2023 | Code Unverified | Knowledge Distillation, Model Compression
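Most of the entries above build on the same response-based distillation objective: a student network is trained to match a teacher's temperature-softened output distribution alongside the ground-truth labels (Hinton et al., 2015). As a reference point, here is a minimal sketch of that loss, assuming PyTorch; the temperature T and mixing weight alpha are illustrative defaults, not values taken from any paper listed above.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soften both distributions with temperature T. The KL term is scaled
    # by T^2 so its gradient magnitude stays comparable to the hard-label
    # cross-entropy term as T varies (standard practice, not paper-specific).
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

Feature-based and relation-based variants (e.g., the object-detection and intermediate-layer distillation papers above) replace or augment the KL term with losses over intermediate activations or pairwise relations, but the teacher-student training loop is the same.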