AMD: Automatic Multi-step Distillation of Large-scale Vision Models (Jul 5, 2024) · Image Classification
Understanding the Gains from Repeated Self-Distillation (Jul 5, 2024) · Knowledge Distillation, Regression
Improving Knowledge Distillation in Transfer Learning with Layer-wise Learning Rates (Jul 5, 2024) · Knowledge Distillation, Transfer Learning
Fully Fine-tuned CLIP Models are Efficient Few-Shot Learners (Jul 4, 2024) · Domain Generalization, Few-Shot Learning
Relative Difficulty Distillation for Semantic Segmentation (Jul 4, 2024) · Knowledge Distillation, Semantic Segmentation
[Code Available] DSMix: Distortion-Induced Sensitivity Map Based Pre-training for No-Reference Image Quality Assessment (Jul 4, 2024) · Data Augmentation, Image Quality Assessment
[Code Available] MLKD-BERT: Multi-level Knowledge Distillation for Pre-trained Language Models (Jul 3, 2024) · Extractive Question-Answering, Knowledge Distillation
Accelerated Proton Resonance Frequency-based Magnetic Resonance Thermometry by Optimized Deep Learning Method (Jul 3, 2024) · Knowledge Distillation
[Code Available] Supporting Cross-language Cross-project Bug Localization Using Pre-trained Language Models (Jul 3, 2024) · Contrastive Learning, CPU
Edge AI-Enabled Chicken Health Detection Based on Enhanced FCOS-Lite and Knowledge Distillation (Jul 3, 2024) · Knowledge Distillation, Quantization
Unified Anomaly Detection methods on Edge Device using Knowledge Distillation and Quantization (Jul 3, 2024) · Anomaly Detection, CPU
Improving Conversational Abilities of Quantized Large Language Models via Direct Preference Alignment (Jul 3, 2024) · Chatbot, Computational Efficiency
Adaptive Modality Balanced Online Knowledge Distillation for Brain-Eye-Computer based Dim Object Detection (Jul 2, 2024) · Electroencephalogram (EEG)
[Code Available] Survey on Knowledge Distillation for Large Language Models: Methods, Evaluation, and Application (Jul 2, 2024) · Knowledge Distillation, Survey
Self-Cooperation Knowledge Distillation for Novel Class Discovery (Jul 2, 2024) · Knowledge Distillation, Novel Class Discovery
ECAT: A Entire space Continual and Adaptive Transfer Learning Framework for Cross-Domain Recommendation (Jul 2, 2024) · Domain Adaptation, Knowledge Distillation
Advancing Compressed Video Action Recognition through Progressive Knowledge Distillation (Jul 2, 2024) · Action Recognition, Knowledge Distillation
[Code Available] uDistil-Whisper: Label-Free Data Filtering for Knowledge Distillation in Low-Data Regimes (Jul 1, 2024) · Knowledge Distillation
[Code Available] BAPO: Base-Anchored Preference Optimization for Overcoming Forgetting in Large Language Models Personalization (Jun 30, 2024) · Continual Learning, General Knowledge
FANFOLD: Graph Normalizing Flows-driven Asymmetric Network for Unsupervised Graph-Level Anomaly Detection (Jun 29, 2024) · Anomaly Detection, Knowledge Distillation
[Code Available] Enhancing Accuracy and Parameter-Efficiency of Neural Representations for Network Parameterization (Jun 29, 2024) · Knowledge Distillation
Direct Preference Knowledge Distillation for Large Language Models (Jun 28, 2024) · Knowledge Distillation
MuGSI: Distilling GNNs with Multi-Granularity Structural Information for Graph Classification (Jun 28, 2024) · Classification, Graph Classification
[Code Available] Instance Temperature Knowledge Distillation (Jun 27, 2024) · Decision Making, Efficient Exploration
[Code Available] Aligning Teacher with Student Preferences for Tailored Training Data Generation (Jun 27, 2024) · In-Context Learning, Knowledge Distillation
On Reducing Activity with Distillation and Regularization for Energy Efficient Spiking Neural Networks (Jun 26, 2024) · Knowledge Distillation
Sequential Editing for Lifelong Training of Speech Recognition Models (Jun 25, 2024) · Automatic Speech Recognition (ASR)
Towards Optimal Trade-offs in Knowledge Distillation for CNNs and Vision Transformers at the Edge (Jun 25, 2024) · Knowledge Distillation
Preserving Node Distinctness in Graph Autoencoders via Similarity Distillation (Jun 25, 2024) · Decoder, Knowledge Distillation
WAVE: Weight Template for Adaptive Initialization of Variable-sized Models (Jun 25, 2024) · Knowledge Distillation, Transfer Learning
Knowledge Distillation in Automated Annotation: Supervised Text Classification with LLM-Generated Training Labels (Jun 25, 2024) · Articles, In-Context Learning
Highly Constrained Coded Aperture Imaging Systems Design Via a Knowledge Distillation Approach (Jun 25, 2024) · Image Reconstruction, Knowledge Distillation
InFiConD: Interactive No-code Fine-tuning with Concept-based Knowledge Distillation (Jun 25, 2024) · Knowledge Distillation
Leveraging Knowledge Distillation for Lightweight Skin Cancer Classification: Balancing Accuracy and Computational Efficiency (Jun 24, 2024) · Cancer Classification, Computational Efficiency
Exploring compressibility of transformer based text-to-music (TTM) models (Jun 24, 2024) · Decoder, FAD
Enhancing OOD Detection Using Latent Diffusion (Jun 24, 2024) · Contrastive Learning, Knowledge Distillation
[Code Available] The Privileged Students: On the Value of Initialization in Multilingual Knowledge Distillation (Jun 24, 2024) · Knowledge Distillation
Continual Learning with Diffusion-based Generative Replay for Industrial Streaming Data (Jun 22, 2024) · Continual Learning, Knowledge Distillation
Fair Text to Medical Image Diffusion Model with Subgroup Distribution Aligned Tuning (Jun 21, 2024) · Knowledge Distillation
Reinforced Knowledge Distillation for Time Series Regression (Jun 21, 2024) · Knowledge Distillation, Model Compression
[Code Available] Failure-Resilient Distributed Inference with Model Compression over Heterogeneous Edge Devices (Jun 20, 2024) · Knowledge Distillation, Model Compression
Factual Dialogue Summarization via Learning from Large Language Models (Jun 20, 2024) · Contrastive Learning, Data Augmentation
SeCoKD: Aligning Large Language Models for In-Context Learning with Fewer Shots (Jun 20, 2024) · In-Context Learning, Knowledge Distillation
Apprenticeship-Inspired Elegance: Synergistic Knowledge Distillation Empowers Spiking Neural Networks for Efficient Single-Eye Emotion Recognition (Jun 20, 2024) · Emotion Recognition, Knowledge Distillation
Multi-Stage Balanced Distillation: Addressing Long-Tail Challenges in Sequence-Level Knowledge Distillation (Jun 19, 2024) · Knowledge Distillation
[Code Available] WaterMono: Teacher-Guided Anomaly Masking and Enhancement Boosting for Robust Underwater Self-Supervised Monocular Depth Estimation (Jun 19, 2024) · Depth Estimation, Image Enhancement
[Code Available] Can Low-Rank Knowledge Distillation in LLMs be Useful for Microelectronic Reasoning? (Jun 19, 2024) · Knowledge Distillation
Federated Learning with a Single Shared Image (Jun 18, 2024) · Federated Learning, Knowledge Distillation
[Code Available] Enhancing Single-Slice Segmentation with 3D-to-2D Unpaired Scan Distillation (Jun 18, 2024) · Computed Tomography (CT), Knowledge Distillation
Vernacular? I Barely Know Her: Challenges with Style Control and Stereotyping (Jun 18, 2024) · Knowledge Distillation