Unsupervised Text Style Transfer via LLMs and Attention Masking with Multi-way Interactions Feb 21, 2024 In-Context Learning Knowledge Distillation
— Unverified 0PaCKD: Pattern-Clustered Knowledge Distillation for Compressing Memory Access Prediction Models Feb 21, 2024 image-classification Image Classification
Code Code Available 0FGAD: Self-boosted Knowledge Distillation for An Effective Federated Graph Anomaly Detection Framework Feb 20, 2024 Anomaly Detection Federated Learning
— Unverified 0PIRB: A Comprehensive Benchmark of Polish Dense and Hybrid Text Retrieval Methods Feb 20, 2024 Information Retrieval Knowledge Distillation
— Unverified 0ELAD: Explanation-Guided Large Language Models Active Distillation Feb 20, 2024 Active Learning Knowledge Distillation
— Unverified 0Induced Model Matching: How Restricted Models Can Help Larger Ones Feb 19, 2024 Knowledge Distillation Language Modeling
Code Code Available 0On the Byzantine-Resilience of Distillation-Based Federated Learning Feb 19, 2024 Federated Learning Knowledge Distillation
Code Code Available 0Revisiting Knowledge Distillation for Autoregressive Language Models Feb 19, 2024 Knowledge Distillation
Code Code Available 0Teacher as a Lenient Expert: Teacher-Agnostic Data-Free Knowledge Distillation Feb 18, 2024 Data-free Knowledge Distillation Knowledge Distillation
Code Code Available 0On Good Practices for Task-Specific Distillation of Large Pretrained Visual Models Feb 17, 2024 Data Augmentation Knowledge Distillation
— Unverified 0FedD2S: Personalized Data-Free Federated Knowledge Distillation Feb 16, 2024 Data-free Knowledge Distillation Fairness
— Unverified 0Cultural Commonsense Knowledge for Intercultural Dialogues Feb 16, 2024 Knowledge Distillation Specificity
— Unverified 0Model Compression and Efficient Inference for Large Language Models: A Survey Feb 15, 2024 Knowledge Distillation Model Compression
— Unverified 0NutePrune: Efficient Progressive Pruning with Numerous Teachers for Large Language Models Feb 15, 2024 Knowledge Distillation
Code Code Available 0Distilled Gradual Pruning with Pruned Fine-tuning Feb 15, 2024 Image Classification Knowledge Distillation
Code Code Available 0Walsh-domain Neural Network for Power Amplifier Behavioral Modelling and Digital Predistortion Feb 15, 2024 Knowledge Distillation
— Unverified 0Integrating ChatGPT into Secure Hospital Networks: A Case Study on Improving Radiology Report Analysis Feb 14, 2024 Contrastive Learning Knowledge Distillation
— Unverified 0Leveraging Large Language Models for Enhanced NLP Task Performance through Knowledge Distillation and Optimized Training Strategies Feb 14, 2024 Knowledge Distillation named-entity-recognition
— Unverified 0FedSiKD: Clients Similarity and Knowledge Distillation: Addressing Non-i.i.d. and Constraints in Federated Learning Feb 14, 2024 Federated Learning Knowledge Distillation
Code Code Available 0APALU: A Trainable, Adaptive Activation Function for Deep Learning Networks Feb 13, 2024 Anomaly Detection Deep Learning
— Unverified 0Two-Stage Multi-task Self-Supervised Learning for Medical Image Segmentation Feb 11, 2024 Auxiliary Learning Image Segmentation
— Unverified 0Domain Adaptable Fine-Tune Distillation Framework For Advancing Farm Surveillance Feb 10, 2024 Computational Efficiency Knowledge Distillation
Code Code Available 0Embedding Compression for Teacher-to-Student Knowledge Transfer Feb 9, 2024 Knowledge Distillation Transfer Learning
— Unverified 0Multi-source-free Domain Adaptation via Uncertainty-aware Adaptive Distillation Feb 9, 2024 Domain Adaptation Knowledge Distillation
Code Code Available 0Large Language Model Meets Graph Neural Network in Knowledge Distillation Feb 8, 2024 Contrastive Learning Graph Attention
— Unverified 0EfficientViT-SAM: Accelerated Segment Anything Model Without Accuracy Loss Feb 7, 2024 Decoder GPU
— Unverified 0Beyond Answers: Transferring Reasoning Capabilities to Smaller LLMs Using Multi-Teacher Knowledge Distillation Feb 7, 2024 Diversity Knowledge Distillation
Code Code Available 0Knowledge Distillation for Road Detection based on cross-model Semi-Supervised Learning Feb 7, 2024 Knowledge Distillation Road Segmentation
— Unverified 0A Survey on Transformer Compression Feb 5, 2024 Knowledge Distillation Mamba
— Unverified 0Dual Knowledge Distillation for Efficient Sound Event Detection Feb 5, 2024 Event Detection Knowledge Distillation
— Unverified 0Bi-CryptoNets: Leveraging Different-Level Privacy for Encrypted Inference Feb 2, 2024 Knowledge Distillation Privacy Preserving
— Unverified 0Cooperative Knowledge Distillation: A Learner Agnostic Approach Feb 2, 2024 counterfactual Knowledge Distillation
Code Code Available 0Faster Inference of Integer SWIN Transformer by Removing the GELU Activation Feb 2, 2024 GPU image-classification
— Unverified 0Spiking CenterNet: A Distillation-boosted Spiking Neural Network for Object Detection Feb 2, 2024 Decoder Knowledge Distillation
— Unverified 0Class incremental learning with probability dampening and cascaded gated classifier Feb 2, 2024 class-incremental learning Class Incremental Learning
Code Code Available 0Addressing Bias Through Ensemble Learning and Regularized Fine-Tuning Feb 1, 2024 Ensemble Learning Knowledge Distillation
— Unverified 0Dual-Student Knowledge Distillation Networks for Unsupervised Anomaly Detection Feb 1, 2024 Anomaly Detection Anomaly Segmentation
— Unverified 0Augmenting Offline Reinforcement Learning with State-only Interactions Feb 1, 2024 D4RL Data Augmentation
— Unverified 0Scavenging Hyena: Distilling Transformers into Long Convolution Models Jan 31, 2024 Knowledge Distillation
— Unverified 0EPSD: Early Pruning with Self-Distillation for Efficient Model Compression Jan 31, 2024 Knowledge Distillation Model Compression
— Unverified 0Stolen Subwords: Importance of Vocabularies for Machine Translation Model Stealing Jan 29, 2024 Knowledge Distillation Machine Translation
Code Code Available 0TQCompressor: improving tensor decomposition methods in neural networks via permutations Jan 29, 2024 Knowledge Distillation Model Compression
Code Code Available 0Face to Cartoon Incremental Super-Resolution using Knowledge Distillation Jan 27, 2024 Hallucination Incremental Learning
— Unverified 0Dynamic Transformer Architecture for Continual Learning of Multimodal Tasks Jan 27, 2024 Continual Learning Edge-computing
— Unverified 0Distilling Privileged Multimodal Information for Expression Recognition using Optimal Transport Jan 27, 2024 Diversity Knowledge Distillation
— Unverified 0A Comprehensive Survey of Compression Algorithms for Language Models Jan 27, 2024 Knowledge Distillation Quantization
— Unverified 0Large Language Model Guided Knowledge Distillation for Time Series Anomaly Detection Jan 26, 2024 Anomaly Detection Knowledge Distillation
— Unverified 0Deep Clustering with Diffused Sampling and Hardness-aware Self-distillation Jan 25, 2024 Clustering Contrastive Learning
Code Code Available 0Self-supervised Video Object Segmentation with Distillation Learning of Deformable Attention Jan 25, 2024 Knowledge Distillation Object
— Unverified 0Towards Complementary Knowledge Distillation for Efficient Dense Image Prediction Jan 24, 2024 Implicit Relations Instance Segmentation
— Unverified 0