Meta Knowledge Distillation · Feb 16, 2022 · Tags: Data Augmentation, Image Classification · Code: Unverified (0)
Knowledge Distillation with Deep Supervision · Feb 16, 2022 · Tags: Knowledge Distillation, Transfer Learning · Code: Available (0)
EdgeFormer: A Parameter-Efficient Transformer for On-Device Seq2seq Generation · Feb 16, 2022 · Tags: Grammatical Error Correction, Knowledge Distillation · Code: Unverified (0)
FAMIE: A Fast Active Learning Framework for Multilingual Information Extraction · Feb 16, 2022 · Tags: Active Learning, Knowledge Distillation · Code: Available (1)
No One Left Behind: Inclusive Federated Learning over Heterogeneous Devices · Feb 16, 2022 · Tags: Federated Learning, Knowledge Distillation · Code: Unverified (0)
ZeroGen: Efficient Zero-shot Learning via Dataset Generation · Feb 16, 2022 · Tags: Data-free Knowledge Distillation, Dataset Generation · Code: Available (1)
Uni-Retriever: Towards Learning The Unified Embedding Based Retriever in Bing Sponsored Search · Feb 13, 2022 · Tags: Contrastive Learning, Knowledge Distillation · Code: Unverified (0)
AI can evolve without labels: self-evolving vision transformer for chest X-ray diagnosis through knowledge distillation · Feb 13, 2022 · Tags: Deep Learning, Diagnostic · Code: Unverified (0)
Tiny Object Tracking: A Large-scale Dataset and A Baseline · Feb 11, 2022 · Tags: Attribute, Knowledge Distillation · Code: Available (2)
Distillation with Contrast is All You Need for Self-Supervised Point Cloud Representation Learning · Feb 9, 2022 · Tags: All, Contrastive Learning · Code: Unverified (0)
Point-Level Region Contrast for Object Detection Pre-Training · Feb 9, 2022 · Tags: Contrastive Learning, Knowledge Distillation · Code: Available (1)
Exploring Inter-Channel Correlation for Diversity-preserved Knowledge Distillation · Feb 8, 2022 · Tags: Diversity, Knowledge Distillation · Code: Available (1)
Adaptive Mixing of Auxiliary Losses in Supervised Learning · Feb 7, 2022 · Tags: Denoising, Knowledge Distillation · Code: Available (0)
Locally Differentially Private Distributed Deep Learning via Knowledge Distillation · Feb 7, 2022 · Tags: Deep Learning, Knowledge Distillation · Code: Available (0)
Measuring and Reducing Model Update Regression in Structured Prediction for NLP · Feb 7, 2022 · Tags: Dependency Parsing, Knowledge Distillation · Code: Unverified (0)
Cross domain knowledge compression in realtime optical flow prediction on ultrasound sequences · Feb 4, 2022 · Tags: Knowledge Distillation, Optical Flow Estimation · Code: Unverified (0)
Bootstrapped Representation Learning for Skeleton-Based Action Recognition · Feb 4, 2022 · Tags: Action Recognition, Data Augmentation · Code: Unverified (0)
Iterative Self Knowledge Distillation -- From Pothole Classification to Fine-Grained and COVID Recognition · Feb 4, 2022 · Tags: Classification, Knowledge Distillation · Code: Unverified (0)
Local Feature Matching with Transformers for low-end devices · Feb 1, 2022 · Tags: Knowledge Distillation · Code: Available (1)
Deep-Disaster: Unsupervised Disaster Detection and Localization Using Visual Data · Jan 31, 2022 · Tags: Humanitarian, Knowledge Distillation · Code: Available (0)
Improving Robustness by Enhancing Weak Subnets · Jan 30, 2022 · Tags: Adversarial Robustness, Data Augmentation · Code: Available (0)
Win the Lottery Ticket via Fourier Analysis: Frequencies Guided Network Pruning · Jan 30, 2022 · Tags: Knowledge Distillation, Network Pruning · Code: Unverified (0)
AutoDistil: Few-shot Task-agnostic Neural Architecture Search for Distilling Large Language Models · Jan 29, 2022 · Tags: Inductive Bias, Knowledge Distillation · Code: Unverified (0)
Global-Reasoned Multi-Task Learning Model for Surgical Scene Understanding · Jan 28, 2022 · Tags: Graph Attention, Knowledge Distillation · Code: Available (1)
Dynamic Rectification Knowledge Distillation · Jan 27, 2022 · Tags: Edge-computing, Knowledge Distillation · Code: Available (0)
Anomaly Detection via Reverse Distillation from One-Class Embedding · Jan 26, 2022 · Tags: Anomaly Classification · Code: Available (2)
Adaptive Instance Distillation for Object Detection in Autonomous Driving · Jan 26, 2022 · Tags: Autonomous Driving, Knowledge Distillation · Code: Unverified (0)
TrustAL: Trustworthy Active Learning using Knowledge Distillation · Jan 26, 2022 · Tags: Active Learning, Diversity · Code: Unverified (0)
One Student Knows All Experts Know: From Sparse to Dense · Jan 26, 2022 · Tags: All, Knowledge Distillation · Code: Unverified (0)
Attentive Task Interaction Network for Multi-Task Learning · Jan 25, 2022 · Tags: Decoder, Knowledge Distillation · Code: Available (0)
Jointly Learning Knowledge Embedding and Neighborhood Consensus with Relational Knowledge Distillation for Entity Alignment · Jan 25, 2022 · Tags: Benchmarking, Entity Alignment · Code: Unverified (0)
Federated Unlearning with Knowledge Distillation · Jan 24, 2022 · Tags: Federated Learning, Knowledge Distillation · Code: Unverified (0)
AutoDistill: an End-to-End Framework to Explore and Distill Hardware-Efficient Language Models · Jan 21, 2022 · Tags: Bayesian Optimization, Knowledge Distillation · Code: Unverified (0)
Image-to-Video Re-Identification via Mutual Discriminative Knowledge Transfer · Jan 21, 2022 · Tags: Knowledge Distillation, Transfer Learning · Code: Unverified (0)
Can Model Compression Improve NLP Fairness · Jan 21, 2022 · Tags: Fairness, Knowledge Distillation · Code: Unverified (0)
UKD: Debiasing Conversion Rate Estimation via Uncertainty-regularized Knowledge Distillation · Jan 20, 2022 · Tags: Knowledge Distillation, Selection bias · Code: Unverified (0)
Improving Neural Machine Translation by Denoising Training · Jan 19, 2022 · Tags: Denoising, Knowledge Distillation · Code: Unverified (0)
Continual Coarse-to-Fine Domain Adaptation in Semantic Segmentation · Jan 18, 2022 · Tags: Domain Adaptation, Knowledge Distillation · Code: Available (0)
It's All in the Head: Representation Knowledge Distillation through Classifier Sharing · Jan 18, 2022 · Tags: All, Classification · Code: Available (1)
Cross-modal Contrastive Distillation for Instructional Activity Anticipation · Jan 18, 2022 · Tags: Knowledge Distillation · Code: Unverified (0)
Knowledge Distillation as Self-Supervised Learning · Jan 17, 2022 · Tags: Knowledge Distillation, Self-Supervised Learning · Code: Unverified (0)
Tree Knowledge Distillation for Compressing Transformer-Based Language Models · Jan 16, 2022 · Tags: Knowledge Distillation · Code: Unverified (0)
Learning Cross-Lingual IR from an English Retriever · Jan 16, 2022 · Tags: Cross-Lingual Information Retrieval, Information Retrieval · Code: Unverified (0)
Nearest Neighbor Knowledge Distillation for Neural Machine Translation · Jan 16, 2022 · Tags: Knowledge Distillation, Machine Translation · Code: Unverified (0)
Transferring Knowledge from Structure-aware Self-attention Language Model to Sequence-to-Sequence Semantic Parsing · Jan 16, 2022 · Tags: Code Generation, Knowledge Distillation · Code: Unverified (0)
KD-VLP: Improving End-to-End Vision-and-Language Pretraining with Object Knowledge Distillation · Jan 16, 2022 · Tags: cross-modal alignment, Knowledge Distillation · Code: Unverified (0)
Re2G: Retrieve, Rerank, Generate · Jan 16, 2022 · Tags: Fact Checking, GPU · Code: Unverified (0)
CL-ReKD: Cross-lingual Knowledge Distillation for Multilingual Retrieval Question Answering · Jan 16, 2022 · Tags: Knowledge Distillation, Language Modeling · Code: Unverified (0)
MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation · Jan 16, 2022 · Tags: Knowledge Distillation, Mixture-of-Experts · Code: Unverified (0)
SimReg: Regression as a Simple Yet Effective Tool for Self-supervised Knowledge Distillation · Jan 13, 2022 · Tags: Knowledge Distillation, regression · Code: Available (1)
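Most of the papers listed above extend or modify the standard softened-logit distillation objective of Hinton et al. (2015): the student is trained on a mix of KL divergence against the teacher's temperature-softened predictions and ordinary cross-entropy against the hard label. As a reference point, here is a minimal NumPy sketch of that baseline loss; the function names, the default temperature/alpha values, and the `1e-12` smoothing constant are illustrative choices, not taken from any listed paper.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()            # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, label, T=2.0, alpha=0.5):
    """Classic distillation objective: a convex combination of
    (a) KL(teacher_T || student_T) on temperature-softened logits,
        scaled by T^2 so gradients stay comparable across temperatures,
    (b) ordinary cross-entropy of the student against the hard label."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = float(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))))
    ce = -float(np.log(softmax(student_logits)[label] + 1e-12))
    return alpha * (T ** 2) * kl + (1.0 - alpha) * ce
```

When the student's logits match the teacher's exactly, the KL term vanishes and only the hard-label cross-entropy remains; a mismatched student is penalized by both terms.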