LENAS: Learning-based Neural Architecture Search and Ensemble for 3D Radiotherapy Dose Prediction | Jun 12, 2021 | Diversity, Ensemble Learning | Code available
RefBERT: Compressing BERT by Referencing to Pre-computed Representations | Jun 11, 2021 | Knowledge Distillation | Unverified
Generate, Annotate, and Learn: NLP with Synthetic Text | Jun 11, 2021 | Few-Shot Learning, Image Classification | Code available
Does Knowledge Distillation Really Work? | Jun 10, 2021 | Knowledge Distillation | Code available
Marginal Utility Diminishes: Exploring the Minimum Knowledge for BERT Knowledge Distillation | Jun 10, 2021 | Knowledge Distillation | Code available
AKE-GNN: Effective Graph Learning with Adaptive Knowledge Exchange | Jun 10, 2021 | Classification, Graph Classification | Unverified
Knowledge distillation: A good teacher is patient and consistent | Jun 9, 2021 | Image Classification, Knowledge Distillation | Code available
Distilling Image Classifiers in Object Detectors | Jun 9, 2021 | Knowledge Distillation, Object | Code available
XtremeDistilTransformers: Task Transfer for Task-agnostic Distillation | Jun 8, 2021 | Knowledge Distillation, NER | Code available
Learning by Distillation: A Self-Supervised Learning Framework for Optical Flow Estimation | Jun 8, 2021 | Knowledge Distillation, Optical Flow Estimation | Unverified
BERT Learns to Teach: Knowledge Distillation with Meta Learning | Jun 8, 2021 | Knowledge Distillation, Meta-Learning | Code available
RoSearch: Search for Robust Student Architectures When Distilling Pre-trained Language Models | Jun 7, 2021 | Adversarial Robustness, Knowledge Distillation | Unverified
Zero-Shot Knowledge Distillation from a Decision-Based Black-Box Model | Jun 7, 2021 | Knowledge Distillation | Code available
Preservation of the Global Knowledge by Not-True Distillation in Federated Learning | Jun 6, 2021 | Continual Learning, Federated Learning | Code available
Bidirectional Distillation for Top-K Recommender System | Jun 5, 2021 | Knowledge Distillation, Model Compression | Code available
MergeDistill: Merging Pre-trained Language Models using Distillation | Jun 5, 2021 | Cross-Lingual Transfer, Knowledge Distillation | Unverified
ERNIE-Tiny: A Progressive Distillation Framework for Pretrained Transformer Compression | Jun 4, 2021 | Knowledge Distillation | Code available
Not All Knowledge Is Created Equal: Mutual Distillation of Confident Knowledge | Jun 2, 2021 | Knowledge Distillation | Unverified
Rejuvenating Low-Frequency Words: Making the Most of Parallel Data in Non-Autoregressive Translation | Jun 2, 2021 | Knowledge Distillation, Translation | Code available
One Teacher is Enough? Pre-trained Language Model Distillation from Multiple Teachers | Jun 2, 2021 | Knowledge Distillation, Language Modeling | Unverified
Modality-specific Distillation | Jun 1, 2021 | Knowledge Distillation, Meta-Learning | Unverified
Cost-effective Deployment of BERT Models in Serverless Environment | Jun 1, 2021 | Knowledge Distillation, Semantic Textual Similarity | Unverified
Continual Learning for Neural Machine Translation | Jun 1, 2021 | Continual Learning, Knowledge Distillation | Unverified
Multi-Grained Knowledge Distillation for Named Entity Recognition | Jun 1, 2021 | Knowledge Distillation, Named Entity Recognition | Unverified
Towards Quantifiable Dialogue Coherence Evaluation | Jun 1, 2021 | Coherence Evaluation, Dialogue Evaluation | Code available
Claim Matching Beyond English to Scale Global Fact-Checking | Jun 1, 2021 | Fact Checking, Knowledge Distillation | Unverified
Natural Statistics of Network Activations and Implications for Knowledge Distillation | Jun 1, 2021 | Knowledge Distillation | Unverified
Reinforced Iterative Knowledge Distillation for Cross-Lingual Named Entity Recognition | Jun 1, 2021 | Cross-Lingual NER, Knowledge Distillation | Unverified
Greedy-layer Pruning: Speeding up Transformer Models for Natural Language Processing | May 31, 2021 | Knowledge Distillation, Unsupervised Pre-training | Code available
Transformer-Based Source-Free Domain Adaptation | May 28, 2021 | Domain Adaptation, Knowledge Distillation | Code available
Knowledge Inheritance for Pre-trained Language Models | May 28, 2021 | Domain Adaptation, Knowledge Distillation | Code available
FReTAL: Generalizing Deepfake Detection using Knowledge Distillation and Representation Learning | May 28, 2021 | DeepFake Detection, Domain Adaptation | Unverified
Not Far Away, Not So Close: Sample Efficient Nearest Neighbour Data Augmentation via MiniMax | May 28, 2021 | Data Augmentation, Knowledge Distillation | Code available
Fair Feature Distillation for Visual Recognition | May 27, 2021 | Fairness, Knowledge Distillation | Unverified
Towards Understanding Knowledge Distillation | May 27, 2021 | Knowledge Distillation, Transfer Learning | Unverified
How Does Distilled Data Complexity Impact the Quality and Confidence of Non-Autoregressive Machine Translation? | May 27, 2021 | Diversity, Knowledge Distillation | Unverified
Selective Knowledge Distillation for Neural Machine Translation | May 27, 2021 | Knowledge Distillation, Machine Translation | Code available
Joint-DetNAS: Upgrade Your Detector with NAS, Pruning and Dynamic Distillation | May 27, 2021 | Knowledge Distillation, Neural Architecture Search | Unverified
Honest-but-Curious Nets: Sensitive Attributes of Private Inputs Can Be Secretly Coded into the Classifiers' Outputs | May 25, 2021 | Attribute, Knowledge Distillation | Code available
Real-time Monocular Depth Estimation with Sparse Supervision on Mobile | May 25, 2021 | Autonomous Vehicles, Depth Estimation | Unverified
KnowSR: Knowledge Sharing among Homogeneous Agents in Multi-agent Reinforcement Learning | May 25, 2021 | Deep Reinforcement Learning, Knowledge Distillation | Unverified
Experimenting with Knowledge Distillation techniques for performing Brain Tumor Segmentation | May 24, 2021 | Brain Tumor Segmentation, Knowledge Distillation | Unverified
AirNet: Neural Network Transmission over the Air | May 24, 2021 | Knowledge Distillation | Unverified
Revisiting Knowledge Distillation for Object Detection | May 22, 2021 | Domain Adaptation, Knowledge Distillation | Unverified
Backdoor Attacks on Self-Supervised Learning | May 21, 2021 | Backdoor Attack, Inductive Bias | Code available
Intra-Document Cascading: Learning to Select Passages for Neural Document Ranking | May 20, 2021 | Document Ranking, Knowledge Distillation | Code available
Data-Free Knowledge Distillation for Heterogeneous Federated Learning | May 20, 2021 | Data-free Knowledge Distillation, Federated Learning | Code available
Comparing Kullback-Leibler Divergence and Mean Squared Error Loss in Knowledge Distillation | May 19, 2021 | Image Classification, Knowledge Distillation | Code available
Weakly Supervised Dense Video Captioning via Jointly Usage of Knowledge Distillation and Cross-modal Matching | May 18, 2021 | Caption Generation, Cross-Modal Retrieval | Unverified
Inplace knowledge distillation with teacher assistant for improved training of flexible deep neural networks | May 18, 2021 | Image Classification | Unverified
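Most entries in this listing revolve around the standard knowledge-distillation objective, and one of them ("Comparing Kullback-Leibler Divergence and Mean Squared Error Loss in Knowledge Distillation") contrasts the two most common choices of loss. As context for readers browsing the list, here is a minimal plain-Python sketch of both objectives; the function names and the temperature default are illustrative and are not taken from any of the listed papers:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_kl_loss(teacher_logits, student_logits, temperature=4.0):
    """Hinton-style distillation term: KL(teacher || student) between
    temperature-softened distributions, scaled by T^2 so gradient
    magnitudes stay comparable across temperatures."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl

def kd_mse_loss(teacher_logits, student_logits):
    """Logit-matching alternative: mean squared error between raw logits."""
    n = len(teacher_logits)
    return sum((t - s) ** 2 for t, s in zip(teacher_logits, student_logits)) / n
```

Both losses reach zero exactly when the student reproduces the teacher's logits; they differ in how they penalize mismatches (the KL term weights errors by the teacher's softened probabilities, while MSE treats all logit dimensions equally).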