- Domain-invariant Feature Exploration for Domain Generalization (Jul 25, 2022) — Diversity, Domain Generalization
- HIRE: Distilling High-order Relational Knowledge From Heterogeneous Graph Neural Networks (Jul 25, 2022) — Knowledge Distillation, Vocal Bursts Intensity Prediction
- Spatial-Channel Token Distillation for Vision MLPs (Jul 23, 2022) — Image Classification, Knowledge Distillation
- [code available] Handling Data Heterogeneity in Federated Learning via Knowledge Distillation and Fusion (Jul 23, 2022) — Data-free Knowledge Distillation, Fairness
- [code available] Few-Shot Class-Incremental Learning via Entropy-Regularized Data-Free Replay (Jul 22, 2022) — Class-Incremental Learning
- [code available] Federated Semi-Supervised Domain Adaptation via Knowledge Transfer (Jul 21, 2022) — Domain Adaptation, Federated Learning
- TinyViT: Fast Pretraining Distillation for Small Vision Transformers (Jul 21, 2022) — Image Classification, Knowledge Distillation
- Aware of the History: Trajectory Forecasting with the Local Behavior Data (Jul 20, 2022) — Knowledge Distillation, Prediction
- Model Compression for Resource-Constrained Mobile Robots (Jul 20, 2022) — Knowledge Distillation, Model Compression
- Many-to-One Knowledge Distillation of Real-Time Epileptic Seizure Detection for Low-Power Wearable Internet of Things Systems (Jul 20, 2022) — Edge Computing, Knowledge Distillation
- Knowledge distillation with a class-aware loss for endoscopic disease detection (Jul 19, 2022) — Diagnostics, Knowledge Distillation
- Context Unaware Knowledge Distillation for Image Retrieval (Jul 19, 2022) — Image Retrieval, Knowledge Distillation
- [code available] Learning Knowledge Representation with Meta Knowledge Distillation for Single Image Super-Resolution (Jul 18, 2022) — Image Super-Resolution, Knowledge Distillation
- Subclass Knowledge Distillation with Known Subclass Labels (Jul 17, 2022) — Binary Classification, Knowledge Distillation
- TSPipe: Learn from Teacher Faster with Pipelines (Jul 17, 2022) — GPU, Knowledge Distillation
- [code available] SSMTL++: Revisiting Self-Supervised Multi-Task Learning for Video Anomaly Detection (Jul 16, 2022) — Anomaly Detection, Knowledge Distillation
- Deep versus Wide: An Analysis of Student Architectures for Task-Agnostic Knowledge Distillation of Self-Supervised Speech Models (Jul 14, 2022) — Automatic Speech Recognition (ASR)
- Rethinking Attention Mechanism in Time Series Classification (Jul 14, 2022) — Classification, Knowledge Distillation
- Dynamic Low-Resolution Distillation for Cost-Efficient End-to-End Text Spotting (Jul 14, 2022) — Global Optimization, Knowledge Distillation
- SlimSeg: Slimmable Semantic Segmentation with Boundary Supervision (Jul 13, 2022) — Knowledge Distillation, Segmentation
- Rich Feature Distillation with Feature Affinity Module for Efficient Image Dehazing (Jul 13, 2022) — Contrastive Learning, Image Classification
- DSPNet: Towards Slimmable Pretrained Networks based on Discriminative Self-supervised Learning (Jul 13, 2022) — Knowledge Distillation, Linear Evaluation
- Cross-Architecture Knowledge Distillation (Jul 12, 2022) — Knowledge Distillation
- Normalized Feature Distillation for Semantic Segmentation (Jul 12, 2022) — Knowledge Distillation, Model Compression
- Distilled Non-Semantic Speech Embeddings with Binary Neural Networks for Low-Resource Devices (Jul 12, 2022) — Emotion Recognition, Keyword Spotting
- [code available] 1st Place Solution to the EPIC-Kitchens Action Anticipation Challenge 2022 (Jul 10, 2022) — Action Anticipation, Knowledge Distillation
- Improving Streaming End-to-End ASR on Transformer-based Causal Models with Encoder States Revision Strategies (Jul 6, 2022) — Automatic Speech Recognition (ASR)
- Low-resource Low-footprint Wake-word Detection using Knowledge Distillation (Jul 6, 2022) — Knowledge Distillation, Speech Recognition
- GLANCE: Global to Local Architecture-Neutral Concept-based Explanations (Jul 5, 2022) — Disentanglement, Feature Importance
- [code available] PKD: General Distillation Framework for Object Detectors via Pearson Correlation Coefficient (Jul 5, 2022) — Knowledge Distillation, Object Detection
- A Generative Framework for Personalized Learning and Estimation: Theory, Algorithms, and Privacy (Jul 5, 2022) — Federated Learning, Knowledge Distillation
- ACT-Net: Asymmetric Co-Teacher Network for Semi-supervised Memory-efficient Medical Image Segmentation (Jul 5, 2022) — Image Segmentation, Knowledge Distillation
- [code available] VEM^2L: A Plug-and-play Framework for Fusing Text and Structure Knowledge on Sparse Knowledge Graph Completion (Jul 4, 2022) — Knowledge Distillation, Knowledge Graph Completion
- FasterAI: A Lightweight Library for Creating Sparse Neural Networks (Jul 3, 2022) — Knowledge Distillation
- PrUE: Distilling Knowledge from Sparse Teacher Networks (Jul 3, 2022) — Knowledge Distillation
- [code available] Speech Emotion: Investigating Model Representations, Multi-Task Learning and Knowledge Distillation (Jul 2, 2022) — Knowledge Distillation, Multi-Task Learning
- Lost in Distillation: A Case Study in Toxicity Modeling (Jul 1, 2022) — Knowledge Distillation
- Why Knowledge Distillation Amplifies Gender Bias and How to Mitigate from the Perspective of DistilBERT (Jul 1, 2022) — Knowledge Distillation
- Asynchronous Convergence in Multi-Task Learning via Knowledge Distillation from Converged Tasks (Jul 1, 2022) — Knowledge Distillation, Multi-Task Learning
- End-to-End Simultaneous Speech Translation with Pretraining and Distillation: Huawei Noah’s System for AutoSimTranS 2022 (Jul 1, 2022) — Decoder, Knowledge Distillation
- KroneckerBERT: Significant Compression of Pre-trained Language Models Through Kronecker Decomposition and Knowledge Distillation (Jul 1, 2022) — Knowledge Distillation, Language Modeling
- ListBERT: Learning to Rank E-commerce products with Listwise BERT (Jun 30, 2022) — Knowledge Distillation, Learning-To-Rank
- Extreme compression of sentence-transformer ranker models: faster inference, longer battery life, and less storage on edge devices (Jun 29, 2022) — Dimensionality Reduction, Knowledge Distillation
- Knowledge Distillation of Transformer-based Language Models Revisited (Jun 29, 2022) — GPU, Knowledge Distillation
- Cooperative Retriever and Ranker in Deep Recommenders (Jun 28, 2022) — Knowledge Distillation, Recommendation Systems
- [code available] QTI Submission to DCASE 2021: residual normalization for device-imbalanced acoustic scene classification with efficient design (Jun 28, 2022) — Acoustic Scene Classification, Knowledge Distillation
- Revisiting Architecture-aware Knowledge Distillation: Smaller Models and Faster Search (Jun 27, 2022) — Bayesian Optimization, Knowledge Distillation
- Representative Teacher Keys for Knowledge Distillation Model Compression Based on Attention Mechanism for Image Classification (Jun 26, 2022) — GPU, Image Classification
- Mixed Sample Augmentation for Online Distillation (Jun 24, 2022) — Data Augmentation, Knowledge Distillation
- Feature Representation Learning for Robust Retinal Disease Detection from Optical Coherence Tomography Images (Jun 24, 2022) — Decoder, Knowledge Distillation
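Most of the papers listed above extend the classic knowledge-distillation objective, in which a student network is trained to match the temperature-softened output distribution of a teacher. As a minimal NumPy sketch of that baseline loss (not the method of any specific paper above; function and variable names are illustrative):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a 1-D logit vector."""
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2
    so gradient magnitudes stay comparable across temperatures."""
    p = softmax(teacher_logits, temperature)  # soft teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return float(temperature ** 2 * np.sum(p * (np.log(p) - np.log(q))))

# A student that matches the teacher exactly incurs zero loss;
# a mismatched student incurs a positive loss.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))
print(distillation_loss([0.1, 1.0, 2.0], [2.0, 1.0, 0.1]))
```

In practice this term is combined with the ordinary cross-entropy on hard labels via a weighting coefficient; many of the entries above (feature, relational, or data-free distillation) replace or augment exactly this logit-matching term.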