- An Efficient Method of Training Small Models for Regression Problems with Knowledge Distillation (Feb 28, 2020). Tags: Knowledge Distillation, Memorization
- Distilling Inductive Bias: Knowledge Distillation Beyond Model Compression (Sep 30, 2023). Tags: Inductive Bias, Knowledge Distillation
- BinaryBERT: Pushing the Limit of BERT Quantization (Dec 31, 2020). Tags: Binarization, Model Compression
- An Effective Information Theoretic Framework for Channel Pruning (Aug 14, 2024). Tags: Model Compression
- AdaKD: Dynamic Knowledge Distillation of ASR Models Using Adaptive Loss Weighting (May 11, 2024). Tags: Knowledge Distillation, Model Compression
- Accelerating Deep Learning with Dynamic Data Pruning (Nov 24, 2021). Tags: Attribute, Deep Learning
- DopQ-ViT: Towards Distribution-Friendly and Outlier-Aware Post-Training Quantization for Vision Transformers (Aug 6, 2024). Tags: Model Compression, Quantization
- 2-bit Model Compression of Deep Convolutional Neural Network on ASIC Engine for Image Retrieval (May 8, 2019). Tags: Image Retrieval, Model Compression
- DistilDoc: Knowledge Distillation for Visually-Rich Document Applications (Jun 12, 2024). Tags: Document Image Classification
- Bias in Pruned Vision Models: In-Depth Analysis and Countermeasures (Apr 25, 2023). Tags: Model Compression, Network Pruning
- An Automatic and Efficient BERT Pruning for Edge AI Systems (Jun 21, 2022). Tags: CPU, Model Compression
- Discrete Model Compression With Resource Constraint for Deep Neural Networks (Jun 1, 2020). Tags: Model Compression
- Beyond the Tip of Efficiency: Uncovering the Submerged Threats of Jailbreak Attacks in Small Language Models (Feb 27, 2025). Tags: Knowledge Distillation, Model Compression
- DipSVD: Dual-Importance Protected SVD for Efficient LLM Compression (Jun 25, 2025). Tags: Model Compression, Quantization
- DiPaCo: Distributed Path Composition (Mar 15, 2024). Tags: Language Modelling, Model Compression
- Analysis of Quantization on MLP-based Vision Models (Sep 14, 2022). Tags: Model Compression, Quantization
- AdaDeep: A Usage-Driven, Automated Deep Model Compression Framework for Enabling Ubiquitous Intelligent Mobiles (Jun 8, 2020). Tags: Model Compression
- Dimensionality Reduced Training by Pruning and Freezing Parts of a Deep Neural Network, a Survey (May 17, 2022). Tags: Model Compression, Survey
- Beware of Calibration Data for Pruning Large Language Models (Oct 23, 2024). Tags: Model Compression
- Differential Privacy Meets Federated Learning under Communication Constraints (Jan 28, 2021). Tags: Federated Learning, Model Compression
- Differentially Private Model Compression (Jun 3, 2022). Tags: Model Compression
- Analysis of Memory Consumption by Neural Networks Based on Hyperparameters (Oct 21, 2021). Tags: Deep Learning, Model Compression
- Differentiable Sparsification for Deep Neural Networks (May 21, 2021). Tags: Feature Engineering, Model Compression
- Differentiable Sparsification for Deep Neural Networks (Oct 8, 2019). Tags: Feature Engineering, Model Compression
- Differentiable Network Pruning for Microcontrollers (Oct 15, 2021). Tags: Model Compression, Network Pruning
- Benchmarking Adversarial Robustness of Compressed Deep Learning Models (Aug 16, 2023). Tags: Adversarial Robustness, Benchmarking
- An Algorithm-Hardware Co-Optimized Framework for Accelerating N:M Sparse Transformers (Aug 12, 2022). Tags: Computational Efficiency, Model Compression
- ACAM-KD: Adaptive and Cooperative Attention Masking for Knowledge Distillation (Mar 8, 2025). Tags: Autonomous Driving, Feature Selection
- Differentiable Mask for Pruning Convolutional and Recurrent Networks (Sep 10, 2019). Tags: Model Compression, Multi-Task Learning
- BD-KD: Balancing the Divergences for Online Knowledge Distillation (Dec 25, 2022). Tags: Knowledge Distillation, Model Compression
- Differentiable Feature Aggregation Search for Knowledge Distillation (Aug 2, 2020). Tags: Knowledge Distillation, Model Compression
- Differentiable Architecture Compression (Jan 1, 2020). Tags: Image Classification
- An Efficient Real-Time Object Detection Framework on Resource-Constrained Hardware Devices via Software and Hardware Co-design (Aug 2, 2024). Tags: Model Compression, Neural Network Compression
- Developing Far-Field Speaker System via Teacher-Student Learning (Apr 14, 2018). Tags: Keyword Spotting, Model Compression
- Design Automation for Fast, Lightweight, and Effective Deep Learning Models: A Survey (Aug 22, 2022). Tags: Deep Learning, Edge Computing
- Bayesian Federated Model Compression for Communication and Computation Efficiency (Apr 11, 2024). Tags: Bayesian Inference, Federated Learning
- Design and Prototyping Distributed CNN Inference Acceleration in Edge Computing (Nov 24, 2022). Tags: Distributed Computing, Edge Computing
- Bayesian Deep Learning via Expectation Maximization and Turbo Deep Approximate Message Passing (Feb 12, 2024). Tags: Bayesian Inference, Federated Learning
- A Model Compression Method with Matrix Product Operators for Speech Enhancement (Oct 10, 2020). Tags: Model Compression, Speech Enhancement
- Activation Sparsity Opportunities for Compressing General Large Language Models (Dec 13, 2024). Tags: Model Compression
- Deploying Foundation Model Powered Agent Services: A Survey (Dec 18, 2024). Tags: Model Compression
- Dependency-Aware Semi-Structured Sparsity of GLU Variants in Large Language Models (May 3, 2024). Tags: Computational Efficiency, Model Compression
- Dense Vision Transformer Compression with Few Samples (Mar 27, 2024). Tags: Model Compression
- A Mixed Integer Programming Approach for Verifying Properties of Binarized Neural Networks (Mar 11, 2022). Tags: Collision Avoidance, Model Compression
- Densely Distilling Cumulative Knowledge for Continual Learning (May 16, 2024). Tags: Continual Learning
- Delving Deep into Semantic Relation Distillation (Mar 27, 2025). Tags: Knowledge Distillation, Model Compression
- Balancing Specialization, Generalization, and Compression for Detection and Tracking (Sep 25, 2019). Tags: Model Compression
- DeGAN: Data-Enriching GAN for Retrieving Representative Samples from a Trained Classifier (Dec 27, 2019). Tags: Data-free Knowledge Distillation, Incremental Learning
- DeepTwist: Learning Model Compression via Occasional Weight Distortion (Oct 30, 2018). Tags: Model Compression
- Balancing Cost and Benefit with Tied-Multi Transformers (Feb 20, 2020). Tags: Decoder, Knowledge Distillation