Triple Sparsification of Graph Convolutional Networks without Sacrificing the Accuracy | Aug 6, 2022 | Graph Learning, Model Compression
Model Blending for Text Classification | Aug 5, 2022 | Classification, Machine Translation
Quiver neural networks | Jul 26, 2022 | Model Compression
Efficient model compression with Random Operation Access Specific Tile (ROAST) hashing | Jul 21, 2022 | Model Compression
Model Compression for Resource-Constrained Mobile Robots | Jul 20, 2022 | Knowledge Distillation | Code Available
T-RECX: Tiny-Resource Efficient Convolutional neural networks with early-eXit | Jul 14, 2022 | Image Classification
Normalized Feature Distillation for Semantic Segmentation | Jul 12, 2022 | Knowledge Distillation, Model Compression
Rank-Based Filter Pruning for Real-Time UAV Tracking | Jul 5, 2022 | Deep Learning, Model Compression
Quantum Neural Network Compression | Jul 4, 2022 | Model Compression, Neural Network Compression
KroneckerBERT: Significant Compression of Pre-trained Language Models Through Kronecker Decomposition and Knowledge Distillation | Jul 1, 2022 | Knowledge Distillation, Language Modeling
PCEE-BERT: Accelerating BERT Inference via Patient and Confident Early Exiting | Jul 1, 2022 | Model Compression
Language model compression with weighted low-rank factorization | Jun 30, 2022 | Language Modeling | Code Available
QUIDAM: A Framework for Quantization-Aware DNN Accelerator and Model Co-Exploration | Jun 30, 2022 | Model Compression, Quantization
QTI Submission to DCASE 2021: residual normalization for device-imbalanced acoustic scene classification with efficient design | Jun 28, 2022 | Acoustic Scene Classification, Knowledge Distillation
Fundamental Limits of Communication Efficiency for Model Aggregation in Distributed Learning: A Rate-Distortion Approach | Jun 28, 2022 | Model Compression, Quantization
Representative Teacher Keys for Knowledge Distillation Model Compression Based on Attention Mechanism for Image Classification | Jun 26, 2022 | GPU, Image Classification
An Automatic and Efficient BERT Pruning for Edge AI Systems | Jun 21, 2022 | CPU, Model Compression
Knowledge Distillation for Oriented Object Detection on Aerial Images | Jun 20, 2022 | Knowledge Distillation, Model Compression
Revisiting Self-Distillation | Jun 17, 2022 | Knowledge Distillation, Model Compression
Accelerating Inference and Language Model Fusion of Recurrent Neural Network Transducers via End-to-End 4-bit Quantization | Jun 16, 2022 | Language Modeling
Atrial Fibrillation Detection Using Weight-Pruned, Log-Quantised Convolutional Neural Networks | Jun 14, 2022 | Atrial Fibrillation Detection, Model Compression
STD-NET: Search of Image Steganalytic Deep-learning Architecture via Hierarchical Tensor Decomposition | Jun 12, 2022 | Model Compression, Steganalysis
A Theoretical Understanding of Neural Network Compression from Sparse Linear Approximation | Jun 11, 2022 | Model Compression, Neural Network Compression | Code Available
HideNseek: Federated Lottery Ticket via Server-side Pruning and Sign Supermask | Jun 9, 2022 | Federated Learning, Model Compression
Differentially Private Model Compression | Jun 3, 2022 | Model Compression
Canonical convolutional neural networks | Jun 3, 2022 | Model Compression
Resource Allocation for Compression-aided Federated Learning with High Distortion Rate | Jun 2, 2022 | Federated Learning, Model Compression | Code Available
MiniDisc: Minimal Distillation Schedule for Language Model Compression | May 29, 2022 | Knowledge Distillation, Language Modeling
Do we need Label Regularization to Fine-tune Pre-trained Language Models? | May 25, 2022 | Knowledge Distillation, Model Compression | Code Available
Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models | May 25, 2022 | Model Compression, Quantization
Aligning Logits Generatively for Principled Black-Box Knowledge Distillation | May 21, 2022 | Federated Learning, Knowledge Distillation
InDistill: Information flow-preserving knowledge distillation for model compression | May 20, 2022 | Knowledge Distillation, Model Compression | Code Available
Dimensionality Reduced Training by Pruning and Freezing Parts of a Deep Neural Network, a Survey | May 17, 2022 | Model Compression, Survey | Code Available
Perturbation of Deep Autoencoder Weights for Model Compression and Classification of Tabular Data | May 17, 2022 | BIG-bench Machine Learning, Classification
QAPPA: Quantization-Aware Power, Performance, and Area Modeling of DNN Accelerators | May 17, 2022 | Model Compression, Quantization
Chemical transformer compression for accelerating both training and inference of molecular modeling | May 16, 2022 | Knowledge Distillation, Model Compression
DNA data storage, sequencing data-carrying DNA | May 11, 2022 | Model Compression | Code Available
Serving and Optimizing Machine Learning Workflows on Heterogeneous Infrastructures | May 10, 2022 | AutoML, BIG-bench Machine Learning
Data-Free Adversarial Knowledge Distillation for Graph Neural Networks | May 8, 2022 | Generative Adversarial Network, Graph Classification
Automatic Block-wise Pruning with Auxiliary Gating Structures for Deep Convolutional Neural Networks | May 7, 2022 | Knowledge Distillation, Model Compression
Online Model Compression for Federated Learning with Large Models | May 6, 2022 | Federated Learning, Model Compression
Can collaborative learning be private, robust and scalable? | May 5, 2022 | Adversarial Robustness, Federated Learning
Multi-Granularity Structural Knowledge Distillation for Language Model Compression | May 1, 2022 | Knowledge Distillation, Language Modeling
Towards Feature Distribution Alignment and Diversity Enhancement for Data-Free Quantization | Apr 30, 2022 | Data-Free Quantization, Diversity | Code Available
Enable Deep Learning on Mobile Devices: Methods, Systems, and Applications | Apr 25, 2022 | AutoML, Deep Learning
Neural Network Pruning by Cooperative Coevolution | Apr 12, 2022 | Evolutionary Algorithms, Model Compression
Characterizing and Understanding the Behavior of Quantized Models for Reliable Deployment | Apr 8, 2022 | Image-to-Text, Language Modeling
Enabling All In-Edge Deep Learning: A Literature Review | Apr 7, 2022 | Deep Learning | Code Available
LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification | Apr 6, 2022 | Model Compression
Aligned Weight Regularizers for Pruning Pretrained Neural Networks | Apr 4, 2022 | Language Modeling, Model Compression | Code Available