The Effect of Model Compression on Fairness in Facial Expression Recognition (Jan 5, 2022): Facial Expression Recognition (FER)
The Impact of Quantization and Pruning on Deep Reinforcement Learning Models (Jul 5, 2024): Deep Reinforcement Learning, Model Compression
The Knowledge Within: Methods for Data-Free Model Compression (Dec 3, 2019): Model Compression
The Lottery LLM Hypothesis, Rethinking What Abilities Should LLM Compression Preserve? (Feb 24, 2025): Arithmetic Reasoning, Common Sense Reasoning
Theoretical Guarantees for Low-Rank Compression of Deep Neural Networks (Feb 4, 2025): Low-Rank Compression, Model Compression
The Potential of AutoML for Recommender Systems (Feb 6, 2024): AutoML, Machine Translation
Three Dimensional Convolutional Neural Network Pruning with Regularization-Based Method (Nov 19, 2018): Model Compression, Network Pruning
Tight Compression: Compressing CNN Through Fine-Grained Pruning and Weight Permutation for Efficient Implementation (Apr 3, 2021): Model Compression
Time-Correlated Sparsification for Efficient Over-the-Air Model Aggregation in Wireless Federated Learning (Feb 17, 2022): Federated Learning, Model Compression
Tiny but Accurate: A Pruned, Quantized and Optimized Memristor Crossbar Framework for Ultra Efficient DNN Implementation (Aug 27, 2019): Model Compression, Quantization
TinyM^2Net-V3: Memory-Aware Compressed Multimodal Deep Neural Networks for Sustainable Edge Deployment (May 20, 2024): Knowledge Distillation, Model Compression
TinyR1-32B-Preview: Boosting Accuracy with Branch-Merge Distillation (Mar 6, 2025): Model Compression, Transfer Learning
To Compress, or Not to Compress: Characterizing Deep Learning Model Compression for Embedded Inference (Oct 21, 2018): Deep Learning, Image Classification
To Know Where We Are: Vision-Based Positioning in Outdoor Environments (Jun 19, 2015): Image Registration, Model Compression
Topology Distillation for Recommender System (Jun 16, 2021): Knowledge Distillation, Model Compression
torchdistill: A Modular, Configuration-Driven Framework for Knowledge Distillation (Nov 25, 2020): Image Classification, Instance Segmentation
Toward Extremely Low Bit and Lossless Accuracy in DNNs with Progressive ADMM (May 2, 2019): Model Compression, Quantization
Toward Real-World Voice Disorder Classification (Dec 5, 2021): Classification, Model Compression
Towards Accurate Post-Training Quantization for Vision Transformer (Mar 25, 2023): Model Compression, Quantization
Towards a tailored mixed-precision sub-8-bit quantization scheme for Gated Recurrent Units using Genetic Algorithms (Feb 19, 2024): Model Compression, Quantization
Towards Better Parameter-Efficient Fine-Tuning for Large Language Models: A Position Paper (Nov 22, 2023): Model Compression, Parameter-Efficient Fine-Tuning
Towards Building a Real Time Mobile Device Bird Counting System Through Synthetic Data Training and Model Compression (Dec 15, 2019): Crowd Counting, Model Compression
Towards domain generalisation in ASR with elitist sampling and ensemble knowledge distillation (Mar 1, 2023): Domain Adaptation, Knowledge Distillation
Towards efficient deep autoencoders for multivariate time series anomaly detection (Mar 4, 2024): Anomaly Detection, Model Compression
Towards Efficient Deep Spiking Neural Networks Construction with Spiking Activity based Pruning (Jun 3, 2024): Model Compression, Network Pruning
Towards Efficient Full 8-bit Integer DNN Online Training on Resource-limited Devices without Batch Normalization (May 27, 2021): Model Compression, Quantization
Towards Efficient Tensor Decomposition-Based DNN Model Compression with Optimization Framework (Jul 26, 2021): Image Classification
Towards Higher Ranks via Adversarial Weight Pruning (Nov 29, 2023): Model Compression, Network Pruning
Towards Modality Transferable Visual Information Representation with Optimal Model Compression (Aug 13, 2020): Model Compression, Philosophy
Towards Optimal Compression: Joint Pruning and Quantization (Feb 15, 2023): Model Compression, Neural Architecture Search
Towards Superior Quantization Accuracy: A Layer-sensitive Approach (Mar 9, 2025): Logical Reasoning, Model Compression
Do we need Label Regularization to Fine-tune Pre-trained Language Models? (May 25, 2022): Knowledge Distillation, Model Compression
Towards Zero-Shot Knowledge Distillation for Natural Language Processing (Dec 31, 2020): Knowledge Distillation, Model Compression
Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models (May 25, 2022): Model Compression, Quantization
Training Acceleration of Low-Rank Decomposed Networks using Sequential Freezing and Rank Quantization (Sep 7, 2023): Model Compression, Quantization
T-RECX: Tiny-Resource Efficient Convolutional neural networks with early-eXit (Jul 14, 2022): Image Classification
TrimLLM: Progressive Layer Dropping for Domain-Specific LLMs (Dec 15, 2024): Model Compression, Quantization
Trimming Down Large Spiking Vision Transformers via Heterogeneous Quantization Search (Dec 7, 2024): Model Compression, Quantization
Triple Sparsification of Graph Convolutional Networks without Sacrificing the Accuracy (Aug 6, 2022): Graph Learning, Model Compression
Tuning Algorithms and Generators for Efficient Edge Inference (Jul 31, 2019): Model Compression
TutorNet: Towards Flexible Knowledge Distillation for End-to-End Speech Recognition (Aug 3, 2020): Knowledge Distillation, Model Compression
TwinDNN: A Tale of Two Deep Neural Networks (Jan 1, 2021): Image Classification
Two-Bit Networks for Deep Learning on Resource-Constrained Embedded Devices (Jan 2, 2017): Computational Efficiency, General Classification
Two is Better than One: Efficient Ensemble Defense for Robust and Compact Models (Apr 7, 2025): Adversarial Robustness, Diversity
Two-Pass End-to-End ASR Model Compression (Jan 8, 2022): Decoder, Knowledge Distillation
Two-Step Knowledge Distillation for Tiny Speech Enhancement (Sep 15, 2023): Knowledge Distillation, Model Compression
UDC: Unified DNAS for Compressible TinyML Models (Jan 15, 2022): Model Compression, Neural Architecture Search
Understanding and Improving Knowledge Distillation (Feb 10, 2020): Knowledge Distillation, Model Compression
Understanding LLMs: A Comprehensive Overview from Training to Inference (Jan 4, 2024): Language Modelling
Unleashing Channel Potential: Space-Frequency Selection Convolution for SAR Object Detection (Jan 1, 2024): Feature Selection, Model Compression