- Neural Architecture Codesign for Fast Bragg Peak Analysis (Dec 10, 2023) [AutoML, Model Compression]
- Neural Network Compression for Noisy Storage Devices (Feb 15, 2021) [Model Compression, Neural Network Compression]
- Neural Network Compression using Binarization and Few Full-Precision Weights (Jun 15, 2023) [Binarization, CPU]
- Neural Network Compression Via Sparse Optimization (Nov 10, 2020) [Model Compression, Neural Network Compression]
- Neural Network Pruning by Cooperative Coevolution (Apr 12, 2022) [Evolutionary Algorithms, Model Compression]
- Neural Regularized Domain Adaptation for Chinese Word Segmentation (Dec 1, 2017) [Chinese Word Segmentation, Domain Adaptation]
- NeuSemSlice: Towards Effective DNN Model Maintenance via Neuron-level Semantic Slicing (Jul 26, 2024) [Model Compression, Semantic Similarity]
- Noisy Neural Network Compression for Analog Storage Devices (Oct 19, 2020) [Knowledge Distillation, Model Compression]
- Understanding the Performance Horizon of the Latest ML Workloads with NonGEMM Workloads (Apr 17, 2024) [Model Compression]
- Non-Structured DNN Weight Pruning -- Is It Beneficial in Any Platform? (Jul 3, 2019) [Model Compression, Quantization]
- Normalized Feature Distillation for Semantic Segmentation (Jul 12, 2022) [Knowledge Distillation, Model Compression]
- Norm Tweaking: High-performance Low-bit Quantization of Large Language Models (Sep 6, 2023) [Model Compression, Quantization]
- NurtureNet: A Multi-task Video-based Approach for Newborn Anthropometry (May 9, 2024) [Model Compression]
- NV-Embed: Improved Techniques for Training LLMs as Generalist Embedding Models (May 27, 2024) [Information Retrieval, Language Modelling]
- NVRC: Neural Video Representation Compression (Sep 11, 2024) [Model Compression, Quantization]
- oBERTa: Improving Sparse Transfer Learning via Improved Initialization, Distillation, and Pruning Regimes (Mar 30, 2023) [Knowledge Distillation, Model Compression]
- On Accelerating Edge AI: Optimizing Resource-Constrained Environments (Jan 25, 2025) [Knowledge Distillation, Model Compression]
- On Achieving Privacy-Preserving State-of-the-Art Edge Intelligence (Feb 10, 2023) [Edge Computing, Model Compression]
- Data-Independent Neural Pruning via Coresets (Jul 9, 2019) [Model Compression, Network Pruning]
- On Attention Redundancy: A Comprehensive Study (Jun 1, 2021) [Model Compression, Sentence]
- Onboard Optimization and Learning: A Survey (May 7, 2025) [Decision Making, Model Compression]
- Once-Tuning-Multiple-Variants: Tuning Once and Expanded as Multiple Vision-Language Model Variants (Jan 1, 2025) [Language Modelling]
- On-Device Document Classification using Multimodal Features (Jan 6, 2021) [Classification, Document Classification]
- On-Device Qwen2.5: Efficient LLM Inference with Model Compression and Hardware Acceleration (Apr 24, 2025) [CPU, Model Compression]
- One-Shot Model for Mixed-Precision Quantization (Jan 1, 2023) [Model Compression]
- One Teacher is Enough? Pre-trained Language Model Distillation from Multiple Teachers (Jun 2, 2021) [Knowledge Distillation, Language Modelling]
- One Weight Bitwidth to Rule Them All (Aug 22, 2020) [Image Classification]
- On Linearizing Structured Data in Encoder-Decoder Language Models: Insights from Text-to-SQL (Apr 3, 2024) [Decoder, Knowledge Graphs]
- Online Cross-Layer Knowledge Distillation on Graph Neural Networks with Deep Supervision (Oct 25, 2022) [Knowledge Distillation, Model Compression]
- Online Model Compression for Federated Learning with Large Models (May 6, 2022) [Federated Learning, Model Compression]
- On Multilingual Encoder Language Model Compression for Low-Resource Languages (May 22, 2025) [Knowledge Distillation, Language Modelling]
- On the Adversarial Robustness of Quantized Neural Networks (May 1, 2021) [Adversarial Robustness, Model Compression]
- On the Compression of Recurrent Neural Networks with an Application to LVCSR Acoustic Modeling for Embedded Speech Recognition (Mar 25, 2016) [Model Compression, Speech Recognition]
- On the Demystification of Knowledge Distillation: A Residual Network Perspective (Jun 30, 2020) [Knowledge Distillation, Model Compression]
- On the Effectiveness of Low-Rank Matrix Factorization for LSTM Model Compression (Aug 27, 2019) [Model Compression]
- On the Impact of Quantization and Pruning of Self-Supervised Speech Models for Downstream Speech Recognition Tasks "In-the-Wild" (Sep 25, 2023) [Data Augmentation, Model Compression]
- On the Social Bias of Speech Self-Supervised Models (Jun 7, 2024) [Model Compression, Self-Supervised Learning]
- Optimal Policy Sparsification and Low Rank Decomposition for Deep Reinforcement Learning (Mar 10, 2024) [Deep Reinforcement Learning, Edge Computing]
- Optimising TinyML with Quantization and Distillation of Transformer and Mamba Models for Indoor Localisation on Edge Devices (Dec 12, 2024) [Knowledge Distillation, Mamba]
- Optimization and Scalability of Collaborative Filtering Algorithms in Large Language Models (Dec 25, 2024) [Collaborative Filtering, Computational Efficiency]
- Optimize Deep Convolutional Neural Network with Ternarized Weights and High Accuracy (Jul 20, 2018) [Model Compression, Vocal Bursts Intensity Prediction]
- Optimizing LLMs for Resource-Constrained Environments: A Survey of Model Compression Techniques (May 5, 2025) [Knowledge Distillation, Mixture-of-Experts]
- Optimizing Singular Spectrum for Large Language Model Compression (Feb 20, 2025) [Language Modelling]
- Optimizing Small Language Models for In-Vehicle Function-Calling (Jan 4, 2025) [Model Compression, Quantization]
- Optimizing Traffic Signal Control using High-Dimensional State Representation and Efficient Deep Reinforcement Learning (Nov 12, 2024) [Deep Reinforcement Learning, Model Compression]
- OPTISHEAR: Towards Efficient and Adaptive Pruning of Large Language Models via Evolutionary Optimization (Feb 15, 2025) [Model Compression]
- Oracle Teacher: Leveraging Target Information for Better Knowledge Distillation of CTC Models (Nov 5, 2021) [Knowledge Distillation, Machine Translation]
- OTOV2: Automatic, Generic, User-Friendly (Mar 13, 2023) [Model Compression]
- Outsourcing Training without Uploading Data via Efficient Collaborative Open-Source Sampling (Oct 23, 2022) [Model Compression]
- Pacemaker: Intermediate Teacher Knowledge Distillation For On-The-Fly Convolutional Neural Network (Mar 9, 2020) [Knowledge Distillation, Model Compression]