SOTAVerified

Low-rank compression

Papers

Showing 1–25 of 34 papers

Title | Status | Hype
LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation | Code | 2
TT-NF: Tensor Train Neural Fields | Code | 1
Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition | Code | 1
Unified Low-rank Compression Framework for Click-through Rate Prediction | Code | 1
Basis Selection: Low-Rank Decomposition of Pretrained Large Language Models for Target Applications | - | 0
Cognitive Memory in Large Language Models | - | 0
Communication-Efficient Federated Learning with Dual-Side Low-Rank Compression | - | 0
Data-aware Low-Rank Compression for Large NLP Models | - | 0
FedHM: Efficient Federated Learning for Heterogeneous Models via Low-rank Factorization | - | 0
A Highly Effective Low-Rank Compression of Deep Neural Networks with Modified Beam-Search and Modified Stable Rank | - | 0
AntMan: Sparse Low-Rank Compression to Accelerate RNN Inference | - | 0
Approximate FPGA-based LSTMs under Computation Time Constraints | - | 0
ELRT: Efficient Low-Rank Training for Compact Convolutional Neural Networks | - | 0
Feature-based Low-Rank Compression of Large Language Models via Bayesian Optimization | - | 0
Adaptive Pruning of Pretrained Transformer via Differential Inclusions | - | 0
HALOC: Hardware-Aware Automatic Low-Rank Compression for Compact Neural Networks | - | 0
LoRC: Low-Rank Compression for LLMs KV Cache with a Progressive Compression Strategy | - | 0
Low-Rank Compression for IMC Arrays | - | 0
MLorc: Momentum Low-rank Compression for Large Language Model Adaptation | - | 0
Penrose Tiled Low-Rank Compression and Section-Wise Q&A Fine-Tuning: A General Framework for Domain-Specific Large Language Model Adaptation | - | 0
Semi-tensor Product-based Tensor Decomposition for Neural Network Compression | - | 0
Theoretical Guarantees for Low-Rank Compression of Deep Neural Networks | - | 0
DRONE: Data-aware Low-rank Compression for Large NLP Models | - | 0
Dynamical Low-Rank Compression of Neural Networks with Robustness under Adversarial Attacks | - | 0
Domain-adaptive deep network compression | Code | 0
Page 1 of 2

No leaderboard results yet.