SOTAVerified

Computational Efficiency

Methods and optimizations that reduce the computational resources (e.g., time, memory, or power) needed for model training and inference. This covers techniques that streamline processing, optimize algorithms, or exploit hardware to improve performance without compromising accuracy.
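Several of the papers listed below concern pruning, one common efficiency technique from this category. As a minimal illustrative sketch (not from any specific paper here), global magnitude pruning zeroes out the smallest-magnitude fraction of a weight matrix, shrinking the effective compute for sparse-aware kernels; the function name and threshold rule are illustrative assumptions:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Return a copy of `weights` with the smallest-magnitude
    fraction (`sparsity`) of entries set to zero.

    Illustrative sketch only; real pruning pipelines typically
    prune layer-wise and fine-tune afterwards to recover accuracy.
    """
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)  # number of entries to zero out
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold  # keep strictly larger magnitudes
    return weights * mask

w = np.array([[0.1, -0.8],
              [0.05, 1.2]])
pruned = magnitude_prune(w, sparsity=0.5)  # zeroes the two smallest entries
```

Note that ties at the threshold are pruned as well, so the realized sparsity can slightly exceed the requested fraction.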

Papers

Showing 1431–1440 of 4891 papers

| Title | Status | Hype |
| FGP: Feature-Gradient-Prune for Efficient Convolutional Layer Pruning | Code | 0 |
| Fighting Randomness with Randomness: Mitigating Optimisation Instability of Fine-Tuning using Delayed Ensemble and Noisy Interpolation | Code | 0 |
| DeBaCl: A Python Package for Interactive DEnsity-BAsed CLustering | Code | 0 |
| Dynamics-aware Adversarial Attack of Adaptive Neural Networks | Code | 0 |
| DCR: Quantifying Data Contamination in LLMs Evaluation | Code | 0 |
| Action Recognition Using Temporal Shift Module and Ensemble Learning | Code | 0 |
| Few-Shot Image-to-Semantics Translation for Policy Transfer in Reinforcement Learning | Code | 0 |
| ModeConv: A Novel Convolution for Distinguishing Anomalous and Normal Structural Behavior | Code | 0 |
| Finding Influential Training Samples for Gradient Boosted Decision Trees | Code | 0 |
| Federated Multimodal Learning with Dual Adapters and Selective Pruning for Communication and Computational Efficiency | Code | 0 |
Page 144 of 490

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| 1 | ViTaL | Hamming Loss | 0.05 | | Unverified |