SOTAVerified

Computational Efficiency

Methods and optimizations that reduce the computational resources (e.g., time, memory, or power) needed for model training and inference. This covers techniques that streamline processing, optimize algorithms, or exploit hardware to improve performance without compromising accuracy.

Papers

Showing 2831–2840 of 4891 papers

| Title | Status | Hype |
| --- | --- | --- |
| Peeking with PEAK: Sequential, Nonparametric Composite Hypothesis Tests for Means of Multiple Data Streams | Code | 0 |
| Sparse-VQ Transformer: An FFN-Free Framework with Vector Quantization for Enhanced Time Series Forecasting | | 0 |
| Model-Based RL for Mean-Field Games is not Statistically Harder than Single-Agent RL | Code | 0 |
| Selective Forgetting: Advancing Machine Unlearning Techniques and Evaluation in Language Models | | 0 |
| AttnLRP: Attention-Aware Layer-Wise Relevance Propagation for Transformers | | 0 |
| On the Completeness of Invariant Geometric Deep Learning Models | Code | 0 |
| Majority Kernels: An Approach to Leverage Big Model Dynamics for Efficient Small Model Training | | 0 |
| Curriculum reinforcement learning for quantum architecture search under hardware errors | | 0 |
| Partially Stochastic Infinitely Deep Bayesian Neural Networks | Code | 0 |
| A Survey on Graph Condensation | | 0 |
Page 284 of 490

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ViTaL | Hamming Loss | 0.05 | | Unverified |