SOTAVerified

Computational Efficiency

Methods and optimizations that reduce the computational resources (e.g., time, memory, or power) needed for model training and inference. This covers techniques that streamline processing, optimize algorithms, or exploit hardware to improve performance without compromising accuracy.
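As an illustrative sketch (not tied to any particular paper listed below), one of the simplest efficiency techniques is moving arithmetic out of an interpreted loop and into a hardware-optimized kernel. The function names and array sizes here are hypothetical, chosen only to show the contrast:

```python
import time
import numpy as np

def dot_loop(a, b):
    # Naive approach: one multiply-add per interpreter iteration.
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

def dot_vectorized(a, b):
    # Same arithmetic dispatched to NumPy's optimized C/BLAS kernel.
    return float(np.dot(a, b))

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

t0 = time.perf_counter(); r1 = dot_loop(a, b); t_loop = time.perf_counter() - t0
t0 = time.perf_counter(); r2 = dot_vectorized(a, b); t_vec = time.perf_counter() - t0
print(f"loop: {t_loop:.4f}s  vectorized: {t_vec:.5f}s")
```

Both functions compute the same dot product; the vectorized version is typically orders of magnitude faster on large arrays, which is the same principle (at a much larger scale) behind fused GPU kernels and optimized inference runtimes.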

Papers

Showing 2571–2580 of 4891 papers

Title | Status | Hype
----- | ------ | ----
Retrieval Augmentation via User Interest Clustering | | 0
Retrieval Instead of Fine-tuning: A Retrieval-based Parameter Ensemble for Zero-shot Learning | | 0
RE-tune: Incremental Fine Tuning of Biomedical Vision-Language Models for Multi-label Chest X-ray Classification | | 0
ReverBERT: A State Space Model for Efficient Text-Driven Speech Style Transfer | | 0
Revisiting 2D Convolutional Neural Networks for Graph-based Applications | | 0
Revisiting CHAMPAGNE: Sparse Bayesian Learning as Reweighted Sparse Coding | | 0
Revisiting Ensemble Methods for Stock Trading and Crypto Trading Tasks at ACM ICAIF FinRL Contest 2023-2024 | | 0
Revisiting Frank-Wolfe for Structured Nonconvex Optimization | | 0
Revisiting Funnel Transformers for Modern LLM Architectures with Comprehensive Ablations in Training and Inference Configurations | | 0
Revisiting Learning-based Video Motion Magnification for Real-time Processing | | 0
Page 258 of 490

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
--- | ----- | ------ | ------- | -------- | ------
1 | ViTaL | Hamming Loss | 0.05 | | Unverified
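The Hamming Loss metric reported above is the fraction of label positions where prediction and ground truth disagree, commonly used for multi-label classification. A minimal sketch (the function and the toy data are illustrative, not taken from the benchmark):

```python
import numpy as np

def hamming_loss(y_true, y_pred):
    """Fraction of label positions where prediction and truth disagree."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float(np.mean(y_true != y_pred))

# Toy multi-label example: 2 samples, 4 binary labels each.
y_true = [[1, 0, 1, 0],
          [0, 1, 0, 0]]
y_pred = [[1, 0, 0, 0],
          [0, 1, 0, 1]]
print(hamming_loss(y_true, y_pred))  # 2 wrong of 8 positions -> 0.25
```

Lower is better, so a claimed Hamming Loss of 0.05 means roughly 5% of all individual label predictions are wrong.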