SOTAVerified

Computational Efficiency

Methods and optimizations that reduce the computational resources (e.g., time, memory, or power) needed for model training and inference. This covers techniques that streamline processing, optimize algorithms, or exploit hardware to improve performance without compromising accuracy.
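The core trade-off described above — cutting compute without changing results — can be illustrated with a minimal, generic sketch. Memoization is one of the simplest "streamline processing" techniques; this toy example is purely illustrative and is not drawn from any paper listed below:

```python
from functools import lru_cache

# Naive recursion recomputes the same subproblems exponentially many times.
def fib_naive(n: int) -> int:
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

# Caching (memoization) computes each subproblem once, turning
# exponential work into linear work while preserving the exact output.
@lru_cache(maxsize=None)
def fib_cached(n: int) -> int:
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

# Same answer, far fewer operations — efficiency without accuracy loss.
assert fib_naive(20) == fib_cached(20) == 6765
```

The same principle — avoid redundant work, keep the result bit-identical or within tolerance — underlies many of the methods catalogued in this category, from attention sparsification to second-order optimizer approximations.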

Papers

Showing 1281–1290 of 4891 papers

Title | Status | Hype
A Triple-Inertial Accelerated Alternating Optimization Method for Deep Learning Training | Code | 0
Fovea Transformer: Efficient Long-Context Modeling with Structured Fine-to-Coarse Attention | Code | 0
Missing Data Imputation Based on Dynamically Adaptable Structural Equation Modeling with Self-Attention | Code | 0
DiffFormer: a Differential Spatial-Spectral Transformer for Hyperspectral Image Classification | Code | 0
Diff-GO^n: Enhancing Diffusion Models for Goal-Oriented Communications | Code | 0
Studying K-FAC Heuristics by Viewing Adam through a Second-Order Lens | Code | 0
FrameRS: A Video Frame Compression Model Composed by Self supervised Video Frame Reconstructor and Key Frame Selector | Code | 0
DeepRTE: Pre-trained Attention-based Neural Network for Radiative Tranfer | Code | 0
A Transformer Framework for Data Fusion and Multi-Task Learning in Smart Cities | Code | 0
Deep Representation Learning for Prediction of Temporal Event Sets in the Continuous Time Domain | Code | 0
Page 129 of 490

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ViTaL | Hamming Loss | 0.05 | | Unverified