SOTAVerified

Computational Efficiency

Methods and optimizations that reduce the computational resources (e.g., time, memory, or power) needed to train and run models. This covers techniques that streamline processing, optimize algorithms, or exploit hardware characteristics to improve performance with little or no loss of accuracy.
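As a toy, self-contained illustration of one such optimization — trading a small amount of memory for a large reduction in compute by caching repeated work — consider memoizing a naively recursive function (the Fibonacci function here is purely illustrative, not drawn from any paper in this list):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    """Naive recursion costs O(2^n) calls; the cache reduces it to O(n)."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

# Instant with the cache; intractable without it.
print(fib(200))
```

The same time-for-memory trade-off underlies many of the techniques catalogued here, such as KV caching in transformer inference.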

Papers

Showing 1011–1020 of 4891 papers

Title | Status | Hype
Context-Preserving Gradient Modulation for Large Language Models: A Novel Approach to Semantic Consistency in Long-Form Text Generation | | 0
Contextual Compression Encoding for Large Language Models: A Novel Framework for Multi-Layered Parameter Space Pruning | | 0
Contextually Structured Token Dependency Encoding for Large Language Models | | 0
Contextual Memory Reweaving in Large Language Models Using Layered Latent State Reconstruction | | 0
Contextual Multinomial Logit Bandits with General Value Functions | | 0
Contextual Optimization under Covariate Shift: A Robust Approach by Intersecting Wasserstein Balls | | 0
Contextual Reinforcement in Multimodal Token Compression for Large Language Models | | 0
Contingency-constrained economic dispatch with safe reinforcement learning | | 0
CoDBench: A Critical Evaluation of Data-driven Models for Continuous Dynamical Systems | | 0
A Non-asymptotic comparison of SVRG and SGD: tradeoffs between compute and speed | | 0
Page 102 of 490

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ViTaL | Hamming Loss | 0.05 | | Unverified