SOTAVerified

Computational Efficiency

Methods and optimizations that reduce the computational resources (e.g., time, memory, or power) needed for model training and inference. This category covers techniques that streamline processing, optimize algorithms, or leverage hardware to improve performance without compromising accuracy.

Papers

Showing 551–560 of 4891 papers

Title | Status | Hype
Revisiting Funnel Transformers for Modern LLM Architectures with Comprehensive Ablations in Training and Inference Configurations |  | 0
Overcoming Vocabulary Constraints with Pixel-level Fallback |  | 0
LLMPi: Optimizing LLMs for High-Throughput on Raspberry Pi |  | 0
Test-time Adaptation for Foundation Medical Segmentation Model without Parametric Updates |  | 0
Robust Channel Estimation for Optical Wireless Communications Using Neural Network | Code | 0
FLAMES: A Hybrid Spiking-State Space Model for Adaptive Memory Retention in Event-Based Learning |  | 0
Is Temporal Prompting All We Need For Limited Labeled Action Recognition? |  | 0
3D Gaussian Inverse Rendering with Approximated Global Illumination |  | 0
An Explainable Reconfiguration-Based Optimization Algorithm for Industrial and Reliability-Redundancy Allocation Problems |  | 0
FlowMotion: Target-Predictive Conditional Flow Matching for Jitter-Reduced Text-Driven Human Motion Generation |  | 0
Page 56 of 490

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ViTaL | Hamming Loss | 0.05 |  | Unverified
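For context on the metric above: Hamming loss is the standard multi-label classification measure, the fraction of individual label predictions that are wrong, averaged over all samples and labels (lower is better). A minimal sketch of how it is computed, with toy data that is purely illustrative and not from the ViTaL benchmark:

```python
def hamming_loss(y_true, y_pred):
    """Fraction of incorrectly predicted labels across all samples.

    y_true and y_pred are lists of equal-length 0/1 label vectors.
    """
    total = sum(len(row) for row in y_true)
    wrong = sum(
        t != p
        for t_row, p_row in zip(y_true, y_pred)
        for t, p in zip(t_row, p_row)
    )
    return wrong / total

# Toy example: 2 samples, 4 labels each; 1 of the 8 label slots is wrong.
y_true = [[1, 0, 1, 0], [0, 1, 0, 0]]
y_pred = [[1, 0, 1, 0], [0, 1, 1, 0]]
print(hamming_loss(y_true, y_pred))  # → 0.125
```

Under this definition, ViTaL's claimed value of 0.05 would mean 5% of all label predictions disagree with the ground truth; scikit-learn's `sklearn.metrics.hamming_loss` computes the same quantity.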