SOTAVerified

Computational Efficiency

Methods and optimizations that reduce the computational resources (e.g., time, memory, or power) needed for model training and inference. This covers techniques that streamline processing, optimize algorithms, or exploit hardware to improve performance without compromising accuracy.
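One common technique in this category is post-training weight quantization, which trades a small amount of precision for a large memory reduction. A minimal sketch, assuming symmetric per-tensor int8 quantization (the function names and example values are illustrative, not taken from any listed paper):

```python
def quantize_int8(weights):
    """Map float weights to int8 values in [-127, 127].

    Each value then needs 1 byte instead of 4, a ~4x memory saving,
    at the cost of rounding error bounded by half the scale step.
    """
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.52, -1.27, 0.003, 0.84]
q, scale = quantize_int8(weights)     # q == [52, -127, 0, 84]
approx = dequantize(q, scale)         # close to the originals
```

Papers such as OmniQuant extend this basic idea with learned calibration, but the core trade-off (precision for memory and bandwidth) is the same.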

Papers

Showing 191–200 of 4891 papers

Title | Status | Hype
L-AutoDA: Leveraging Large Language Models for Automated Decision-based Adversarial Attacks | Code | 2
RWKV-TS: Beyond Traditional Recurrent Neural Network for Time Series Tasks | Code | 2
Agent Attention: On the Integration of Softmax and Linear Attention | Code | 2
SchurVINS: Schur Complement-Based Lightweight Visual Inertial Navigation System | Code | 2
FastBlend: a Powerful Model-Free Toolkit Making Video Stylization Easier | Code | 2
OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models | Code | 2
VITS2: Improving Quality and Efficiency of Single-Stage Text-to-Speech with Adversarial Learning and Architecture Design | Code | 2
An Unforgeable Publicly Verifiable Watermark for Large Language Models | Code | 2
Flow Matching in Latent Space | Code | 2
DNABERT-2: Efficient Foundation Model and Benchmark For Multi-Species Genome | Code | 2
Page 20 of 490

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ViTaL | Hamming Loss | 0.05 | | Unverified
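The Hamming Loss reported above is the fraction of individual label slots predicted incorrectly, averaged over all samples and labels (lower is better; a claimed 0.05 means 5% of label decisions are wrong). A minimal sketch for binary multi-label predictions, with illustrative data unrelated to the ViTaL result:

```python
def hamming_loss(y_true, y_pred):
    """Fraction of label slots where prediction and truth disagree."""
    total = sum(len(labels) for labels in y_true)
    wrong = sum(
        t != p
        for true_row, pred_row in zip(y_true, y_pred)
        for t, p in zip(true_row, pred_row)
    )
    return wrong / total

# Two samples with four binary labels each; 1 of 8 slots is wrong.
y_true = [[1, 0, 1, 0], [0, 1, 0, 0]]
y_pred = [[1, 0, 1, 0], [0, 1, 1, 0]]
print(hamming_loss(y_true, y_pred))  # 0.125
```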