SOTAVerified

Computational Efficiency

Methods and optimizations that reduce the computational resources (e.g., time, memory, or power) needed to train and run models. This covers techniques that streamline processing, optimize algorithms, or leverage hardware to improve performance without compromising accuracy.

Papers

Showing 176–200 of 4891 papers

| Title | Status | Hype |
| --- | --- | --- |
| Partial Large Kernel CNNs for Efficient Super-Resolution | Code | 2 |
| Rethinking Transformer-Based Blind-Spot Network for Self-Supervised Image Denoising | Code | 2 |
| GoMAvatar: Efficient Animatable Human Modeling from Monocular Video Using Gaussians-on-Mesh | Code | 2 |
| LHU-Net: A Light Hybrid U-Net for Cost-Efficient, High-Performance Volumetric Medical Image Segmentation | Code | 2 |
| Grappa -- A Machine Learned Molecular Mechanics Force Field | Code | 2 |
| vid-TLDR: Training Free Token merging for Light-weight Video Transformer | Code | 2 |
| Harder Tasks Need More Experts: Dynamic Routing in MoE Models | Code | 2 |
| RFWave: Multi-band Rectified Flow for Audio Waveform Reconstruction | Code | 2 |
| A Simple Baseline for Efficient Hand Mesh Reconstruction | Code | 2 |
| SparseLLM: Towards Global Pruning for Pre-trained Language Models | Code | 2 |
| Fast Adversarial Attacks on Language Models In One GPU Minute | Code | 2 |
| VOOM: Robust Visual Object Odometry and Mapping using Hierarchical Landmarks | Code | 2 |
| Mercury: A Code Efficiency Benchmark for Code Large Language Models | Code | 2 |
| CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion | Code | 2 |
| BEBLID: Boosted efficient binary local image descriptor | Code | 2 |
| L-AutoDA: Leveraging Large Language Models for Automated Decision-based Adversarial Attacks | Code | 2 |
| RWKV-TS: Beyond Traditional Recurrent Neural Network for Time Series Tasks | Code | 2 |
| Agent Attention: On the Integration of Softmax and Linear Attention | Code | 2 |
| SchurVINS: Schur Complement-Based Lightweight Visual Inertial Navigation System | Code | 2 |
| FastBlend: a Powerful Model-Free Toolkit Making Video Stylization Easier | Code | 2 |
| OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models | Code | 2 |
| VITS2: Improving Quality and Efficiency of Single-Stage Text-to-Speech with Adversarial Learning and Architecture Design | Code | 2 |
| An Unforgeable Publicly Verifiable Watermark for Large Language Models | Code | 2 |
| Flow Matching in Latent Space | Code | 2 |
| DNABERT-2: Efficient Foundation Model and Benchmark For Multi-Species Genome | Code | 2 |
Page 8 of 196

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ViTaL | Hamming Loss | 0.05 |  | Unverified |
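For reference, Hamming loss (the metric claimed for ViTaL above) is the fraction of individual label positions predicted incorrectly in a multi-label task. A minimal NumPy sketch with illustrative arrays, not data from the ViTaL evaluation:

```python
import numpy as np

def hamming_loss(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Fraction of label positions where prediction and truth disagree."""
    return float(np.mean(y_true != y_pred))

# Illustrative example: 4 samples x 5 labels, with exactly one of the
# 20 label positions flipped, giving a Hamming loss of 1/20 = 0.05.
y_true = np.array([[1, 0, 0, 1, 0],
                   [0, 1, 0, 0, 1],
                   [1, 1, 0, 0, 0],
                   [0, 0, 1, 0, 1]])
y_pred = y_true.copy()
y_pred[0, 1] = 1  # flip one label

print(hamming_loss(y_true, y_pred))  # → 0.05
```

Lower is better: a Hamming loss of 0.0 means every label of every sample was predicted correctly.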