SOTAVerified

Computational Efficiency

Methods and optimizations that reduce the computational resources (e.g., time, memory, or power) required for model training and inference. This covers techniques that streamline processing, improve algorithms, or exploit hardware to boost performance without compromising accuracy.

Papers

Showing 101–125 of 4,891 papers

Title | Status | Hype
LandMarkSystem Technical Report | Code | 2
L-AutoDA: Leveraging Large Language Models for Automated Decision-based Adversarial Attacks | Code | 2
LightGNN: Simple Graph Neural Network for Recommendation | Code | 2
I^2-World: Intra-Inter Tokenization for Efficient Dynamic 4D Scene Forecasting | Code | 2
Large Scale Longitudinal Experiments: Estimation and Inference | Code | 2
Latent Modulated Function for Computational Optimal Continuous Image Representation | Code | 2
Integrating Neural Operators with Diffusion Models Improves Spectral Representation in Turbulence Modeling | Code | 2
Learning local equivariant representations for quantum operators | Code | 2
BEBLID: Boosted efficient binary local image descriptor | Code | 2
A Closer Look into Mixture-of-Experts in Large Language Models | Code | 2
Hybrid 3D-4D Gaussian Splatting for Fast Dynamic Scene Representation | Code | 2
Attentive Merging of Hidden Embeddings from Pre-trained Speech Model for Anti-spoofing Detection | Code | 2
InteractRank: Personalized Web-Scale Search Pre-Ranking with Cross Interaction Features | Code | 2
2DMamba: Efficient State Space Model for Image Representation with Applications on Giga-Pixel Whole Slide Image Classification | Code | 2
Balancing LoRA Performance and Efficiency with Simple Shard Sharing | Code | 2
LoRA-IR: Taming Low-Rank Experts for Efficient All-in-One Image Restoration | Code | 2
Harder Tasks Need More Experts: Dynamic Routing in MoE Models | Code | 2
HeadInfer: Memory-Efficient LLM Inference by Head-wise Offloading | Code | 2
GoMAvatar: Efficient Animatable Human Modeling from Monocular Video Using Gaussians-on-Mesh | Code | 2
Geometry Aware Operator Transformer as an Efficient and Accurate Neural Surrogate for PDEs on Arbitrary Domains | Code | 2
Miipher-2: A Universal Speech Restoration Model for Million-Hour Scale Data Restoration | Code | 2
Mixture of A Million Experts | Code | 2
ClearSight: Visual Signal Enhancement for Object Hallucination Mitigation in Multimodal Large Language Models | Code | 2
CLIP-Powered Domain Generalization and Domain Adaptation: A Comprehensive Survey | Code | 2
GotenNet: Rethinking Efficient 3D Equivariant Graph Neural Networks | Code | 2
Page 5 of 196

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ViTaL | Hamming Loss | 0.05 | — | Unverified
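The metric claimed for ViTaL above, Hamming loss, is the fraction of label positions predicted incorrectly in a multi-label task (lower is better). A minimal sketch of the standard definition is below; the function name and example data are illustrative, not ViTaL's implementation, and in practice `sklearn.metrics.hamming_loss` computes the same quantity.

```python
def hamming_loss(y_true, y_pred):
    """Fraction of incorrectly predicted label positions,
    averaged over all samples and all labels."""
    assert len(y_true) == len(y_pred)
    errors = total = 0
    for true_row, pred_row in zip(y_true, y_pred):
        for t, p in zip(true_row, pred_row):
            errors += int(t != p)  # count each wrong label bit
            total += 1             # count every label position
    return errors / total

# Example: 2 samples, 4 binary labels each; 1 of 8 positions is wrong.
print(hamming_loss([[1, 0, 1, 0], [0, 1, 0, 0]],
                   [[1, 0, 1, 0], [0, 1, 1, 0]]))  # → 0.125
```

A verified score of 0.05 would thus mean 5% of all label positions were misclassified on the evaluation set.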