SOTAVerified

Computational Efficiency

Methods and optimizations that reduce the computational resources (e.g., time, memory, or power) needed for model training and inference. This covers techniques that streamline processing, optimize algorithms, or exploit hardware to improve performance without compromising accuracy.
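As a toy illustration of the kind of trade-off this category covers (not drawn from any paper listed below), the sketch spends a small amount of memory on a cache to eliminate redundant computation, using Python's standard `functools.lru_cache`:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    """Exponential-time if uncached; each subproblem is
    computed once and reused, making the call linear-time."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # → 832040
```

The same memory-for-time idea underlies many of the systems below (e.g., KV-cache reuse in LLM decoding), just applied at much larger scale.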

Papers

Showing 151–200 of 4891 papers

| Title | Status | Hype |
| --- | --- | --- |
| Integrating Neural Operators with Diffusion Models Improves Spectral Representation in Turbulence Modeling | Code | 2 |
| ClearSight: Visual Signal Enhancement for Object Hallucination Mitigation in Multimodal Large language Models | Code | 2 |
| CLIP-Powered Domain Generalization and Domain Adaptation: A Comprehensive Survey | Code | 2 |
| Hybrid 3D-4D Gaussian Splatting for Fast Dynamic Scene Representation | Code | 2 |
| InteractRank: Personalized Web-Scale Search Pre-Ranking with Cross Interaction Features | Code | 2 |
| AdaFisher: Adaptive Second Order Optimization via Fisher Information | Code | 2 |
| Harder Tasks Need More Experts: Dynamic Routing in MoE Models | Code | 2 |
| I^2-World: Intra-Inter Tokenization for Efficient Dynamic 4D Scene Forecasting | Code | 2 |
| HeadInfer: Memory-Efficient LLM Inference by Head-wise Offloading | Code | 2 |
| Latent Modulated Function for Computational Optimal Continuous Image Representation | Code | 2 |
| GotenNet: Rethinking Efficient 3D Equivariant Graph Neural Networks | Code | 2 |
| L-AutoDA: Leveraging Large Language Models for Automated Decision-based Adversarial Attacks | Code | 2 |
| SparseLLM: Towards Global Pruning for Pre-trained Language Models | Code | 2 |
| LHU-Net: A Light Hybrid U-Net for Cost-Efficient, High-Performance Volumetric Medical Image Segmentation | Code | 2 |
| A Light-Weight Framework for Open-Set Object Detection with Decoupled Feature Alignment in Joint Space | Code | 2 |
| Geometry Aware Operator Transformer as an Efficient and Accurate Neural Surrogate for PDEs on Arbitrary Domains | Code | 2 |
| GoMAvatar: Efficient Animatable Human Modeling from Monocular Video Using Gaussians-on-Mesh | Code | 2 |
| LoRA-IR: Taming Low-Rank Experts for Efficient All-in-One Image Restoration | Code | 2 |
| Free Video-LLM: Prompt-guided Visual Perception for Efficient Training-free Video LLMs | Code | 2 |
| AlphaNet: Scaling Up Local-frame-based Atomistic Interatomic Potential | Code | 2 |
| Many-MobileNet: Multi-Model Augmentation for Robust Retinal Disease Classification | Code | 2 |
| FuXi Weather: A data-to-forecast machine learning system for global weather | Code | 2 |
| Flow Matching in Latent Space | Code | 2 |
| Generalized and Efficient 2D Gaussian Splatting for Arbitrary-scale Super-Resolution | Code | 2 |
| Grappa -- A Machine Learned Molecular Mechanics Force Field | Code | 2 |
| L4acados: Learning-based models for acados, applied to Gaussian process-based predictive control | Code | 2 |
| MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks | Code | 2 |
| MonoSplat: Generalizable 3D Gaussian Splatting from Monocular Depth Foundation Models | Code | 2 |
| CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion | Code | 2 |
| BiFormer: Vision Transformer with Bi-Level Routing Attention | Code | 2 |
| Agent Attention: On the Integration of Softmax and Linear Attention | Code | 2 |
| BitDecoding: Unlocking Tensor Cores for Long-Context LLMs Decoding with Low-Bit KV Cache | Code | 2 |
| Fast Calibrated Explanations: Efficient and Uncertainty-Aware Explanations for Machine Learning Models | Code | 2 |
| FastBlend: a Powerful Model-Free Toolkit Making Video Stylization Easier | Code | 2 |
| On the test-time zero-shot generalization of vision-language models: Do we really need prompt learning? | Code | 2 |
| Outlier-robust Kalman Filtering through Generalised Bayes | Code | 2 |
| Adaptive Multi-Scale Decomposition Framework for Time Series Forecasting | Code | 2 |
| Fast FullSubNet: Accelerate Full-band and Sub-band Fusion Model for Single-channel Speech Enhancement | Code | 2 |
| Balancing LoRA Performance and Efficiency with Simple Shard Sharing | Code | 2 |
| DaViT: Dual Attention Vision Transformers | Code | 2 |
| Fast and Accurate Blind Flexible Docking | Code | 2 |
| Fast-SNARF: A Fast Deformer for Articulated Neural Fields | Code | 2 |
| Advances in 4D Generation: A Survey | Code | 2 |
| Deep Learning Accelerated Quantum Transport Simulations in Nanoelectronics: From Break Junctions to Field-Effect Transistors | Code | 2 |
| 2DMamba: Efficient State Space Model for Image Representation with Applications on Giga-Pixel Whole Slide Image Classification | Code | 2 |
| Erwin: A Tree-based Hierarchical Transformer for Large-scale Physical Systems | Code | 2 |
| Enhancing Autonomous Driving Systems with On-Board Deployed Large Language Models | Code | 2 |
| Retrieval Augmented Generation Evaluation in the Era of Large Language Models: A Comprehensive Survey | Code | 2 |
| BEBLID: Boosted efficient binary local image descriptor | Code | 2 |
| Efficient Large-scale Audio Tagging via Transformer-to-CNN Knowledge Distillation | Code | 2 |
Page 4 of 98

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ViTaL | Hamming Loss | 0.05 | — | Unverified |