SOTAVerified

Computational Efficiency

Methods and optimizations that reduce the computational resources (e.g., time, memory, or power) needed to train and run models. This covers techniques that streamline processing, optimize algorithms, or exploit hardware to improve performance without compromising accuracy.
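As a minimal sketch of one idea in this category (lower-precision storage, the basis of quantized inference), casting a weight matrix from float64 to float16 cuts its memory footprint by 4x. The array shape and names here are illustrative, assuming only NumPy:

```python
import numpy as np

# A hypothetical dense weight matrix stored at full double precision.
w64 = np.ones((1024, 1024), dtype=np.float64)

# The same weights stored at half precision: 2 bytes/value instead of 8.
w16 = w64.astype(np.float16)

print(w64.nbytes // w16.nbytes)  # → 4 (4x memory reduction)
```

Real quantization schemes go further (int8/int4 with per-channel scales), trading a small, measured accuracy loss for memory and bandwidth savings.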

Papers

Showing 51–100 of 4891 papers

Title | Status | Hype
TSLANet: Rethinking Transformers for Time Series Representation Learning | Code | 3
FlashGS: Efficient 3D Gaussian Splatting for Large-scale and High-resolution Rendering | Code | 3
FlashVideo: Flowing Fidelity to Detail for Efficient High-Resolution Video Generation | Code | 3
FiT: Flexible Vision Transformer for Diffusion Model | Code | 3
FiTv2: Scalable and Improved Flexible Vision Transformer for Diffusion Model | Code | 3
ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving | Code | 3
TimeMachine: A Time Series is Worth 4 Mambas for Long-term Forecasting | Code | 3
Taming Diffusion Probabilistic Models for Character Control | Code | 3
Tensorized NeuroEvolution of Augmenting Topologies for GPU Acceleration | Code | 3
Star Attention: Efficient LLM Inference over Long Sequences | Code | 3
Salience DETR: Enhancing Detection Transformer with Hierarchical Salience Filtering Refinement | Code | 3
STG-Mamba: Spatial-Temporal Graph Learning via Selective State Space Model | Code | 3
TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones | Code | 3
Unlock Pose Diversity: Accurate and Efficient Implicit Keypoint-based Spatiotemporal Diffusion for Audio-driven Talking Portrait | Code | 3
RAG and RAU: A Survey on Retrieval-Augmented Language Model in Natural Language Processing | Code | 3
DUFOMap: Efficient Dynamic Awareness Mapping | Code | 3
Residual Kolmogorov-Arnold Network for Enhanced Deep Learning | Code | 3
On the Efficiency of NLP-Inspired Methods for Tabular Deep Learning | Code | 3
Dataset Distillation with Neural Characteristic Function: A Minmax Perspective | Code | 3
Nd-BiMamba2: A Unified Bidirectional Architecture for Multi-Dimensional Data Processing | Code | 3
NeuralOM: Neural Ocean Model for Subseasonal-to-Seasonal Simulation | Code | 3
Consistency Models Made Easy | Code | 3
MetaDE: Evolving Differential Evolution by Differential Evolution | Code | 3
CoverM: Read alignment statistics for metagenomics | Code | 3
MoC: Mixtures of Text Chunking Learners for Retrieval-Augmented Generation System | Code | 3
LLaVA-MoD: Making LLaVA Tiny via MoE Knowledge Distillation | Code | 3
MAPE-PPI: Towards Effective and Efficient Protein-Protein Interaction Prediction via Microenvironment-Aware Protein Embedding | Code | 3
Effects of charging and discharging capabilities on trade-offs between model accuracy and computational efficiency in pumped thermal electricity storage | Code | 3
Apollo: Band-sequence Modeling for High-Quality Audio Restoration | Code | 3
vHeat: Building Vision Models upon Heat Conduction | Code | 3
EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation | Code | 3
L-AutoDA: Leveraging Large Language Models for Automated Decision-based Adversarial Attacks | Code | 2
Latent Neural Operator for Solving Forward and Inverse PDE Problems | Code | 2
Learning local equivariant representations for quantum operators | Code | 2
Large Scale Longitudinal Experiments: Estimation and Inference | Code | 2
LandMarkSystem Technical Report | Code | 2
Latent Modulated Function for Computational Optimal Continuous Image Representation | Code | 2
LEGNet: Lightweight Edge-Gaussian Driven Network for Low-Quality Remote Sensing Image Object Detection | Code | 2
InteractRank: Personalized Web-Scale Search Pre-Ranking with Cross Interaction Features | Code | 2
3D-RCNet: Learning from Transformer to Build a 3D Relational ConvNet for Hyperspectral Image Classification | Code | 2
BEBLID: Boosted efficient binary local image descriptor | Code | 2
Integrating Neural Operators with Diffusion Models Improves Spectral Representation in Turbulence Modeling | Code | 2
L4acados: Learning-based models for acados, applied to Gaussian process-based predictive control | Code | 2
LeYOLO, New Scalable and Efficient CNN Architecture for Object Detection | Code | 2
Hybrid 3D-4D Gaussian Splatting for Fast Dynamic Scene Representation | Code | 2
HeadInfer: Memory-Efficient LLM Inference by Head-wise Offloading | Code | 2
Harder Tasks Need More Experts: Dynamic Routing in MoE Models | Code | 2
Advances in 4D Generation: A Survey | Code | 2
SparseLLM: Towards Global Pruning for Pre-trained Language Models | Code | 2
Grappa -- A Machine Learned Molecular Mechanics Force Field | Code | 2
Page 2 of 98

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ViTaL | Hamming Loss | 0.05 | — | Unverified