SOTAVerified

Computational Efficiency

Methods and optimizations that reduce the computational resources (e.g., time, memory, or power) needed to train models and run inference. This covers techniques that streamline processing, optimize algorithms, or exploit hardware to improve performance without compromising accuracy.
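As an illustrative sketch of one such technique (not drawn from any specific paper below), post-training weight quantization trades a small amount of numerical precision for a large reduction in memory: float32 weights are mapped to int8 plus a per-tensor scale, cutting storage 4x while keeping the reconstruction error bounded. All names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical float32 weight matrix of a model layer.
weights = rng.standard_normal((256, 256)).astype(np.float32)

# Symmetric post-training quantization: map values into [-127, 127]
# with a single per-tensor scale factor.
scale = np.abs(weights).max() / 127.0
q_weights = np.round(weights / scale).astype(np.int8)

# Dequantize to approximate the original weights at inference time.
dequantized = q_weights.astype(np.float32) * scale

memory_ratio = weights.nbytes / q_weights.nbytes   # 4x smaller storage
max_error = np.abs(weights - dequantized).max()    # bounded by scale / 2

print(f"memory reduced {memory_ratio:.0f}x, max abs error {max_error:.4f}")
```

The rounding step guarantees the per-weight reconstruction error never exceeds half a quantization step (`scale / 2`), which is why this works well without retraining for many models.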

Papers

Showing 1–50 of 4891 papers

| Title | Status | Hype |
|---|---|---|
| LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control | Code | 11 |
| TinyLlama: An Open-Source Small Language Model | Code | 11 |
| Enhancing Fourier Neural Operators with Local Spatial Features | Code | 7 |
| Muon is Scalable for LLM Training | Code | 7 |
| Revisiting PCA for time series reduction in temporal dimension | Code | 7 |
| PromptWizard: Task-Aware Prompt Optimization Framework | Code | 7 |
| VMamba: Visual State Space Model | Code | 7 |
| Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Code | 6 |
| U-Net v2: Rethinking the Skip Connections of U-Net for Medical Image Segmentation | Code | 6 |
| RWKV: Reinventing RNNs for the Transformer Era | Code | 6 |
| YOLOv13: Real-Time Object Detection with Hypergraph-Enhanced Adaptive Visual Perception | Code | 5 |
| Continuous Thought Machines | Code | 5 |
| Comet: Fine-grained Computation-communication Overlapping for Mixture-of-Experts | Code | 5 |
| FireRedASR: Open-Source Industrial-Grade Mandarin Speech Recognition Models from Encoder-Decoder to LLM Integration | Code | 5 |
| Video Depth Anything: Consistent Depth Estimation for Super-Long Videos | Code | 5 |
| Exploring GLU Expansion Ratios: A Study of Structured Pruning in LLaMA-3.2 Models | Code | 5 |
| MambaIRv2: Attentive State Space Restoration | Code | 5 |
| CogView3: Finer and Faster Text-to-Image Generation via Relay Diffusion | Code | 5 |
| Partition Generative Modeling: Masked Modeling Without Masks | Code | 4 |
| High-performance training and inference for deep equivariant interatomic potentials | Code | 4 |
| Time-MoE: Billion-Scale Time Series Foundation Models with Mixture of Experts | Code | 4 |
| On the limits of agency in agent-based models | Code | 4 |
| T-MAC: CPU Renaissance via Table Lookup for Low-Bit LLM Deployment on Edge | Code | 4 |
| RaDe-GS: Rasterizing Depth in Gaussian Splatting | Code | 4 |
| Universal and Extensible Language-Vision Models for Organ Segmentation and Tumor Detection from Abdominal Computed Tomography | Code | 4 |
| LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit | Code | 4 |
| An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models | Code | 4 |
| Mamba-UNet: UNet-Like Pure Visual Mamba for Medical Image Segmentation | Code | 4 |
| TRIPS: Trilinear Point Splatting for Real-Time Radiance Field Rendering | Code | 4 |
| RL4CO: an Extensive Reinforcement Learning for Combinatorial Optimization Benchmark | Code | 4 |
| TorchRL: A data-driven decision-making library for PyTorch | Code | 4 |
| Vocos: Closing the gap between time-domain and Fourier-based neural vocoders for high-quality audio synthesis | Code | 4 |
| Hierarchically Coherent Multivariate Mixture Networks | Code | 4 |
| A Convergent Single-Loop Algorithm for Relaxation of Gromov-Wasserstein in Graph Data | Code | 4 |
| AudioLDM: Text-to-Audio Generation with Latent Diffusion Models | Code | 4 |
| FourCastNet 3: A geometric approach to probabilistic machine-learning weather forecasting at scale | Code | 3 |
| NeuralOM: Neural Ocean Model for Subseasonal-to-Seasonal Simulation | Code | 3 |
| TensorNEAT: A GPU-accelerated Library for NeuroEvolution of Augmenting Topologies | Code | 3 |
| GPU-accelerated Evolutionary Many-objective Optimization Using Tensorized NSGA-III | Code | 3 |
| WeatherMesh-3: Fast and accurate operational global weather forecasting | Code | 3 |
| Unlock Pose Diversity: Accurate and Efficient Implicit Keypoint-based Spatiotemporal Diffusion for Audio-driven Talking Portrait | Code | 3 |
| MoC: Mixtures of Text Chunking Learners for Retrieval-Augmented Generation System | Code | 3 |
| MetaDE: Evolving Differential Evolution by Differential Evolution | Code | 3 |
| FlashVideo: Flowing Fidelity to Detail for Efficient High-Resolution Video Generation | Code | 3 |
| CoverM: Read alignment statistics for metagenomics | Code | 3 |
| Dataset Distillation with Neural Characteristic Function: A Minmax Perspective | Code | 3 |
| A Survey on Inference Optimization Techniques for Mixture of Experts Models | Code | 3 |
| On the Efficiency of NLP-Inspired Methods for Tabular Deep Learning | Code | 3 |
| Star Attention: Efficient LLM Inference over Long Sequences | Code | 3 |
| Nd-BiMamba2: A Unified Bidirectional Architecture for Multi-Dimensional Data Processing | Code | 3 |
Page 1 of 98

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ViTaL | Hamming Loss | 0.05 | — | Unverified |
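For reference, Hamming loss (the metric claimed for ViTaL above) is the fraction of label positions where prediction and ground truth disagree in multi-label classification. The arrays below are made-up illustrative data, not results from the benchmark:

```python
import numpy as np

# Two samples, four binary labels each (hypothetical data).
y_true = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 0]])
y_pred = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 1]])

# Hamming loss = mismatched label positions / total label positions.
hamming_loss = np.mean(y_true != y_pred)  # 2 mismatches / 8 labels = 0.25

print(hamming_loss)
```

Lower is better; a claimed Hamming loss of 0.05 would mean 5% of all label positions are predicted incorrectly.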