SOTAVerified

Computational Efficiency

Methods and optimizations that reduce the computational resources (e.g., time, memory, or power) required for model training and inference. This covers techniques that streamline processing, optimize algorithms, or exploit hardware to improve performance without compromising accuracy.

Papers

Showing 151–200 of 4891 papers

Title | Status | Hype
TokenSHAP: Interpreting Large Language Models with Monte Carlo Shapley Value Estimation | Code | 2
Learning local equivariant representations for quantum operators | Code | 2
Mixture of A Million Experts | Code | 2
A Closer Look into Mixture-of-Experts in Large Language Models | Code | 2
LeYOLO, New Scalable and Efficient CNN Architecture for Object Detection | Code | 2
Solving the Inverse Problem of Electrocardiography for Cardiac Digital Twins: A Survey | Code | 2
DistPred: A Distribution-Free Probabilistic Inference Method for Regression and Forecasting | Code | 2
Voxel Mamba: Group-Free State Space Models for Point Cloud based 3D Object Detection | Code | 2
Attentive Merging of Hidden Embeddings from Pre-trained Speech Model for Anti-spoofing Detection | Code | 2
MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks | Code | 2
Adaptive Multi-Scale Decomposition Framework for Time Series Forecasting | Code | 2
Parameter-Inverted Image Pyramid Networks | Code | 2
Latent Neural Operator for Solving Forward and Inverse PDE Problems | Code | 2
Long Context is Not Long at All: A Prospector of Long-Dependency Data for Large Language Models | Code | 2
SoundCTM: Unifying Score-based and Consistency Models for Full-band Text-to-Sound Generation | Code | 2
Spectral-Refiner: Accurate Fine-Tuning of Spatiotemporal Fourier Neural Operator for Turbulent Flows | Code | 2
AdaFisher: Adaptive Second Order Optimization via Fisher Information | Code | 2
PoinTramba: A Hybrid Transformer-Mamba Framework for Point Cloud Analysis | Code | 2
Wav-KAN: Wavelet Kolmogorov-Arnold Networks | Code | 2
Outlier-robust Kalman Filtering through Generalised Bayes | Code | 2
Retinexmamba: Retinex-based Mamba for Low-light Image Enhancement | Code | 2
On the test-time zero-shot generalization of vision-language models: Do we really need prompt learning? | Code | 2
SSUMamba: Spatial-Spectral Selective State Space Model for Hyperspectral Image Denoising | Code | 2
Latent Modulated Function for Computational Optimal Continuous Image Representation | Code | 2
MultiBooth: Towards Generating All Your Concepts in an Image from Text | Code | 2
Partial Large Kernel CNNs for Efficient Super-Resolution | Code | 2
Rethinking Transformer-Based Blind-Spot Network for Self-Supervised Image Denoising | Code | 2
GoMAvatar: Efficient Animatable Human Modeling from Monocular Video Using Gaussians-on-Mesh | Code | 2
LHU-Net: A Light Hybrid U-Net for Cost-Efficient, High-Performance Volumetric Medical Image Segmentation | Code | 2
Grappa -- A Machine Learned Molecular Mechanics Force Field | Code | 2
vid-TLDR: Training Free Token merging for Light-weight Video Transformer | Code | 2
Harder Tasks Need More Experts: Dynamic Routing in MoE Models | Code | 2
RFWave: Multi-band Rectified Flow for Audio Waveform Reconstruction | Code | 2
A Simple Baseline for Efficient Hand Mesh Reconstruction | Code | 2
SparseLLM: Towards Global Pruning for Pre-trained Language Models | Code | 2
Fast Adversarial Attacks on Language Models In One GPU Minute | Code | 2
VOOM: Robust Visual Object Odometry and Mapping using Hierarchical Landmarks | Code | 2
Mercury: A Code Efficiency Benchmark for Code Large Language Models | Code | 2
CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion | Code | 2
BEBLID: Boosted efficient binary local image descriptor | Code | 2
L-AutoDA: Leveraging Large Language Models for Automated Decision-based Adversarial Attacks | Code | 2
RWKV-TS: Beyond Traditional Recurrent Neural Network for Time Series Tasks | Code | 2
Agent Attention: On the Integration of Softmax and Linear Attention | Code | 2
SchurVINS: Schur Complement-Based Lightweight Visual Inertial Navigation System | Code | 2
FastBlend: a Powerful Model-Free Toolkit Making Video Stylization Easier | Code | 2
OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models | Code | 2
VITS2: Improving Quality and Efficiency of Single-Stage Text-to-Speech with Adversarial Learning and Architecture Design | Code | 2
An Unforgeable Publicly Verifiable Watermark for Large Language Models | Code | 2
Flow Matching in Latent Space | Code | 2
DNABERT-2: Efficient Foundation Model and Benchmark For Multi-Species Genome | Code | 2
Page 4 of 98

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ViTaL | Hamming Loss | 0.05 | | Unverified