Papers

Showing 951–975 of 5629 papers

| Title | Status | Hype |
| --- | --- | --- |
| xPerT: Extended Persistence Transformer | Code | 1 |
| Evaluating Quantized Large Language Models for Code Generation on Low-Resource Language Benchmarks | Code | 0 |
| Takin-ADA: Emotion Controllable Audio-Driven Animation with Canonical and Landmark Loss Optimization | | 0 |
| Parallel Backpropagation for Inverse of a Convolution with Application to Normalizing Flows | Code | 0 |
| syren-new: Precise formulae for the linear and nonlinear matter power spectra with massive neutrinos and dynamical dark energy | Code | 1 |
| Harnessing Your DRAM and SSD for Sustainable and Accessible LLM Inference with Mixed-Precision and Multi-level Caching | | 0 |
| FDF: Flexible Decoupled Framework for Time Series Forecasting with Conditional Denoising and Polynomial Modeling | Code | 0 |
| Shavette: Low Power Neural Network Acceleration via Algorithm-level Error Detection and Undervolting | Code | 0 |
| D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement | Code | 7 |
| MEGA: Memory-Efficient 4D Gaussian Splatting for Dynamic Scenes | | 0 |
| EP-SAM: Weakly Supervised Histopathology Segmentation via Enhanced Prompt with Segment Anything | Code | 1 |
| RapidDock: Unlocking Proteome-scale Molecular Docking | | 0 |
| CoreGuard: Safeguarding Foundational Capabilities of LLMs Against Model Stealing in Edge Deployment | | 0 |
| FlashAudio: Rectified Flows for Fast and High-Fidelity Text-to-Audio Generation | Code | 5 |
| Learning Representations for Reasoning: Generalizing Across Diverse Structures | | 0 |
| Long-LRM: Long-sequence Large Reconstruction Model for Wide-coverage Gaussian Splats | | 0 |
| Optimization and Application of Cloud-based Deep Learning Architecture for Multi-Source Data Prediction | | 0 |
| nvTorchCam: An Open-source Library for Camera-Agnostic Differentiable Geometric Vision | Code | 2 |
| LR-SQL: A Supervised Fine-Tuning Method for Text2SQL Tasks under Low-Resource Scenarios | Code | 0 |
| GS^3: Efficient Relighting with Triple Gaussian Splatting | Code | 2 |
| DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads | Code | 4 |
| Liger Kernel: Efficient Triton Kernels for LLM Training | Code | 9 |
| ET-Former: Efficient Triplane Deformable Attention for 3D Semantic Scene Completion From Monocular Camera | | 0 |
| KBLaM: Knowledge Base augmented Language Model | Code | 5 |
| Deep Compression Autoencoder for Efficient High-Resolution Diffusion Models | | 0 |
Page 39 of 226
