SOTAVerified

GPU Papers

Showing 351–400 of 5629 papers

Title | Status | Hype
LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Code | 2
Learning to Walk in Minutes Using Massively Parallel Deep Reinforcement Learning | Code | 2
λ-ECLIPSE: Multi-Concept Personalized Text-to-Image Diffusion Models by Leveraging CLIP Latent Space | Code | 2
LongRecipe: Recipe for Efficient Long Context Generalization in Large Language Models | Code | 2
LAMP: Learn A Motion Pattern for Few-Shot-Based Video Generation | Code | 2
KAD: No More FAD! An Effective and Efficient Evaluation Metric for Audio Generation | Code | 2
JaxMARL: Multi-Agent RL Environments and Algorithms in JAX | Code | 2
JAX MD: A Framework for Differentiable Physics | Code | 2
Is One GPU Enough? Pushing Image Generation at Higher-Resolutions with Foundation Models | Code | 2
MODNet: Real-Time Trimap-Free Portrait Matting via Objective Decomposition | Code | 2
JAX, M.D.: A Framework for Differentiable Physics | Code | 2
Latent Neural Operator for Solving Forward and Inverse PDE Problems | Code | 2
Instant Volumetric Head Avatars | Code | 2
INT-FlashAttention: Enabling Flash Attention for INT8 Quantization | Code | 2
InPars Toolkit: A Unified and Reproducible Synthetic Data Generation Pipeline for Neural Information Retrieval | Code | 2
Invertible Diffusion Models for Compressed Sensing | Code | 2
CoMoSpeech: One-Step Speech and Singing Voice Synthesis via Consistency Model | Code | 2
ImMesh: An Immediate LiDAR Localization and Meshing Framework | Code | 2
Confucius3-Math: A Lightweight High-Performance Reasoning LLM for Chinese K-12 Mathematics Learning | Code | 2
Learning to Fly in Seconds | Code | 2
CoMoSVC: Consistency Model-based Singing Voice Conversion | Code | 2
Im4D: High-Fidelity and Real-Time Novel View Synthesis for Dynamic Scenes | Code | 2
Isaac Gym: High Performance GPU-Based Physics Simulation For Robot Learning | Code | 2
LightSeq2: Accelerated Training for Transformer-based Models on GPUs | Code | 2
HRDA: Context-Aware High-Resolution Domain-Adaptive Semantic Segmentation | Code | 2
CoLLiE: Collaborative Training of Large Language Models in an Efficient Way | Code | 2
HTS-AT: A Hierarchical Token-Semantic Audio Transformer for Sound Classification and Detection | Code | 2
Holistically-Attracted Wireframe Parsing: From Supervised to Self-Supervised Learning | Code | 2
HLSTransform: Energy-Efficient Llama 2 Inference on FPGAs Via High Level Synthesis | Code | 2
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | Code | 2
CoLA: Exploiting Compositional Structure for Automatic and Efficient Numerical Linear Algebra | Code | 2
Collaborative Decoding Makes Visual Auto-Regressive Modeling Efficient | Code | 2
HybridDepth: Robust Metric Depth Fusion by Leveraging Depth from Focus and Single-Image Priors | Code | 2
HeadInfer: Memory-Efficient LLM Inference by Head-wise Offloading | Code | 2
Accelerating Transformer Pre-training with 2:4 Sparsity | Code | 2
LoQT: Low-Rank Adapters for Quantized Pretraining | Code | 2
Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow | Code | 2
Hardware-Aware Parallel Prompt Decoding for Memory-Efficient Acceleration of LLM Inference | Code | 2
HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis | Code | 2
H_2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models | Code | 2
GS^3: Efficient Relighting with Triple Gaussian Splatting | Code | 2
Habitat 2.0: Training Home Assistants to Rearrange their Habitat | Code | 2
Grouping First, Attending Smartly: Training-Free Acceleration for Diffusion Transformers | Code | 2
Habitat: A Platform for Embodied AI Research | Code | 2
HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference | Code | 2
GPU Performance Portability needs Autotuning | Code | 2
MARS: Mixture of Auto-Regressive Models for Fine-grained Text-to-image Synthesis | Code | 2
MathOptAI.jl: Embed trained machine learning predictors into JuMP models | Code | 2
GradeADreamer: Enhanced Text-to-3D Generation Using Gaussian Splatting and Multi-View Diffusion | Code | 2
Characterization of Large Language Model Development in the Datacenter | Code | 2
Page 8 of 113
