SOTAVerified

GPU Papers

Showing 3101–3150 of 5629 papers

| Title | Status | Hype |
|---|---|---|
| Bespoke Solvers for Generative Flow Models | | 0 |
| The Synergy of Speculative Decoding and Batching in Serving Large Language Models | | 0 |
| OpenDMC: An Open-Source Library and Performance Evaluation for Deep-learning-based Multi-frame Compression | Code | 0 |
| Real-Time Neural Materials using Block-Compressed Features | | 0 |
| PockEngine: Sparse and Efficient Fine-tuning in a Pocket | | 0 |
| Anchor Space Optimal Transport as a Fast Solution to Multiple Optimal Transport Problems | Code | 0 |
| Performance Tuning for GPU-Embedded Systems: Machine-Learning-based and Analytical Model-driven Tuning Methodologies | | 0 |
| UncertaintyPlayground: A Fast and Simplified Python Library for Uncertainty Estimation | Code | 0 |
| Benchmarking GPUs on SVBRDF Extractor Model | | 0 |
| Fine-Tuning Generative Models as an Inference Method for Robotic Tasks | Code | 0 |
| Cooperative Minibatching in Graph Neural Networks | Code | 0 |
| Jorge: Approximate Preconditioning for GPU-efficient Second-order Optimization | | 0 |
| Learning to Generate Parameters of ConvNets for Unseen Image Data | Code | 0 |
| FROST: Towards Energy-efficient AI-on-5G Platforms -- A GPU Power Capping Evaluation | | 0 |
| 4K4D: Real-Time 4D View Synthesis at 4K Resolution | | 0 |
| Leveraging Knowledge Distillation for Efficient Deep Reinforcement Learning in Resource-Constrained Environments | Code | 0 |
| Can LSH (Locality-Sensitive Hashing) Be Replaced by Neural Network? | | 0 |
| Unsupervised Discovery of Interpretable Directions in h-space of Pre-trained Diffusion Models | | 0 |
| PC-bzip2: a phase-space continuity enhanced lossless compression algorithm for light field microscopy data | | 0 |
| Neural network scoring for efficient computing | | 0 |
| Revisiting Multi-modal 3D Semantic Segmentation in Real-world Autonomous Driving | | 0 |
| Polynomial Time Cryptanalytic Extraction of Neural Network Models | Code | 0 |
| Transformers for Green Semantic Communication: Less Energy, More Semantics | Code | 0 |
| InstructRetro: Instruction Tuning post Retrieval-Augmented Pretraining | | 0 |
| QFT: Quantized Full-parameter Tuning of LLMs with Affordable Resources | | 0 |
| Distributed Transfer Learning with 4th Gen Intel Xeon Processors | | 0 |
| Look-Up mAI GeMM: Increasing AI GeMMs Performance by Nearly 2.5x via msGeMM | | 0 |
| Scaling Studies for Efficient Parameter Search and Parallelism for Large Language Model Pre-training | | 0 |
| Exploiting Manifold Structured Data Priors for Improved MR Fingerprinting Reconstruction | | 0 |
| Memory-Constrained Semantic Segmentation for Ultra-High Resolution UAV Imagery | | 0 |
| Conversational Factor Information Retrieval Model (ConFIRM) | Code | 0 |
| Towards Non-contact 3D Ultrasound for Wrist Imaging | | 0 |
| Entropic Score metric: Decoupling Topology and Size in Training-free NAS | | 0 |
| PDR-CapsNet: an Energy-Efficient Parallel Approach to Dynamic Routing in Capsule Networks | | 0 |
| End-to-End Training of a Neural HMM with Label and Transition Probabilities | Code | 0 |
| Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models | | 0 |
| Federated Fine-Tuning of LLMs on the Very Edge: The Good, the Bad, the Ugly | | 0 |
| From Words to Watts: Benchmarking the Energy Costs of Large Language Model Inference | | 0 |
| MAD Max Beyond Single-Node: Enabling Large Machine Learning Model Acceleration on Distributed Systems | | 0 |
| STAMP: Differentiable Task and Motion Planning via Stein Variational Gradient Descent | | 0 |
| Mixture of Quantized Experts (MoQE): Complementary Effect of Low-bit Quantization and Robustness | | 0 |
| Who's Harry Potter? Approximate Unlearning in LLMs | | 0 |
| Adaptive Multi-NeRF: Exploit Efficient Parallelism in Adaptive Multiple Scale Neural Radiance Field Rendering | | 0 |
| OneAdapt: Fast Configuration Adaptation for Video Analytics Applications via Backpropagation | | 0 |
| MobileNVC: Real-time 1080p Neural Video Compression on a Mobile Device | | 0 |
| Multi-Source Templates Learning for Real-Time Aerial Tracking | Code | 0 |
| Multi-tiling Neural Radiance Field (NeRF) -- Geometric Assessment on Large-scale Aerial Datasets | | 0 |
| Leveraging Optimization for Adaptive Attacks on Image Watermarks | Code | 0 |
| High Throughput Training of Deep Surrogates from Large Ensemble Runs | | 0 |
| Distill to Delete: Unlearning in Graph Networks with Knowledge Distillation | | 0 |
Page 63 of 113

No leaderboard results yet.