SOTAVerified

GPU Papers

Showing 576–600 of 5629 papers

| Title | Status | Hype |
|---|---|---|
| Follow-Your-Canvas: Higher-Resolution Video Outpainting with Extensive Content Generation | Code | 2 |
| FluidLab: A Differentiable Environment for Benchmarking Complex Fluid Manipulation | Code | 2 |
| Full Parameter Fine-tuning for Large Language Models with Limited Resources | Code | 2 |
| Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity | Code | 2 |
| BiFormer: Vision Transformer with Bi-Level Routing Attention | Code | 2 |
| FlashRNN: Optimizing Traditional RNNs on Modern Hardware | Code | 2 |
| Latent Neural Operator for Solving Forward and Inverse PDE Problems | Code | 2 |
| Black-Box Prompt Optimization: Aligning Large Language Models without Model Training | Code | 2 |
| AutoFocus: Efficient Multi-Scale Inference | Code | 2 |
| $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources | Code | 2 |
| Fully-fused Multi-Layer Perceptrons on Intel Data Center GPUs | Code | 2 |
| JAX, M.D.: A Framework for Differentiable Physics | Code | 2 |
| Quiver: Supporting GPUs for Low-Latency, High-Throughput GNN Serving with Workload Awareness | Code | 2 |
| Fine-Tuning Pre-trained Transformers into Decaying Fast Weights | Code | 1 |
| Fine-tuning of sign language recognition models: a technical report | Code | 1 |
| Fine-tuning Quantized Neural Networks with Zeroth-order Optimization | Code | 1 |
| ArchesWeather: An efficient AI weather forecasting model at 1.5° resolution | Code | 1 |
| FindVehicle and VehicleFinder: A NER dataset for natural language-based vehicle retrieval and a keyword-based cross-modal vehicle retrieval system | Code | 1 |
| FG-Net: Fast Large-Scale LiDAR Point Clouds Understanding Network Leveraging Correlated Feature Mining and Geometric-Aware Modelling | Code | 1 |
| Fill the K-Space and Refine the Image: Prompting for Dynamic and Multi-Contrast MRI Reconstruction | Code | 1 |
| Fine-tuning giant neural networks on commodity hardware with automatic pipeline model parallelism | Code | 1 |
| FELARE: Fair Scheduling of Machine Learning Tasks on Heterogeneous Edge Systems | Code | 1 |
| FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods | Code | 1 |
| Apt-Serve: Adaptive Request Scheduling on Hybrid Cache for Scalable LLM Inference Serving | Code | 1 |
| A Probabilistic Neuro-symbolic Layer for Algebraic Constraint Satisfaction | Code | 1 |
Page 24 of 226