SOTAVerified

GPU

Papers

Showing 1301–1325 of 5629 papers

| Title | Status | Hype |
| --- | --- | --- |
| LLMThinkBench: Towards Basic Math Reasoning and Overthinking in Large Language Models | Code | 1 |
| CoSense3D: an Agent-based Efficient Learning Framework for Collective Perception | Code | 1 |
| Easy and Efficient Transformer: Scalable Inference Solution For large NLP model | Code | 1 |
| Edge-MoE: Memory-Efficient Multi-Task Vision Transformer Architecture with Task-level Sparsity via Mixture-of-Experts | Code | 1 |
| LMLT: Low-to-high Multi-Level Vision Transformer for Image Super-Resolution | Code | 1 |
| Long Movie Clip Classification with State-Space Video Models | Code | 1 |
| Automatic Polyp Segmentation with Multiple Kernel Dilated Convolution Network | Code | 1 |
| FastDOG: Fast Discrete Optimization on GPU | Code | 1 |
| Dynamic Sparse Training with Structured Sparsity | Code | 1 |
| Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks | Code | 1 |
| 4K-Resolution Photo Exposure Correction at 125 FPS with ~8K Parameters | Code | 1 |
| FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation | Code | 1 |
| CorticalFlow: A Diffeomorphic Mesh Transformer Network for Cortical Surface Reconstruction | Code | 1 |
| FastFormers: Highly Efficient Transformer Models for Natural Language Understanding | Code | 1 |
| Dynamic Structure Pruning for Compressing CNNs | Code | 1 |
| Fast Graph Representation Learning with PyTorch Geometric | Code | 1 |
| Fast Light-Field Disparity Estimation With Multi-Disparity-Scale Cost Aggregation | Code | 1 |
| Fast k-NN Graph Construction by GPU based NN-Descent | Code | 1 |
| Dynamic Pooling Improves Nanopore Base Calling Accuracy | Code | 1 |
| LLM-PQ: Serving LLM on Heterogeneous Clusters with Phase-Aware Partition and Adaptive Quantization | Code | 1 |
| Fast Neural Representations for Direct Volume Rendering | Code | 1 |
| μKG: A Library for Multi-source Knowledge Graph Embeddings and Applications | Code | 1 |
| Dynamic Mesh-Aware Radiance Fields | Code | 1 |
| Dynamic Low-Rank Sparse Adaptation for Large Language Models | Code | 1 |
| Dynamic-OFA: Runtime DNN Architecture Switching for Performance Scaling on Heterogeneous Embedded Platforms | Code | 1 |
Page 53 of 226

No leaderboard results yet.