SOTAVerified

Papers

Showing 176–200 of 2231 papers

Title | Status | Hype
A Federated Deep Learning Framework for Cell-Free RSMA Networks | — | 0
TakuNet: an Energy-Efficient CNN for Real-Time Inference on Embedded UAV systems in Emergency Response Scenarios | Code | 2
Optimizing Distributed Deployment of Mixture-of-Experts Model Inference in Serverless Computing | — | 0
TimeRL: Efficient Deep Reinforcement Learning with Polyhedral Dependence Graphs | — | 0
A GPU Implementation of Multi-Guiding Spark Fireworks Algorithm for Efficient Black-Box Neural Network Optimization | Code | 0
Finite Element Method for HJB in Option Pricing with Stock Borrowing Fees | — | 0
Predicting two-dimensional spatiotemporal chaotic patterns with optimized high-dimensional hybrid reservoir computing | — | 0
Learning from Ambiguous Data with Hard Labels | — | 0
FED: Fast and Efficient Dataset Deduplication Framework with GPU Acceleration | Code | 0
Minimal Interaction Seperated Tuning: A New Paradigm for Visual Adaptation | — | 0
Enhancing Deployment-Time Predictive Model Robustness for Code Analysis and Optimization | Code | 0
Human-like Bots for Tactical Shooters Using Compute-Efficient Sensors | — | 0
FPGA-based Acceleration of Neural Network for Image Classification using Vitis AI | — | 0
Dynamic Optimization of Storage Systems Using Reinforcement Learning Techniques | — | 0
Pushing the Envelope of Low-Bit LLM via Dynamic Error Compensation | — | 0
Assessing Text Classification Methods for Cyberbullying Detection on Social Media Platforms | — | 0
Dovetail: A CPU/GPU Heterogeneous Speculative Decoding for LLM inference | — | 0
TPCH: Tensor-interacted Projection and Cooperative Hashing for Multi-view Clustering | Code | 0
High-Rank Irreducible Cartesian Tensor Decomposition and Bases of Equivariant Spaces | Code | 0
Unsupervised Learning Approach for Beamforming in Cell-Free Integrated Sensing and Communication | — | 0
Data-Juicer 2.0: Cloud-Scale Adaptive Data Processing for and with Foundation Models | Code | 9
Power- and Fragmentation-aware Online Scheduling for GPU Datacenters | Code | 0
Hybrid Network- and User-Centric Scalable Cell-Free Massive MIMO for Fronthaul Signaling Minimization | Code | 0
WebLLM: A High-Performance In-Browser LLM Inference Engine | Code | 11
Energy consumption of code small language models serving with runtime engines and execution providers | — | 0
Page 8 of 90