SOTAVerified

CPU

Papers

Showing 1951–2000 of 2231 papers

Title | Status | Hype
FlashRL: A Reinforcement Learning Platform for Flash Games | | 0
FleetX | | 0
Flexible Techniques for Differentiable Rendering with 3D Gaussians | | 0
FLIC: Fast Linear Iterative Clustering with Active Search | | 0
FL-MISR: Fast Large-Scale Multi-Image Super-Resolution for Computed Tomography Based on Multi-GPU Acceleration | | 0
FloE: On-the-Fly MoE Inference on Memory-constrained GPU | | 0
FlowMAC: Conditional Flow Matching for Audio Coding at Low Bit Rates | | 0
FLY-TTS: Fast, Lightweight and High-Quality End-to-End Text-to-Speech Synthesis | | 0
fMoE: Fine-Grained Expert Offloading for Large Mixture-of-Experts Serving | | 0
Fountain -- an intelligent contextual assistant combining knowledge representation and language models for manufacturing risk identification | | 0
FPGA Acceleration of Sequence Alignment: A Survey | | 0
FPGA Architecture for Deep Learning and its application to Planetary Robotics | | 0
FPGA-based Acceleration of Neural Network for Image Classification using Vitis AI | | 0
FPGA-based Accelerators of Deep Learning Networks for Learning and Classification: A Review | | 0
FP-VEC: Fingerprinting Large Language Models via Efficient Vector Addition | | 0
FRE: A Fast Method For Anomaly Detection And Segmentation | | 0
FreeKV: Boosting KV Cache Retrieval for Efficient LLM Inference | | 0
From Research to Production and Back: Ludicrously Fast Neural Machine Translation | | 0
FSD-Inference: Fully Serverless Distributed Inference with Scalable Cloud Communication | | 0
FTRANS: Energy-Efficient Acceleration of Transformers using FPGA | | 0
FuCoLoT -- A Fully-Correlational Long-Term Tracker | | 0
FullPack: Full Vector Utilization for Sub-Byte Quantized Inference on General Purpose CPUs | | 0
Fully-deformable 3D image registration in two seconds | | 0
Fully Learnable Group Convolution for Acceleration of Deep Neural Networks | | 0
FusionAI: Decentralized Training and Deploying LLMs with Massive Consumer-Level GPUs | | 0
FusionANNS: An Efficient CPU/GPU Cooperative Processing Architecture for Billion-scale Approximate Nearest Neighbor Search | | 0
Fusion of multispectral satellite imagery using a cluster of graphics processing unit | | 0
FusionStitching: Boosting Memory Intensive Computations for Deep Learning Workloads | | 0
Finite volume method network for acceleration of unsteady computational fluid dynamics: non-reacting and reacting flows | | 0
GANDSE: Generative Adversarial Network based Design Space Exploration for Neural Network Accelerator Design | | 0
Gappy Pattern Matching on GPUs for On-Demand Extraction of Hierarchical Translation Grammars | | 0
Gated Low-rank Adaptation for personalized Code-Switching Automatic Speech Recognition on the low-spec devices | | 0
GATSPI: GPU Accelerated Gate-Level Simulation for Power Improvement | | 0
GNNear: Accelerating Full-Batch Training of Graph Neural Networks with Near-Memory Processing | | 0
GCV-Turbo: End-to-end Acceleration of GNN-based Computer Vision Tasks on FPGA | | 0
GEB-1.3B: Open Lightweight Large Language Model | | 0
Generating Efficient DNN-Ensembles with Evolutionary Computation | | 0
Generative AI on the Edge: Architecture and Performance Evaluation | | 0
Generative Design by Reinforcement Learning: Enhancing the Diversity of Topology Optimization Designs | | 0
GeneSys: Enabling Continuous Learning through Neural Network Evolution in Hardware | | 0
Genetically Improved BarraCUDA | | 0
GHOST: A Graph Neural Network Accelerator using Silicon Photonics | | 0
Global Neighbor Sampling for Mixed CPU-GPU Training on Giant Graphs | | 0
GnetDet: Object Detection Optimized on a 224mW CNN Accelerator Chip at the Speed of 106FPS | | 0
GnetSeg: Semantic Segmentation Model Optimized on a 224mW CNN Accelerator Chip at the Speed of 318FPS | | 0
GNNIE: GNN Inference Engine with Load-balancing and Graph-Specific Caching | | 0
Google Coral-based edge computing person reidentification using human parsing combined with analytical method | | 0
GossipGraD: Scalable Deep Learning using Gossip Communication based Asynchronous Gradient Descent | | 0
GPGPU Acceleration of the KAZE Image Feature Extraction Algorithm | | 0
GPTVQ: The Blessing of Dimensionality for LLM Quantization | | 0
Page 40 of 45

No leaderboard results yet.