| Title | Date | Tags | Code | Links |
| --- | --- | --- | --- | --- |
| Characterizing and Optimizing LLM Inference Workloads on CPU-GPU Coupled Architectures | Apr 16, 2025 | CPU, GPU | Unverified | 0 |
| ConvShareViT: Enhancing Vision Transformers with Convolutional Attention Mechanisms for Free-Space Optical Accelerators | Apr 15, 2025 | GPU | Unverified | 0 |
| 70% Size, 100% Accuracy: Lossless LLM Compression for Efficient GPU Inference via Dynamic-Length Float | Apr 15, 2025 | CPU, GPU | Code Available | 4 |
| Bringing together invertible UNets with invertible attention modules for memory-efficient diffusion models | Apr 15, 2025 | Denoising, GPU | Unverified | 0 |
| PatrolVision: Automated License Plate Recognition in the Wild | Apr 15, 2025 | Autonomous Driving, GPU | Unverified | 0 |
| Optimizing LLM Inference: Fluid-Guided Online Scheduling with Memory Constraints | Apr 15, 2025 | GPU, Inference Optimization | Code Available | 4 |
| CAT: A Conditional Adaptation Tailor for Efficient and Effective Instance-Specific Pansharpening on Real-World Data | Apr 14, 2025 | Computational Efficiency, GPU | Unverified | 0 |
| Frozen Layers: Memory-efficient Many-fidelity Hyperparameter Optimization | Apr 14, 2025 | GPU, Hyperparameter Optimization | Unverified | 0 |
| Anchors no more: Using peculiar velocities to constrain H_0 and the primordial Universe without calibrators | Apr 14, 2025 | GPU | Code Available | 0 |
| Tokenize Image Patches: Global Context Fusion for Effective Haze Removal in Large Images | Apr 13, 2025 | GPU | Code Available | 2 |