SOTAVerified

Computational Efficiency

Methods and optimizations that reduce the computational resources (e.g., time, memory, or power) needed for model training and inference. This covers techniques that streamline processing, optimize algorithms, or exploit specialized hardware to improve performance without compromising accuracy.
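One widely used memory-reduction technique in this category is post-training quantization: storing weights as 8-bit integers instead of 32-bit floats cuts memory 4x at the cost of a small, bounded rounding error. The sketch below is a minimal illustration of the idea, not the method of any specific paper listed here; all function names are invented for this example.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 plus a per-tensor scale factor."""
    scale = float(np.abs(weights).max()) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float32 weights."""
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)

print(q.nbytes / w.nbytes)  # 0.25 -- int8 needs a quarter of the memory
# round-to-nearest keeps the per-weight error within one quantization step
print(float(np.abs(w - w_hat).max()) <= s)  # True
```

The same scale/round/clip pattern underlies most int8 inference schemes; production implementations add per-channel scales and calibration on real activations.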

Papers

Showing 1811–1820 of 4891 papers

Title | Status | Hype
Step-by-Step Unmasking for Parameter-Efficient Fine-tuning of Large Language Models | Code | 0
Quantum-Powered Personalized Learning | — | 0
3D-RCNet: Learning from Transformer to Build a 3D Relational ConvNet for Hyperspectral Image Classification | Code | 2
TraIL-Det: Transformation-Invariant Local Feature Networks for 3D LiDAR Object Detection with Unsupervised Pre-Training | — | 0
FreqINR: Frequency Consistency for Implicit Neural Representation with Adaptive DCT Frequency Loss | — | 0
Batch-FPM: Random batch-update multi-parameter physical Fourier ptychography neural network | — | 0
LowCLIP: Adapting the CLIP Model Architecture for Low-Resource Languages in Multimodal Image Retrieval Task | Code | 0
Prompt-Matcher: Leveraging Large Models to Reduce Uncertainty in Schema Matching Results | — | 0
The Ultimate Guide to Fine-Tuning LLMs from Basics to Breakthroughs: An Exhaustive Review of Technologies, Research, Best Practices, Applied Research Challenges and Opportunities | — | 0
Interpretable breast cancer classification using CNNs on mammographic images | Code | 0
Page 182 of 490

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ViTaL | Hamming Loss | 0.05 | — | Unverified
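The Hamming loss reported above is a standard multi-label metric: the fraction of individual label slots predicted incorrectly, averaged over all samples and labels. A minimal sketch, using made-up toy arrays rather than any data from the ViTaL benchmark row:

```python
import numpy as np

def hamming_loss(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Fraction of label slots where prediction and ground truth disagree."""
    return float(np.mean(y_true != y_pred))

# toy multi-label data: 2 samples, 3 labels each (illustrative only)
y_true = np.array([[1, 0, 1],
                   [0, 1, 0]])
y_pred = np.array([[1, 1, 1],
                   [0, 1, 0]])

print(hamming_loss(y_true, y_pred))  # 1 wrong slot out of 6 -> 0.1666...
```

Lower is better, with 0.0 meaning every label slot is correct; scikit-learn provides the same metric as `sklearn.metrics.hamming_loss`.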