SOTAVerified

Image Classification

Image Classification is a fundamental task in visual recognition that aims to understand and categorize an image as a whole under a single label. Unlike object detection, which involves both classifying and localizing multiple objects within an image, image classification typically pertains to single-object images. When the classification becomes highly fine-grained or reaches instance level, it is often referred to as image retrieval, which also involves finding similar images in a large database.

Source: Metamorphic Testing for Object Detection Systems
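To make the single-label setup described above concrete, here is a minimal sketch of a linear classifier assigning exactly one label to an image's feature vector. Everything here (the label names, the random weights and features) is an illustrative stand-in, not any model from the list below.

```python
import numpy as np

# Toy single-label image classification: each "image" is a flattened
# feature vector, a linear classifier scores every class, and the
# prediction is the argmax -- one label per image, unlike detection,
# which must also localize multiple objects.
rng = np.random.default_rng(0)
labels = ["cat", "dog", "plane"]          # illustrative class names

W = rng.normal(size=(3, 8))               # class weight matrix: 3 classes x 8 features
x = rng.normal(size=8)                    # one image's feature vector

scores = W @ x                            # one score per class
probs = np.exp(scores - scores.max())
probs /= probs.sum()                      # softmax over class scores
predicted = labels[int(np.argmax(probs))] # the single predicted label
```

In a real pipeline the feature vector would come from a backbone network and `W` would be learned, but the decision rule (softmax followed by argmax over class scores) is the same.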

Papers

Showing 201–250 of 10419 papers

Title | Status | Hype
Effective Data Augmentation With Diffusion Models | Code | 2
Medical Image Classification with KAN-Integrated Transformers and Dilated Neighborhood Attention | Code | 2
MedViT: A Robust Vision Transformer for Generalized Medical Image Classification | Code | 2
Attention Mechanisms in Computer Vision: A Survey | Code | 2
MetaFormer: A Unified Meta Framework for Fine-Grained Recognition | Code | 2
MogaNet: Multi-order Gated Aggregation Network | Code | 2
Agent Attention: On the Integration of Softmax and Linear Attention | Code | 2
Accelerating Transformers with Spectrum-Preserving Token Merging | Code | 2
ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks | Code | 2
Dilated Neighborhood Attention Transformer | Code | 2
ParC-Net: Position Aware Circular Convolution with Merits from ConvNets and Transformer | Code | 2
Efficient Multi-Scale Attention Module with Cross-Spatial Learning | Code | 2
DenseNets Reloaded: Paradigm Shift Beyond ResNets and ViTs | Code | 2
Med-MoE: Mixture of Domain-Specific Experts for Lightweight Medical Vision-Language Models | Code | 2
DEYO: DETR with YOLO for End-to-End Object Detection | Code | 2
Class-Incremental Learning: A Survey | Code | 2
Decoupled Knowledge Distillation | Code | 2
AutoFormer: Searching Transformers for Visual Recognition | Code | 2
Multi-Representation Adaptation Network for Cross-domain Image Classification | Code | 2
Neighborhood Attention Transformer | Code | 2
Next-ViT: Next Generation Vision Transformer for Efficient Deployment in Realistic Industrial Scenarios | Code | 2
NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification | Code | 2
Deep PCB To COCO Convertor | Code | 2
DAT++: Spatially Dynamic Vision Transformer with Deformable Attention | Code | 2
AWT: Transferring Vision-Language Models via Augmentation, Weighting, and Transportation | Code | 2
Aligning Domain-specific Distribution and Classifier for Cross-domain Classification from Multiple Sources | Code | 2
Parameter-Efficient Fine-Tuning with Discrete Fourier Transform | Code | 2
Parameter-Inverted Image Pyramid Networks | Code | 2
DaViT: Dual Attention Vision Transformers | Code | 2
DGR-MIL: Exploring Diverse Global Representation in Multiple Instance Learning for Whole Slide Image Classification | Code | 2
BatchFormerV2: Exploring Sample Relationships for Dense Representation Learning | Code | 2
PlantSeg: A Large-Scale In-the-wild Dataset for Plant Disease Segmentation | Code | 2
Practical Continual Forgetting for Pre-trained Vision Models | Code | 2
Prompt Pre-Training with Twenty-Thousand Classes for Open-Vocabulary Visual Recognition | Code | 2
Current Trends in Deep Learning for Earth Observation: An Open-source Benchmark Arena for Image Classification | Code | 2
RandAugment: Practical automated data augmentation with a reduced search space | Code | 2
CrypTen: Secure Multi-Party Computation Meets Machine Learning | Code | 2
RemoteCLIP: A Vision Language Foundation Model for Remote Sensing | Code | 2
CrossFormer++: A Versatile Vision Transformer Hinging on Cross-scale Attention | Code | 2
Revisiting Unreasonable Effectiveness of Data in Deep Learning Era | Code | 2
Advancing Spiking Neural Networks for Sequential Modeling with Central Pattern Generators | Code | 2
Cross the Gap: Exposing the Intra-modal Misalignment in CLIP via Modality Inversion | Code | 2
Beyond Self-attention: External Attention using Two Linear Layers for Visual Tasks | Code | 2
ScaleKD: Strong Vision Transformers Could Be Excellent Teachers | Code | 2
Beyond Image Super-Resolution for Image Recognition with Task-Driven Perceptual Loss | Code | 2
DAMamba: Vision State Space Model with Dynamic Adaptive Scan | Code | 2
Contrastive Learning Rivals Masked Image Modeling in Fine-tuning via Feature Distillation | Code | 2
ConvMAE: Masked Convolution Meets Masked Autoencoders | Code | 2
Continual Forgetting for Pre-trained Vision Models | Code | 2
AEM: Attention Entropy Maximization for Multiple Instance Learning based Whole Slide Image Classification | Code | 2
Page 5 of 209

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | CoCa (finetuned) | Top 1 Accuracy | 91 | – | Unverified
2 | Model soups (BASIC-L) | Top 1 Accuracy | 90.98 | – | Unverified
3 | Model soups (ViT-G/14) | Top 1 Accuracy | 90.94 | – | Unverified
4 | DaViT-G | Top 1 Accuracy | 90.4 | – | Unverified
5 | Meta Pseudo Labels (EfficientNet-L2) | Top 1 Accuracy | 90.2 | – | Unverified
6 | DaViT-H | Top 1 Accuracy | 90.2 | – | Unverified
7 | SwinV2-G | Top 1 Accuracy | 90.17 | – | Unverified
8 | MAWS (ViT-6.5B) | Top 1 Accuracy | 90.1 | – | Unverified
9 | Florence-CoSwin-H | Top 1 Accuracy | 90.05 | – | Unverified
10 | RevCol-H | Top 1 Accuracy | 90 | – | Unverified
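All entries above report Top 1 Accuracy: a prediction counts as correct only when the model's single highest-scoring class matches the ground-truth label. A minimal sketch of the metric (the logits and targets here are made-up illustrations, not benchmark data):

```python
import numpy as np

def top1_accuracy(logits: np.ndarray, targets: np.ndarray) -> float:
    """Fraction of samples whose argmax class equals the true label."""
    preds = np.argmax(logits, axis=1)   # highest-scoring class per sample
    return float(np.mean(preds == targets))

# Three samples, three classes; the third prediction is wrong.
logits = np.array([[2.0, 0.1, 0.3],
                   [0.2, 1.5, 0.1],
                   [0.9, 0.2, 0.4]])
targets = np.array([0, 1, 2])
print(top1_accuracy(logits, targets))   # 2 of 3 correct -> 0.666...
```

Leaderboard figures like "91" above are this quantity expressed as a percentage over the full test set (for ImageNet, 50,000 validation images).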