SOTAVerified

Image Classification

Image Classification is a fundamental task in visual recognition that aims to understand and categorize an image as a whole under a single label. Unlike object detection, which localizes and classifies multiple objects within an image, image classification typically applies to single-object images. When classification becomes fine-grained or instance-level, the task is often referred to as image retrieval, which also involves finding similar images in a large database.

Source: Metamorphic Testing for Object Detection Systems
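The single-label classification described above reduces, at inference time, to a softmax over per-class scores followed by an argmax. A minimal sketch, assuming a hypothetical three-class label set (the labels and logits below are illustrative, not from any benchmark model):

```python
import math

# Hypothetical label set, for illustration only.
LABELS = ["cat", "dog", "car"]

def classify(logits, labels=LABELS):
    """Assign one label to an image from its per-class scores (logits).

    Returns (label, confidence), where confidence is the softmax
    probability of the winning class.
    """
    # Numerically stable softmax: subtract the max logit before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

label, conf = classify([0.5, 3.1, -1.2])
print(label)  # dog  (its logit dominates the other two)
```

In a real pipeline the logits would come from a trained backbone (a CNN or ViT); the softmax/argmax step at the end is the same.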

Papers

Showing 251–300 of 10419 papers

Title | Status | Hype
HGRN2: Gated Linear RNNs with State Expansion | Code | 2
A Simple Episodic Linear Probe Improves Visual Recognition in the Wild | Code | 2
EfficientViM: Efficient Vision Mamba with Hidden State Mixer based State Space Duality | Code | 2
Efficient Multi-Scale Attention Module with Cross-Spatial Learning | Code | 2
UNetFormer: A UNet-like Transformer for Efficient Semantic Segmentation of Remote Sensing Urban Scene Imagery | Code | 2
ASAM: Adaptive Sharpness-Aware Minimization for Scale-Invariant Learning of Deep Neural Networks | Code | 2
EdgeNeXt: Efficiently Amalgamated CNN-Transformer Architecture for Mobile Vision Applications | Code | 2
Dilated Neighborhood Attention Transformer | Code | 2
Effective Data Augmentation With Diffusion Models | Code | 2
DEYO: DETR with YOLO for End-to-End Object Detection | Code | 2
DenseNets Reloaded: Paradigm Shift Beyond ResNets and ViTs | Code | 2
DGR-MIL: Exploring Diverse Global Representation in Multiple Instance Learning for Whole Slide Image Classification | Code | 2
EMR-Merging: Tuning-Free High-Performance Model Merging | Code | 2
Decoupled Knowledge Distillation | Code | 2
DaViT: Dual Attention Vision Transformers | Code | 2
Class-Incremental Learning: A Survey | Code | 2
An Overview of Deep Semi-Supervised Learning | Code | 2
DAT++: Spatially Dynamic Vision Transformer with Deformable Attention | Code | 2
Deep PCB To COCO Convertor | Code | 2
Current Trends in Deep Learning for Earth Observation: An Open-source Benchmark Arena for Image Classification | Code | 2
DAMamba: Vision State Space Model with Dynamic Adaptive Scan | Code | 2
Cross the Gap: Exposing the Intra-modal Misalignment in CLIP via Modality Inversion | Code | 2
CrossFormer++: A Versatile Vision Transformer Hinging on Cross-scale Attention | Code | 2
CrypTen: Secure Multi-Party Computation Meets Machine Learning | Code | 2
DataDream: Few-shot Guided Dataset Generation | Code | 2
Contrastive learning of Class-agnostic Activation Map for Weakly Supervised Object Localization and Semantic Segmentation | Code | 2
ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks | Code | 2
ParC-Net: Position Aware Circular Convolution with Merits from ConvNets and Transformer | Code | 2
MobileOne: An Improved One millisecond Mobile Backbone | Code | 2
Contrastive Learning Rivals Masked Image Modeling in Fine-tuning via Feature Distillation | Code | 2
A Self-Supervised Descriptor for Image Copy Detection | Code | 2
MogaNet: Multi-order Gated Aggregation Network | Code | 2
Context Encoding for Semantic Segmentation | Code | 2
Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning | Code | 2
Continual Forgetting for Pre-trained Vision Models | Code | 2
A Simple Framework for Contrastive Learning of Visual Representations | Code | 2
ConvMAE: Masked Convolution Meets Masked Autoencoders | Code | 2
CLIP-ReID: Exploiting Vision-Language Model for Image Re-Identification without Concrete Text Labels | Code | 2
Adapter is All You Need for Tuning Visual Tasks | Code | 2
AdaBelief Optimizer: Adapting Stepsizes by the Belief in Observed Gradients | Code | 2
Fixing the train-test resolution discrepancy: FixEfficientNet | Code | 2
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence | Code | 2
CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction | Code | 2
AdaFisher: Adaptive Second Order Optimization via Fisher Information | Code | 2
CLIP-Art: Contrastive Pre-training for Fine-Grained Art Classification | Code | 2
GalLoP: Learning Global and Local Prompts for Vision-Language Models | Code | 2
Generative Pretraining from Pixels | Code | 2
GeoVision Labeler: Zero-Shot Geospatial Classification with Vision and Language Models | Code | 2
GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism | Code | 2
CLIP-MoE: Towards Building Mixture of Experts for CLIP with Diversified Multiplet Upcycling | Code | 2
Page 6 of 209

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | CoCa (finetuned) | Top 1 Accuracy | 91 | | Unverified
2 | Model soups (BASIC-L) | Top 1 Accuracy | 90.98 | | Unverified
3 | Model soups (ViT-G/14) | Top 1 Accuracy | 90.94 | | Unverified
4 | DaViT-G | Top 1 Accuracy | 90.4 | | Unverified
5 | Meta Pseudo Labels (EfficientNet-L2) | Top 1 Accuracy | 90.2 | | Unverified
6 | DaViT-H | Top 1 Accuracy | 90.2 | | Unverified
7 | SwinV2-G | Top 1 Accuracy | 90.17 | | Unverified
8 | MAWS (ViT-6.5B) | Top 1 Accuracy | 90.1 | | Unverified
9 | Florence-CoSwin-H | Top 1 Accuracy | 90.05 | | Unverified
10 | RevCol-H | Top 1 Accuracy | 90 | | Unverified
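The Top 1 Accuracy metric reported above is the fraction of test images whose highest-scoring predicted class matches the ground-truth label. A minimal sketch (the prediction and label lists are made-up toy data, not benchmark results):

```python
def top1_accuracy(predicted, actual):
    """Fraction of samples where the top predicted label equals the true label."""
    if len(predicted) != len(actual):
        raise ValueError("prediction and label lists must be the same length")
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Toy example: 4 of 5 predictions match, so top-1 accuracy is 0.8 (80%).
preds = ["cat", "dog", "car", "dog", "cat"]
truth = ["cat", "dog", "car", "cat", "cat"]
print(top1_accuracy(preds, truth))  # 0.8
```

Leaderboards like the one above typically report this on the ImageNet validation set, scaled to a percentage (e.g. 91 means 91% of images were classified correctly on the first guess).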