
Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
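The "similar close, dissimilar far" objective described above is commonly instantiated as the InfoNCE loss, where each instance's positive pair is pulled together and the other instances in the batch act as negatives. Below is a minimal NumPy sketch of that objective; the function and variable names are illustrative, not drawn from any particular paper on this page.

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.5):
    """InfoNCE loss for a batch of positive pairs (z_a[i], z_b[i]).

    z_a, z_b: (batch, dim) arrays of embeddings; row i of each is a
    positive pair. All other rows in the batch serve as negatives.
    """
    # L2-normalize so the dot product is cosine similarity
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)

    # Pairwise similarity logits: logits[i, j] = sim(z_a[i], z_b[j]) / T
    logits = z_a @ z_b.T / temperature

    # Cross-entropy with the diagonal (matching pairs) as targets
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Minimizing this loss pushes each pair's similarity above its similarity to every other instance in the batch; the temperature controls how sharply hard negatives are penalized.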

(Image credit: Schroff et al. 2015)

Papers

Showing 176–200 of 6661 papers

Title | Status | Hype
Self-Reinforced Graph Contrastive Learning | Code | 0
Sat2Sound: A Unified Framework for Zero-Shot Soundscape Mapping | | 0
GMM-Based Comprehensive Feature Extraction and Relative Distance Preservation For Few-Shot Cross-Modal Retrieval | | 0
SPKLIP: Aligning Spike Video Streams with Natural Language | | 0
The Computation of Generalized Embeddings for Underwater Acoustic Target Recognition using Contrastive Learning | Code | 0
Multiscale Adaptive Conflict-Balancing Model For Multimedia Deepfake Detection | | 0
Representation of perceived prosodic similarity of conversational feedback | | 0
LLM-CoT Enhanced Graph Neural Recommendation with Harmonized Group Policy Optimization | | 0
Multi-modal contrastive learning adapts to intrinsic dimensions of shared latent variables | | 0
Bridging Generative and Discriminative Learning: Few-Shot Relation Extraction via Two-Stage Knowledge-Guided Pre-training | Code | 0
Contrastive Alignment with Semantic Gap-Aware Corrections in Text-Video Retrieval | Code | 0
Not All Documents Are What You Need for Extracting Instruction Tuning Data | | 0
ViEEG: Hierarchical Neural Coding with Cross-Modal Progressive Enhancement for EEG-Based Visual Decoding | | 0
Fine-Grained ECG-Text Contrastive Learning via Waveform Understanding Enhancement | | 0
Towards Sustainability in 6G Network Slicing with Energy-Saving and Optimization Methods | | 0
Robust Cross-View Geo-Localization via Content-Viewpoint Disentanglement | | 0
DC-Seg: Disentangled Contrastive Learning for Brain Tumor Segmentation with Missing Modalities | Code | 1
CellCLIP -- Learning Perturbation Effects in Cell Painting via Text-Guided Contrastive Learning | | 0
Breaking the Batch Barrier (B3) of Contrastive Learning via Smart Batch Mining | Code | 1
Think Twice Before You Act: Enhancing Agent Behavioral Safety with Thought Correction | Code | 2
SoftCoT++: Test-Time Scaling with Soft Chain-of-Thought Reasoning | Code | 2
Fractal Graph Contrastive Learning | | 0
MoCLIP: Motion-Aware Fine-Tuning and Distillation of CLIP for Human Motion Generation | | 0
Less is More: Multimodal Region Representation via Pairwise Inter-view Learning | Code | 0
FRET: Feature Redundancy Elimination for Test Time Adaptation | | 0
Page 8 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | | 10..5sec | 1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified