SOTAVerified

Self-Supervised Learning

Self-Supervised Learning builds on the success of supervised learning while exploiting unlabeled data. Producing a dataset with good labels is expensive, whereas unlabeled data is generated all the time; the motivation of Self-Supervised Learning is to make use of this large amount of unlabeled data. The main idea is to generate labels from the unlabeled data itself, according to its structure or characteristics, and then train on the resulting dataset in a supervised manner. Self-Supervised Learning is widely used in representation learning to make a model learn the latent features of the data. This technique is often employed in computer vision, video processing, and robot control.
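The core idea of generating labels from the data itself can be illustrated with a classic pretext task: rotation prediction (each image is rotated by a random multiple of 90°, and the rotation index becomes the pseudo-label, so a classifier trained to predict the rotation must learn useful image features). A minimal sketch using NumPy; the function name and toy data are illustrative, not from any specific library:

```python
import numpy as np

def make_rotation_pretext(images, rng=None):
    """Turn unlabeled images into a supervised dataset.

    Each image is rotated by a random multiple of 90 degrees;
    the rotation index (0-3) serves as the pseudo-label.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    labels = rng.integers(0, 4, size=len(images))          # pseudo-labels from the data itself
    rotated = np.stack([np.rot90(img, k) for img, k in zip(images, labels)])
    return rotated, labels

# Toy "unlabeled" data: 8 random 32x32 grayscale images.
unlabeled = np.random.default_rng(42).random((8, 32, 32))
x, y = make_rotation_pretext(unlabeled)
```

A downstream model would then be trained to predict `y` from `x` with an ordinary classification loss, and its learned features reused for the real task.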

Source: Self-supervised Point Set Local Descriptors for Point Cloud Registration

Image source: LeCun

Papers

Showing 821–830 of 5044 papers

Title | Status | Hype
From Glucose Patterns to Health Outcomes: A Generalizable Foundation Model for Continuous Glucose Monitor Data Analysis | — | 0
Speech Representation Learning Revisited: The Necessity of Separate Learnable Parameters and Robust Data Augmentation | — | 0
CooPre: Cooperative Pretraining for V2X Cooperative Perception | — | 0
Do Neural Scaling Laws Exist on Graph Self-Supervised Learning? | Code | 0
PooDLe: Pooled and dense self-supervised learning from naturalistic videos | — | 0
OCTCube-M: A 3D multimodal optical coherence tomography foundation model for retinal and systemic diseases with cross-cohort and cross-device validation | — | 0
kNN Retrieval for Simple and Effective Zero-Shot Multi-speaker Text-to-Speech | — | 0
Near, far: Patch-ordering enhances vision foundation models' scene understanding | — | 0
Accelerating Goal-Conditioned RL Algorithms and Research | Code | 3
Leveraging Superfluous Information in Contrastive Representation Learning | — | 0
Page 83 of 505

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Pretraining: None | Images & Text | 57.5 | — | Unverified
2 | Pretraining: ShED | Images & Text | 54.3 | — | Unverified
3 | Pretraining: e-Mix | Images & Text | 48.9 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | Accuracy | 91.7 | — | Unverified
2 | ResNet18 | Accuracy | 91.02 | — | Unverified
3 | MV-MR | Accuracy | 89.67 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | Average top-1 classification accuracy | 93.89 | — | Unverified
2 | ResNet18 | Average top-1 classification accuracy | 92.58 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | Average top-1 classification accuracy | 72.51 | — | Unverified
2 | ResNet18 | Average top-1 classification accuracy | 69.31 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CorInfomax (ResNet50) | Top-1 Accuracy | 82.64 | — | Unverified
2 | CorInfomax (ResNet18) | Top-1 Accuracy | 80.48 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | Average top-1 classification accuracy | 51.84 | — | Unverified
2 | ResNet18 | Average top-1 classification accuracy | 51.67 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CorInfomax (ResNet18) | Top-1 Accuracy | 93.18 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CorInfomax (ResNet18) | Top-1 Accuracy | 71.61 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Hybrid BYOL-S/CvT | Accuracy | 67.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CorInfomax (ResNet50) | Top-1 Accuracy | 54.86 | — | Unverified