LiDAM: Semi-Supervised Learning with Localized Domain Adaptation and Iterative Matching
Qun Liu, Matthew Shreve, Raja Bala
Abstract
Although data is abundant, data labeling is expensive. Semi-supervised learning methods combine a few labeled samples with a large corpus of unlabeled data to train models effectively. This paper introduces our proposed method LiDAM, a semi-supervised learning approach rooted in both domain adaptation and self-paced learning. LiDAM first performs localized domain shifts to extract better domain-invariant features, yielding more accurate clusters and pseudo-labels. These pseudo-labels are then aligned with real class labels in a self-paced fashion using a novel iterative matching technique based on majority consistency over high-confidence predictions. Simultaneously, a final classifier is trained to predict ground-truth labels until convergence. LiDAM achieves state-of-the-art performance on the CIFAR-100 dataset, outperforming FixMatch (73.50% vs. 71.82% accuracy) when using 2500 labels.
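The matching step described above — aligning cluster pseudo-labels with real class labels by majority consistency over high-confidence predictions — might be sketched as below. This is a minimal illustration, not the paper's implementation: the function name, the confidence threshold of 0.95, and the use of a simple per-cluster majority vote are all assumptions.

```python
import numpy as np

def match_pseudo_labels(cluster_ids, class_probs, threshold=0.95):
    """Map each cluster to the class label most often predicted for its
    high-confidence members (illustrative sketch; the 0.95 threshold is
    an assumption, not a value from the paper)."""
    conf = class_probs.max(axis=1)        # per-sample prediction confidence
    preds = class_probs.argmax(axis=1)    # per-sample predicted class
    keep = conf >= threshold              # restrict to confident predictions
    mapping = {}
    for c in np.unique(cluster_ids[keep]):
        members = preds[keep & (cluster_ids == c)]
        # majority vote: most frequent predicted class within the cluster
        mapping[int(c)] = int(np.bincount(members).argmax())
    return mapping

# Toy example: two clusters, six samples, three classes.
cluster_ids = np.array([0, 0, 0, 1, 1, 1])
class_probs = np.array([
    [0.02, 0.02, 0.96],
    [0.01, 0.03, 0.96],
    [0.05, 0.05, 0.90],   # below threshold, ignored
    [0.97, 0.02, 0.01],
    [0.98, 0.01, 0.01],
    [0.96, 0.02, 0.02],
])
print(match_pseudo_labels(cluster_ids, class_probs))  # {0: 2, 1: 0}
```

Repeating this vote as the classifier improves is what makes the alignment iterative and self-paced: more samples clear the confidence threshold over time, refining the cluster-to-label mapping.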
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| CIFAR-10, 250 labels | LiDAM | Error (%) | 19.17 | — | Unverified |
| CIFAR-10, 1000 labels | LiDAM | Accuracy (%) | 89.04 | — | Unverified |
| CIFAR-10, 4000 labels | LiDAM | Error (%) | 7.48 | — | Unverified |
| CIFAR-100, 2500 labels | LiDAM | Error (%) | 26.50 | — | Unverified |
| CIFAR-100, 5000 labels | LiDAM | Accuracy (%) | 75.14 | — | Unverified |
| CIFAR-100, 10000 labels | LiDAM | Error (%) | 23.22 | — | Unverified |