
Robust Semi-supervised Learning via f-Divergence and α-Rényi Divergence

2024-05-01

Gholamali Aminian, Amirhossien Bagheri, Mahyar JafariNodeh, Radmehr Karimian, Mohammad-Hossein Yassaee


Abstract

This paper investigates a range of empirical risk functions and regularization methods suitable for self-training approaches to semi-supervised learning. These methods draw inspiration from divergence measures, specifically f-divergences and α-Rényi divergences. Building on this divergence-based theoretical foundation, we provide insights that deepen the understanding of the proposed empirical risk functions and regularization techniques. In pseudo-labeling and entropy minimization, two self-training methods widely used for semi-supervised learning, the self-training process suffers from an inherent mismatch between true labels and pseudo-labels (noisy pseudo-labels); some of our empirical risk functions are robust to such noisy pseudo-labels. Under certain conditions, these empirical risk functions outperform traditional self-training methods.
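The divergences named in the abstract can be made concrete with a small sketch. The code below is an illustration of the underlying quantities, not the paper's actual loss functions: it computes the α-Rényi divergence between two discrete distributions and the KL divergence (the f-divergence with f(t) = t log t), which the Rényi divergence recovers in the limit α → 1.

```python
import math

def renyi_divergence(p, q, alpha):
    """alpha-Renyi divergence between discrete distributions P and Q:
    D_alpha(P || Q) = 1/(alpha - 1) * log( sum_i p_i^alpha * q_i^(1 - alpha) ).
    Recovers the KL divergence in the limit alpha -> 1."""
    if alpha <= 0 or alpha == 1:
        raise ValueError("alpha must be positive and != 1")
    s = sum(pi ** alpha * qi ** (1 - alpha)
            for pi, qi in zip(p, q) if pi > 0)  # skip zero-mass atoms of P
    return math.log(s) / (alpha - 1)

def kl_divergence(p, q):
    """KL(P || Q): the f-divergence generated by f(t) = t * log(t)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Example: divergence between a model's predictive distribution and a
# (possibly noisy) pseudo-label distribution over 3 classes.
model_probs = [0.7, 0.2, 0.1]
pseudo_label = [0.5, 0.3, 0.2]
print(renyi_divergence(model_probs, pseudo_label, alpha=2.0))
print(kl_divergence(model_probs, pseudo_label))
```

A risk function built from such a divergence penalizes disagreement between the model's prediction and the pseudo-label; varying α (or the choice of f) changes how heavily large disagreements are weighted, which is what the abstract's robustness claim about noisy pseudo-labels refers to.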
