
Alpha-divergence loss function for neural density ratio estimation

2024-02-03

Yoshiaki Kitazawa


Abstract

Density ratio estimation (DRE) is a fundamental machine learning technique for capturing relationships between two probability distributions. State-of-the-art DRE methods estimate the density ratio using neural networks trained with loss functions derived from variational representations of f-divergences. However, existing methods face optimization challenges, such as overfitting due to lower-unbounded loss functions, biased mini-batch gradients, vanishing training loss gradients, and high sample requirements for Kullback–Leibler (KL) divergence loss functions. To address these issues, we focus on α-divergence, which provides a suitable variational representation of f-divergence. Subsequently, a novel loss function for DRE, the α-divergence loss function (α-Div), is derived. α-Div is concise but offers stable and effective optimization for DRE. The boundedness of α-divergence provides the potential for successful DRE with data exhibiting high KL-divergence. Our numerical experiments demonstrate the effectiveness of α-Div in optimization. However, the experiments also show that the proposed loss function offers no significant advantage over the KL-divergence loss function in terms of RMSE for DRE. This indicates that the accuracy of DRE is primarily determined by the amount of KL-divergence in the data and is less dependent on α-divergence.
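To make the abstract's idea concrete, below is a minimal PyTorch sketch of an α-divergence-style DRE objective. It is derived from the pointwise optimality condition of the variational form rather than copied from the paper, so the exact normalization and sign conventions of the published α-Div loss may differ. The function name `alpha_div_loss`, the log-ratio parameterization T(x) ≈ log p(x)/q(x), the choice α = 0.5, and the toy Gaussian setup are all illustrative assumptions.

```python
import torch

def alpha_div_loss(T_p: torch.Tensor, T_q: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Sketch of an alpha-divergence-style DRE loss for alpha in (0, 1).

    T_p: network outputs T(x) on samples x ~ p (numerator distribution)
    T_q: network outputs T(x) on samples x ~ q (denominator distribution)

    The network parameterizes the log density ratio, T(x) ~ log p(x)/q(x).
    The objective

        (1/alpha) * E_q[exp(alpha * T)] + (1/(1-alpha)) * E_p[exp((alpha-1) * T)]

    is stationary at T = log(p/q): pointwise, the derivative in T is
    q * exp(alpha*T) - p * exp((alpha-1)*T), which vanishes exactly when
    exp(T) = p/q. Both exponential terms are convex in T.
    """
    assert 0.0 < alpha < 1.0, "alpha must lie strictly between 0 and 1"
    term_q = torch.exp(alpha * T_q).mean() / alpha
    term_p = torch.exp((alpha - 1.0) * T_p).mean() / (1.0 - alpha)
    return term_q + term_p

# Toy usage: estimate the ratio between two 1-D Gaussians.
net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    x_p = torch.randn(256, 1)        # samples from p = N(0, 1)
    x_q = torch.randn(256, 1) + 1.0  # samples from q = N(1, 1)
    loss = alpha_div_loss(net(x_p), net(x_q), alpha=0.5)
    opt.zero_grad()
    loss.backward()
    opt.step()
# After training, exp(net(x)) approximates p(x)/q(x).
```

One design point worth noting: because both terms are convex in T and their exponents have opposite signs for α ∈ (0, 1), the objective is bounded below, which is consistent with the abstract's motivation of avoiding the lower-unbounded losses that cause overfitting in other f-divergence DRE methods.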
