SOTAVerified

Adversarial Fine-tuning using Generated Respiratory Sound to Address Class Imbalance

2023-11-11 · Code Available

June-Woo Kim, Chihyeon Yoon, Miika Toikkanen, Sangmin Bae, Ho-Young Jung


Abstract

Deep generative models have emerged as a promising approach in the medical image domain to address data scarcity. However, their use for sequential data like respiratory sounds is less explored. In this work, we propose a straightforward approach to augment imbalanced respiratory sound data using an audio diffusion model as a conditional neural vocoder. We also demonstrate a simple yet effective adversarial fine-tuning method to align features between synthetic and real respiratory sound samples, improving respiratory sound classification performance. Our experimental results on the ICBHI dataset demonstrate that the proposed adversarial fine-tuning is effective, whereas using conventional augmentation alone degrades performance. Moreover, our method outperforms the baseline by 2.24% on the ICBHI Score and improves the accuracy of the minority classes by up to 26.58%. As supplementary material, we provide the code at https://github.com/kaen2891/adversarial_fine-tuning_using_generated_respiratory_sound.

Tasks

Benchmark Results

Dataset                           Model             Metric       Claimed  Verified  Status
ICBHI Respiratory Sound Database  AFT on Mixed-500  ICBHI Score  61.79    -         Unverified

Reproductions