
CVAE-SM: A Conditional Variational Autoencoder with Style Modulation for Efficient Uncertainty Quantification

2024-05-13 · IEEE International Conference on Robotics and Automation (ICRA) 2024 · Code Available

Amin Ullah, Taiqing Yan, Li Fuxin


Abstract

Deep learning has brought transformative advancements to object segmentation, especially in marine robotics contexts such as waste management and subaquatic infrastructure oversight. However, a central challenge persists: calibrating the model's prediction confidence to ensure robust and reliable outcomes, particularly in the demanding underwater environment. Existing solutions for estimating uncertainty are often computationally intensive and have largely centered on Bayesian neural networks or ensemble methods. In this paper, we present a Conditional Variational Autoencoder-based framework (CVAE-SM), which is capable of generating diverse latent codes for improved uncertainty quantification in image segmentation. Our method, enhanced by a style modulator, merges content features and latent codes more effectively, leading to refined prediction of uncertainty levels. We further introduce a dataset of perturbed underwater images to benchmark uncertainty quantification in this domain. The proposed model not only surpasses peers in segmentation metrics but also matches ensemble models in uncertainty predictions, all while being 2.5 times faster.
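The abstract describes a style modulator that merges content features with sampled latent codes so that different codes yield different segmentation hypotheses, from which uncertainty can be estimated. The paper's exact architecture is not reproduced here; the following is a minimal toy sketch assuming an AdaIN-style modulation, where the latent code predicts a per-channel scale and shift applied to normalized content features. All names (`style_modulate`, `w_scale`, `w_shift`) are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def style_modulate(content, z, w_scale, w_shift):
    """Toy AdaIN-style modulation (assumption, not the paper's exact layer):
    normalize each channel of the content features, then apply a per-channel
    scale and shift predicted linearly from the latent code z."""
    # content: (C, H, W), z: (D,), w_scale/w_shift: (C, D)
    mean = content.mean(axis=(1, 2), keepdims=True)
    std = content.std(axis=(1, 2), keepdims=True) + 1e-5
    normalized = (content - mean) / std
    scale = w_scale @ z  # (C,) per-channel scale from latent code
    shift = w_shift @ z  # (C,) per-channel shift from latent code
    return scale[:, None, None] * normalized + shift[:, None, None]

C, H, W, D = 8, 16, 16, 4
content = rng.standard_normal((C, H, W))   # stand-in encoder features
w_scale = rng.standard_normal((C, D))
w_shift = rng.standard_normal((C, D))

# Sampling different latent codes produces different modulated feature maps,
# i.e. diverse downstream segmentation hypotheses; their spread is one
# cheap proxy for predictive uncertainty.
out_a = style_modulate(content, rng.standard_normal(D), w_scale, w_shift)
out_b = style_modulate(content, rng.standard_normal(D), w_scale, w_shift)
```

In this sketch the per-sample variance across many such outputs would play the role that ensemble disagreement plays in ensemble-based uncertainty estimation, but with a single forward pass per sample through one network.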
