
Mitigating Simplicity Bias in Deep Learning for Improved OOD Generalization and Robustness

2023-10-09 · Code Available

Bhavya Vasudeva, Kameron Shahabi, Vatsal Sharan


Abstract

Neural networks (NNs) are known to exhibit simplicity bias: they tend to prefer learning 'simple' features over more 'complex' ones, even when the latter may be more informative. Simplicity bias can lead the model to make biased predictions that generalize poorly out-of-distribution (OOD). To address this, we propose a framework that encourages the model to use a more diverse set of features to make predictions. We first train a simple model, and then regularize the conditional mutual information with respect to it to obtain the final model. We demonstrate the effectiveness of this framework in various problem settings and real-world applications, showing that it effectively mitigates simplicity bias, leads to the model using a wider set of features, enhances OOD generalization, and improves subgroup robustness and fairness. We complement these results with theoretical analyses of the effect of the regularization and of its OOD generalization properties.
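The two-stage recipe in the abstract (train a simple model first, then regularize the final model against it) can be illustrated with a toy sketch. This is not the paper's estimator: as a crude stand-in for the conditional-mutual-information regularizer, it penalizes the class-conditional covariance between the final model's logits and the frozen simple model's logits. All names, the data-generating process, and the `cmi_proxy` penalty are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: feature 0 is a "simple" shortcut correlated with the label;
# feature 1 is a more informative "complex" feature.
n = 2000
y = rng.integers(0, 2, n).astype(float)
x_simple = y + 0.8 * rng.standard_normal(n)
x_complex = 2.0 * y - 1.0 + 0.3 * rng.standard_normal(n)
X = np.column_stack([x_simple, x_complex])

def train_logreg(X, y, mask, penalty=None, lam=0.0, lr=0.1, steps=500):
    """Gradient-descent logistic regression restricted to masked features.
    `penalty(logits)` returns (value, d_value/d_logits) when given."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        logits = X @ (w * mask)
        grad_logits = (sigmoid(logits) - y) / len(y)
        if penalty is not None:
            _, g = penalty(logits)
            grad_logits = grad_logits + lam * g
        w -= lr * (X.T @ grad_logits) * mask
    return w

# Stage 1: "simple" model restricted to the shortcut feature.
w_simple = train_logreg(X, y, mask=np.array([1.0, 0.0]))
simple_logits = X @ w_simple

def cmi_proxy(logits):
    """Within each class, penalize the squared covariance between the
    final model's logits and the simple model's logits (a Gaussian-style
    proxy for conditional mutual information, not the paper's method)."""
    val, grad = 0.0, np.zeros_like(logits)
    for c in (0.0, 1.0):
        idx = y == c
        a = logits[idx] - logits[idx].mean()
        b = simple_logits[idx] - simple_logits[idx].mean()
        cov = (a * b).mean()
        val += cov ** 2
        grad[idx] = 2.0 * cov * b / idx.sum()
    return val, grad

# Stage 2: final model on all features, pushed to carry information
# beyond what the simple model already captures.
w_final = train_logreg(X, y, mask=np.ones(2), penalty=cmi_proxy, lam=5.0)
```

With the penalty active, the final model's weight on the shortcut feature shrinks relative to the shortcut-only model, while the complex feature keeps a substantial positive weight, which is the qualitative behaviour the framework targets.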
