Invariant Batch Normalization for Multi-source Domain Generalization

2021-01-01

Qing Lian, LIN Yong, Tong Zhang


Abstract

We consider the domain generalization problem, where the test domain differs from the training domain. For deep neural networks, we show that the batch normalization layer is a highly unstable component under such domain shifts, and we identify two sources for its instability. Based on this observation, we propose a new learning formulation that can learn robust neural networks so that the corresponding batch normalization layers are invariant under domain shifts. Experimental results on three standard domain generalization benchmarks demonstrate that our method can learn neural network models with significantly more stable batch normalization layers on unseen domains, and the improved stability leads to superior generalization performances.
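The instability the abstract describes can be illustrated with a small sketch (not the authors' method): batch normalization freezes per-feature statistics estimated on the training domain, so when the test domain's feature distribution shifts, normalizing with those frozen statistics no longer yields zero-mean, unit-variance activations. The domain parameters below are hypothetical, chosen only to make the shift visible.

```python
import random

random.seed(0)

def batch_stats(xs):
    """Per-feature mean and population variance, as BatchNorm computes them."""
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, v

# Hypothetical 1-D feature: training domain ~ N(0, 1), test domain ~ N(2, 4).
train = [random.gauss(0.0, 1.0) for _ in range(10000)]
test = [random.gauss(2.0, 2.0) for _ in range(10000)]

mu, var = batch_stats(train)  # statistics frozen at training time
eps = 1e-5

# Normalize the shifted test batch with the training-domain statistics.
normed = [(x - mu) / (var + eps) ** 0.5 for x in test]
m_out, v_out = batch_stats(normed)

# The normalized output is far from zero-mean / unit-variance: the BN
# output distribution itself drifts under domain shift, which is the
# instability the paper targets.
print(m_out, v_out)
```

A BN layer that is invariant under domain shifts, in the sense the abstract describes, would keep these output statistics stable across domains.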
