SOTAVerified

Towards Unified and Effective Domain Generalization

2023-10-16 · Code Available

Yiyuan Zhang, Kaixiong Gong, Xiaohan Ding, Kaipeng Zhang, Fangrui Lv, Kurt Keutzer, Xiangyu Yue


Abstract

We propose UniDG, a novel and Unified framework for Domain Generalization that significantly enhances the out-of-distribution generalization performance of foundation models regardless of their architecture. The core idea of UniDG is to fine-tune models during the inference stage, which saves the cost of iterative training. Specifically, we encourage models to learn the distribution of the test data in an unsupervised manner and impose a penalty on the step size of parameter updates. This penalty term effectively mitigates catastrophic forgetting, since we want to maximally preserve the valuable knowledge of the original model. Empirically, across 12 visual backbones, including CNN-, MLP-, and Transformer-based models ranging from 1.89M to 303M parameters, UniDG yields an average accuracy improvement of +5.4% on DomainBed. These results demonstrate the superiority and versatility of UniDG. The code is publicly available at https://github.com/invictus717/UniDG
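A minimal sketch of the anchored test-time update described in the abstract: descend an unsupervised test-loss gradient while an L2 penalty pulls the parameters back toward the frozen source model. The function name, the toy constant gradient, and the hyperparameters below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def anchored_update(theta, grad, theta_src, lr=0.1, lam=1.0):
    """One test-time adaptation step (illustrative, not the paper's exact rule):
    follow the unsupervised test-loss gradient `grad`, while an L2 anchor term
    weighted by `lam` pulls parameters back toward the frozen source model
    `theta_src`, limiting catastrophic forgetting."""
    return theta - lr * (grad + lam * (theta - theta_src))

# Toy setup: a constant surrogate gradient stands in for the unsupervised
# (e.g. entropy-style) test loss; in practice it comes from backprop on
# unlabeled test batches.
theta_src = np.array([1.0, -2.0])
grad = np.array([0.5, 0.5])

anchored = theta_src.copy()
free = theta_src.copy()
for _ in range(50):
    anchored = anchored_update(anchored, grad, theta_src, lam=1.0)
    free = anchored_update(free, grad, theta_src, lam=0.0)

drift_anchored = np.linalg.norm(anchored - theta_src)  # bounded drift
drift_free = np.linalg.norm(free - theta_src)          # grows with each step
```

With `lam = 0` the parameters drift without bound under the constant gradient, whereas with `lam > 0` the drift converges to roughly `|grad| / lam` — this is the sense in which the penalty preserves the source model's knowledge during adaptation.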

Benchmark Results

Dataset          Model                        Metric            Claimed  Verified  Status
DomainNet        UniDG + CORAL + ConvNeXt-B   Average Accuracy  59.5     -         Unverified
Office-Home      UniDG + CORAL + ConvNeXt-B   Average Accuracy  88.9     -         Unverified
PACS             UniDG + CORAL + ConvNeXt-B   Average Accuracy  95.6     -         Unverified
TerraIncognita   UniDG + CORAL + ConvNeXt-B   Average Accuracy  69.6     -         Unverified
VLCS             UniDG + CORAL + ConvNeXt-B   Average Accuracy  84.5     -         Unverified

Reproductions