The Limitations of Model Retraining in the Face of Performativity
2024-08-16
Anmol Kabra, Kumar Kshitij Patel
Abstract
We study stochastic optimization in the context of performative shifts, where the data distribution changes in response to the deployed model. We demonstrate that naive retraining can be provably suboptimal even for simple distribution shifts. The issue worsens when models are retrained with only a finite number of samples at each retraining step. We show that adding regularization to retraining corrects both issues, attaining provably optimal models under performative distribution shifts. Our work advocates rethinking how machine learning models are retrained in the presence of performative effects.
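To make the suboptimality of naive retraining concrete, here is a toy illustration in the spirit of the performative prediction literature; the setup, parameter values, and the hand-tuned regularization strength below are illustrative assumptions, not taken from the paper. Deploying a model `theta` induces an outcome `y ~ Bernoulli(p + mu * theta)`, and the learner minimizes the squared loss `(y - theta)^2`. Naive retraining (repeated risk minimization) converges to a performatively *stable* point, which differs from the performative *optimum* because the induced variance also depends on `theta`; a suitable ridge penalty shifts the fixed point onto the optimum.

```python
# Toy performative-prediction sketch (assumed setup, not the paper's construction).
# Deploying theta induces y ~ Bernoulli(p + mu * theta); loss is (y - theta)^2.

p, mu = 0.3, 0.3  # base bias and performative strength (assumed values)


def mean_under(theta):
    """Mean of the outcome distribution induced by deploying theta."""
    return p + mu * theta


def performative_risk(theta):
    """E_{y ~ D(theta)} (y - theta)^2 for the induced Bernoulli outcome."""
    q = mean_under(theta)
    return q * (1 - q) + (q - theta) ** 2


def retrain(theta0, lam=0.0, steps=200):
    """Repeated (regularized) risk minimization with exact expectations.

    Each step solves argmin_t E_{D(theta)}[(y - t)^2] + lam * t^2,
    whose closed form is mean_under(theta) / (1 + lam).
    """
    theta = theta0
    for _ in range(steps):
        theta = mean_under(theta) / (1.0 + lam)
    return theta


# Naive retraining converges to the stable point p / (1 - mu) ~= 0.4286.
theta_naive = retrain(0.0)

# Grid search over [0, 1] locates the performative optimum (~= 0.375 here).
theta_opt = min(
    (performative_risk(t / 10000), t / 10000) for t in range(10001)
)[1]

# Regularized retraining with lam hand-tuned for this toy problem lands
# on the performative optimum instead of the stable point.
theta_reg = retrain(0.0, lam=0.1)
```

The gap arises because the stable point only matches the induced mean, while the optimum also accounts for how the induced variance `q * (1 - q)` moves with `theta`; here `performative_risk(theta_reg) < performative_risk(theta_naive)`. In this toy problem the penalty strength was chosen by hand, whereas the paper's contribution is showing that appropriately regularized retraining attains the optimum in general.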