A robust methodology for long-term sustainability evaluation of Machine Learning models
Jorge Paz-Ruza, João Gama, Amparo Alonso-Betanzos, Bertha Guijarro-Berdiñas
Abstract
Sustainability and efficiency have become essential considerations in the development and deployment of Artificial Intelligence systems, but existing regulatory practices for Green AI still lack standardized, model-agnostic evaluation protocols. Current sustainability auditing pipelines for ML, as well as common research practices, exhibit three main pitfalls: 1) they disproportionately emphasize epoch/batch learning settings, 2) they do not formally model the long-term sustainability cost of adapting and re-training models, and 3) they effectively measure the sustainability of sterile experiments rather than estimating the environmental impact of real-world, long-term AI lifecycles. In this work, we propose a novel evaluation protocol for assessing the long-term sustainability of ML models, inspired by concepts from Online ML, which measures sustainability and performance through incremental/continual model retraining carried out in parallel with real-world data acquisition. Through experimentation on diverse ML tasks with a range of model types, we demonstrate that traditional static train-test evaluations do not reliably capture sustainability under evolving datasets, as they overestimate, underestimate, or erratically estimate the actual cost of maintaining and updating ML models. Our proposed sustainability evaluation pipeline also provides initial evidence that, in real-world, long-term ML lifecycles, higher environmental costs occasionally yield little to no performance benefit.
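The core idea of the protocol, evaluating a model incrementally as data arrives while accumulating the cost of keeping it up to date, resembles a prequential (test-then-train) loop from Online ML. The following is a minimal, illustrative sketch of that idea, not the authors' implementation: the model, function names, and synthetic drifting stream are all assumptions, and the unit "cost" per update is a stand-in for an actual energy measurement (e.g., from a hardware power meter or an emissions tracker).

```python
# Hedged sketch: prequential (test-then-train) evaluation that jointly tracks
# predictive performance and cumulative retraining cost under concept drift.
# All names are illustrative; the per-update cost is a proxy for measured energy.
import random

class OnlinePerceptron:
    """A tiny incrementally-trainable binary classifier (illustrative only)."""
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        s = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if s >= 0 else 0

    def partial_fit(self, x, y):
        # Standard perceptron update; returns a unit cost per update,
        # which a real pipeline would replace with measured energy (J).
        err = y - self.predict(x)
        if err:
            self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, x)]
            self.b += self.lr * err
        return 1

def prequential_evaluate(stream, model):
    """Test-then-train: predict each instance, score it, then update.
    Returns (accuracy, cumulative retraining cost)."""
    correct, total_cost = 0, 0.0
    for x, y in stream:
        correct += (model.predict(x) == y)      # evaluate first...
        total_cost += model.partial_fit(x, y)   # ...then adapt the model
    return correct / len(stream), total_cost

# Synthetic drifting stream: the decision boundary flips halfway through,
# so a static train-test split would misjudge both performance and cost.
random.seed(0)
stream = []
for t in range(1000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    y = int(x[0] > 0) if t < 500 else int(x[0] < 0)
    stream.append((x, y))

acc, cost = prequential_evaluate(stream, OnlinePerceptron(n_features=2))
print(f"prequential accuracy: {acc:.3f}, cumulative update cost: {cost:.0f}")
```

In this sketch the cost accumulates over the model's whole lifecycle rather than over a single sterile training run, which is what allows the protocol to expose cases where extra retraining expense buys little accuracy.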