
Measuring Pre-training Data Quality without Labels for Time Series Foundation Models

2024-12-09

Songkang Wen, Vasilii Feofanov, Jianfeng Zhang


Abstract

Recently, there has been growing interest in time series foundation models that generalize across different downstream tasks. A key ingredient of a strong foundation model is a diverse pre-training dataset, which is particularly challenging to collect for time series classification. In this work, we explore the performance of a contrastive-learning-based foundation model as a function of the data used for pre-training. We introduce contrastive accuracy, a new measure of the quality of the representation space learned by the foundation model. Our experiments reveal a positive correlation between the proposed measure and the model's accuracy on a collection of downstream tasks. This suggests that contrastive accuracy can serve as a criterion for selecting time series datasets that enhance pre-training and thereby improve the foundation model's generalization.
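The abstract does not spell out how contrastive accuracy is computed. A minimal sketch of one plausible definition, consistent with standard contrastive evaluation: embed each series and an augmented (positive) view of it, then measure how often an anchor's own positive is its nearest neighbour in the learned representation space. The function name and the cosine-similarity choice below are assumptions for illustration, not the paper's exact protocol.

```python
import numpy as np

def contrastive_accuracy(anchors, positives):
    """Fraction of anchors whose own positive view is their nearest
    neighbour (by cosine similarity) among all positives.

    anchors, positives: (n, d) embedding arrays, where positives[i]
    is the embedding of an augmented view of the series behind
    anchors[i]. This pairing convention is an assumption.
    """
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    sims = a @ p.T  # (n, n) pairwise cosine similarities
    # A hit means the diagonal entry (the true pair) wins row i.
    return float(np.mean(np.argmax(sims, axis=1) == np.arange(len(a))))

# Sanity check: identical views should be matched perfectly.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
print(contrastive_accuracy(z, z))  # 1.0
```

A higher score indicates a representation space in which augmented views of the same series stay close while different series remain separable, which is the property the abstract links to downstream accuracy.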
