Multi-Modal Forecaster: Jointly Predicting Time Series and Textual Data
Kai Kim, Howard Tsai, Rajat Sen, Abhimanyu Das, ZiHao Zhou, Abhishek Tanpure, Mathew Luo, Rose Yu
Abstract
Current forecasting approaches are largely unimodal and ignore the rich textual data that often accompanies time series, due to the lack of well-curated multimodal benchmark datasets. In this work, we develop the TimeText Corpus (TTC), a carefully curated, time-aligned text and time series dataset for multimodal forecasting. Our dataset is composed of sequences of numbers and text aligned to timestamps, and includes data from two different domains: climate science and healthcare. It is a significant contribution to the rare selection of available multimodal datasets. We also propose the Hybrid Multi-Modal Forecaster (Hybrid-MMF), a multimodal LLM that jointly forecasts both text and time series data using shared embeddings. However, contrary to our expectations, Hybrid-MMF does not outperform existing baselines in our experiments. This negative result highlights the challenges inherent in multimodal forecasting. Our code and data are available at https://github.com/Rose-STL-Lab/Multimodal_Forecasting.