The study presents aLLM4TS, a framework that adapts Large Language Models (LLMs) for time-series representation learning. The approach recasts time-series forecasting as a self-supervised, multi-patch prediction task, capturing temporal dynamics more effectively than conventional approaches such as contrastive learning or mask-and-reconstruction. Training proceeds in two stages: causal continual pre-training on diverse time-series datasets, followed by fine-tuning for multi-patch prediction on the target time-series domain. The framework's distinctive element is a patch-wise decoding layer, which decodes each patch independently back into its own temporal segment rather than projecting the whole sequence at once, strengthening the model's ability to learn patch-based temporal representations. aLLM4TS achieves strong performance across several downstream tasks, demonstrating the quality of its learned temporal representations and marking a notable step in adapting LLMs for time-series analysis.
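
To make the patch-wise decoding idea concrete, below is a minimal PyTorch sketch. The class name, tensor shapes, and the shared linear head are illustrative assumptions, not the authors' exact implementation: the point is only that each patch embedding from the LLM backbone is mapped to its own span of time steps, instead of flattening all patches into one sequence-level projection.

```python
import torch
import torch.nn as nn

class PatchWiseDecoder(nn.Module):
    """Hypothetical sketch of a patch-wise decoding head.

    Maps each d_model-dim patch representation to patch_len raw time
    steps with a single head shared across patches, rather than one
    sequence-level projection over all patches at once.
    """

    def __init__(self, d_model: int, patch_len: int):
        super().__init__()
        # One linear head shared by every patch: d_model -> patch_len steps.
        self.head = nn.Linear(d_model, patch_len)

    def forward(self, patch_repr: torch.Tensor) -> torch.Tensor:
        # patch_repr: (batch, num_patches, d_model), e.g. hidden states
        # from the LLM backbone after pre-training/fine-tuning.
        per_patch = self.head(patch_repr)      # (batch, num_patches, patch_len)
        return per_patch.flatten(start_dim=1)  # (batch, num_patches * patch_len)

# Toy usage: 8 series, 12 patches of 16 steps, GPT-2-sized 768-dim embeddings.
decoder = PatchWiseDecoder(d_model=768, patch_len=16)
series = decoder(torch.randn(8, 12, 768))
print(series.shape)  # torch.Size([8, 192])
```

By contrast, a sequence-level decoder in this shape convention would need a single projection of size num_patches * d_model to the full horizon, tying decoding to one fixed forecast length; keeping the mapping local to each patch is what allows the same head to serve both the next-patch pre-training stage and multi-patch fine-tuning.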

Publication date: 8 Feb 2024
Project Page: Not Provided
Paper: https://arxiv.org/pdf/2402.04852