The paper presents a Spatial-Temporal Large Language Model (ST-LLM) for traffic prediction. ST-LLM treats the timesteps at each location as tokens and uses a spatial-temporal embedding module to learn spatial-location and global-temporal representations of those tokens. These representations are then fused so that each token carries unified spatial and temporal information. The authors also propose a partially frozen attention strategy for the LLM, in which most layers remain frozen while selected attention components are fine-tuned, designed to capture spatial-temporal dependencies for traffic prediction. ST-LLM outperforms state-of-the-art models and remains robust in both few-shot and zero-shot prediction scenarios.
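As a rough illustration of the partially frozen idea (not the authors' code), a training setup might freeze every transformer layer except the attention sub-modules of the last few layers. The function and module names below are hypothetical, and the exact split of frozen versus trainable components is an assumption:

```python
def partially_frozen_plan(num_layers: int, num_unfrozen: int) -> dict:
    """Hypothetical sketch: map each layer's sub-modules to a trainability flag.

    Assumption: only the multi-head attention in the last `num_unfrozen`
    layers is fine-tuned; feed-forward blocks stay frozen everywhere.
    """
    plan = {}
    for i in range(num_layers):
        is_last = i >= num_layers - num_unfrozen
        plan[f"layer_{i}.attention"] = is_last     # trainable only in last layers
        plan[f"layer_{i}.feed_forward"] = False    # frozen in all layers
    return plan

# Example: a 6-layer LLM with attention unfrozen in the final layer only.
plan = partially_frozen_plan(num_layers=6, num_unfrozen=1)
```

In a real implementation, such a plan would be applied by setting `requires_grad` on the corresponding parameter groups before fine-tuning.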
Publication date: 19 Jan 2024
Project Page: Not Provided
Paper: https://arxiv.org/pdf/2401.10134