ST-LLM+: Graph Enhanced Spatio-Temporal Large Language Models for Traffic Prediction

Title:
ST-LLM+: Graph Enhanced Spatio-Temporal Large Language Models for Traffic Prediction
Journal Title:
IEEE Transactions on Knowledge and Data Engineering
Publication Date:
15 May 2025
Citation:
Liu, C., Hettige, K. H., Xu, Q., Long, C., Xiang, S., Cong, G., Li, Z., & Zhao, R. (2025). ST-LLM+: Graph Enhanced Spatio-Temporal Large Language Models for Traffic Prediction. IEEE Transactions on Knowledge and Data Engineering, 37(8), 4846–4859. https://doi.org/10.1109/tkde.2025.3570705
Abstract:
Traffic prediction is a crucial component of data management systems, leveraging historical data to learn spatio-temporal dynamics for forecasting future traffic and enabling efficient decision-making and resource allocation. Despite efforts to develop increasingly complex architectures, existing traffic prediction models often struggle to generalize across diverse datasets and contexts, limiting their adaptability in real-world applications. In contrast to existing traffic prediction models, large language models (LLMs) progress mainly through parameter expansion and extensive pre-training while maintaining their fundamental structures. In this paper, we propose ST-LLM+, the graph enhanced spatio-temporal large language models for traffic prediction. By incorporating a proximity-based adjacency matrix derived from the traffic network into the calibrated LLMs, ST-LLM+ captures complex spatio-temporal dependencies within the traffic network. The Partially Frozen Graph Attention (PFGA) module is designed to retain global dependencies learned during LLM pre-training while modeling localized dependencies specific to the traffic domain. To reduce computational overhead, ST-LLM+ adopts a LoRA-augmented training strategy, allowing attention layers to be fine-tuned with fewer learnable parameters. Comprehensive experiments on real-world traffic datasets demonstrate that ST-LLM+ outperforms state-of-the-art models. In particular, ST-LLM+ exhibits robust performance in both few-shot and zero-shot prediction scenarios. Additionally, our case study demonstrates that ST-LLM+ captures global and localized dependencies between stations, verifying its effectiveness for traffic prediction tasks.
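The abstract names two mechanisms: a Partially Frozen Graph Attention (PFGA) module, where early attention layers keep their pre-trained behaviour while later layers adapt to the proximity-based adjacency matrix, and LoRA-augmented training, which limits fine-tuning to low-rank adapter matrices. The PyTorch sketch below illustrates one way these two ideas could fit together; the class names (LoRALinear, GraphAttentionLayer, PartiallyFrozenGraphAttention), the layer split, and the global-versus-adjacency masking are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming a simple single-head attention stack; the actual
# ST-LLM+ architecture (layer counts, masking scheme, LLM backbone) may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank (LoRA) update."""

    def __init__(self, dim, rank=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(dim, dim)
        self.base.weight.requires_grad_(False)   # pre-trained weight stays frozen
        self.base.bias.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(rank, dim) * 0.01)  # down-projection
        self.lora_b = nn.Parameter(torch.zeros(dim, rank))         # up-projection, zero-initialised
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * F.linear(F.linear(x, self.lora_a), self.lora_b)


class GraphAttentionLayer(nn.Module):
    """Single-head attention over station embeddings, masked by a given matrix."""

    def __init__(self, dim, use_lora=False):
        super().__init__()
        make = (lambda: LoRALinear(dim)) if use_lora else (lambda: nn.Linear(dim, dim))
        self.q, self.k, self.v = make(), make(), make()

    def forward(self, x, mask):
        # x: (num_nodes, dim); mask: (num_nodes, num_nodes), nonzero where attention is allowed
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = (q @ k.transpose(-2, -1)) / x.size(-1) ** 0.5
        scores = scores.masked_fill(mask == 0, float("-inf"))
        return F.softmax(scores, dim=-1) @ v


class PartiallyFrozenGraphAttention(nn.Module):
    """First `num_frozen` layers are frozen and attend globally; the remaining
    layers are LoRA-augmented and restricted to the proximity-based adjacency."""

    def __init__(self, dim, num_layers=6, num_frozen=4):
        super().__init__()
        self.num_frozen = num_frozen
        self.layers = nn.ModuleList(
            GraphAttentionLayer(dim, use_lora=(i >= num_frozen)) for i in range(num_layers)
        )
        for layer in list(self.layers)[:num_frozen]:
            for p in layer.parameters():
                p.requires_grad_(False)

    def forward(self, x, adj):
        global_mask = torch.ones_like(adj)  # frozen layers keep unrestricted (global) attention
        for i, layer in enumerate(self.layers):
            x = x + layer(x, global_mask if i < self.num_frozen else adj)
        return x


if __name__ == "__main__":
    num_nodes, dim = 20, 64
    adj = (torch.rand(num_nodes, num_nodes) > 0.7).float()
    adj.fill_diagonal_(1.0)                      # self-loops avoid fully masked rows
    model = PartiallyFrozenGraphAttention(dim)
    out = model(torch.randn(num_nodes, dim), adj)
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(out.shape, trainable)                  # only the LoRA adapters are trainable
```

The toy run at the end prints the trainable parameter count, which stays small because only the LoRA adapters in the unfrozen layers receive gradients, mirroring the abstract's point about fine-tuning attention layers with fewer learnable parameters.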
License type:
Publisher Copyright
Funding Info:
This research / project is supported by the National Research Foundation, Prime Minister's Office, Singapore - Aviation Transformation Programme
Grant Reference no. : ATP2.0_ATM-MET_I2R

This research / project is supported by the Agency for Science, Technology and Research - RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative
Grant Reference no. :
Description:
© 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
ISSN:
1041-4347
1558-2191
2326-3865
Files uploaded:
tkde-2025-st-llm.pdf (PDF, 3.82 MB)