The study investigates the ability of large language models (LLMs) to perform zero-shot time-series forecasting. Specifically, the researchers analyze whether LLMs can extrapolate the behavior of dynamical systems governed by principles of physical interest. The results reveal that LLaMA 2, a model trained primarily on text, can accurately predict dynamical-system time series without fine-tuning or prompt engineering. Moreover, the accuracy of the learned physical rules increases with the length of the input context window, revealing an in-context neural scaling law. The study also presents a flexible and efficient algorithm for extracting probability density functions over multi-digit numbers directly from an LLM's output.
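
The sketch below (not the authors' released code) illustrates the two ideas under stated assumptions: a series is rescaled and written as comma-separated two-digit strings for the model to continue, and a probability density over the next value is assembled hierarchically as P(d1 d2) = P(d1) x P(d2 | d1) from digit-token probabilities. The checkpoint name `meta-llama/Llama-2-7b-hf`, the two-digit precision, and the rescaling range are illustrative choices, not details from the paper.

```python
# Minimal sketch: zero-shot numeric forecasting with an LLM and a digit-level PDF.
# Assumes a Hugging Face LLaMA 2 checkpoint (gated access required) and that the
# tokenizer splits numbers into single-digit tokens, as LLaMA's SentencePiece vocab does.
import math
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint; any causal LM works in principle
device = "cuda" if torch.cuda.is_available() else "cpu"
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype="auto").to(device)
model.eval()

def encode_series(values, lo, hi):
    """Rescale values to [0, 99] and render them as comma-separated 2-digit strings."""
    scaled = [round(99 * (v - lo) / (hi - lo)) for v in values]
    return ",".join(f"{s:02d}" for s in scaled) + ","

@torch.no_grad()
def next_digit_probs(prompt):
    """Probability of each digit '0'-'9' being the next token after `prompt`."""
    ids = tok(prompt, return_tensors="pt").input_ids.to(device)
    logits = model(ids).logits[0, -1]          # logits for the next token
    probs = torch.softmax(logits, dim=-1)
    digit_ids = [tok.convert_tokens_to_ids(str(d)) for d in range(10)]
    p = probs[digit_ids]
    return (p / p.sum()).tolist()              # renormalize over the 10 digit tokens only

def next_value_pdf(prompt):
    """Coarse PDF over the next 2-digit value: P(d1 d2) = P(d1) * P(d2 | d1).

    Re-tokenizes the full prompt for each branch for simplicity; a practical
    implementation would cache past key/values and refine only likely branches.
    """
    pdf = {}
    p_first = next_digit_probs(prompt)
    for d1 in range(10):
        p_second = next_digit_probs(prompt + str(d1))
        for d2 in range(10):
            pdf[10 * d1 + d2] = p_first[d1] * p_second[d2]
    return pdf

# Usage: extrapolate a sine wave purely from the numeric context, no fine-tuning.
series = [math.sin(0.3 * t) for t in range(80)]
prompt = encode_series(series, lo=-1.0, hi=1.0)
pdf = next_value_pdf(prompt)
print("most likely next (rescaled) value:", max(pdf, key=pdf.get))
```

Longer input contexts should sharpen the extracted density around the true next value, which is the in-context scaling behavior the paper reports.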

Publication date: 2 Feb 2024
Project Page: Not Provided
Paper: https://arxiv.org/pdf/2402.00795