This paper examines the computational capabilities of language models based on recurrent neural networks (RNNs). It extends the classical Turing completeness result for RNNs, which characterizes the computational power of such systems, to the probabilistic case, showing that a rationally weighted recurrent language model with unbounded computation time can simulate any probabilistic Turing machine. The paper also establishes a lower bound: under the restriction to real-time computation, such models can still simulate deterministic real-time rational probabilistic Turing machines. Together, these upper and lower bounds sharpen our understanding of the inherent capabilities of recurrent language models.
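To give a flavor of why rational weights matter for such results, here is a minimal sketch (an assumed illustration in the spirit of classic Siegelmann–Sontag-style constructions, not the paper's own construction) of how a single rational-valued "neuron" can store an unbounded binary stack using only affine updates and a threshold, the kind of exact bookkeeping that simulations of Turing machine tapes rely on:

```python
# A minimal, illustrative sketch: a stack over {0, 1} stored exactly in one
# rational-valued state via a base-4 fractional encoding. Push/pop are affine
# (rational-weight) updates; reading the top bit is a threshold, as a
# saturated activation would compute. Not the paper's construction.

from fractions import Fraction


def push(state: Fraction, bit: int) -> Fraction:
    """Push a bit: an affine update with rational coefficients."""
    return state / 4 + Fraction(2 * bit + 1, 4)


def top(state: Fraction) -> int:
    """Read the top bit with a threshold at 3/4."""
    return 1 if state >= Fraction(3, 4) else 0


def pop(state: Fraction) -> Fraction:
    """Pop the top bit: again an affine rational update."""
    return 4 * state - (2 * top(state) + 1)


if __name__ == "__main__":
    s = Fraction(0)
    for b in [1, 0, 1, 1]:      # push 1, 0, 1, 1
        s = push(s, b)
    read = []
    while s != 0:               # pop everything back off
        read.append(top(s))
        s = pop(s)
    print(read)                 # [1, 1, 0, 1] -- last pushed, first popped
```

Two such stacks suffice to encode a Turing machine tape; the paper's contribution lies in extending this style of argument to probabilistic machines and to language models that must emit a symbol at every step (real-time computation).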

Publication date: 20 Oct 2023
Project Page: https://github.com/rycolab/rnn-turing-completeness
Paper: https://arxiv.org/pdf/2310.12942