The authors, Jobst Landgrebe and Barry Smith, critically examine the claim that machines, specifically large language models (LLMs) such as GPT-4, can understand language. They argue that these models possess neither semantics nor pragmatics, but operate solely on syntactic relationships. The authors contend that speaking of machines "learning" or "understanding" is a misnomer, because these processes in machines are fundamentally different from their human counterparts. Machines merely perform syntactic operations, and any attribution of meaning to these operations is observer-dependent and does not reflect the mathematical processes that the machines actually carry out.

The authors also challenge the idea that machines could come to "understand" in the future. They argue that while machines can emulate certain operations of the human mind, they can produce only partial emulations of complex systems such as the human mind-body continuum. The authors conclude that machines will never have a will or consciousness, because these capacities cannot be modeled mathematically, and only what can be modeled mathematically can be emulated in a machine.

Publication date: July 11, 2023
Project Page: N/A
Paper: https://arxiv.org/pdf/2307.04766.pdf