The paper examines whether neural language models can predict human reading behavior. It finds that training these models on more developmentally plausible data, as in the BabyLM Challenge, can improve their acquisition of linguistic knowledge, but this does not translate into better alignment with human reading behavior; the models' predictions remain misaligned with human reading measures. Training on developmentally plausible datasets alone, therefore, may not be sufficient to produce language models that accurately predict human language processing.
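The digest does not include the paper's evaluation code; the sketch below illustrates the standard surprisal-based setup typically used to compare language models against human reading behavior. The checkpoint name (`gpt2` as a placeholder for a BabyLM-style model), the example sentence, and the per-word reading times are all illustrative assumptions, not material from the paper.

```python
# Minimal sketch: compute per-word surprisal from a causal LM and check how well
# it correlates with (hypothetical) human reading times. All specifics below are
# illustrative assumptions, not the paper's actual models or data.
import math

import torch
from scipy.stats import spearmanr
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; a BabyLM-trained checkpoint would be swapped in here
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def word_surprisals(sentence: str) -> list[tuple[str, float]]:
    """Return (word, surprisal in bits) pairs, summing subword surprisals per word."""
    words = sentence.split()
    # Encode word by word so subword pieces can be mapped back to whole words.
    word_ids = [tokenizer.encode((" " + w) if i > 0 else w) for i, w in enumerate(words)]
    flat = [tok for ids in word_ids for tok in ids]
    input_ids = torch.tensor([flat])
    with torch.no_grad():
        logits = model(input_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)

    surprisals = []
    pos = 0
    for word, ids in zip(words, word_ids):
        s = 0.0
        for tok in ids:
            # Surprisal of token at `pos` is -log2 P(token | preceding tokens);
            # the very first token has no context and is skipped here.
            if pos > 0:
                s += -log_probs[0, pos - 1, tok].item() / math.log(2)
            pos += 1
        surprisals.append((word, s))
    return surprisals


if __name__ == "__main__":
    sentence = "The old man the boats"
    # Hypothetical per-word reading times (ms), purely for illustration.
    reading_times = [310, 325, 402, 388, 351]
    surps = word_surprisals(sentence)
    rho, p = spearmanr([s for _, s in surps], reading_times)
    for (w, s), rt in zip(surps, reading_times):
        print(f"{w:>6}  surprisal={s:5.2f} bits  RT={rt} ms")
    print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")
```

In practice, studies of this kind fit regression models of reading times with surprisal as a predictor rather than a simple rank correlation; the correlation here just keeps the sketch self-contained.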

Publication date: 1 Dec 2023
Project Page: Not provided
Paper: https://arxiv.org/pdf/2311.18761