The paper examines Transformer-based language models, specifically BERT and (Chat)GPT, for detecting semantic change, i.e., shifts in word meaning over time. Detecting such shifts is central to understanding historical texts and supports a range of research applications. The authors' experiments indicate that while ChatGPT generates impressively fluent responses to human queries, it performs significantly worse than BERT at detecting short-term semantic change. This is the first attempt to assess the use of (Chat)GPT for studying semantic change.
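To make the BERT side of the comparison concrete, below is a minimal sketch of a common recipe for contextual-embedding-based semantic change detection: embed a target word in sentences from two time periods, average its contextual vectors per period, and score change as cosine distance. This is an illustrative assumption, not necessarily the paper's exact pipeline; the model name (`bert-base-uncased`) and the toy sentences are also assumptions.

```python
# Minimal sketch of BERT-based semantic change detection (illustrative only).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed_target(word: str, sentences: list[str]) -> torch.Tensor:
    """Average BERT's contextual vectors for `word` over all its occurrences."""
    word_ids = tokenizer.encode(word, add_special_tokens=False)
    vectors = []
    for sent in sentences:
        inputs = tokenizer(sent, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state[0]  # (seq_len, 768)
        ids = inputs["input_ids"][0].tolist()
        # Locate the target word's subword span and average its vectors.
        for i in range(len(ids) - len(word_ids) + 1):
            if ids[i : i + len(word_ids)] == word_ids:
                vectors.append(hidden[i : i + len(word_ids)].mean(dim=0))
    return torch.stack(vectors).mean(dim=0)

# Toy corpora for the target word "cell" in two periods (illustrative only).
old = ["The monk retired to his cell to pray.", "Each cell in the prison was tiny."]
new = ["She charged her cell overnight.", "Call me on my cell when you land."]

v_old, v_new = embed_target("cell", old), embed_target("cell", new)
change = 1 - torch.cosine_similarity(v_old, v_new, dim=0).item()
print(f"cosine-distance change score: {change:.3f}")  # higher = more change
```

A higher distance between the two period-level averages suggests a larger shift in how the word is used; real evaluations compare such scores against human-annotated change ratings rather than eyeballing a single number.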

Publication date: 26 Jan 2024
Project Page: N/A
Paper: https://arxiv.org/pdf/2401.14040