This paper discusses the complexity of language model architectures, particularly Transformers, and their use in natural language processing tasks. The author proposes a linguistically motivated strategy for encoding the contextual sense of words, centred on semantic compositionality and paying particular attention to dependency relations and related semantic notions. The model is implemented and compared with Transformer-based architectures on a semantic task: computing the similarity of word senses in context. The results suggest that linguistically motivated models can compete with complex neural architectures.
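To make the evaluation task concrete, here is a minimal sketch of word-in-context similarity scored with a Transformer baseline of the kind the paper compares against: the target word's contextual embedding is extracted from each sentence and the two vectors are compared with cosine similarity. The model choice (`bert-base-uncased`), the mean-pooling over subword tokens, and the example sentences are assumptions for illustration, not the paper's actual setup.

```python
# Sketch of the word-in-context similarity task with a Transformer encoder.
# Assumptions: bert-base-uncased as the encoder, mean-pooling over the
# target's subword tokens, cosine similarity as the score.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def target_embedding(sentence: str, target: str) -> torch.Tensor:
    """Return the mean of the hidden states covering `target` in `sentence`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, dim)
    target_ids = tokenizer(target, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    # Locate the target's subword span in the encoded sentence.
    for i in range(len(ids) - len(target_ids) + 1):
        if ids[i:i + len(target_ids)] == target_ids:
            return hidden[i:i + len(target_ids)].mean(dim=0)
    raise ValueError(f"'{target}' not found in: {sentence}")

# Same surface form, different senses: a low similarity is expected here.
a = target_embedding("She sat by the bank of the river.", "bank")
b = target_embedding("He deposited cash at the bank.", "bank")
print(torch.cosine_similarity(a, b, dim=0).item())
```

A linguistically motivated model would produce such in-context sense representations through explicit composition over dependency relations rather than through attention layers; the comparison in the paper is between scores of this kind from both families of models.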

Publication date: 4 Dec 2023
Project Page: https://doi.org/…
Paper: https://arxiv.org/pdf/2312.00680