This paper investigates how to enhance cross-lingual generalization in large language models (LLMs), particularly for low-resource languages, by explicitly aligning conceptual correspondences between languages. An analysis spanning 43 languages reveals a high degree of alignability among the structural concepts encoded in these languages. Building on this finding, a meta-learning-based method is proposed to align the conceptual spaces of different languages, enabling zero-shot and few-shot generalization in concept classification and offering insights into the cross-lingual in-context learning phenomenon. Experiments show that the approach achieves competitive results and narrows the performance gap between languages.
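To make the idea of aligning conceptual spaces across languages concrete, here is a minimal sketch using a deliberately simplified stand-in: an orthogonal Procrustes mapping between two languages' concept embeddings, followed by nearest-neighbor concept matching. The embeddings, dimensions, and seed pairs are hypothetical placeholders, and the linear mapping is an illustrative assumption rather than the paper's meta-learning-based method.

```python
# Illustrative sketch only: a hypothetical linear alignment between two
# languages' concept-embedding spaces, not the paper's meta-learning method.
import numpy as np

rng = np.random.default_rng(0)
n_concepts, dim = 40, 128  # hypothetical number of structural concepts / embedding size

# Hypothetical concept embeddings (e.g., averaged probe representations per
# concept) for a high-resource source language and a low-resource target one.
src = rng.normal(size=(n_concepts, dim))
true_rotation = np.linalg.qr(rng.normal(size=(dim, dim)))[0]
tgt = src @ true_rotation + 0.05 * rng.normal(size=(n_concepts, dim))

# Assume a few seed concept pairs are known (few-shot supervision).
seed = rng.choice(n_concepts, size=10, replace=False)

# Orthogonal Procrustes: find rotation W minimizing ||src[seed] @ W - tgt[seed]||_F.
u, _, vt = np.linalg.svd(src[seed].T @ tgt[seed])
W = u @ vt

# Zero-shot concept classification for the held-out concepts: map source
# embeddings into the target space and pick the most similar target concept.
mapped = src @ W
sims = mapped @ tgt.T / (
    np.linalg.norm(mapped, axis=1, keepdims=True) * np.linalg.norm(tgt, axis=1)
)
pred = sims.argmax(axis=1)
held_out = np.setdiff1d(np.arange(n_concepts), seed)
acc = (pred[held_out] == held_out).mean()
print(f"held-out concept matching accuracy: {acc:.2f}")
```

On this synthetic data the mapping recovers nearly all held-out correspondences, which conveys why high alignability of conceptual spaces makes zero-shot and few-shot transfer plausible; the paper's meta-learning approach targets the same goal without assuming a single fixed linear map.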


Publication date: 20 Oct 2023
Project Page: Not provided
Paper: https://arxiv.org/pdf/2310.12794