Are Structural Concepts Universal in Transformer Language Models? Towards Interpretable Cross-Lingual Generalization
This paper investigates the potential for enhancing cross-lingual generalization in large language models (LLMs), particularly for low-resource languages. It explores the possibility of explicitly aligning conceptual correspondence between languages. The…