The research by Zhou Mingjun, Daiqing Zhuoma, Qun Nuo, and Nyima Tashi from Tibet University focuses on the efficient fine-tuning of Tibetan pre-trained language models (PLMs), addressing a gap in work on low-resource languages such as Tibetan. Three fine-tuning approaches were evaluated on the TNCC-title dataset: prompt-tuning, lightweight Adapter fine-tuning, and a combination of the two. The experiments yielded significant performance improvements and offer practical guidance for building Tibetan-language applications on top of pre-trained models.
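
For readers unfamiliar with the two techniques, here is a minimal PyTorch sketch of the core ideas behind them: a bottleneck Adapter module added to a frozen backbone, and a soft prompt of trainable virtual tokens (prompt-tuning). The class names, bottleneck width, and number of virtual tokens are illustrative assumptions, not details taken from the paper or the authors' implementation.

```python
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Lightweight adapter inserted after a frozen transformer sub-layer.

    Only the small down/up projections are trained; the backbone stays frozen.
    """

    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection preserves the frozen backbone's representation.
        return hidden_states + self.up(self.act(self.down(hidden_states)))


class SoftPrompt(nn.Module):
    """Trainable virtual tokens prepended to the input embeddings (prompt-tuning)."""

    def __init__(self, num_virtual_tokens: int, hidden_size: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(num_virtual_tokens, hidden_size) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)


if __name__ == "__main__":
    hidden = torch.randn(2, 16, 768)  # (batch, seq_len, hidden_size)
    adapter_out = BottleneckAdapter(hidden_size=768)(hidden)
    prompted = SoftPrompt(num_virtual_tokens=8, hidden_size=768)(hidden)
    print(adapter_out.shape, prompted.shape)  # (2, 16, 768) and (2, 24, 768)
```

Combining the two, as the paper does, simply means training both the adapter projections and the prompt embeddings while keeping the PLM weights frozen.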

Publication date: 22 Sep 2023
Project Page: https://arxiv.org/abs/2309.12109
Paper: https://arxiv.org/pdf/2309.12109