This paper studies the theoretical underpinnings of Low-Rank Adaptation (LoRA), a widely used technique for fine-tuning pre-trained models such as large language models and diffusion models. Despite its practical success, LoRA's theoretical foundations remain largely unexplored. The paper addresses this gap by analyzing the expressive power of LoRA, proving that, under certain conditions, LoRA can adapt any model to accurately represent any smaller target model, and quantifying the approximation error when the LoRA-rank falls below the required threshold. All theoretical insights are validated with numerical experiments.
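
For readers unfamiliar with the mechanism being analyzed, the sketch below illustrates the kind of low-rank update LoRA applies: a frozen pre-trained weight matrix W is augmented with a trainable rank-r correction B·A. This is a minimal sketch assuming a PyTorch-style setup; the class name `LoRALinear` and the `rank`/`alpha` hyperparameters are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer with a trainable low-rank update: W + (alpha/r) * B @ A."""
    def __init__(self, in_features, out_features, rank=4, alpha=1.0):
        super().__init__()
        # Pre-trained weight, kept frozen during fine-tuning.
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)
        # Low-rank factors: A is (rank x in), B is (out x rank). Only these are trained.
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Base output plus the rank-r correction; since B starts at zero,
        # the adapted layer initially reproduces the pre-trained layer exactly.
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

# Usage: adapt a 64 -> 32 layer with a rank-4 update.
layer = LoRALinear(64, 32, rank=4)
y = layer(torch.randn(8, 64))
```

The paper's question, roughly, is how large this rank r must be for such updates to make an adapted model match a smaller target model, and how large the residual error is when r is smaller than that threshold.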

Publication date: 26 Oct 2023
Project Page: https://arxiv.org/abs/2310.17513v1
Paper: https://arxiv.org/pdf/2310.17513