The article presents a comparative analysis of three techniques for adapting Large Language Models (LLMs): Low-Rank Adaptation (LoRA), Soft Prompt Tuning (SPT), and In-Context Learning (ICL). All three allow LLMs to be adapted with private data, yet their security and privacy properties had not been systematically investigated. The study evaluates the robustness of LoRA, SPT, and ICL against three classes of attacks: membership inference, backdoor, and model stealing. The findings show that no single technique dominates on both privacy and security; each method has distinct strengths and weaknesses, so the appropriate choice depends on the deployment scenario.
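For context, LoRA's core idea is to freeze the pretrained weights and train only a low-rank residual update. Below is a minimal, illustrative PyTorch sketch of a LoRA-style linear layer; it is not code from the paper, and the class name, rank `r`, and scaling factor `alpha` are assumptions chosen for the example.

```python
# Minimal sketch of a LoRA-style linear layer (illustrative, not the paper's code).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B (A x), where only A and B are trained."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # Down-projection A (small random init) and up-projection B (zero init).
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the scaled low-rank residual.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768), r=8)
out = layer(torch.randn(2, 768))  # only A and B receive gradients during training
```

Note the zero initialization of `B`: at the start of fine-tuning the adapted layer is identical to the frozen base layer, so training departs smoothly from the pretrained behavior.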

Publication date: 19 Oct 2023
Project Page: Not provided
Paper: https://arxiv.org/pdf/2310.11397