This paper provides a comprehensive privacy assessment of prompts learned by visual prompt learning. The authors show that learned prompts are vulnerable to both property inference and membership inference attacks: an adversary can mount a property inference attack at limited cost, and membership inference succeeds even under relaxed adversarial assumptions. The authors also propose a defense that mitigates membership inference with a decent utility-defense trade-off, but it fails to defend against property inference. Overall, the work sheds light on the privacy risks of the popular prompt learning paradigm.
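To make the attack surface concrete, here is a minimal PyTorch sketch of the two moving parts discussed above: a pixel-level visual prompt learned around a frozen backbone, and a simple loss-threshold membership inference test run against the prompted model. The names and hyperparameters (`VisualPrompt`, `loss_threshold_mia`, the padding width, the threshold value) are illustrative assumptions, not the paper's implementation, and the loss-threshold test is the classic baseline attack of Yeom et al., not necessarily the strongest attack the authors evaluate.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VisualPrompt(nn.Module):
    """A learnable pixel perturbation applied as a border around each image.

    This is the common padding-style visual prompt: the frozen backbone is
    untouched, and only these border pixels are trained on the downstream task.
    """

    def __init__(self, image_size: int = 224, pad: int = 30):
        super().__init__()
        self.prompt = nn.Parameter(torch.zeros(3, image_size, image_size))
        mask = torch.ones(3, image_size, image_size)
        mask[:, pad:-pad, pad:-pad] = 0  # only the border region is trainable
        self.register_buffer("mask", mask)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Add the (masked) learnable prompt to every input image.
        return x + self.prompt * self.mask


@torch.no_grad()
def loss_threshold_mia(model: nn.Module,
                       prompt: VisualPrompt,
                       x: torch.Tensor,
                       y: torch.Tensor,
                       threshold: float = 1.0) -> torch.Tensor:
    """Loss-threshold membership inference (Yeom et al. style baseline).

    Samples whose cross-entropy loss under the prompted model falls below
    `threshold` are guessed to be training members. `threshold` is an
    assumed hyperparameter the adversary would calibrate, e.g. on shadow data.
    """
    logits = model(prompt(x))
    per_sample_loss = F.cross_entropy(logits, y, reduction="none")
    return per_sample_loss < threshold  # True -> predicted "member"
```

Intuitively, because the prompt is optimized to fit its (often small) downstream training set, member samples tend to incur lower loss than non-members, which is exactly the signal a membership inference adversary exploits.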


Publication date: 19 Oct 2023
Project Page: https://github.com/yxoh/prompt_leak_usenix2024/
Paper: https://arxiv.org/pdf/2310.11970