The article introduces PromptCrypt, a prompt-encryption mechanism designed to protect user privacy when interacting with cloud-based large language models (LLMs) such as ChatGPT. The method encrypts user inputs into emojis, rendering the data indecipherable to direct human or LLM examination while preserving the original intent of the prompt, so the model's task performance is unaffected. Experiments show that PromptCrypt effectively conceals personal information in prompts, preventing sensitive data from being discerned while maintaining or even improving task accuracy without additional tuning. The findings suggest that encryption measures can protect user privacy without compromising the functionality and performance of LLMs.
Publication date: 8 Feb 2024
Project Page: https://github.com/agiresearch/PromptCrypt
Paper: https://arxiv.org/pdf/2402.05868
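To make the idea concrete, here is a minimal toy sketch of substituting sensitive words with emojis before a prompt is sent to a cloud LLM. This is an illustrative assumption, not the paper's actual method: PromptCrypt uses an LLM itself to rewrite the prompt into emoji/symbol sequences, whereas this sketch uses a hypothetical fixed word-to-emoji dictionary (`EMOJI_MAP`).

```python
# Toy illustration of emoji-based prompt encryption.
# NOTE: hypothetical fixed mapping for demonstration only; the actual
# PromptCrypt approach prompts an LLM to generate the emoji encoding.
EMOJI_MAP = {
    "doctor": "🩺",
    "appointment": "📅",
    "heart": "❤️",
    "pain": "⚡",
}

def encrypt_prompt(prompt: str) -> str:
    """Replace mapped sensitive words with emojis; leave other words intact."""
    words = prompt.split()
    return " ".join(EMOJI_MAP.get(w.lower().strip(",.!?"), w) for w in words)

# Example: the encrypted prompt hides the sensitive terms from a human reader
# while (in the paper's setting) the LLM can still act on the intent.
print(encrypt_prompt("I need a doctor appointment for heart pain"))
# → I need a 🩺 📅 for ❤️ ⚡
```

A real deployment would need a far richer, context-aware encoding than a word list, which is precisely why the paper delegates the encryption step to an LLM.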