This article investigates Machine Unlearning (MU), which addresses the problem of neural models inadvertently retaining personal or sensitive data. A novel approach is introduced to achieve precise and selective forgetting within language models while mitigating adverse effects on model performance, particularly in generation tasks. Two evaluation metrics are proposed, Sensitive Information Extraction Likelihood (S-EL) and Sensitive Information Memory Accuracy (S-MA), to gauge how effectively sensitive information is eliminated. The article also presents a method for annotating sensitive scopes using both online and offline strategies: the online selection mechanism leverages language probability scores for computational efficiency, while the offline annotation relies on a robust two-stage process built on Large Language Models (LLMs).
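As an illustration of the online selection idea, the sketch below flags tokens to which a causal language model assigns low probability as candidate sensitive spans. The model name (`gpt2`), the probability threshold, and the low-probability heuristic itself are assumptions made for illustration, not the paper's exact procedure.

```python
# Hedged sketch: online annotation of sensitive scopes via LM probability
# scores. Model, threshold, and the low-probability heuristic are
# illustrative assumptions, not the method described in the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"   # assumption: any causal LM could stand in here
THRESHOLD = 1e-4      # assumption: probability cutoff for "sensitive"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def select_sensitive_scope(text: str) -> list[tuple[str, float]]:
    """Flag tokens the LM assigns low probability to as candidate
    sensitive spans (names, IDs, etc. tend to be hard to predict)."""
    enc = tokenizer(text, return_tensors="pt")
    ids = enc["input_ids"]
    with torch.no_grad():
        logits = model(**enc).logits          # (1, seq_len, vocab)
    # Probability of each token given its left context.
    probs = torch.softmax(logits[0, :-1], dim=-1)
    token_probs = probs.gather(1, ids[0, 1:].unsqueeze(-1)).squeeze(-1)
    flagged = []
    for tok_id, p in zip(ids[0, 1:].tolist(), token_probs.tolist()):
        if p < THRESHOLD:
            flagged.append((tokenizer.decode([tok_id]), p))
    return flagged

print(select_sensitive_scope("Alice Brown's SSN is 123-45-6789."))
```

On a memorized string, successful unlearning should drive these per-token probabilities down, which is also the intuition behind extraction-likelihood-style metrics such as S-EL.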

Publication date: 9 Feb 2024
Project Page: https://doi.org/XXXXXXX.XXXXXXX
Paper: https://arxiv.org/pdf/2402.05813