This academic report by Jiameng Pu and Zafar Takhirov summarizes the Membership Inference Attack (MIA) experiments of the Embedding Attack Project. The paper evaluates two main MIA strategies on six AI models spanning computer vision and language modeling. The research focuses on the risk of sensitive-information leakage from the embeddings of ML models, which poses privacy concerns, and examines the feasibility of inferring membership and property information about data samples from their embeddings. The study has real-world implications in areas such as cross-border data/model transfers, recommender systems, e-commerce, and NLP applications.
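To make the threat model concrete, here is a minimal, illustrative sketch of a distance-threshold membership inference on embeddings. This is not the paper's method; the reference embeddings, the candidate sample, and the threshold value are all hypothetical, and real attacks are considerably more sophisticated. The intuition shown is simply that a sample whose embedding lies unusually close to embeddings the attacker associates with the training set is guessed to be a member.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: embeddings the attacker has linked to known
# training members, plus a candidate embedding to classify.
member_embeddings = rng.normal(0.0, 1.0, size=(100, 16))
candidate = member_embeddings[0] + rng.normal(0.0, 0.01, size=16)

def infer_membership(candidate, reference, threshold=0.5):
    """Toy rule: predict 'member' if the candidate's nearest
    reference embedding is closer than `threshold` (a value an
    attacker would have to calibrate, e.g. on shadow models)."""
    dists = np.linalg.norm(reference - candidate, axis=1)
    return bool(dists.min() < threshold)

print(infer_membership(candidate, member_embeddings))  # → True
```

In practice, such attacks exploit the tendency of models to embed training samples more "tightly" than unseen ones, which is why embedding release alone can leak membership information.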
Publication date: 24 Jan 2024
Project Page: arXiv:2401.13854v1
Paper: https://arxiv.org/pdf/2401.13854