Deep learning models have achieved significant success in many fields, yet they remain vulnerable to security and privacy threats. These models can be targeted by several classes of attacks, including model extraction, model inversion, adversarial, and data poisoning attacks, which can compromise a model's security, the privacy of its training data, and its integrity at various stages of its lifecycle. The article provides an in-depth treatment of these security and privacy problems in deep neural networks, analyzing each type of attack, how it works, and its challenges and limitations.
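
To make one of these attack classes concrete, here is a minimal sketch of an adversarial (evasion) attack using the Fast Gradient Sign Method, a standard technique in this literature; the PyTorch `model`, input tensor, and `epsilon` value are illustrative assumptions, not details from the paper.

```python
# Minimal FGSM sketch: perturb an input so a classifier misclassifies it.
# Assumes `model` is any differentiable PyTorch classifier; epsilon is
# an illustrative perturbation budget, not a value from the paper.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft an adversarial example from input x with true label y."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that maximizes the loss, bounded by epsilon.
    x_adv = x + epsilon * x.grad.sign()
    # Keep the perturbed input in the valid pixel range [0, 1].
    return x_adv.clamp(0.0, 1.0).detach()
```

The sign of the gradient, rather than the gradient itself, is used so that every input dimension is perturbed by exactly epsilon, which keeps the change small and hard to notice while still moving the loss sharply upward.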

Publication date: 23 Nov 2023
Project Page: https://arxiv.org/abs/2311.13744v1
Paper: https://arxiv.org/pdf/2311.13744