The paper studies the vulnerability of Deep Neural Networks (DNNs) to adversarial examples. Traditional attack methods assume the attacker can modify any feature arbitrarily, ignoring the causal process that generates the data; this assumption is often impractical. The authors propose CADE, a framework that generates more realistic (counterfactual) adversarial examples by taking the causal generating process into account. Their experiments show that CADE is effective across a range of attack scenarios.
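To make the core idea concrete, below is a minimal PyTorch sketch of attacking through a generative process rather than through raw features: the latent causal factors `z` are perturbed and decoded back, so the adversarial example stays consistent with how the data is generated. The `decoder`, `classifier`, and the FGSM-style gradient step are all illustrative assumptions for this sketch, not CADE's actual algorithm.

```python
import torch

# Hypothetical components: a classifier f(x) and a generative
# decoder g(z) mapping latent causal factors z to inputs x.
# Both modules and the attack step are illustrative only.

def causal_adversarial_step(classifier, decoder, z, label, eps=0.05):
    """Take one gradient step on the latent causal factors z instead
    of on the raw input, so the perturbed example stays on the data
    manifold defined by the generating process x = g(z)."""
    z = z.clone().detach().requires_grad_(True)
    x = decoder(z)  # regenerate the input from its causes
    loss = torch.nn.functional.cross_entropy(classifier(x), label)
    loss.backward()
    # Move the causal factors in the direction that increases the loss
    # (an FGSM-style sign step, used here purely for illustration).
    z_adv = z + eps * z.grad.sign()
    return decoder(z_adv).detach()  # decode back to a realistic example
```

Compared with perturbing pixels directly, perturbing `z` constrains the adversarial example to outputs the generative model can actually produce, which is the intuition behind "more realistic" adversarial examples.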
Publication date: 22 Dec 2023
Project Page: Not provided
Paper: https://arxiv.org/pdf/2312.13628