Where and How to Attack? A Causality-Inspired Recipe for Generating Counterfactual Adversarial Examples
The paper addresses the vulnerability of Deep Neural Networks (DNNs) to adversarial examples. Traditional attack methods assume that attackers can modify any features, neglecting the causal generating process of the data,…