Graph anomaly detection is an important problem with widespread applications, such as bots infiltrating social networks or researchers inflating their credibility with false citations. While much effort has gone into developing techniques for graph anomaly detection, the robustness of such models to adversarial attacks has so far received little attention. In this work, we therefore investigate the robustness of graph anomaly detection techniques against adversarial attacks.
To this end, we implemented three attacks for probing this vulnerability and created an extensible framework for attacking graph anomaly detection models. The framework enables direct comparison between different models and attacks, and allows new models and attacks to be added with minimal effort. Our experimental results indicate that graph anomaly detection models are vulnerable to evasion attacks. For instance, our heuristic attack achieved an evasion rate of 90.9% on anomalous nodes.
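The evasion rate quoted above can be read as the fraction of truly anomalous nodes that the detector no longer flags after the attack. A minimal sketch of this metric (the function name and label encoding are our own, not from the framework):

```python
def evasion_rate(labels_before, labels_after):
    """Fraction of nodes flagged anomalous before the attack (label 1)
    that the detector classifies as normal (label 0) afterwards."""
    evaded = sum(1 for b, a in zip(labels_before, labels_after)
                 if b == 1 and a == 0)
    total = sum(labels_before)  # number of anomalous nodes pre-attack
    return evaded / total if total else 0.0

# Example: 11 anomalous nodes, 10 of which evade detection post-attack.
before = [1] * 11
after = [0] * 10 + [1]
print(round(evasion_rate(before, after), 3))  # 0.909
```

Under this definition, a rate of 90.9% would correspond to, e.g., 10 of 11 anomalous nodes evading detection.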