Attacking Machine Learning Models

Prof. Yang Zhang
Tenured Faculty (equiv. Professor), CISPA Helmholtz Center for Information Security, Germany

Abstract: Machine learning has made tremendous progress during the past decade. While improving our daily lives, recent research shows that machine learning models are vulnerable to various security and privacy attacks. In this talk, I will cover three of our recent works in this field. First, I will discuss recent developments in membership inference. Second, I will present link stealing attacks against graph neural networks. Finally, I will introduce model hijacking attacks.

Bio: Yang Zhang (https://yangzhangalmo.github.io) is a tenured faculty (equivalent to full professor) at CISPA Helmholtz Center for Information Security, Germany. His research concentrates on trustworthy machine learning. Moreover, he works on measuring and understanding misinformation and unsafe content like hateful memes on the Internet. Over the years, he has published multiple papers at top venues in computer science, including CCS, NDSS, Oakland, and USENIX Security. His work has received the NDSS 2019 distinguished paper award and the CCS 2022 best paper award runner-up.

Adversarial Attacks and Defenses in Deep Learning: from a Perspective of Cybersecurity

Prof. Tianqing Zhu
Associate Professor, The University of Technology Sydney, Australia

Abstract: The outstanding performance of deep neural networks has promoted deep learning applications in a broad set of domains. However, the potential risks posed by adversarial samples have hindered the large-scale deployment of deep learning. In these scenarios, adversarial perturbations, imperceptible to human eyes, significantly degrade a model's performance. Many prior works have studied adversarial attacks and their countermeasures in the realm of deep learning, yet it remains difficult to evaluate the real threat of adversarial attacks or the robustness of a deep learning model, as there are no standard evaluation methods. Hence, in this talk, we attempt to offer the first analysis framework for a systematic understanding of adversarial attacks. The framework is built from the perspective of cybersecurity, providing a lifecycle view of adversarial attacks and defenses. In addition, we provide a case study showing how an attack on a deep learning model can be defended against within this framework.

Bio: Tianqing Zhu holds BEng and MEng degrees from Wuhan University, Wuhan, China, received in 2000 and 2004, respectively, and a PhD in Computer Science from Deakin University, Australia (2014). She is currently an Associate Professor with the School of Computer Science at the University of Technology Sydney, Australia. She has served on the Australian Research Council College of Experts since 2021. Prior to that, she was a Lecturer with the School of Information Technology, Deakin University, from 2014 to 2018. Her research interests include privacy preservation and AI security.