Secure and Trustworthy Deep Learning Systems (SecTL) Workshop
August 2025, Ha Noi, Vietnam
co-located with ACM ASIACCS 2025
Keynotes
Reproducible Research Is Both Hard and Helpful: A Case Study of Privacy-Preserving Machine Learning
Prof. Baochun Li
Professor, University of Toronto, Canada
Abstract: Modern distributed machine learning systems are highly complex, and research on these systems involves a large number of moving parts, making research results much less likely to be reproducible. However, reproducible research is not just a gem waiting to be discovered, but a necessary characteristic for moving a research field forward. In this talk, I will discuss some of my experiences over the past three years on why reproducible research is hard, with a focus on privacy-preserving machine learning and federated learning. In particular, I will present a case study in which potential data reconstruction attacks on large language models were examined, and show how such studies may not be technically sound when scrutinized carefully. As a second example, I will use actual code from Plato, a new open-source framework that I developed from scratch, to show how far existing claims in the literature on most major research topics have deviated from our experiences in Plato. In conclusion, I will present several guiding principles and recent technological advances that can make research more reproducible, and advocate for more benchmarking platforms that allow research ideas to be compared fairly under the same roof.
Bio: Baochun Li received his Ph.D. degree from the University of Illinois at Urbana-Champaign in 2000. Since 2000, he has been with the University of Toronto, where he is currently a Professor. He has held the Bell Canada Endowed Chair in Computer Engineering since August 2005. He has co-authored more than 480 research papers, with a total of over 27,000 citations, an H-index of 89, and an i10-index of 359, according to Google Scholar Citations. He was the recipient of the IEEE Communications Society Leonard G. Abraham Award in the Field of Communications Systems in 2000, the Multimedia Communications Best Paper Award from the IEEE Communications Society in 2009, the University of Toronto McLean Award in 2009, the Best Paper Award from IEEE INFOCOM in 2023, and the IEEE INFOCOM Achievement Award in 2024. He is a Fellow of the Canadian Academy of Engineering, a Fellow of the Engineering Institute of Canada, and a Fellow of IEEE.
SIGuard: Guarding Secure Inference with Post Data Privacy
Dr. Xiaoning (Maggie) Liu
Lecturer, RMIT University, Australia
Abstract: Secure inference is designed to enable encrypted machine learning model prediction over encrypted data, easing privacy concerns when models are deployed in Machine Learning as a Service. For efficiency, most recent secure inference protocols are constructed using secure multi-party computation (MPC) techniques. However, MPC-based protocols do not hide the information revealed by their outputs. In the context of secure inference, prediction outputs (i.e., inference results of encrypted user inputs on encrypted models) are revealed to the users. As a result, adversaries can compromise the output privacy of secure inference, i.e., launch Membership Inference Attacks (MIAs) by querying encrypted models, just as with MIAs against plaintext inference. In this talk, I will first share our observations on the vulnerability of MPC-based secure inference to MIAs, even though it yields perturbed predictions due to approximations. Then I will report on our recent research effort on guarding the output privacy of secure inference from being exploited by MIAs. I will also discuss future research directions along the line of privacy-preserving machine learning and deep learning.
Bio: Dr. Xiaoning (Maggie) Liu is a Lecturer at the School of Computing Technologies, RMIT University, Australia. Her research centers on data privacy and security in machine learning, cloud computing, and digital health. Her current focus is on designing practical secure multi-party computation protocols and systems and their applications in privacy-preserving machine learning. In the past few years, her work has appeared in prestigious venues in computer security, such as the USENIX Security Symposium, NDSS, the European Symposium on Research in Computer Security (ESORICS), IEEE Transactions on Dependable and Secure Computing (TDSC), and IEEE Transactions on Information Forensics and Security (TIFS). Her research has been supported by the Australian Research Council and CSIRO. She is the recipient of the Best Paper Award at ESORICS 2021 and the RMIT HDR Research Prize 2023.