The 3rd AsiaCCS 2025 Workshop on Secure and Trustworthy Deep Learning Systems (SecTL) serves as a premier platform for researchers, engineers, and professionals from academia, government, and industry. The workshop aims to facilitate the exchange of ideas, present emerging research, and discuss strategies for designing and deploying secure and trustworthy deep learning systems for real-world applications.

Deep learning systems are increasingly applied across sectors such as data science, robotics, healthcare, economics, and safety-critical infrastructures. However, current deep learning algorithms often fall short of ensuring consistent safety and robustness, especially under unpredictable conditions. To be considered reliable, these systems must be resilient to disturbances, attacks, failures, and inherent biases, and capable of avoiding unsafe or irrecoverable situations.

We invite research papers, position papers, and work-in-progress papers that explore best practices, new methods, and secure design principles for deep learning systems. SecTL 2025 follows successful editions held in Melbourne (2023) and Singapore (2024).