On October 15, 2024, the Cyber Security Agency of Singapore (“CSA”) published Guidelines on Securing AI Systems (the “Guidelines”), alongside a Companion Guide for Securing AI Systems (the “Companion Guide”) intended to support the Guidelines. In its announcement, the CSA states that while artificial intelligence (“AI”) offers significant benefits for the economy and society, it is crucial to ensure that AI systems behave as intended and that cybersecurity risks are properly addressed. Accordingly, the CSA notes that AI should be secure by design and secure by default, and that companies should take a proactive approach to managing security risks from the outset.
The Guidelines’ section on “Securing AI” recommends taking a lifecycle approach, stating that hardening only the AI model is insufficient for a holistic defense against AI-related threats. According to the CSA, the AI lifecycle comprises the following five key stages:
- Planning and Design – Raise awareness of AI security threats and conduct security risk assessments.
- Development – Secure the AI supply chain, protect AI assets, and secure the development environment.
- Deployment – Secure the deployment infrastructure, establish incident management processes, and release AI systems responsibly after security checks and evaluation.
- Operations and Maintenance – Monitor all system inputs and outputs for security anomalies and establish vulnerability disclosure processes.
- End of Life – Ensure secure and proper disposal of training data and AI models in accordance with relevant industry standards and/or regulations.
The more detailed Companion Guide, developed in collaboration with AI and cybersecurity practitioners, offers practical measures and controls for the secure design, development, and adoption of AI systems. It references various international industry guidelines and standards, including the NIST Risk Management Framework and the MITRE ATLAS database.