To mark Cybersecurity Awareness Month, global tech trade association ITI released a new document on Tuesday that provides in-depth suggestions on how policymakers can improve the cybersecurity of AI models and systems.
While AI models and systems introduce a new threat vector for malicious actors to exploit, they also offer an opportunity to bolster proactive cybersecurity measures, ITI said.
In its paper, AI Security Policy Principles, ITI offers five suggestions on how policymakers can strengthen the cybersecurity of AI systems and reiterates how AI can be used to enhance proactive cybersecurity measures.
For example, AI is already being used to thwart dynamic, quickly evolving threats, and AI-powered analytical tools can help identify the novel tactics, techniques, and behaviors of sophisticated, well-resourced adversaries.
“Cyber threats to AI models and systems are borderless and continuing to evolve. The tech industry urges policymakers around the globe to prioritize engagement with likeminded partners and allies to advance a common, consistent approach to AI security,” said ITI VP of Policy Courtney Lang. “ITI’s new policy guide aims to give lawmakers the tools necessary to develop interoperable AI security frameworks that protect consumers, mitigate potential risks, and empower the global cybersecurity ecosystem and workforce in an AI era.”
ITI’s AI Security Policy Principles outlines five key principles for policymakers to follow:
- Leverage existing cybersecurity practices, standards, and controls where they are already sufficient;
- Coordinate with like-minded allies and partners to ensure that policy approaches to AI security are global and interoperable;
- Ensure that any AI security policy reflects a comprehensive approach across the AI life cycle and value chain;
- Utilize public-private partnerships to achieve AI cybersecurity outcomes; and
- Ensure adequate support for foundational AI R&D and for training and growing the existing cybersecurity workforce.