AI needs an applied security standard and framework that can keep pace with its explosive growth. Recognising that this was only the beginning, Google released the Secure AI Framework (SAIF) last year. Of course, any industry framework must be operationalised through close cooperation with others, and above all, through a forum.
Together with its industry peers, Google is launching the Coalition for Secure AI (CoSAI) today at the Aspen Security Forum. Over the past year, Google has been working to bring this coalition together to advance comprehensive security measures that address the particular risks of AI, covering both immediate and long-term challenges.
Creating Safe AI Systems for Everyone
The Coalition for Secure AI (CoSAI) is an open ecosystem of AI and security specialists from leading industry organisations, formed to share best practices for secure AI deployment and to collaborate on AI security research and product development.
What is CoSAI?
Security requires collective action, and the best way to secure AI is with AI itself. To participate safely in the digital ecosystem, and to keep it safe for all users, individuals, developers, and businesses alike need to adopt common security standards and best practices. AI is no exception. To address this, a diverse ecosystem of stakeholders has come together to form the Coalition for Secure AI (CoSAI), which aims to build open-source technical solutions and methodologies for secure AI development and deployment, share security expertise and best practices, and invest collectively in AI security research.
In partnership with industry and academia, CoSAI will tackle key AI security challenges through a set of critical workstreams, including:
- Software Supply Chain Security for AI Systems
- Preparing Defenders for a Changing Cybersecurity Landscape
- AI Security Governance
How It Benefits You
By taking part in CoSAI, you can connect with a thriving network of industry leaders who share knowledge and best practices on building and deploying secure AI. Participation gives you access to standardised processes, collaborative AI security research, and open-source solutions designed to strengthen the security of AI systems. CoSAI also provides tools and guidance for putting robust security controls and mitigations in place, helping you build security and trust into the AI systems within your organisation.
Participate!
Do you have questions about CoSAI, or would you like to contribute to its projects? Technical participation is open to any developer at no cost, and CoSAI is committed to providing a transparent and welcoming environment for every contributor. You can also become a CoSAI sponsor and support the project’s success by funding the essential services the community needs.
CoSAI will be headquartered under OASIS Open, the global standards and open-source organisation, and comprises founding members Amazon, Anthropic, Chainguard, Cisco, Cohere, GenLab, IBM, Intel, Microsoft, NVIDIA, OpenAI, PayPal, and Wiz.
Announcing the First Workstreams of CoSAI
As individuals, developers, and businesses continue their efforts to adopt common security standards and best practices, CoSAI will support this collective investment in AI security. Google is also announcing today the first three priority areas that the coalition will address in partnership with industry and academia:
Software supply chain security for AI systems: Google has been working to extend the use of SLSA Provenance to AI models, so that consumers can determine whether AI software is secure based on how it was built and handled across the software supply chain. Building on the existing SSDF and SLSA security principles for AI and classical software, this workstream will improve AI security by offering guidance on evaluating provenance, managing risks from third-party models, and assessing the provenance of the full AI application.
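To make this workstream concrete, here is a minimal sketch of what SLSA v1 provenance for a trained model could look like, expressed as a Python dictionary following the published in-toto Statement and SLSA Provenance v1 schemas. The model name, digests, builder ID, and training parameters are hypothetical placeholders, not CoSAI or Google artifacts:

```python
import json

# A minimal sketch of a SLSA v1 provenance statement for an AI model
# artifact, following the in-toto Statement and SLSA Provenance v1
# schemas. Every name, URI, and digest below is a hypothetical
# placeholder for illustration only.
provenance = {
    "_type": "https://in-toto.io/Statement/v1",
    "subject": [
        {
            # The artifact being attested: a trained model checkpoint.
            "name": "models/example-llm-7b.safetensors",
            "digest": {"sha256": "e3b0c44298fc1c149afbf4c8996fb924..."},
        }
    ],
    "predicateType": "https://slsa.dev/provenance/v1",
    "predicate": {
        "buildDefinition": {
            # For a model, the "build" is the training/packaging job.
            "buildType": "https://example.com/training-job/v1",
            "externalParameters": {
                "trainingConfig": "configs/train_7b.yaml",
                "sourceRepo": "https://example.com/org/model-repo",
            },
            "resolvedDependencies": [
                {
                    # Pinned training dataset, so consumers can check
                    # what data the model was actually trained on.
                    "uri": "https://example.com/datasets/corpus-v2",
                    "digest": {"sha256": "9f86d081884c7d659a2feaa0..."},
                }
            ],
        },
        "runDetails": {
            "builder": {"id": "https://example.com/builders/training/v1"},
            "metadata": {
                "invocationId": "train-run-20240718-001",
                "startedOn": "2024-07-18T00:00:00Z",
                "finishedOn": "2024-07-18T12:00:00Z",
            },
        },
    },
}

# In practice this statement would be generated by the training pipeline
# and wrapped in a signed envelope; here we just print it for inspection.
print(json.dumps(provenance, indent=2))
```

With provenance like this attached to a model, downstream consumers can verify how the model was produced, and on what data, before deploying it.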
Preparing defenders for a changing cybersecurity landscape: Security practitioners lack a simple way to manage the complexity of security concerns that arise in day-to-day AI governance. This workstream will provide a framework that helps defenders identify the investments and mitigation techniques needed to address the security implications of AI use, and it will scale those mitigations in step with AI models that advance offensive cybersecurity.
AI security governance: Managing AI security risks calls for a new set of resources and an understanding of the field’s unique characteristics. CoSAI will develop a taxonomy of risks and controls, a checklist, and a scorecard to guide practitioners through readiness assessments and through the management, monitoring, and reporting of the security of their AI products.
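CoSAI has not yet published its taxonomy, checklist, or scorecard, but as a purely hypothetical illustration, a scorecard might pair each risk category with its controls and an implementation status. All category and control names in this Python sketch are invented for illustration:

```python
from dataclasses import dataclass, field

# A hypothetical sketch of an AI security scorecard pairing risk
# categories with controls. CoSAI has not published its taxonomy;
# every name below is invented for illustration only.

@dataclass
class Control:
    name: str
    implemented: bool

@dataclass
class RiskCategory:
    name: str
    controls: list[Control] = field(default_factory=list)

    def score(self) -> float:
        """Fraction of this category's controls that are in place."""
        if not self.controls:
            return 0.0
        return sum(c.implemented for c in self.controls) / len(self.controls)

scorecard = [
    RiskCategory("Model supply chain", [
        Control("Provenance attestation for model artifacts", True),
        Control("Third-party model risk review", False),
    ]),
    RiskCategory("Deployment hardening", [
        Control("Input/output filtering on model endpoints", True),
        Control("Access control on fine-tuning pipelines", True),
    ]),
]

# Report per-category readiness for management and monitoring.
for category in scorecard:
    print(f"{category.name}: {category.score():.0%} of controls implemented")
```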
To promote responsible AI, CoSAI will also collaborate with organisations such as the Partnership on AI, the Open Source Security Foundation, the Frontier Model Forum, and MLCommons.
Next Up
Google is committed to ensuring that as AI advances, effective risk management practices advance with it. The industry support for safe and secure AI development that Google has seen over the past year is encouraging, and the efforts of developers, specialists, and businesses large and small to help organisations securely implement, train, and use AI are even more so.
AI developers need, and end users deserve, a framework for AI security that adapts to changing circumstances and responsibly seizes the opportunities ahead. CoSAI is the next step in that journey, and further developments are expected in the coming months. Visit coalitionforsecureai.org to find out how you can get involved with CoSAI.