Announcing AI Protection: Security for the AI era
As AI adoption accelerates, businesses are increasingly concerned about the security risks that come with it. Google Cloud is committed to helping customers develop and deploy AI securely, compliantly, and privately.
Today, Google Cloud is launching AI Protection, a set of capabilities designed to help you reduce risk at every stage of the AI lifecycle. It safeguards AI workloads and data across models and clouds, regardless of the platforms you choose to use.
AI Protection helps teams manage AI risk effectively by:
- Discovering the AI inventory in your environment and assessing it for potential vulnerabilities
- Securing AI assets with policies, guardrails, and controls
- Managing threats against AI systems with detection, investigation, and response capabilities
AI Protection is integrated with Security Command Center (SCC), Google's multicloud risk-management platform, giving security teams a centralized view of their AI posture and the ability to manage AI risks holistically in the context of their other cloud risks.
Discovering AI inventory
The first step toward effective AI risk management is knowing exactly where and how AI is used in your environment. AI Protection's discovery capabilities help you automatically find and catalog AI assets, including models, applications, and data, along with their relationships.
It's critical to understand what data your AI applications use and how that data is currently protected. Sensitive Data Protection (SDP) has extended automated data discovery to Vertex AI datasets, helping you understand data sensitivity and the types of data that make up your training and tuning data. It can also generate data profiles that provide deeper insight into the nature and sensitivity of your training data.
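Sensitive Data Protection is built on the Cloud DLP API. As a minimal illustration of the kind of classification it performs on training records, the sketch below calls the real `inspect_content` method of the `google-cloud-dlp` client; the project ID and info types are placeholder assumptions, and automated discovery of Vertex AI datasets is configured as a discovery scan rather than per-record calls like this one:

```python
# Minimal sketch: classify a sample training record with the Cloud DLP API,
# which underpins Sensitive Data Protection. Project ID and info types are
# placeholders for illustration.
from google.cloud import dlp_v2

client = dlp_v2.DlpServiceClient()
parent = "projects/your-project-id"  # placeholder

response = client.inspect_content(
    request={
        "parent": parent,
        "inspect_config": {
            "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}],
            "include_quote": True,
        },
        "item": {"value": "Contact Jane at jane@example.com or 555-0100."},
    }
)

for finding in response.result.findings:
    # Each finding reports the detected info type, the matched text,
    # and a likelihood score.
    print(finding.info_type.name, finding.quote, finding.likelihood)
```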
Once you know where sensitive data resides, AI Protection can use Security Command Center's virtual red teaming to identify AI-related toxic combinations and potential paths threat actors could take to compromise this critical data, and it can recommend steps to remediate vulnerabilities and adjust posture.
Securing AI assets
Model Armor, a core capability of AI Protection, is now generally available. It guards against prompt injection, jailbreak attempts, data loss, malicious URLs, and offensive content. Because Model Armor supports a broad range of models across multiple clouds, customers get consistent protection for the models and platforms they want to use, even if those change in the future.
Developers can now easily integrate Model Armor's prompt and response screening into applications using a REST API or an Apigee integration. Soon, integrations with Vertex AI and Google Cloud networking products will make it possible to deploy Model Armor in-line without requiring any application changes.
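As a rough sketch of what REST-based prompt screening could look like, the example below posts a user prompt to a Model Armor screening template. The endpoint path, template resource, and request field names here are assumptions made for illustration, not confirmed by this announcement; consult the Model Armor documentation for the exact API surface:

```python
# Hedged sketch of calling Model Armor's prompt-screening REST API.
# The endpoint path, template name, and JSON fields are assumptions
# for illustration only.
import requests
import google.auth
from google.auth.transport.requests import Request

credentials, project = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(Request())

location = "us-central1"        # placeholder
template = "my-armor-template"  # placeholder: a pre-created screening template
url = (
    f"https://modelarmor.{location}.rep.googleapis.com/v1/"
    f"projects/{project}/locations/{location}/"
    f"templates/{template}:sanitizeUserPrompt"
)

resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {credentials.token}"},
    json={"user_prompt_data": {"text": "Ignore previous instructions and ..."}},
)
resp.raise_for_status()
print(resp.json())  # screening verdicts, e.g. prompt-injection findings
```

An application would typically run this check before forwarding the prompt to the model, and a matching response-screening call on the model's output before returning it to the user.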
“Model Armor offers strong protection against jailbreaks, prompt injections, and sensitive data leaks, and we also use it because Security Command Center gives us a unified security posture. Potential vulnerabilities can be quickly found, prioritized, and remediated without affecting the user experience of our applications or our development teams,” said Jay DePaul, chief cybersecurity and technology risk officer at Dun & Bradstreet. “We see Model Armor as essential to protecting our AI applications, and being able to centralize the monitoring of AI security threats alongside our other security findings in SCC is a game-changer.”
Organizations can also use AI Protection to strengthen the security of Vertex AI applications by applying postures in Security Command Center. These posture controls define secure resource configurations, informed by first-party knowledge of the Vertex AI architecture, and help organizations prevent drift or unauthorized changes.
Managing AI threats
AI Protection operationalizes security intelligence and research from Google and Mandiant to help protect your AI systems. Detectors in Security Command Center can surface initial access attempts, privilege escalation, and persistence attempts against AI workloads. Coming soon, new detectors built on the latest frontline intelligence will help AI Protection identify and manage runtime threats such as foundational model hijacking.
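Because these detections land in Security Command Center, they can be triaged programmatically alongside other cloud findings. The sketch below uses the real `google-cloud-securitycenter` client and its `list_findings` call; the organization ID and category filter are placeholder assumptions, since this announcement doesn't enumerate AI Protection's exact finding categories:

```python
# Minimal sketch: pull active SCC findings so AI-related detections can be
# reviewed alongside other cloud risks. Organization ID and the category
# filter are placeholders.
from google.cloud import securitycenter

client = securitycenter.SecurityCenterClient()

# A source of "-" requests findings from all sources in the organization.
all_sources = "organizations/123456789/sources/-"  # placeholder org ID

findings = client.list_findings(
    request={
        "parent": all_sources,
        "filter": 'state="ACTIVE" AND category="PERSISTENCE"',  # placeholder filter
    }
)

for result in findings:
    f = result.finding
    print(f.category, f.severity, f.resource_name)
```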
“As AI-driven solutions become more widespread, securing AI systems is crucial and goes beyond simple data security. Model integrity, data provenance, compliance, and strong governance are all essential components of a comprehensive approach to AI security,” said Dr. Grace Trinidad, research director at IDC.
Piecemeal solutions can leave, and have left, critical gaps that expose organizations to threats such as adversarial attacks and data poisoning, while adding to the overload security teams already face. By taking a comprehensive, lifecycle-focused approach, organizations can manage their expanding security responsibilities and address the multi-faceted threats surfaced by generative AI. Google Cloud's end-to-end approach to AI protection simplifies the experience of safeguarding AI for customers.
Complement AI Protection with frontline expertise
The Mandiant AI Security Consulting Portfolio offers services to help organizations assess and implement robust security measures for AI systems across platforms and clouds. Consultants can evaluate the end-to-end security of AI deployments and recommend ways to harden AI systems. We also offer red teaming for AI, informed by the latest attacks on AI services observed in frontline engagements.
Building on a secure foundation
Customers who build and run AI workloads on Google Cloud infrastructure gain additional benefits. Google Cloud is a secure-by-design, secure-by-default platform with multi-layered security, encryption, and rigorous software supply chain controls.
For customers whose AI workloads fall under regulatory scope, we offer Assured Workloads to create controlled environments with strict policy guardrails that enforce controls such as data residency and customer-managed encryption. Audit Manager can produce evidence of compliance with emerging AI standards and regulations. Confidential Computing can protect data throughout the processing pipeline, reducing the risk of unauthorized access, even by privileged users or malicious actors inside the system.
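As one concrete example of the Confidential Computing layer, the hedged sketch below launches a Confidential VM (AMD SEV) with the real `google-cloud-compute` client, so an AI workload's data stays encrypted even in use. The project, zone, and boot image are placeholder assumptions:

```python
# Minimal sketch: launch a Confidential VM so data remains encrypted in use.
# Project, zone, and image are placeholders; Confidential Computing requires
# a supporting machine type such as n2d.
from google.cloud import compute_v1

project, zone = "your-project-id", "us-central1-a"  # placeholders

instance = compute_v1.Instance(
    name="confidential-training-vm",
    machine_type=f"zones/{zone}/machineTypes/n2d-standard-4",
    confidential_instance_config=compute_v1.ConfidentialInstanceConfig(
        enable_confidential_compute=True,
    ),
    # Confidential VMs cannot live-migrate during host maintenance.
    scheduling=compute_v1.Scheduling(on_host_maintenance="TERMINATE"),
    disks=[
        compute_v1.AttachedDisk(
            boot=True,
            auto_delete=True,
            initialize_params=compute_v1.AttachedDiskInitializeParams(
                source_image="projects/ubuntu-os-cloud/global/images/family/ubuntu-2204-lts",
            ),
        )
    ],
    network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
)

op = compute_v1.InstancesClient().insert(
    project=project, zone=zone, instance_resource=instance
)
op.result()  # block until the create operation completes
```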
Additionally, for organizations looking to detect unsanctioned use of AI, or shadow AI, in their workforce, Chrome Enterprise Premium can provide visibility into end-user activity and prevent accidental and intentional exfiltration of sensitive data in gen AI applications.