
Cloud Virtual CISO: 3 Intriguing AI Cybersecurity Use Cases




For years, Google Cloud has believed that artificial intelligence can transform cybersecurity and help defenders. According to Google Cloud, AI can speed up defences by automating work that formerly demanded hours of security experts' labour.


While full automation is still a long way off, AI in cybersecurity is already providing useful assistive capabilities. Today's security operations teams can benefit from AI-assisted malware analysis, summarisation, and natural-language search, and AI can speed up patching.

AI malware analysis

Although malware is one of the oldest threats, attackers keep creating new variants at an astonishing rate. Defenders and malware analysts face more variants than ever, which increases their workload. Automation helps here.

Google Cloud tested Gemini 1.5 Pro for malware analysis. The model was given a simple prompt along with code to analyse and asked to identify malicious files. It was also asked to list indicators of compromise and observed activities.

Gemini 1.5 Pro's 1 million token context window allowed it to parse malware code in a single pass, typically in 30 to 40 seconds, where previous foundation models were less accurate. One of the samples tested was decompiled WannaCry malware code; the model identified the killswitch in 34 seconds in a single pass.
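As an illustration of this kind of workflow (not the exact setup used in Google's experiment), the sketch below sends decompiled code to a Gemini model and asks for a verdict, indicators of compromise, and a plain-language summary. It assumes the google-generativeai Python SDK and an API key; the model name, prompt wording, and input file are placeholders.

```python
# Minimal sketch: ask a Gemini model to analyse decompiled malware code.
# Assumes the google-generativeai SDK (pip install google-generativeai) and an
# API key in GOOGLE_API_KEY. The prompt and file name are illustrative only.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")

def analyse_sample(decompiled_code: str) -> str:
    prompt = (
        "You are a malware analyst. Review the decompiled code below.\n"
        "1. Is it malicious? Answer yes or no, with reasoning.\n"
        "2. List indicators of compromise (domains, IPs, file paths, registry keys).\n"
        "3. Summarise its behaviour in plain language.\n\n"
        f"```\n{decompiled_code}\n```"
    )
    response = model.generate_content(prompt)
    return response.text

if __name__ == "__main__":
    with open("sample_decompiled.c") as f:  # hypothetical decompiler output
        print(analyse_sample(f.read()))
```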


Google Cloud ran Gemini 1.5 Pro against multiple malware files, using both decompiled and disassembled code. It was consistently correct and produced human-readable summaries.

The experiment report by Google and Mandiant experts stated that Gemini 1.5 Pro was able to accurately identify code that was receiving zero detections on VirusTotal. As Google Cloud works to improve defence outcomes, Gemini 1.5 Pro's forthcoming 2 million token context window could transform malware analysis at scale.

Boosting SecOps with AI

Security operations teams rely on a lot of manual labour. They can use AI to reduce that labour, train new team members faster, and speed up process-intensive work such as threat intelligence analysis, case investigation, and summarising noisy data. Security nuance also has to be modelled: Google Cloud's security-focused AI API, SecLM, integrates models, business logic, retrieval, and grounding into a holistic solution, and it draws on Google DeepMind's cutting-edge AI as well as threat intelligence and security data.

Onboarding new team members is one of AI's greatest SecOps benefits. Instead of memorising a proprietary SecOps platform's query language, analysts can have AI construct reliable search queries from natural language, as in the sketch after this paragraph.
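A minimal sketch of that idea, assuming the same google-generativeai SDK as above. The query syntax described in the prompt is a made-up placeholder, not the language of any specific SecOps platform.

```python
# Sketch: translate an analyst's natural-language question into a search query.
# The "query language" in QUERY_GUIDE is hypothetical; substitute the syntax
# of whatever SecOps platform you actually use.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")

QUERY_GUIDE = """
Fields: event_type, principal_user, target_hostname, timestamp
Operators: =, !=, AND, OR, last N days
Example: event_type = "LOGIN_FAILURE" AND last 7 days
"""

def nl_to_query(question: str) -> str:
    prompt = (
        "Convert the analyst question into a single search query using only "
        f"this syntax:\n{QUERY_GUIDE}\nQuestion: {question}\nQuery:"
    )
    return model.generate_content(prompt).text.strip()

print(nl_to_query("Show failed logins for user alice in the last 3 days"))
```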

Natural-language queries with Gemini in Security Operations are helping customers such as Pfizer and Fiserv onboard new team members faster, helping analysts find answers faster, and increasing the efficiency of their security operations programmes.

Additionally, AI-generated summaries can save time by integrating threat research and explaining complex findings in natural language. The director of information security at a leading multinational professional services organisation told Google Cloud that Gemini Threat Intelligence AI summaries help write an overview of a threat actor, including relevant and associated entities and targeted regions.

The customer remarked that the information flows well and helps them obtain intelligence quickly.

AI can also generate investigation summaries. As security operations centre teams manage more data, they must detect, validate, and respond to events faster. Natural-language searches and investigation summaries let teams locate high-risk signals and act on them.
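As a rough illustration (again assuming the google-generativeai SDK), an investigation summary could be produced by handing the model the raw case timeline. The event structure below is invented for the example.

```python
# Sketch: summarise raw case events into a short investigation brief.
# The event records are invented for illustration.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")

case_events = [
    {"time": "2024-06-01T09:12Z", "detail": "EDR alert: suspicious PowerShell on HOST-42"},
    {"time": "2024-06-01T09:15Z", "detail": "Outbound connection to rare domain from HOST-42"},
    {"time": "2024-06-01T09:40Z", "detail": "New local admin account created on HOST-42"},
]

prompt = (
    "Summarise these security case events for an analyst handoff. "
    "Highlight the likely attack stage and recommend next steps.\n"
    + "\n".join(f'{e["time"]} {e["detail"]}' for e in case_events)
)
print(model.generate_content(prompt).text)
```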

Security solution scaling with AI

In January, Google's Machine Learning for Security team published a free, open-source fuzzing framework to help researchers and developers improve vulnerability-finding. The team prompted AI foundation models to write project-specific code to boost fuzzing coverage and uncover additional vulnerabilities. This was added to OSS-Fuzz, a free service that runs open-source fuzzers and privately alerts developers to vulnerabilities.
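For context, the project-specific code in question is typically a small fuzz harness. The sketch below shows what such a harness looks like using Atheris, Google's Python fuzzing engine supported by OSS-Fuzz; the parse_config function it exercises is a made-up stand-in for real project code, not something generated in Google's experiment.

```python
# Sketch of a project-specific fuzz harness of the kind an AI model could draft.
# Uses Atheris (pip install atheris); parse_config is a hypothetical target.
import sys
import atheris

def parse_config(text: str) -> dict:
    # Stand-in for real project code the harness would exercise.
    pairs = (line.split("=", 1) for line in text.splitlines() if "=" in line)
    return {k.strip(): v.strip() for k, v in pairs}

def TestOneInput(data: bytes):
    fdp = atheris.FuzzedDataProvider(data)
    text = fdp.ConsumeUnicodeNoSurrogates(1024)
    try:
        parse_config(text)
    except ValueError:
        pass  # Expected parse errors are fine; crashes and hangs are findings.

if __name__ == "__main__":
    atheris.instrument_all()
    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()
```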

The experiment was a success: with AI-generated code extending fuzzing coverage, OSS-Fuzz covered over 300 projects and uncovered new vulnerabilities in two projects that had already been fuzzed for years.

The team noted, “Without the completely LLM-generated code, these two vulnerabilities could have remained undiscovered and unfixed indefinitely.” Google has also used AI to patch vulnerabilities, creating an automated pipeline in which foundation models analyse software for vulnerabilities, propose patches, and test them before the best candidates are selected for human review.
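A highly simplified sketch of such a loop follows, assuming the same google-generativeai SDK. The repository path, test command, and unified-diff patch format are illustrative assumptions, not Google's actual pipeline.

```python
# Sketch: generate candidate patches with a model, apply each in a scratch
# copy of the project, and keep only candidates whose tests pass for review.
# Repo path, test command, and diff format are illustrative assumptions.
import os
import shutil
import subprocess
import tempfile
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")

REPO = "/path/to/vulnerable-project"          # hypothetical checkout
TEST_CMD = ["python", "-m", "pytest", "-q"]   # hypothetical test command

def propose_patch(vuln_report: str, source: str) -> str:
    prompt = (
        "Write a unified diff that fixes this vulnerability without otherwise "
        f"changing behaviour.\nReport:\n{vuln_report}\nSource:\n{source}\n"
    )
    return model.generate_content(prompt).text

def candidate_passes(diff_text: str) -> bool:
    workdir = tempfile.mkdtemp()
    try:
        shutil.copytree(REPO, workdir, dirs_exist_ok=True)
        applied = subprocess.run(["git", "apply", "-"], input=diff_text,
                                 text=True, cwd=workdir)
        if applied.returncode != 0:
            return False
        return subprocess.run(TEST_CMD, cwd=workdir).returncode == 0
    finally:
        shutil.rmtree(workdir, ignore_errors=True)

# Example use: keep only candidates that apply cleanly and pass the tests.
# survivors = [d for d in (propose_patch(report, src) for _ in range(4))
#              if candidate_passes(d)]
```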

The potential for AI to find and patch vulnerabilities is expanding. By stacking tiny advances, well-crafted AI solutions can revolutionise security and boost productivity.
Google Cloud believes AI foundation models should be governed by its Secure AI Framework or a similar risk-management foundation to maximise impact and minimise risk.

To learn more, contact Ask Office of the CISO or attend Google Cloud's security leader events. Attend the June 26 Security Talks event to learn more about Google Cloud's AI-powered security product vision.

In case you missed it

Google Cloud Security Talks on AI and cybersecurity: On June 26, Google Cloud and Google security professionals will share insights, best practices, and concrete ways to improve your security.

Quick decision-making: How AI improves the OODA loop in cybersecurity: The OODA loop, long employed in boardrooms, helps executives make better, faster decisions. AI can enhance OODA loops.

Google rated a Leader in Q2 2024 Forrester Wave: Cybersecurity Incident Response Services Report.

From always-on to on-demand access with Privileged Access Manager: Google Cloud's built-in Privileged Access Manager is now available to reduce the risks of excessive privileges and misuse of elevated access.

A FedRAMP High-compliant network with Assured Workloads: Delivering a FedRAMP High-compliant network design securely.

Google Sovereign Cloud gives European clients choice today: Google Sovereign Cloud has developed through collaboration with customers, local sovereign partners, governments, and regulators.

Threat Intel news

A financially motivated threat operation targets Snowflake customer database instances for data theft and extortion, according to Mandiant.

Cyberthreats to people and businesses in Brazil: As Brazil's economic and geopolitical role grows, threat actors with various motivations will seek opportunities to exploit the digital infrastructure that Brazilians rely on across all sectors of society.

Gold-phishing: Paris 2024 Olympics cyberthreats: The Paris Olympics face an elevated risk of cyber espionage, disruptive and destructive operations, financially motivated activity, hacktivism, and information operations, according to Mandiant.

Return of ransomware: Compared to 2022, data leak site posts and Mandiant-led ransomware investigations increased in 2023.
