
Voice Spoofing Attacks: How AI Will Shape the Future of Vishing


Introduction to Voice Spoofing Attacks

In a blog post last year, Mandiant examined how threat actors were leveraging generative AI (gen AI) in phishing campaigns and information operations (IO), particularly to create more convincing images and videos, and discussed attackers' use of LLMs to create malware. The post noted that attackers are interested in gen AI, but that its use has so far been limited.

This post expands on that research by discussing emerging AI TTPs and trends, illustrating how Mandiant red teams employ AI-powered voice spoofing attacks to test defenses, and offering security considerations to help organizations stay ahead of the threat.


The Rise of AI-Powered Voice Spoofing Attacks

Gone are the robotic scammers reading barely intelligible scripts. AI-powered voice cloning can now accurately replicate human speech, lending phishing schemes a new level of realism. News stories about voice cloning and deepfakes being used to steal over HK$200 million from a corporation are becoming more common, and the Mandiant Red Team is testing defenses against these TTPs.

Brief Vishing Overview


Vishing (voice phishing) uses audio instead of email. Rather than sending emails to get clicks, threat actors call victims to build trust and manipulate emotions, frequently by generating a sense of urgency.

As with phishing, threat actors use social engineering to trick people into disclosing sensitive information, performing actions on the attacker's behalf, or transferring funds. These fraudulent calls often impersonate banks, government agencies, or tech support to lend legitimacy to the scam.

Powerful AI tools such as text generators, image generators, and voice synthesizers have spurred open-source efforts, making these capabilities more accessible. AI's rapid progress is putting the technology in more hands, making vishing attacks more believable.


The AI-Powered Voice Spoofing Attack Lifecycle

Modern voice cloning involves audio processing and model training, typically using a powerful combination of open-source tools and techniques that are readily available today. After these early steps, attackers may spend additional time studying the impersonated person's speech patterns and writing a script before launching operations. This added authenticity makes the attack more likely to succeed.

Attackers can then leverage AI-powered voice spoofing throughout the attack lifecycle.

Initial Access

A threat actor can gain initial access via a spoofed voice in several ways. Threat actors can pose as executives, coworkers, or IT support to persuade victims to disclose personal information, grant remote access to systems, or transfer payments. A familiar voice can also be used to trick victims into clicking malicious links, downloading malware, or revealing sensitive information.

Although voice-based trust systems are rarely employed, AI-spoofed voices can be used to circumvent multi-factor authentication and password-reset processes, allowing unauthorized access to key accounts.

Lateral Movement and Privilege Escalation

Threat actors can use AI voice spoofing attacks to impersonate trusted people and gain higher levels of access. This can play out in several ways.

One lateral movement method is chaining impersonations. Imagine an attacker impersonating a help desk employee to obtain access; while communicating with a network administrator, the attacker could covertly capture the administrator's voice.

By training a new voice spoofing model on this captured audio, the attacker can convincingly mimic the administrator and contact other unsuspecting targets on the network. Chaining impersonations in this way lets the attacker move laterally and gain access to more sensitive systems and data.

During initial access, threat actors may also find voicemails, meeting recordings, or training materials on a compromised host. The attacker can use these recordings to build voice cloning models that mimic specific employees without ever interacting with them. This works well for targeting high-value individuals or circumventing voice biometric access controls.

Mandiant Red Team Proactive Case Study

In a controlled red team exercise in late 2023, Mandiant used AI voice spoofing to obtain initial access to a client's internal network. The case study demonstrates the efficacy of this increasingly sophisticated attack method.

The first steps were obtaining client agreement and establishing a credible social engineering pretext. The Red Team needed a natural voice sample to impersonate a member of the client's security team; after reviewing the pretext, the client gave explicit permission to use that team member's voice for the exercise.

Next, the Red Team collected audio data to train a model, attaining a reasonable level of realism. OSINT was vital in the following phase: by collecting employee data (job titles, locations, phone numbers), the Red Team identified targets most likely to recognize the impersonated voice and to hold the requisite privileges. The team then used VoIP and number spoofing to place calls to a shortlist of targets.

After voicemail greetings and other obstacles, the first unsuspecting victim answered with "Hey boss, what's up?" A security administrator contacted by the Red Team reported the phony voice. Using the pretext of a "VPN client misconfiguration," the Red Team took advantage of a global outage affecting the client's VPN provider. This well-crafted scenario created urgency and made the victim more receptive to the Red Team's instructions.

Because they trusted the voice on the phone, the victim ignored Microsoft Edge and Windows Defender SmartScreen security prompts and downloaded and executed a pre-prepared malicious payload on their workstation. The simulation ended with the payload detonation, showing how easily AI voice spoofing attacks can breach an organization.

Security Considerations

Few technical detection controls exist for this kind of social exploitation. Three main mitigations are awareness, source verification, and future technology considerations.


Awareness

Inform employees, especially those with financial authority and privileged access, about AI vishing attacks. Consider including AI-enhanced threats in security awareness training. With impersonation now efficient and accessible to threat actors, everyone should be skeptical of phone calls, especially those that fall under one of the following categories:

  • The caller makes claims that do not seem plausible.
  • The caller is not someone or something you would normally trust.
  • The caller attempts to assert questionable authority.
  • The caller does not sound like the person they claim to be.

Trusted employees should be wary of high-urgency calls demanding financial details or access information such as a one-time password. Employees should feel empowered to hang up and report questionable calls, especially if they suspect AI vishing, since a similar attack is likely to target another employee.

Verifying Source

Verify the information through reliable channels. This includes hanging up and calling back on a number verified from a trusted source, or asking the caller to confirm via a text from a validated number, an email, or a business chat message.

Train personnel to notice audio irregularities such as abrupt background-noise changes, which may indicate the threat actor did not clean the audio adequately. Listen for odd speech patterns, such as a vernacular completely different from the one the real person uses, as well as unusual inflections, filler words, clicks, pauses, and repetition. Consider voice timbre and cadence.
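To give a sense of the kind of signal these irregularities leave behind, here is a minimal sketch that flags abrupt short-term energy changes in a recorded call using the open-source librosa and numpy libraries. The file name and the jump threshold are illustrative assumptions, and a fixed energy threshold is only a crude stand-in for a trained ear or dedicated detection tooling.

import librosa
import numpy as np

# Illustrative file name; replace with a recorded call exported as a WAV file.
audio, sr = librosa.load("suspicious_call.wav", sr=16000, mono=True)

# Short-term RMS energy per ~32 ms frame.
frame_length, hop_length = 512, 256
rms = librosa.feature.rms(y=audio, frame_length=frame_length, hop_length=hop_length)[0]

# Flag frames where energy jumps sharply relative to the previous frame,
# a crude proxy for abrupt background-noise changes left by sloppy audio cleanup.
jumps = np.abs(np.diff(rms)) > 3 * np.std(rms)  # threshold chosen arbitrarily for illustration
times = librosa.frames_to_time(np.nonzero(jumps)[0], sr=sr, hop_length=hop_length)

for t in times:
    print(f"Possible abrupt background change at {t:.2f} s")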

Set code words for executives and key employees who handle sensitive and/or financial data. Distribute them out of band to minimize exposure in the event of an enterprise breach. When in doubt, code words can be used to verify a caller's identity.

If feasible, send unfamiliar calls to voicemail. Treat calls as carefully as emails, and report questionable ones to raise awareness across the organization.

Future Tech Considerations

At best, organizations can protect audio conversations by employing dedicated networks for VoIP channels, along with authentication and transport encryption. This does not, however, stop attacks against employees' personal phones.

In the future, organizations should safeguard official audio assets with digital watermarking, which is imperceptible to humans but detectable by AI.
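As a rough illustration of the idea, the sketch below embeds and detects a low-amplitude pseudo-random signature keyed by a secret seed, the simplest form of spread-spectrum audio watermarking. The seed, strength value, and synthetic test tone are assumptions, and production watermarking schemes are far more robust to compression, resampling, and editing than this.

import numpy as np

def embed_watermark(audio: np.ndarray, seed: int, strength: float = 0.005) -> np.ndarray:
    # Add a low-amplitude pseudo-random signature keyed by a secret seed.
    rng = np.random.default_rng(seed)
    return audio + strength * rng.standard_normal(audio.shape[0])

def detect_watermark(audio: np.ndarray, seed: int) -> float:
    # Correlate against the expected signature; a score near the embedding
    # strength suggests the watermark is present, near zero suggests it is not.
    rng = np.random.default_rng(seed)
    signature = rng.standard_normal(audio.shape[0])
    return float(np.dot(audio, signature) / np.dot(signature, signature))

# Illustrative usage with one second of a synthetic 440 Hz tone at 16 kHz.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
clean = 0.1 * np.sin(2 * np.pi * 440 * t)

marked = embed_watermark(clean, seed=1234)
print("watermarked score:", detect_watermark(marked, seed=1234))  # roughly 0.005
print("unmarked score:   ", detect_watermark(clean, seed=1234))   # roughly 0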

Caller verification will likely be included in mobile device management tools. In the interim, organizations should require critical communications to occur over corporate chat channels, where strong authentication is enforced and identities are harder to spoof.

Research and methods to detect deepfakes are being developed. Despite their uneven accuracy, they can already be applied to voicemail or offline voice notes, and over time detection will improve and mature into enterprise tooling. DF-Captcha, a simple application that issues challenge-response prompts to a human caller to verify the other party's identity, is an example of real-time detection research.
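To give a sense of what such offline detection tooling can look like, below is a minimal sketch of a spectral-feature classifier, assuming a labeled corpus of genuine and synthetic voice clips is available. The directory and file names are placeholders, and a logistic regression over MFCC statistics is far simpler (and weaker) than the models used in published deepfake detection research.

import glob
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def clip_features(path: str) -> np.ndarray:
    # Summarize a clip as the mean and standard deviation of its MFCCs.
    audio, sr = librosa.load(path, sr=16000, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Placeholder directories: genuine recordings vs. known synthetic samples.
real_paths = glob.glob("corpus/real/*.wav")
fake_paths = glob.glob("corpus/synthetic/*.wav")

X = np.array([clip_features(p) for p in real_paths + fake_paths])
y = np.array([0] * len(real_paths) + [1] * len(fake_paths))  # 1 = synthetic

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Score an incoming voicemail or voice note (placeholder file name).
prob_synthetic = clf.predict_proba(clip_features("voicemail.wav").reshape(1, -1))[0, 1]
print(f"Estimated probability the clip is synthetic: {prob_synthetic:.2f}")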

Conclusion

As discussed in this blog post, modern AI tools can help develop more convincing vishing attacks. The striking success of Mandiant's vishing exercise highlights the need for stronger protection against AI voice spoofing attacks. Technology gives both attackers and defenders powerful tools, but humans remain the biggest vulnerability. This case study should spur organizations and individuals to take action.

Mandiant has begun using AI voice spoofing attacks in its more advanced Red Team and Social Engineering Assessments to show how such an attack could affect an organization. As threat actors employ this tactic more widely, defenders must plan and prepare accordingly.
