GenAI: A new era of social engineering threats for Australian businesses
AI has held the attention of the cybersecurity world for some time. In the early days of the technology, opinions varied wildly from 'this is the silver bullet cybercriminals have been waiting for' to 'it might help them write a few more emails.'
Fast forward to the era of GenAI, and there's little doubt that threat actors now have an incredibly powerful tool in their arsenal. We're past the stage of AI assisting with the donkey work. GenAI is now transforming social engineering, helping cybercriminals convincingly spoof faces and personalities or create new identities.
It's little wonder that GenAI tops the list of current Australian CISO concerns. In the 2024 Voice of the CISO report, more than half of Australian CISOs (51%) believed it posed a security risk to their organisation.
They're right to be worried. Our people are the gateway for the vast majority of cyber-attacks, and the more convincing the lure, the higher its chances of success. And, from deepfakes to automated profiles, GenAI is nothing if not convincing.
Understanding the GenAI arsenal
Unfortunately, cybercriminals have been manipulating our people into opening the door to our organisations for some time. And while email remains by far the biggest threat vector, recent GenAI developments allow attackers to launch convincing social engineering attacks across every channel.
Natural Language Processing (NLP) models analyse vast conversational datasets, from social media feeds to breached chat logs, to help cybercriminals mimic a trusted person's tone and conversational patterns. This makes it more difficult than ever to distinguish between legitimate and spoofed communications via email, LinkedIn, WhatsApp or other messaging apps.
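Defenders can turn similar stylometric signals against attackers. The sketch below is purely illustrative (the emails, threshold and function names are invented, not drawn from any product): it compares an inbound message's character-trigram profile against a baseline built from a sender's known emails and flags a possible impersonation when the styles diverge.

```python
# Hypothetical stylometric check: does an inbound message match the
# writing style of the person it claims to come from?
import math
from collections import Counter

def trigram_profile(text: str) -> Counter:
    """Count overlapping character trigrams in lowercased text."""
    text = text.lower()
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two trigram frequency profiles."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Baseline built from messages the sender is known to have written.
known_emails = ["Hi team, please find the Q3 figures attached.",
                "Thanks for the update - let's discuss tomorrow."]
baseline = Counter()
for email in known_emails:
    baseline += trigram_profile(email)

incoming = "URGENT!! wire the funds immediately to the account below"
score = cosine_similarity(baseline, trigram_profile(incoming))
if score < 0.5:  # threshold would be tuned on real data
    print(f"Style mismatch (similarity={score:.2f}) - flag for review")
```

Production tools use far richer models, but the principle is the same: impersonation is easier to spot when you hold a baseline of how someone actually writes.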
NLP also helps cybercriminals launch convincing attacks in non-English-speaking countries. Traditionally, threat actors avoided places such as Japan, Korea and the UAE due to language and cultural barriers. However, GenAI's ability to accurately mimic local communications is driving a rise in targeted business email compromise (BEC) attacks across these regions.
As well as spoofing legitimate contacts, threat actors can use advanced GenAI models to create entirely new identities. These identities can be used to interact and build trust with targets on platforms like Facebook or LinkedIn before sending a lure. Alternatively, they may mimic the accounts of a trusted media outlet or industry body.
Deepfake technology further ups the ante, using advanced machine learning (ML) models to create highly realistic content that mimics a person's likeness, voice and mannerisms. One of the most popular approaches, the Generative Adversarial Network (GAN), pits two neural networks against each other: a generator that produces fakes and a discriminator that tries to spot them, with each round of competition driving the output towards greater realism. It is this combination of highly advanced technologies that gives threat actors an edge. Where a lone phishing message may be dismissed, one backed up by a prolonged email conversation, a phone call and even a video conference is unlikely to raise the same suspicion.
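To make the adversarial idea concrete, here is a minimal toy GAN training loop. It is a sketch only, assuming PyTorch and one-dimensional data rather than faces or voices, but it shows the core mechanic: the generator and discriminator improve by competing with each other.

```python
# Toy GAN sketch (assumes PyTorch). The generator learns to produce
# samples the discriminator cannot distinguish from "real" data.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, 1) * 0.5 + 2.0  # "real" data drawn from N(2, 0.5)
    fake = generator(torch.randn(32, 8))   # generator maps noise to samples

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator output 1 for fakes.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

In a deepfake pipeline, the same adversarial loop runs over images or audio instead of toy numbers, which is why the fakes keep getting harder to spot.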
These GAN-powered deepfakes are already being deployed in phone and video scams – with alarming success. Just recently, a sophisticated deepfake investment scam featuring Australian Foreign Minister Penny Wong and other senior politicians targeted thousands of Australians on social media. This incident, along with a surge in similar attacks, has spurred the government to combat AI-related harms through measures such as mandatory watermarking of AI-generated images.
The problem is already widespread. Recent research by ISMS.online revealed that a staggering one in four Australian businesses has fallen victim to a deepfake information security incident in the past year alone.
Meeting sophistication with suspicion
While GenAI has undoubtedly made social engineering attacks much more convincing, several simple steps can help your people stop them in their tracks.
Where possible, limit the amount of personal and professional information available online to reduce the raw material available for deepfakes and impersonation. This could mean reminding users to check and update the privacy settings on their social media accounts.
Your organisation should also have a clear policy in place for any communications relating to the transfer of funds, financial details or sensitive information. At a minimum, any changes to bank details or requests to expedite payments should require additional verification. Any unexpected or suspicious calls should also be verified with the legitimate source.
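To illustrate, a policy like this can be encoded as a simple gate in a payments workflow. The sketch below is hypothetical (the supplier IDs, contact numbers and function names are invented for the example): any bank-detail change is held until it has been confirmed via a number already on file, never one supplied in the request itself.

```python
# Hypothetical verification gate for bank-detail changes. The callback
# number comes from existing records, never from the request.
from dataclasses import dataclass

@dataclass
class BankDetailChange:
    supplier_id: str
    new_account: str
    verified_out_of_band: bool = False

# Contact numbers sourced from existing records, not from the request.
KNOWN_CONTACTS = {"SUP-001": "+61 2 9999 0000"}

def process_change(request: BankDetailChange) -> str:
    contact = KNOWN_CONTACTS.get(request.supplier_id)
    if contact is None:
        return "REJECTED: no verified contact on file for this supplier"
    if not request.verified_out_of_band:
        return (f"HELD: confirm via {contact} (number on file) "
                "before applying the change")
    return "APPROVED: change confirmed via known contact"

print(process_change(BankDetailChange("SUP-001", "123-456")))
```

The key design choice is that the verification channel comes from records the attacker does not control, so compromising the request does not compromise the callback.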
As always, education and awareness are essential. The more our people know about these sophisticated attacks, the better placed they are to keep them at bay. Training should target those most at risk, whether due to their job role, privilege level or cybersecurity proficiency.
Unfortunately, only around half of Australian organisations educate their people about security best practice. This has to change. Security awareness programmes must be comprehensive, ongoing and based on the latest threat intelligence.
An adaptive learning framework that progresses from basic habits to advanced concepts can help build a culture in which every member of your team understands their role in keeping sophisticated threats at bay.
Modern threat actors rarely break into our networks with tools and technology alone. More often than not, they trick and manipulate our people into letting them in. In this environment, it is essential that security education fosters an open yet critical mindset.
In a cyber landscape where almost anything is now possible, awareness of every possibility might just be the difference between an attempted social engineering attack and a successful one.