
AI security report warns of rising deepfakes & Dark LLM threat


Check Point Research has released its inaugural AI Security Report, detailing how artificial intelligence is affecting the cyber threat landscape, from deepfake attacks and generative AI-driven cybercrime to the defences emerging against them.

The report explores four main areas where AI is reshaping both offensive and defensive actions in cyber security.

According to Check Point Research, one in 80 generative AI prompts poses a high risk of sensitive data leakage, with one in 13 containing potentially sensitive information that could be exploited by threat actors.
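One practical response to this finding is to screen prompts for sensitive content before they leave the organisation. The snippet below is a minimal illustrative sketch of such a pre-submission check; the patterns, names, and example data are assumptions for demonstration and are not drawn from the report itself.

```python
import re

# Illustrative patterns for a few common sensitive-data types (assumed, not exhaustive).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data types detected in a generative AI prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = "Summarise this customer record: jane@example.com, card 4111 1111 1111 1111"
    findings = screen_prompt(prompt)
    if findings:
        print(f"Blocked: prompt contains {', '.join(findings)}")  # flag before it reaches the model
    else:
        print("Prompt passed screening")
```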

The study also highlights incidents of AI data poisoning linked to disinformation campaigns, as well as the proliferation of so-called 'Dark LLMs' such as FraudGPT and WormGPT. These large language models are being weaponised for cybercrime, enabling attackers to bypass existing security protocols and carry out malicious activities at scale.

Lotem Finkelstein, Director of Check Point Research, commented on the rapid transformation underway, stating, "The swift adoption of AI by cyber criminals is already reshaping the threat landscape. While some underground services have become more advanced, all signs point toward an imminent shift - the rise of digital twins. These aren't just lookalikes or soundalikes, but AI-driven replicas capable of mimicking human thought and behaviour. It's not a distant future - it's just around the corner."

The report examines how AI is enabling attackers to impersonate and manipulate digital identities, blurring the boundary between what is authentic and what is fake online.

The first threat identified is AI-enhanced impersonation and social engineering. Threat actors are now using AI to generate convincing phishing emails, audio impersonations, and deepfake videos. In one case, attackers successfully mimicked Italy's defence minister with AI-generated audio, demonstrating the sophistication of current techniques and the difficulty in verifying online identities.

Another prominent risk is large language model (LLM) data poisoning and disinformation. The study refers to an example involving Russia's disinformation network Pravda, where AI chatbots were found to repeat false narratives 33% of the time. This trend underscores the growing risk of manipulated data feeding back into public discourse and highlights the challenge of maintaining data integrity in AI systems.

The report also documents the use of AI for malware development and data mining. Criminal groups are reportedly harnessing AI to automate the creation of tailored malware, conduct distributed denial-of-service (DDoS) campaigns, and process stolen credentials. Notably, services like Gabbers Shop are using AI to validate and clean stolen data, boosting its resale value and targeting efficiency on illicit marketplaces.

A further area of risk is the weaponisation and hijacking of AI models themselves. Attackers have stolen LLM accounts or constructed custom Dark LLMs, such as FraudGPT and WormGPT. These advanced models allow actors to circumvent standard safety mechanisms and commercialise AI as a tool for hacking and fraud, accessible through darknet platforms.

On the defensive side, the report makes it clear that organisations must now presume that AI capabilities are embedded within most adversarial campaigns. That assumption alone demands a revised approach to cyber defence.

Check Point Research outlines several strategies for defending against AI-driven threats. These include using AI-assisted detection and threat hunting to spot synthetic phishing content and deepfakes, and adopting enhanced identity verification techniques that go beyond traditional methods. Organisations are encouraged to implement multi-layered checks encompassing text, voice, and video, recognising that trust in digital identity can no longer be presumed.
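As an illustration of what such multi-layered checks might look like in practice, the sketch below combines independent text, voice, and video signals into a single verification decision. The signal names, weights, and threshold are assumptions chosen for demonstration only and do not describe Check Point's own tooling.

```python
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    """Scores from independent checks, each in [0, 1]; higher means more likely genuine."""
    text_style_match: float      # e.g. writing-style consistency with known samples
    voice_liveness: float        # e.g. anti-spoofing score from a voice-liveness check
    video_authenticity: float    # e.g. 1.0 minus the estimated probability of a deepfake

# Assumed weights and threshold, chosen for illustration.
WEIGHTS = {"text_style_match": 0.2, "voice_liveness": 0.4, "video_authenticity": 0.4}
THRESHOLD = 0.75

def identity_verified(signals: VerificationSignals) -> bool:
    """Require a strong combined score and no single channel scoring very low."""
    scores = {
        "text_style_match": signals.text_style_match,
        "voice_liveness": signals.voice_liveness,
        "video_authenticity": signals.video_authenticity,
    }
    combined = sum(WEIGHTS[name] * value for name, value in scores.items())
    no_weak_channel = all(value >= 0.3 for value in scores.values())
    return combined >= THRESHOLD and no_weak_channel

if __name__ == "__main__":
    # A convincing video but a weak voice-liveness score still triggers manual review.
    caller = VerificationSignals(text_style_match=0.9, voice_liveness=0.4, video_authenticity=0.85)
    print("verified" if identity_verified(caller) else "escalate to manual review")
```

The design choice here is that no single channel is trusted on its own, which reflects the report's point that trust in any one form of digital identity can no longer be presumed.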

The report also stresses the importance of integrating AI context into threat intelligence, allowing cyber security teams to better recognise and respond to AI-driven tactics.

Lotem Finkelstein added, "In this AI-driven era, cyber security teams need to match the pace of attackers by integrating AI into their defences. This report not only highlights the risks but provides the roadmap for securing AI environments safely and responsibly."
