Mandiant reveals how threat actors perceive generative AI
Tue, 29th Aug 2023

The latest research from Mandiant reveals that threat actors are interested in generative AI, but their use of it so far remains limited.

Threat intelligence and security researchers at Mandiant have been monitoring threat actors' interest in and utilisation of AI capabilities since 2019. Despite this interest, the adoption of AI in malicious activities remains constrained, primarily focused on social engineering.

In contrast, information operations actors with varying motives have increasingly embraced AI-generated content, especially images and videos, for disinformation campaigns. The recent introduction of new generative AI tools has sparked renewed attention to these capabilities. 

Mandiant predicts that generative AI tools will accelerate the integration of AI into both information operations and intrusion activities. 

These technologies have the potential to significantly amplify malicious operations by providing threat actors, even those with limited resources, with advantages akin to those offered by exploit frameworks such as Metasploit or Cobalt Strike. Although adversaries are experimenting with AI tools, their practical application in operations remains limited.

Generative AI technologies offer information operations actors two significant advantages: the efficient expansion of their activities beyond their inherent capabilities and the creation of realistic fabricated content for deceptive purposes. 

Generative AI models can create articles, political cartoons, or benign filler content, streamlining persona building. Conversational AI chatbots based on large language models can help operators overcome language barriers when targeting foreign audiences. Additionally, hyper-realistic AI-generated content can be more persuasive than traditional fabricated content.

Generative adversarial networks (GANs) and generative text-to-image models are the main categories of AI-generated imagery. Threat actors have been using GANs to create realistic profile pictures for inauthentic personas, often with affiliations to nation-states or non-state actors. 

Text-to-image models present an even more deceptive threat, enabling a broader range of applications, including the spread of disinformation.
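One practical consequence of how these generators work is worth noting for defenders: GAN-generated faces of the kind used for fake personas tend to place facial landmarks, particularly the eyes, at nearly the same pixel coordinates in every image. A rough triage heuristic, sketched below in Python, is to average a batch of suspect profile pictures; a composite with sharply aligned eyes suggests a common generator. The suspect_avatars/ folder and the implementation details are illustrative assumptions, not a technique attributed to Mandiant.

# Rough triage heuristic for GAN-generated profile photos.
# StyleGAN-style generators tend to place the eyes at almost the same
# pixel coordinates in every output, so averaging a batch of suspect
# avatars often yields a composite with crisp, aligned eyes.
from pathlib import Path

import numpy as np
from PIL import Image

AVATAR_DIR = Path("suspect_avatars")   # hypothetical folder of collected profile images
SIZE = (256, 256)                      # normalise everything to one resolution

def average_avatars(folder: Path, size=SIZE) -> Image.Image:
    """Return the pixel-wise mean of all JPEG images in `folder`."""
    stack = []
    for path in sorted(folder.glob("*.jpg")):
        img = Image.open(path).convert("RGB").resize(size)
        stack.append(np.asarray(img, dtype=np.float64))
    if not stack:
        raise ValueError(f"No .jpg images found in {folder}")
    mean = np.mean(stack, axis=0).astype(np.uint8)
    return Image.fromarray(mean)

if __name__ == "__main__":
    composite = average_avatars(AVATAR_DIR)
    composite.save("composite.png")
    # A blurry composite with sharply defined, centred eyes suggests the
    # avatars share a common GAN origin and deserve closer review.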

Information operations actors have also employed AI-generated and manipulated videos to support their narratives. Examples include the DRAGONBRIDGE campaign, which has used AI-generated news presenters and deepfake videos. These technologies are expected to gain prominence as advancements continue.

While AI-generated text and audio have seen limited use in information operations, Mandiant predicts that the emergence of user-friendly tools will lead to increased adoption. Threat actors can use AI-generated text to craft lure materials in phishing campaigns, benefiting from the natural speech patterns produced by large language models.

AI-generated audio remains underutilised, although potential applications include impersonations and voice cloning. These technologies can make social engineering attempts more convincing.

Mandiant anticipates that, as awareness of and capabilities surrounding generative AI develop, threat actors will increasingly exploit its potential. The public's difficulty in distinguishing authentic content from fabricated content opens avenues for misinformation.

Despite the interest in generative AI, its adoption remains limited, providing an opportunity for the security community to counteract its potential misuse proactively. 

While threat actors' adoption of generative AI presents challenges, security practitioners can leverage similar AI technologies to bolster defence mechanisms. This can enhance threat detection, response, and mitigation strategies. 
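As one concrete illustration of what leveraging similar AI technologies could look like in practice, the sketch below trains a very small text classifier to separate phishing-style lure text from benign messages. The tiny inline dataset and the TF-IDF-plus-logistic-regression approach are illustrative assumptions, not Mandiant's or Google's tooling; a production system would be trained on a far larger labelled corpus and combined with other signals.

# Minimal sketch of using machine learning defensively: a small
# classifier that flags likely phishing lure text. Illustrative only;
# the training examples are invented and far too few for real use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples (1 = phishing lure, 0 = benign).
texts = [
    "Your account has been suspended, verify your password immediately",
    "Urgent: confirm your payroll details to avoid salary delays",
    "Invoice attached, please remit payment via the secure link",
    "Team lunch is moved to 1pm on Thursday, same room as last week",
    "Minutes from yesterday's architecture review are on the wiki",
    "Reminder: quarterly planning workshop starts Monday morning",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

incoming = "Please verify your password now to keep your account active"
score = model.predict_proba([incoming])[0][1]
print(f"Phishing likelihood: {score:.2f}")  # higher scores warrant analyst review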

Google, for instance, has implemented policies to address misinformation within AI technologies, and security frameworks such as Google's Secure AI Framework (SAIF) offer guidance on securing AI systems. Mandiant says the security community can maintain an advantage in the evolving cybersecurity landscape by staying ahead of threat actors in adopting AI-driven defences.