How AI is powering the next generation of cybercriminals
Wed, 20th Mar 2024

The pace of artificial intelligence (AI) adoption by businesses is increasing. However, the technology is also being rapidly embraced by cybercriminals.

Keen to improve the success of their campaigns, cybercriminals are using AI tools in a range of innovative ways to make attacks both more effective and harder to detect.

Creating malware and phishing messages
It's clear that cybercriminals are already making use of generative AI tools to improve the success rates of their attacks. Some are creating new types of malware without the need for sophisticated coding skills.

In some cases, ChatGPT is being used to mutate malware code, allowing it to evade endpoint detection and response (EDR) systems. As a result, major AI service providers have now put filters in place that prevent users from directing them to write malware or assist with other malicious activity.

However, generative AI services such as ChatGPT can still be tricked into writing attack tools. If someone asks ChatGPT to write a script to test their company's servers for a specific vulnerability, it may comply. Attackers can use the same pretext to generate code aimed at systems they have no authority to test.

Aside from the well-known generative AI tools, cybercriminals also have access to several other AI applications available on the dark web - for a price. One example is WormGPT, which has been described as being like ChatGPT but with no ethical boundaries. These types of tools have no guardrails in place to prevent cybercriminals from using them to write effective malware code and other hostile tools. 

There is also evidence that attackers are using generative AI to automate the task of writing phishing emails and smishing texts. Previously, these have tended to be relatively easy to spot as they often contain poor grammar and misspellings. Now, with AI, attackers can generate highly personalised phishing emails and fraudulent SMS messages using text that seems to be more genuine. As a result, the number of messages that are opened by recipients is likely to increase.

Thankfully, as with the creation of malware, commonly used AI tools such as ChatGPT and Google Bard will decline to write phishing emails. However, attackers can work around the controls in place. If they ask ChatGPT to write an email to test their company's anti-phishing policies, the AI is likely to produce credible text.

Malicious chat sessions
Another increasingly common type of attack is chat server and service abuse. These attacks tend to start with an email, text, or social media message inviting the victim to join a chat group such as a Slack channel or a Discord server. While these group chats may look legitimate and innocuous, they can lead to serious security problems because the services allow users to share files, including Microsoft Word documents containing malicious macros.
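
As an illustration of how an organisation might screen documents shared through these channels before anyone opens them, the short Python sketch below uses the open-source oletools package to flag Office files that contain VBA macros. It is a minimal sketch under stated assumptions - the file names are hypothetical, and this approach is no substitute for proper endpoint protection.

# Minimal sketch: flag Office documents that contain VBA macros before they
# are opened. Assumes the open-source 'oletools' package (pip install oletools);
# the file names below are hypothetical examples.
from oletools.olevba import VBA_Parser

def contains_macros(path: str) -> bool:
    """Return True if the Office document at 'path' embeds VBA macros."""
    parser = VBA_Parser(path)
    try:
        return parser.detect_vba_macros()
    finally:
        parser.close()

if __name__ == "__main__":
    for doc in ["quarterly_report.docm", "invoice.doc"]:  # hypothetical files
        status = "macros found" if contains_macros(doc) else "no macros"
        print(f"{doc}: {status}")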

Also concerning is the fact that browser-based chat apps effectively bypass an organisation's perimeter controls: firewalls typically let most web traffic through, so this traffic goes largely uninspected. That can give an attacker a direct, uncontrolled line to people inside the corporate network.

In addition to allowing users to share files, chat apps typically enable users to send direct messages to each other. This makes it easier for attackers to engage in social engineering schemes.

Chat services that are connected to social media sites may also ask for usernames and passwords, giving attackers a means to compromise even more information and potentially elevate their privileges.

Thankfully, an effective antivirus application or EDR platform will alert security teams to known malware passed through chat apps. However, attackers are constantly looking for ways to evade EDR and antivirus protections, and new or obfuscated malware may still get through.
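
To show the idea behind that "known malware" detection, the sketch below hashes each file shared through a chat channel and compares it against a blocklist of known-bad SHA-256 values. The hash entry and file path are hypothetical placeholders; real EDR and antivirus products combine such signatures with behavioural analysis.

# Minimal sketch of hash-based detection of known malware in shared files.
# The blocklist entry and file path are hypothetical placeholders.
import hashlib

KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # placeholder hash
}

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_malware(path: str) -> bool:
    return sha256_of(path) in KNOWN_BAD_SHA256

if __name__ == "__main__":
    print(is_known_malware("downloads/shared_invoice.docm"))  # hypothetical file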

Targeting APIs
Another attack vector being exploited by AI-equipped cybercriminals is the targeting of vulnerabilities in APIs as a method of stealing data. In some cases, APIs transmit user data - such as usernames and passwords - in plain text as users log into web-based applications. And if a website exposes one user's ID number, attackers can often extrapolate other users' ID numbers and attempt to hijack their accounts.
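
The sketch below illustrates why predictable, sequential user IDs are risky. It is framed as a check an organisation might run against its own test environment: starting from one known ID, it probes neighbouring IDs to see whether the API returns data it should not. The endpoint URL and ID values are hypothetical assumptions, not a real service.

# Illustrative check (run only against your own test API): does an endpoint
# with sequential user IDs return other users' records without authorisation?
# The URL and ID values are hypothetical placeholders.
import requests

BASE_URL = "https://api.example.test/v1/users/{user_id}"  # hypothetical endpoint

def ids_are_enumerable(known_id: int, probe_range: int = 5) -> bool:
    """Return True if any neighbouring user ID yields a successful response."""
    for candidate in range(known_id + 1, known_id + 1 + probe_range):
        resp = requests.get(BASE_URL.format(user_id=candidate), timeout=5)
        if resp.status_code == 200:
            return True  # another user's data was exposed
    return False

if __name__ == "__main__":
    print(ids_are_enumerable(known_id=1001))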

Security teams should be aware that APIs are vulnerable to a range of different attacks, and several powerful applications on the market allow organisations - and attackers alike - to probe APIs for weaknesses.

One common API attack uses a technique known as 'fuzzing', in which a cybercriminal sends increasingly large garbage strings to a website or API in place of the data it is expecting. Fuzzing can trigger buffer overflows and other input-handling failures, potentially causing APIs to crash or expose sensitive data.
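
As a minimal sketch of what such fuzzing looks like in practice - intended for testing an organisation's own API, and assuming a hypothetical endpoint and field name - the code below sends progressively larger junk strings and watches for server errors or failed requests that may point to an input-handling flaw.

# Minimal fuzzing sketch for testing your own API: send progressively larger
# garbage strings where the endpoint expects a short field and watch for
# 5xx responses or failed requests. The URL and field name are hypothetical.
import requests

TARGET = "https://api.example.test/v1/search"  # hypothetical endpoint

def fuzz(max_power: int = 16) -> None:
    for power in range(4, max_power):
        payload = "A" * (2 ** power)  # garbage string, doubling in size each round
        try:
            resp = requests.post(TARGET, json={"query": payload}, timeout=5)
            if resp.status_code >= 500:
                print(f"len={len(payload)}: server error {resp.status_code}")
        except requests.RequestException as exc:
            print(f"len={len(payload)}: request failed ({exc})")

if __name__ == "__main__":
    fuzz()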

The good news is that today's network detection and response (NDR) platforms can help organisations pick up on these attacks. If an attacker manages to get onto an enterprise network - for example, by using new AI-generated malware, compromising a chat service, or exploiting API vulnerabilities - these platforms can reveal attempts to move laterally inside the network, conduct reconnaissance, and steal data. They can also detect cross-site scripting, server-side request forgery, and SQL injection, among hundreds of other attack techniques.

By being aware of these AI-powered attack techniques and the solutions available, security teams will be better positioned to guard against them. AI tools can then be used to enhance the organisation's performance while the risk of a damaging cyberattack is reduced.