SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers
Five ways AI is already strengthening cyber-defences
Tue, 12th Mar 2024

The release of the generative artificial intelligence (AI) tool ChatGPT in November 2022 changed the global perception of AI. Seemingly overnight, AI became a mainstream field of computer science. 

Today, the challenge facing every organisation and national government is how to understand and address the risks and uncertainties that are being introduced or accelerated by AI systems, including generative AI (GenAI), machine learning, and more, while at the same time harnessing their potential for good. 

How attackers can leverage AI
AI gives attackers another tool that they can use to write convincing content for phishing emails, automate attacks so they can launch more of them more quickly, generate code to exploit vulnerabilities, and both identify and target many more potential victims. According to our latest research, 50% of the IT managers and leaders surveyed expect to see an increase in the number of attacks due to the use of AI. 

It’s not all bad news, though. Just as AI is a game-changer for attackers, it is also a game-changer for cyber defences. 

How defenders can leverage AI to stop them
Here are five ways in which cybersecurity teams are already using AI to strengthen cyber-defence.

1. Threat detection and intelligence

AI, particularly machine learning algorithms, can analyse vast amounts of data to establish baseline behaviour and detect anomalies that may indicate security threats. This could include the detection of unusual network traffic, atypical user behaviour, or unexpected system activities. AI can then alert security analysts or remediate malicious activities.

AI can be highly effective at spotting insider threats, identifying unusual account access patterns, and recognising deviations from standard communication behaviour. AI also excels at recognising complex patterns that may not be immediately apparent to human analysts and at analysing historical data to predict potential future threats. 
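Baseline-and-anomaly detection of this kind can be sketched in a few lines. The toy example below (illustrative only, not any vendor's implementation) learns a statistical baseline from historical activity features and flags events that deviate sharply from it; the feature choices and threshold are assumptions for the sake of the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Historical baseline activity per observation:
# (logins per hour, MB transferred, distinct hosts contacted)
baseline = rng.normal(loc=[5.0, 20.0, 3.0], scale=[1.0, 5.0, 1.0], size=(500, 3))

mu, sigma = baseline.mean(axis=0), baseline.std(axis=0)

def is_anomalous(event, threshold=4.0):
    """Flag the event if any feature deviates more than `threshold`
    standard deviations from the learned baseline."""
    z = np.abs((event - mu) / sigma)
    return bool((z > threshold).any())

is_anomalous(np.array([5.2, 21.0, 3.0]))    # typical activity
is_anomalous(np.array([4.0, 900.0, 40.0]))  # exfiltration-like spike
```

Production systems replace the simple z-score with trained machine learning models over far richer feature sets, but the principle is the same: establish what normal looks like, then surface what doesn't fit.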

2. Email security

AI can identify known phishing patterns and signatures, allowing it to recognise and flag suspicious emails. Beyond known patterns, AI looks for anomalies in email behaviour and characteristics. It identifies irregular sender behaviour, unusual email content, or deviations from established communication patterns. Natural language processing is used to analyse the content of incoming messages for sentiment, context, tone, and potentially malicious intent.

This approach allows for more accurate and effective detection of personalised phishing attacks, including those created with the help of generative AI techniques. 

3. Security awareness training

Organisations need to prepare employees so they are ready for all attacks, including those that are AI-enabled. GenAI can help provide targeted, personalised, in-the-moment training to end users.

Traditional training approaches are generally scheduled and involve simulations or fabricated attacks. We believe the best time to learn is when an actual attack happens to you. Barracuda is finalising just-in-time smart training: when a user is confronted with a phishing attempt containing a malicious link, a specialised feature neutralises the weaponised link and instead directs the recipient to tailored resources and a chat about the threat they encountered.

4. Automated and augmented incident response

Faster response to threats and incidents is the top benefit IT professionals expect from deploying AI. AI-driven systems can operate faster and more efficiently to respond to security threats in real-time while reducing human error. AI can use natural language processing to make decisions and extract the information needed during investigations, correlating signals across attack surfaces so it can start disabling an attack sooner. 
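The correlation step described above can be sketched simply: group alerts from different attack surfaces by the affected user, and trigger an automated response when independent sources agree. The alert shapes and the "three distinct sources" rule below are assumptions for illustration, not a real product's logic.

```python
from collections import defaultdict

# Example alerts arriving from different attack surfaces (illustrative).
alerts = [
    {"source": "email",    "user": "bob",   "type": "phishing_click"},
    {"source": "endpoint", "user": "bob",   "type": "new_process"},
    {"source": "network",  "user": "bob",   "type": "unusual_egress"},
    {"source": "email",    "user": "alice", "type": "phishing_click"},
]

# Correlate signals per user across surfaces.
by_user = defaultdict(list)
for a in alerts:
    by_user[a["user"]].append(a)

# Respond automatically when three independent surfaces implicate one user.
to_disable = [u for u, items in by_user.items()
              if len({a["source"] for a in items}) >= 3]
```

Here only `bob` crosses the threshold, so an automated playbook could disable that account while `alice`'s single alert goes to an analyst for review.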

5. Application security

In application security, AI can be used to detect anomalies and provide the correct responses to stop attacks. AI-powered solutions use machine learning models to detect bots and intelligently adjust those models over time. 

AI can also be used to detect initial access and reconnaissance attempts. Attackers who use zero-days often bypass existing protections because those protections look for specific, known patterns of access. AI can better detect anomalous access, weigh the risks, and alert admins while blocking the attacks, effectively reducing the attack surface.

Conclusion – beyond the headlines
The many headlines and doom-laden predictions regarding AI and its ability to give cyber attackers the upper hand can obscure the fact that AI can also be a powerful force for good. In this article we've summarised five areas where it can strengthen cybersecurity. There are more – and many security vendors, including Barracuda, have been quietly and effectively integrating increasingly powerful AI into their security technologies for years.