SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers

AI plus cyber security: The double-edged sword

Thu, 20th Nov 2025

When people talk about AI, the conversation usually leans toward innovation, productivity and possibility. But if you spend your life in cybersecurity, as I have, you start to see the other side of the story: the side most people don't realise is accelerating just as fast. Over the past few weeks alone, we've seen a level of sophistication in cyberattacks in Australia that only AI could generate.

MyGov scams that once looked suspicious now replicate government branding and authentication prompts almost perfectly. Centrelink recipients were targeted at the exact moment their benefits were due, with AI-cloned messages redirecting funds to criminals. Fake portals were spun up in minutes, tricking thousands into entering credentials that ended up on the dark web within hours. These weren't amateur scams; they were engineered with precision, using the same AI tools businesses are excited to explore, only without any ethical boundaries.

It's a confronting reminder that AI is not inherently good or bad. It simply amplifies the intention of whoever uses it.

What worries me even more than criminal capability is how quickly organisations are adopting AI without the foundations needed to support it. Every tool, chatbot, workflow and integration creates another entry point into the business. AI moves data around in ways many leaders don't yet understand, and if identity management, backups, multi-factor authentication or cloud configurations aren't strong, those gaps widen silently.

I've seen businesses unintentionally increase their cyber risk by 30 to 40 per cent simply because they introduced AI before reviewing their cyber hygiene. The excitement of innovation eclipses the need for stability, but it's crucial that the two move together. Progress without protection isn't progress at all; it's exposure.

On the other side of the equation, AI is also accelerating the speed and precision of attacks in ways humans simply can't match. Criminals can now create highly targeted phishing in seconds. Deepfake voice attacks are emerging locally, where executives are "heard" authorising payments they never approved. Even social media posts, corporate bios and years of breach data are being scraped to craft messages that feel alarmingly personal.

Business email compromise, already one of Australia's most damaging threats, is evolving too. Attackers quietly monitor inboxes using automated scripts, studying communication styles, invoice patterns and approval chains. When they step in, the emails feel legitimate because they've learned exactly how the organisation communicates.
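One of the simplest defensive counterparts is to flag messages whose display name matches a known executive while the sending domain does not, a classic tell of this kind of impersonation. The sketch below, in Python, shows the idea; the directory, names and domains are hypothetical assumptions for illustration, not how any particular mail platform does it.

    from email.utils import parseaddr

    # Hypothetical directory of known executives and their legitimate domain.
    KNOWN_SENDERS = {"Jane Citizen": "digitalarmour.example"}

    def looks_like_impersonation(from_header: str) -> bool:
        """Flag a display name matching a known executive on the wrong domain."""
        name, address = parseaddr(from_header)
        domain = address.rsplit("@", 1)[-1].lower()
        expected = KNOWN_SENDERS.get(name)
        return expected is not None and domain != expected

    # A lookalike sender reusing a real executive's display name is flagged.
    print(looks_like_impersonation('"Jane Citizen" <jane@payments-urgent.example>'))  # True

A heuristic this simple won't stop a patient attacker on its own, but it catches the low-effort impersonations that slip past busy readers.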

The challenge is no longer just technical; it's behavioural. The biggest vulnerabilities I see aren't always in systems but in people. Well-meaning employees paste confidential information into public AI tools without understanding where that data goes. Teams experiment with AI apps that have no governance, no oversight, and no guarantee of data safety. These behaviours are rarely malicious; they're born from curiosity and pressure to keep up. But they have very real consequences.

This is why I believe governance must come before tools. Not because governance slows innovation, but because it makes innovation safe. We need to help people understand what they can and cannot do with AI. We need frameworks, boundaries, and clear practices for handling data. And we need to choose tech partners and tools that keep information within the organisation, not floating in public AI models where it can never be retrieved.
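To make that concrete, here is a minimal sketch in Python of one such guardrail: a redaction layer that screens text for likely-sensitive patterns before it is pasted into a public AI tool. The patterns below are illustrative assumptions, not a complete data-loss-prevention policy; a real deployment would sit behind a vetted DLP engine and rules agreed with security, legal and privacy teams.

    import re

    # Illustrative patterns only; a production guardrail needs a proper DLP engine.
    PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "tfn": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),  # Tax File Number shape
    }

    def redact(text: str) -> str:
        """Replace likely-sensitive substrings before text leaves the organisation."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
        return text

    prompt = "Summarise this invoice for jane.doe@example.com, card 4111 1111 1111 1111"
    print(redact(prompt))  # sensitive fields are masked before any AI call is made

A filter like this doesn't replace governance; it simply makes the safe path the default one for busy employees.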

At Digital Armour, we built a new service, Impact40, because we were watching businesses run headfirst into AI adoption without the basic cyber or behavioural guardrails required. So, we reversed the pattern. We start by strengthening the essentials (backups, data segregation and authentication pathways) and only then do we build governance and safe usage behaviours. The AI comes last, not first. 

Because the truth is, AI is capable of extraordinary good. It can detect threats faster, surface anomalies instantly, and give organisations a level of visibility they've never had before. When used well, it can unlock 20 to 40 per cent in productivity gains, which is a staggering competitive advantage in any market.
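As a sketch of what that looks like in practice, the Python snippet below trains scikit-learn's IsolationForest on synthetic "normal" login telemetry and flags bursts that deviate from it. The features and numbers are assumptions for illustration, not a detection recipe; real pipelines draw on far richer signals, but the principle of learning normal activity and surfacing what deviates is the same.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Synthetic baseline activity: [logins_per_hour, failed_attempts, distinct_ips]
    normal = rng.normal(loc=[5.0, 1.0, 1.0], scale=[2.0, 1.0, 0.5], size=(500, 3))

    # Two suspicious bursts: many failures from many IPs (credential-stuffing shape)
    suspicious = np.array([[40.0, 25.0, 12.0], [55.0, 30.0, 20.0]])

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
    print(model.predict(suspicious))  # -1 marks an anomaly, 1 looks normal

The value isn't the model itself; it's the speed at which deviations surface, which is exactly where AI earns its keep on defence.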

But like any powerful tool, the outcome depends on the order of operations. Adopt AI too quickly, and it becomes a risk. Adopt it with strong cyber foundations and clear behavioural guardrails, and it becomes a force multiplier.

AI is a double-edged sword. 

The edge you feel depends entirely on the preparation you put in. And the leaders who take this moment seriously will be the ones who use AI not just to keep up, but to stay ahead.
