SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers

Exclusive: Garrett O’Hara on Mimecast’s AI fight against cyber risk

Fri, 8th Aug 2025

In a world where cyberattacks are growing more sophisticated and frequent, organisations are increasingly focusing on what Garrett O'Hara calls the "most unpredictable element in security" - humans.

Speaking during a recent interview, Garrett O'Hara, Senior Director of Solutions Engineering for APAC at Mimecast, explained how artificial intelligence (AI) is now being deployed to manage and mitigate human risk at scale.

"Human risk is anything people can do that exposes an organisation to risk, either by accident or intent," he said.

"Most of the time, it's not malicious - it's tiredness, deadlines, or someone trying to do their job more efficiently."

He pointed out that employees often unintentionally bypass security policies under pressure.

"They might upload sensitive documents to a personal drive just so they can work from home, not realising the huge risk that introduces," he added.

AI tools, while offering productivity benefits, have also opened new doors for attackers.

"We're seeing employees use tools like ChatGPT to summarise documents or create presentations, not realising they're potentially uploading sensitive corporate data to third-party platforms," he said.

On the flip side, O'Hara said AI is a vital asset in the fight against these new types of threats.

"AI is incredibly good at detecting patterns and threats that traditional methods might miss. For example, analysing URLs for slight variations that indicate a phishing attempt or identifying AI-generated scam emails."
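As a rough illustration of the kind of "slight variation" check O'Hara describes (this is not Mimecast's implementation; the trusted-domain list and thresholds are invented for the example), a lookalike-domain test can be sketched with a simple edit-distance comparison:

```python
# Illustrative sketch only: flag URLs whose domain is a near-miss of a
# known-good domain, the sort of subtle variation used in phishing.
from urllib.parse import urlparse

KNOWN_GOOD = {"mimecast.com", "paypal.com", "microsoft.com"}  # example list

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def looks_like_phish(url: str) -> bool:
    """True if the domain is a close imitation of a trusted domain
    without being an exact match."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in KNOWN_GOOD:
        return False
    return any(edit_distance(domain, good) <= 2 for good in KNOWN_GOOD)
```

Here `looks_like_phish("https://www.paypa1.com/login")` returns `True` because "paypa1.com" sits one character away from a trusted domain; production systems layer many more signals on top of a check like this.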

He described how phishing campaigns have become almost indistinguishable from genuine communications. "The old advice about bad grammar or strange formatting doesn't apply anymore. With AI, attackers are producing flawless emails in seconds," he said. "But the good news is that AI on the defensive side is just as powerful."

Mimecast's platform uses AI throughout its stack, from sandboxing and behavioural analysis to identifying language markers in emails associated with business email compromise (BEC). "We look for those AI fingerprints - which often show up in machine-generated messages," he explained.

If, for example, an email impersonates a CEO urgently requesting that staff buy gift cards - a common BEC tactic - Mimecast's AI can intercept it.

"Instead of an employee reacting to that urgency, we use AI to throw bubble wrap around them, flagging the threat before any action is taken," he said.
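A toy version of the "language markers" idea can make the concept concrete. The keyword lists, weights, and free-mail check below are invented for illustration and bear no relation to Mimecast's actual model:

```python
# Toy BEC scorer: urgency language, payment-instrument requests, and an
# executive display name on a free-mail address are classic tells.
import re

URGENCY = {"urgent", "immediately", "asap", "right away"}
PAYMENT = {"gift card", "gift cards", "wire transfer"}
FREEMAIL = {"gmail.com", "outlook.com", "yahoo.com"}

def bec_score(subject: str, body: str, display_name: str, sender_addr: str) -> int:
    text = f"{subject} {body}".lower()
    score = 0
    score += sum(2 for kw in URGENCY if kw in text)   # pressure to act fast
    score += sum(3 for kw in PAYMENT if kw in text)   # unusual payment request
    # Executive title in the display name but a consumer mail domain
    if re.search(r"\b(ceo|cfo|chief)\b", display_name.lower()) and \
            sender_addr.rsplit("@", 1)[-1].lower() in FREEMAIL:
        score += 5
    return score

def is_suspicious(score: int, threshold: int = 5) -> bool:
    return score >= threshold
```

An email with the subject "Urgent request" asking staff to "buy gift cards immediately" from "CEO John Smith <ceo.john@gmail.com>" scores well past the threshold, while an ordinary lunch invitation scores zero.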

Trust in AI is still an issue, however. "It's a double-edged sword," O'Hara acknowledged. "There's hype fatigue in cybersecurity - zero trust, now AI. And the problem is when vendors slap 'AI' onto everything, it erodes trust."

He noted that some vendors rely solely on AI, which leads to high false positive rates and overburdened security teams. "AI is probability-based. Without cross-checking, it can trigger too many false alarms, and analysts burn out sifting through them," he said.

"Our platform uses a layered approach - AI decisions are supported by additional checks across other systems, improving accuracy."
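One way to picture that layered approach - sketched here under assumed signal names and thresholds, not as Mimecast's actual logic - is a decision rule where a probabilistic verdict alone is never enough to quarantine:

```python
# Hedged sketch: a high model score must be corroborated by an
# independent deterministic check (e.g. failed sender authentication or
# a flagged URL) before the strongest action is taken. This is one way
# layering cuts the false positives that burn out analysts.
def layered_verdict(ml_phish_prob: float,
                    failed_auth_check: bool,
                    url_flagged: bool) -> str:
    if ml_phish_prob >= 0.9 and (failed_auth_check or url_flagged):
        return "quarantine"        # model and a second signal agree
    if ml_phish_prob >= 0.6:
        return "flag_for_review"   # the model alone is not trusted outright
    return "deliver"
```

The point of the design is that the probability-based component can only escalate, never act unilaterally, so a single overconfident model output does not block legitimate mail on its own.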

Mimecast has gone a step further by achieving ISO certification for the ethical use of AI, addressing concerns about bias and data misuse.

"Transparency matters. You need to understand how the model works, especially if it goes off track," he said.

Looking ahead, O'Hara envisions a future where AI acts as a sort of digital guardian angel. "Imagine a Clippy-like assistant - but useful - that knows your role, your habits, and quietly keeps you safe behind the scenes," he said.

He also discussed how application programming interfaces (APIs) play a crucial role in integrating Mimecast's human risk platform with other systems. "We pull in data from HR, endpoint and identity platforms to paint a picture of risk - right down to the individual level," he explained. "If someone's on notice or switching roles, their risk profile changes. APIs help us adapt protection accordingly."
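The kind of API-driven aggregation he describes can be sketched as a per-person risk profile. All field names, weights, and thresholds below are hypothetical, invented purely to illustrate how HR, endpoint, and identity signals might combine:

```python
# Hypothetical sketch of per-individual risk scoring from three feeds.
from dataclasses import dataclass

@dataclass
class EmployeeSignals:
    on_notice: bool           # from the HR system
    recent_role_change: bool  # from the HR system
    unmanaged_device: bool    # from endpoint management
    failed_logins_7d: int     # from the identity platform

def risk_score(s: EmployeeSignals) -> int:
    """Combine signals into a 0-100 score; weights are illustrative."""
    score = 0
    if s.on_notice:
        score += 40           # departing staff carry elevated data-loss risk
    if s.recent_role_change:
        score += 15
    if s.unmanaged_device:
        score += 20
    score += min(s.failed_logins_7d * 5, 25)
    return score

def protection_level(score: int) -> str:
    """Adapt controls to the profile, as the article suggests."""
    return "strict" if score >= 50 else "elevated" if score >= 25 else "standard"
```

In this sketch, an employee who is on notice and using an unmanaged device scores 60 and moves to "strict" protection, while a routine profile stays at "standard" - mirroring the idea that a change in circumstances should change the protection applied.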

Importantly, AI in cybersecurity is no longer just about detection and defence. Mimecast also uses it for prediction and prevention.

"With data from 44,000 companies and billions of emails daily, our AI tools can identify emerging threats early and act before damage is done," he said. "That's where we're moving - from reactive to proactive security."

But for smaller organisations, predictive security can seem out of reach.

"The average Australian SMB doesn't have the budget or capacity for that level of protection," he noted. "We offer it as a service - so they benefit without the overhead."

As for the future of cybersecurity training, O'Hara predicts a shift from generic instruction to highly tailored behavioural nudges. "Instead of monthly sessions, we'll see hyper-contextual, AI-generated interventions in the moment," he said. "That's the power of AI - it knows how to reach each individual in a way that resonates."

He added that balancing automation with human oversight remains a key concern. "Right now, most organisations use automation to assist - not replace - analysts. And that's wise," he said. "False positives can grind a business to a halt if something like Salesforce gets blocked. But as AI improves, that balance will shift."

Ultimately, he believes that the most exciting developments are still unknown.

"I'm genuinely excited by what we don't yet see coming," he said. "AI has unlocked possibilities that feel like magic."

And while security teams dream of AI replacing their most tedious tasks, O'Hara points out there's a long way to go.

"If AI can act like Cinderella's godmother - guiding users to return home just before the stroke of midnight - then we're on the right track," he said.