SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers

AI projects & machine identities shift cybersecurity focus

Wed, 1st Oct 2025

Cybersecurity professionals are highlighting the growing need for rigorous security strategies as organisations increasingly deploy artificial intelligence and manage rapidly proliferating non-human identities.

With the arrival of Cybersecurity Awareness Month, industry experts are calling for a broader understanding of evolving threats and the urgent governance gaps posed by new technology within enterprises.

AI project adoption

Mick McCluney, ANZ Field CTO at Trend Micro, stated that enterprises embracing AI-driven projects must prioritise security alongside innovation. He said many organisations move quickly to leverage AI, sometimes with insufficient consideration for the associated risks, such as data exposure, manipulation of models, or new regulatory responsibilities.

"AI systems also introduce unique risks such as data poisoning, model theft, and adversarial attacks that traditional controls don't fully cover," McCluney said. "A robust cybersecurity approach means treating AI projects with the same discipline as any critical asset: securing data pipelines, continuous monitoring for model drift or manipulation, and applying access controls to models and the data they learn from."
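The "monitoring for model drift" McCluney describes is often done by comparing a model's live output distribution against a trusted baseline. As a minimal illustrative sketch (not any particular vendor's implementation), the widely used population stability index (PSI) can flag when live scores diverge from the baseline; the function name and thresholds below are assumptions for the example:

```python
import math

def population_stability_index(baseline, live, bins=10):
    """Compare two score distributions; a higher PSI indicates more drift.

    Rough rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25
    significant drift worth investigating.
    """
    lo = min(min(baseline), min(live))
    hi = max(max(baseline), max(live))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Smooth zero counts so the log term below is always defined.
        return [(c + 1e-6) / len(xs) for c in counts]

    b, l = hist(baseline), hist(live)
    return sum((li - bi) * math.log(li / bi) for bi, li in zip(b, l))

# Identical distributions score ~0; disjoint ones score far above 0.25.
baseline_scores = [0.1, 0.2, 0.3, 0.4]
drifted_scores = [0.7, 0.8, 0.9, 1.0]
```

In practice such a check would run on a schedule against production prediction logs, with alerts wired to the same incident process as any other critical asset.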

McCluney explained that beyond technical controls, clear governance frameworks aligned with relevant regulations are needed to ensure accountability and build trust in AI deployments. He noted that Cybersecurity Awareness Month offers an opportunity to recognise AI not only as a defensive capability but also as a potential new avenue for attack if not secured appropriately.

He added, "Embedding security from the start enables safer, responsible innovation."

Non-human identities

The dramatic surge in non-human identities within corporate IT ecosystems is another focus area for security professionals this year. According to Paul Walker, field strategist at Omada, most organisations now have a far greater number of machine or automated identities - including service accounts, APIs, workloads, bots, and AI agents - than human users.

"When we think about identity in cybersecurity, we instinctively think of people. But that picture has shifted dramatically. In most organizations today, non-human identities - service accounts, APIs, bots, workloads, and increasingly AI agents - outnumber human identities by a huge margin. Research shows the ratio is roughly 82 to one. That's not just a matter of scale; it's a structural change in how identity works," Walker said.

Walker noted that autonomous and semi-autonomous AI agents are among the fastest-growing categories. These entities act on behalf of users or organisations and require their own credentials for authentication and authorisation, making them new actors in digital environments. Each must be brought under robust governance, he said, or they risk becoming exposure points for attackers.
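One common way to keep agent credentials governable is to issue them short-lived and scope-limited, rather than as standing secrets. The sketch below is a hypothetical, simplified token scheme to illustrate the idea; production systems would delegate this to an identity provider and a KMS rather than an in-process signing key:

```python
import base64
import hashlib
import hmac
import json
import secrets
import time

# Illustrative only: real deployments keep signing keys in a KMS/IdP.
SIGNING_KEY = secrets.token_bytes(32)

def mint_agent_token(agent_id, scopes, ttl_seconds=300):
    """Mint a short-lived, scope-limited credential for an AI agent."""
    claims = {"sub": agent_id, "scopes": scopes,
              "exp": int(time.time()) + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_agent_token(token, required_scope):
    """Accept the token only if untampered, unexpired, and in scope."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims["exp"] > time.time() and required_scope in claims["scopes"]
```

Because every credential expires within minutes and names its permitted scopes, a leaked agent token has a small blast radius and every agent action remains attributable to a distinct identity.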

Unlike human users, non-human identities are created rapidly and at scale, persist across cloud and hybrid environments, and frequently interface directly with sensitive systems. They often lack standard life-cycle management processes, such as onboarding or deprovisioning, and their dynamism adds further complexity.
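The missing lifecycle management described above can start with something as simple as a scheduled inventory sweep that flags machine identities with no recorded owner or no recent credential use. The sketch below assumes a hypothetical inventory format (a list of dicts with `name`, `owner`, and `last_used` fields) purely for illustration:

```python
from datetime import datetime, timedelta, timezone

def stale_identities(inventory, max_idle_days=90, now=None):
    """Flag non-human identities that are candidates for review or
    deprovisioning: unused beyond the idle window, or with no owner."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_idle_days)
    flagged = []
    for ident in inventory:
        reasons = []
        if ident.get("last_used") is None or ident["last_used"] < cutoff:
            reasons.append("idle")
        if not ident.get("owner"):
            reasons.append("unowned")
        if reasons:
            flagged.append((ident["name"], reasons))
    return flagged
```

Even this basic sweep gives each service account, bot, or agent the onboarding-to-deprovisioning accountability that human identities already receive.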

"Traditional IAM simply wasn't built for this complexity and unpredictability. That gap creates blind spots and an expanded attack surface. And with regulations like DORA and NIS2 demanding accountability for all identities, not just human ones, the urgency is clear," Walker added.

Regulatory landscape

Regulatory developments are intensifying the pressure on organisations to address the risks associated with both AI systems and the rapid proliferation of non-human identities. Frameworks such as the Digital Operational Resilience Act (DORA) and the NIS2 Directive expect organisations to demonstrate accountability and risk management practices for all entities with access to systems and data.

This broader definition of 'identity' requires enterprises to reconsider their security strategies and adopt solutions that can handle both the scale and fluidity inherent in modern IT architectures. Failure to adapt may result in compliance failures, potential data breaches, or disruption to core operations.

Broader security perspective

McCluney emphasised the importance of building security into AI initiatives from the earliest stages, integrating both technical controls and governance. Continuous monitoring and controls over data pipelines and model behaviour are now necessary to detect abnormal activity or attempted manipulation.

Walker observed that the rise of machine identities and agentic AI is no longer an emerging trend but a structural shift in how organisations must view and manage risk. "Cybersecurity Awareness Month is a reminder: identity governance must now extend beyond people. Securing the vast, fast-moving ecosystem of non-human identities, especially AI-driven ones, is becoming central to resilience and trust," he said.

Industry commentators agree that while digital transformation and AI bring benefits to productivity and operations, they also require a parallel investment in appropriate controls, monitoring, and governance to manage increasingly complex and automated environments.
