SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers

AI, identity & physical security raise data stakes

Mon, 2nd Feb 2026

Australian organisations face rising pressure to tighten control of sensitive data as AI adoption, machine identities, and physical security systems expand the potential impact of breaches, industry leaders have warned ahead of World Data Privacy Day.

Security executives from Adactin, Genetec, and Delinea said boards, regulators, and customers now expect stronger oversight of how data is accessed and used, particularly in government, financial services, and AI-driven environments.

Board scrutiny

Data privacy has shifted from a technical issue to a governance concern for Australian institutions after several large-scale incidents in recent years. These incidents exposed weaknesses in how organisations manage identity, control data flows and monitor third-party access.

Vab Mittal, Country Head - ANZ at Adactin, said the impact of recent breaches has changed how boards approach privacy risk.

"Taking control of privacy has become a board-level imperative for Australian government agencies and financial institutions. These organisations manage vast volumes of sensitive citizen and customer data, and when control is lost, the consequences are immediate. In a widely reported Australian breach, unauthorised access to personal records exposed identity data at scale, leading to regulatory investigations, class actions, significant remediation costs, and long-term erosion of public trust. The incident demonstrated that privacy is not just about preventing breaches, but about maintaining control across systems, data flows, and decision-making processes," said Mittal.

AI and data risk

Adoption of AI in public and private sector services has introduced new privacy considerations. Organisations now use AI models and agents in areas such as customer service, fraud detection and decision support. These systems rely on large datasets and often extend access to information beyond traditional user groups.

Mittal said the combination of AI, large data platforms and weak governance increases the stakes.

"As AI is increasingly used for service delivery, fraud detection and decision support, the need for control has intensified. AI amplifies both value and risk when data governance, access controls and testing are not embedded upfront. Organisations today need to take control of their privacy by designing AI and data platforms with security, governance and continuous validation at their core. This enables responsible use of AI within controlled enterprise environments, allowing their businesses to innovate with confidence while meeting regulatory expectations. In 2026, leaders who take control of privacy will be best positioned to scale AI safely and sustain trust," said Mittal.

Identity focus

Vendors and security teams are turning their attention to identity as a central element of data protection strategies. This includes human users and a growing volume of non-human identities such as bots, service accounts and autonomous AI agents.

Nigel Tan, APAC SE Director at Delinea, said recent breaches have highlighted how identity weaknesses continue to drive data loss.

"Data has become one of the most valuable assets in the digital economy, and protecting it demands ongoing control over who and what can access sensitive information. As AI adoption accelerates and non-human identities (NHIs) continue to outnumber human users, data privacy cannot be guaranteed without robust identity controls. We've seen the consequences first-hand. In the past year alone, breaches like the Salesloft incident and Qantas exposure showed how compromised identities remain the leading cause of data loss, with stolen or misused credentials at the heart of most breaches. And the risk is compounded with AI. According to Delinea's recent AI in Identity Security Report, 73% of Australian organisations already use agentic or generative AI, yet only 46% have full visibility into machine identities, and 16% admit to having none at all. That lack of oversight leaves autonomous systems making unchecked decisions, often with access to critical data. That's why identity security can't be reactive. It's no longer enough to know who or what has access. You must ensure credentials are granted with the least privilege, for the shortest time necessary, and continuously evaluated based on real-time risk. This Data Privacy Day serves as a timely reminder to assess how well your organisation governs the identities that control and protect data. Enforcing least privilege, monitoring privileged activity, and adopting always-on identity security are now essential steps toward protecting sensitive data and the trust that depends on it," said Tan.
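Tan's principle of credentials granted with least privilege, for the shortest time necessary, and continuously evaluated can be illustrated with a minimal sketch. This is not Delinea's product or API; the `grant` and `authorize` functions, scope names, and risk threshold below are hypothetical, chosen only to make the three properties concrete.

```python
import time
from dataclasses import dataclass

@dataclass
class Credential:
    identity: str        # human or non-human identity, e.g. a service account
    scopes: frozenset    # least privilege: only scopes explicitly granted
    expires_at: float    # short-lived: hard expiry checked on every use

def grant(identity: str, requested: set, allowed: set, ttl_seconds: int = 300) -> Credential:
    """Issue a time-boxed credential holding only the intersection of
    requested and permitted scopes - never more than was asked for."""
    return Credential(identity, frozenset(requested & allowed), time.time() + ttl_seconds)

def authorize(cred: Credential, scope: str, risk_score: float, threshold: float = 0.7) -> bool:
    """Continuous evaluation: deny if the credential has expired, the scope
    was never granted, or the caller's real-time risk score is too high."""
    if time.time() >= cred.expires_at:
        return False
    if scope not in cred.scopes:
        return False
    return risk_score < threshold

# A reporting service requests read and write access but is only entitled to read.
cred = grant("svc-reporting", {"read:customers", "write:customers"},
             allowed={"read:customers"}, ttl_seconds=60)
print(authorize(cred, "read:customers", risk_score=0.2))   # in scope, fresh, low risk
print(authorize(cred, "write:customers", risk_score=0.2))  # never granted
```

The key design point is that over-broad requests are silently narrowed at grant time and every access is re-checked against expiry and live risk, rather than trusting a one-time login.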

Physical security data

Security experts said data privacy concerns now extend beyond conventional IT systems into physical environments. Modern physical security deployments collect video, access logs and sensor data, often linked to identities and behavioural information.

Mathieu Chevalier, Principal Security Architect at Genetec, said organisations should reassess how they handle physical security information.

"Physical security data can be highly sensitive, and protecting it requires more than basic safeguards or vague assurances. Some approaches in the market treat data as an asset to be exploited or shared beyond its original purpose. That creates real privacy risks. Organisations should expect clear limits on how their data is used, strong controls throughout its lifecycle, and technology that is designed to respect privacy by default, not as an afterthought," said Chevalier.

Regulatory expectations

Industry observers expect Australian regulators to continue focusing on breach reporting, data minimisation, and stronger identity oversight. Organisations now face closer scrutiny of how they justify data collection, how long they retain information, and how they secure both digital and physical records.

Security leaders said the shift in risk has created a more complex environment for boards and technology teams. They now need coordinated controls across AI systems, identity platforms, and physical security infrastructure.

Mittal said organisations that embed privacy, governance, and validation into their AI and data architectures will be better placed as scrutiny increases.