AI assistants to surpass humans in causing corporate data leaks
Cybersecurity threats are expected to grow more complex in the coming years as autonomous artificial intelligence (AI) agents take on a greater role in business environments, becoming both targets and potential risks themselves. Enterprises deploying AI assistants without sufficient controls could face new forms of data leakage, internal risk, and compliance challenges.
AI as insiders
By 2026, AI agents are predicted to surpass human employees as the leading source of internal data leaks within organisations. The growing reliance on AI copilots and assistants, often implemented without a full understanding of existing data management weaknesses, may lead to sensitive company information being inadvertently disclosed or accessed by unintended parties. AI agents could inherit existing problems such as excessive sharing permissions, unclassified documents, and obsolete access rights on platforms including cloud storage and company intranets.
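The inherited-permission problem can be made concrete with a minimal sketch: before an AI assistant is allowed to index a document store, each item is checked for exactly the weaknesses named above, namely excessive sharing, missing classification, and stale access. Everything here (`Document`, `audit_documents`, the one-year staleness cutoff) is an illustrative assumption, not any vendor's API or policy.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Document:
    path: str
    shared_with: set[str]   # groups or users with access
    classified: bool        # carries a data-classification label
    last_accessed: date

def audit_documents(docs, today, stale_after=timedelta(days=365)):
    """Flag documents an AI assistant should not index yet:
    over-shared, unclassified, or with obsolete access rights."""
    findings = []
    for d in docs:
        if "everyone" in d.shared_with:
            findings.append((d.path, "excessive sharing"))
        if not d.classified:
            findings.append((d.path, "unclassified"))
        if today - d.last_accessed > stale_after:
            findings.append((d.path, "stale access rights"))
    return findings

docs = [
    Document("/hr/salaries.xlsx", {"everyone"}, False, date(2023, 1, 5)),
    Document("/eng/roadmap.docx", {"eng-team"}, True, date(2025, 11, 1)),
]
print(audit_documents(docs, today=date(2025, 12, 1)))
```

In practice such an audit would run against the sharing APIs of the actual cloud storage or intranet platform; the point of the sketch is only that these checks belong before an assistant is connected, not after a leak.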
These AI assistants will start to function as independent identities within IT environments. Each agent will carry unique profiles, including individual trust scores. They will act as peer participants, requiring security teams to extend identity management controls traditionally reserved for human actors to these AI entities. Cybercrime methods may also shift: rather than targeting users through phishing campaigns, attackers could attempt to mislead AI agents into exposing confidential information by manipulating their prompts or commands.
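One way to picture an AI agent as a first-class identity, with its own privileges and a behaviour-driven trust score, is the sketch below. The `AgentIdentity` class, the scoring rule, and the threshold are illustrative assumptions made up for this example, not a description of any real identity-management product.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """An AI agent managed like a human identity: unique ID,
    explicit privileges, and a trust score that decays on anomalies."""
    agent_id: str
    privileges: set[str] = field(default_factory=set)
    trust_score: float = 1.0  # 1.0 = fully trusted

    def record_anomaly(self, severity: float) -> None:
        # e.g. a suspected prompt-injection attempt lowers trust
        self.trust_score = max(0.0, self.trust_score - severity)

    def may(self, action: str, min_trust: float = 0.5) -> bool:
        # Deny if the privilege is missing or trust has dropped too far
        return action in self.privileges and self.trust_score >= min_trust

copilot = AgentIdentity("sales-copilot", privileges={"read:crm"})
print(copilot.may("read:crm"))   # allowed while trust is high
copilot.record_anomaly(0.6)      # suspicious prompt detected
print(copilot.may("read:crm"))   # now blocked: trust below threshold
```

The design point mirrors the prediction in the text: the manipulated-prompt attack path is handled not by filtering prompts alone, but by scoring the agent's behaviour and gating its privileges, exactly as would be done for a human account under suspicion.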
"Security teams will no longer focus solely on human actors; they will be forced to treat their AI agents as first-class identities, managing their privileges, monitoring their behaviours, and scoring their risks," said Ravi Ithal, Chief Product and Technology Officer for AI Security, Proofpoint.
Regulatory change
Australia, like many countries, is expected to strengthen cybersecurity regulation as AI becomes further integrated into business processes. Recent incidents, in which the rapid roll-out of AI tools contributed to data breaches due to misunderstanding of data handling requirements, have increased scrutiny over how organisations store and process information. The Australian government is reviewing regulatory frameworks for AI, noting that certifications such as ISO 42001 have helped, but may not be sufficient for the risks arising from widespread AI adoption.
Organisations are being encouraged to review and audit all AI use internally, reinforce information governance, and meet international standards ahead of anticipated regulatory tightening. The public sector is likely to be subject to increased requirements, prompting Australian businesses to reassess compliance posture and internal policies sooner rather than later.
"To prepare, organisations should proactively audit their AI use, tighten data handling controls, and align governance with current recognised standards so their programs are ready for stricter 2026 regulation and public-sector expectations," said Adrian Covich, Vice President, Systems Engineering for Proofpoint in Asia-Pacific and Japan.
Espionage tactics
Cyber-espionage activity is projected to become more covert, personal, and sophisticated by 2026. Threat actors, including those backed by nation-states, are reducing their reliance on methods such as phishing emails. Instead, they are using encrypted messaging applications and direct conversation to build trust before orchestrating attacks, making detection by organisations significantly harder.
There has been increased targeting of Western organisations, particularly in sectors such as technology, defence, and policy, by threat actors from South Asia, including India. Attacks are often timed alongside major geopolitical developments, with attackers using device code phishing and legitimate management tools to move undetected within networks. By using everyday platforms for malicious purposes and focusing on stealing non-traditional credentials, these espionage campaigns are difficult to distinguish from legitimate business activity.
"In 2026, the most effective espionage won't be loud or flashy; it'll be invisible, hiding in plain sight behind the tools and platforms we trust every day," said Alexis Dorais-Joncas, Head of Espionage Research, Proofpoint.