SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers

Australian AI use & phishing risks surge as data leaks climb

Tue, 2nd Sep 2025

New research from Netskope Threat Labs reports a sharp increase in Australian workers clicking on phishing links, alongside expanding usage and associated risks of generative AI (genAI) applications in the workplace.

The research found that an average of 1.2% of Australian employees clicked on a phishing link each month over the past year, a 140% increase on the prior period. Phishing attempts most often impersonated Microsoft and Google, which together accounted for nearly one in five of all successful clicks.

Phishing trends

Attacks targeting personal accounts, including those linked to gaming platforms, personal cloud storage, and government services, were highlighted as particularly successful in securing click-throughs. Attackers continue to pursue corporate credentials and sensitive company data, alongside attempts to access personal data via compromised accounts.

Ray Canzanese, Director of Netskope Threat Labs, said:

"The general availability of AI tools continues to enable threat actors to refine their social engineering techniques, and sophisticated phishing campaigns and convincing voice or video deepfakes are now regularly reported as the source of high profile data breaches. However, deliberate data theft is only part of the picture. Our data shows that the use of AI in the workplace is also a major risk vector for accidental data loss."

GenAI adoption and risks

Use of genAI applications in Australian organisations is widespread, with 87% of companies reporting employees accessing at least one such application monthly. This marks an increase from 75% nine months ago. Among the most commonly used tools are ChatGPT (73%), Google Gemini (52%), and Microsoft Copilot (44%). However, the report notes a recent decline in ChatGPT usage between May and June, with Gemini and Copilot usage trending upward. DeepSeek is the most blocked genAI application in Australian organisations (69%), while 30% have also banned Grok.

The report identifies inadvertent data exposure as a critical challenge, with survey data indicating that employees frequently enter sensitive information into genAI tools. Intellectual property (42%), source code (31%), and regulated data (20%) are the types of information most frequently leaked through prompts or uploads. The use of personal genAI accounts for work purposes by 55% of workers further complicates monitoring and protection efforts.
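The data loss prevention controls the report describes typically work by inspecting prompts before they reach a genAI service. As a rough illustration only, the sketch below shows a minimal regex-based scan for source-code fragments and credential-like strings; the pattern names and rules are hypothetical, and production DLP products rely on far richer detection (classifiers, document fingerprinting, exact data matching).

```python
import re

# Hypothetical, illustrative patterns; real DLP engines are far more sophisticated.
PATTERNS = {
    # Credential-like tokens such as "sk-..." API keys.
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    # Keywords that suggest source code is being pasted into a prompt.
    "source_code": re.compile(r"\b(?:def |class |#include|public static void)\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive content detected in a prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]

findings = scan_prompt(
    "please review: def transfer(amount): ... auth with sk-abcdef1234567890XY"
)
```

In practice a hit would block the upload or route it for review rather than silently allow it; the point is simply that inspection happens before the data leaves the organisation.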

Australian organisations have begun implementing company-approved genAI solutions to increase oversight and deploy security controls. Authorised access to validated tools is seen as one step towards mitigating so-called 'shadow AI,' where new or unapproved AI tools are used without IT knowledge or sanction. This trend follows broader adoption of AI models and large language model (LLM) interfaces, with 29% of organisations using AI models and 23% using LLM interfaces in mid-2025.

The report warns that direct integration of genAI systems with enterprise datasets poses additional risks. Permission levels within these tools must be monitored and restricted to avoid exposing sensitive data. LLM interfaces shipped with weak default security configurations may also introduce vulnerabilities unless hardened by internal security teams.

As AI use expands, Canzanese commented:

"We expect more individuals within organisations to experiment with generative or agentic AI deployments, which presents significant shadow AI and data security risks. We are seeing positive signs from Australian organisations, who have been proactive in deploying data loss prevention to avoid data leaks via genAI applications specifically, but they should now turn their attention to detecting and securing emerging and future AI systems so that teams can enjoy the benefits of AI innovation without leaving the front door wide open."

Other findings

The study also examined employee use of personal cloud applications at work. Data commonly transferred to these platforms includes regulated data (54%), intellectual property (28%), and passwords or encryption keys (9%), all of which could create further exposure for organisations if not adequately secured.

The report also recorded that 0.2% of Australian workers encounter malicious code, infected documents, or malware each month, adding another layer of concern for organisational IT security.

The findings are based on anonymised data collected from Netskope Australian clients between June 2024 and June 2025.