GenAI use in healthcare sparks rise in sensitive data violations

A new report from Netskope Threat Labs has found that healthcare workers are frequently trying to upload sensitive and regulated health data to personal cloud and generative AI accounts while at work.

The report highlights that 81% of all data policy violations identified in healthcare organisations over the past year involved regulated healthcare data, including sensitive medical and clinical details protected by various regulations. The remaining 19% of violations concerned passwords and keys, source code, and intellectual property, with many incidents traced to uploads to personal Microsoft OneDrive or Google Drive accounts.

Generative AI (genAI) applications such as ChatGPT and Google Gemini are now used in 88% of healthcare organisations, according to the report. With adoption this widespread, a significant share of data policy violations now occur during genAI use: 44% involve regulated data, 29% source code, 25% intellectual property, and 2% passwords and keys.

The report indicates that further leak risk stems from applications that leverage user data for training, now in use at 96% of healthcare organisations, and from applications with built-in genAI features, in use at 98%.

Netskope Threat Labs notes that more than two-thirds of genAI users in healthcare upload sensitive data to their personal genAI accounts at work. This practice makes it harder for security teams to maintain oversight of genAI activity and, without robust data protection procedures, undermines their ability to identify and prevent potential data breaches.

Gianpietro Cutolo, Cloud Threat Researcher at Netskope Threat Labs, commented on the report's findings: "GenAI applications offer innovative solutions, but also introduce new vectors for potential data breaches, especially in high-pressure, high-stakes environments like healthcare, where workers and practitioners often need to operate with speed and agility. Healthcare organisations must balance the benefits of genAI with the deployment of security and data protection guardrails to mitigate those risks."

The report outlines strategies for organisations aiming to manage these risks.

One recommendation is to deploy organisation-approved genAI applications across the workforce. This centralises genAI usage in approved, monitored, and secured platforms and limits the reliance on personal accounts and so-called "shadow AI". The use of personal genAI accounts by healthcare employees has fallen from 87% to 71% over the past twelve months as more organisations turn to enterprise-approved solutions.
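To illustrate how that centralisation can be enforced at the network edge, the sketch below routes outbound genAI traffic through a simple domain allowlist. It is a minimal, hypothetical Python example: the domain lists and routing logic are assumptions made for illustration, not a description of Netskope's product behaviour.

```python
# Illustrative sketch (not any vendor's implementation): a forward-proxy
# decision hook that only permits traffic to organisation-approved genAI
# services. All domains below are hypothetical examples.

APPROVED_GENAI_DOMAINS = {
    "chat.enterprise.example.com",   # hypothetical enterprise ChatGPT tenant
    "gemini.corp.example.com",       # hypothetical managed Gemini deployment
}

PERSONAL_GENAI_DOMAINS = {
    "chat.openai.com",               # personal consumer accounts ("shadow AI")
    "gemini.google.com",
}

def route_request(host: str) -> str:
    """Decide how the proxy should handle an outbound request."""
    if host in APPROVED_GENAI_DOMAINS:
        return "allow"    # sanctioned, monitored, secured platform
    if host in PERSONAL_GENAI_DOMAINS:
        return "block"    # personal genAI account: steer users to approved apps
    return "allow"        # non-genAI traffic passes through unchanged

print(route_request("chat.openai.com"))  # -> block
```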

Netskope Threat Labs also advocates the implementation of strict Data Loss Prevention (DLP) policies, which are designed to monitor and control access to genAI applications and restrict the types of data that can be shared with them. The adoption of DLP policies in healthcare organisations has increased from 31% to 54% within the past year.
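In essence, a DLP policy of this kind reduces to content rules evaluated against outbound data before it leaves the organisation. The following is a minimal sketch assuming a simple regex-based rule set; the patterns (an approximate Australian Medicare number format and a hypothetical medical record number label) are illustrative only, and production DLP engines use far richer classifiers.

```python
# Minimal sketch of a DLP-style content check. The rules are illustrative
# assumptions, not the patterns a real DLP product ships with.
import re

RULES = {
    "medicare_number": re.compile(r"\b\d{4}\s?\d{5}\s?\d\b"),  # approx. AU format
    "patient_record":  re.compile(r"\b(MRN|medical record number)[:\s]*\d+\b",
                                  re.IGNORECASE),
}

def violations(text: str) -> list[str]:
    """Return the names of all rules matched by outbound text."""
    return [name for name, pattern in RULES.items() if pattern.search(text)]

outbound = "Patient MRN: 483920, presenting with ..."
matched = violations(outbound)
if matched:
    print("Blocked: regulated health data detected:", matched)
```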

Another measure detailed in the report is the use of real-time user coaching. This tool provides immediate prompts to employees who attempt to perform risky actions, such as uploading a file containing patient names to ChatGPT. According to a separate report, 73% of employees across industries choose not to proceed with such actions after receiving a coaching prompt.
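Conceptually, real-time coaching interposes a just-in-time warning between the user and the risky action, leaving the final decision with the employee. The sketch below shows that pattern in Python; the function names and messages are hypothetical, not any vendor's API.

```python
# Illustrative sketch of real-time user coaching: before a risky upload
# completes, the user sees a prompt and can cancel.

def coach_user(action: str, reason: str) -> bool:
    """Show a just-in-time warning and let the user decide whether to proceed."""
    print(f"Warning: {action} appears to contain {reason}.")
    answer = input("Proceed anyway? [y/N] ").strip().lower()
    return answer == "y"

def upload_to_genai(filename: str, contains_patient_data: bool) -> None:
    if contains_patient_data and not coach_user(f"uploading {filename}",
                                                "patient names"):
        print("Upload cancelled by user.")  # per the report, the common outcome
        return
    print(f"{filename} uploaded.")

upload_to_genai("ward_roster.xlsx", contains_patient_data=True)
```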

Gianpietro Cutolo added: "In the healthcare sector, the rapid adoption of genAI apps and growing use of cloud platforms have brought new urgency to protecting regulated health data. As genAI becomes more embedded in clinical and operational workflows, organisations are accelerating the rollout of controls like DLP and app blocking policies to reduce risk. Healthcare organisations are making progress, but continued focus on secure, enterprise-approved solutions will be critical to ensure data remains protected in this evolving landscape."

The report is based on anonymised data collected from a subset of Netskope healthcare customers, including organisations in Australia and New Zealand, with all data used under appropriate authorisation.
