SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers

SailPoint launches Shadow AI tool to rein in staff use

Wed, 18th Mar 2026

SailPoint has launched Shadow AI Remediation, a product designed to give security teams real-time visibility and control over employees' use of unsanctioned generative AI tools.

The release targets "Shadow AI" - staff use of services such as ChatGPT, Claude, and Gemini outside approved IT processes. Security leaders have raised concerns about data exposure and compliance risks when employees upload documents or paste sensitive information into third-party AI services.

Shadow AI Remediation is part of SailPoint's broader real-time AI governance and security framework, extending identity security into a fast-growing risk area driven by workplace adoption of generative AI.

Real-time monitoring

The product tracks how employees use unauthorised AI tools, recording details such as how often they interact with a service and when they upload documents. The aim is to provide immediate insight into activity that might otherwise go untracked.
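
To make the monitoring idea concrete, here is a minimal sketch of how per-user AI usage telemetry could be rolled up. The event fields and summary shape are illustrative assumptions, not SailPoint's actual schema.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical event shape: what a browser-extension sensor might emit
# for each interaction with a generative AI site. Field names are
# illustrative, not SailPoint's actual telemetry format.
@dataclass
class AIEvent:
    user: str
    domain: str       # e.g. "chat.openai.com"
    action: str       # "prompt" or "upload"

@dataclass
class UsageSummary:
    interactions: int = 0
    uploads: int = 0

def summarise(events):
    """Roll raw events up into per-(user, domain) usage counts."""
    summary = defaultdict(UsageSummary)
    for e in events:
        s = summary[(e.user, e.domain)]
        s.interactions += 1
        if e.action == "upload":
            s.uploads += 1
    return dict(summary)

events = [
    AIEvent("alice", "chat.openai.com", "prompt"),
    AIEvent("alice", "chat.openai.com", "upload"),
    AIEvent("bob", "gemini.google.com", "prompt"),
]
report = summarise(events)
print(report[("alice", "chat.openai.com")].uploads)  # 1
```

A summary like this is the kind of signal that could distinguish occasional prompting from repeated document uploads, which carry the data-exposure risk the article describes.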

In its own research, SailPoint found that 80% of organisations said their AI agents had performed unintended actions, including accessing or sharing inappropriate data.

Enterprises have struggled to set clear policy boundaries for generative AI, particularly when consumer tools are easily accessible through a browser. Many employees see these tools as productivity aids, which can drive adoption in teams that lack formal guidance or access to approved services.

Remediation controls

In addition to visibility, Shadow AI Remediation includes controls to block unauthorised uploads and redirect users to sanctioned AI tools. It can also prompt users to provide a business justification when attempting certain actions.
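
The controls described above can be sketched as a simple policy function. The verdicts, domain lists, and approved-tool hostname below are illustrative assumptions, not SailPoint's implementation.

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"        # stop the action outright
    REDIRECT = "redirect"  # steer the user to a sanctioned tool
    JUSTIFY = "justify"    # allow, but prompt for a business justification

# Illustrative policy tables; a real deployment would pull these from
# centrally managed configuration rather than hard-coding them.
SANCTIONED = {"copilot.example.com"}   # placeholder approved tool
UNSANCTIONED = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def evaluate(domain: str, action: str) -> Verdict:
    """Decide how to handle a single user action against an AI service."""
    if domain in SANCTIONED:
        return Verdict.ALLOW
    if domain in UNSANCTIONED:
        if action == "upload":
            return Verdict.BLOCK       # block unauthorised uploads
        if action == "prompt":
            return Verdict.JUSTIFY     # ask for a business reason first
        return Verdict.REDIRECT        # e.g. plain navigation to the site
    return Verdict.ALLOW               # non-AI traffic is out of scope

print(evaluate("chat.openai.com", "upload"))  # Verdict.BLOCK
```

Centralising decisions in one function like this, rather than scattering rules across browser settings and endpoint agents, is the governance consolidation the article attributes to the product.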

The approach brings Shadow AI into a central governance process rather than relying on a patchwork of browser security settings and endpoint controls. SailPoint argues that identity data provides the context needed to interpret risk and apply policy consistently across users and systems.

"Many vendors are trying to solve the Shadow AI problem with isolated browser or endpoint tools, but that misses the bigger picture. This is fundamentally an identity challenge," said Chandra Gnanasambandam, EVP of Product and Chief Technology Officer at SailPoint.

Gnanasambandam tied the release to SailPoint's broader platform strategy around identity and data signals. "We believe controlling AI usage is best achieved through a platform-centric approach that unifies identity, data, and security intelligence in real-time. Our real-time AI governance and security framework is built on this principle. By linking human and non-human identities, we provide the context needed to not just see Shadow AI, but to govern it effectively. Shadow AI delivers robust real-time visibility, proactive remediation, and seamless deployment, all deeply integrated with the SailPoint Platform," he said.

Deployment model

Shadow AI Remediation can be deployed through a browser extension and rolled out using common device management tools, including Microsoft Intune and Jamf.
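
As a generic illustration of what MDM-driven rollout of a browser extension involves, tools such as Intune and Jamf typically push a managed browser policy. For Chrome, a force-install policy looks like the fragment below; the 32-character extension ID is a placeholder, not SailPoint's actual extension.

```json
{
  "ExtensionInstallForcelist": [
    "abcdefghijklmnopabcdefghijklmnop;https://clients2.google.com/service/update2/crx"
  ]
}
```

Because the policy is delivered through existing device management channels, no proxy, agent, or network change is needed, which is consistent with the deployment claims below.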

The deployment requires no network or infrastructure changes. This model reflects a broader trend in security tooling: vendors aim to reduce implementation effort and shorten the time to policy enforcement by relying on endpoint and browser controls.

Browser-based enforcement also reflects how generative AI is often adopted. Employees typically access AI services through web interfaces rather than approved enterprise integrations, particularly in organisations that have not standardised on a single AI tool or vendor.

Framework expansion

Shadow AI Remediation adds to SailPoint's AI governance and security framework, which also includes Agent Identity Security, Machine Identity Security, and Data Access Security. These components sit within its Identity Security Cloud.

SailPoint says that feeding AI tool usage activity into its cloud platform lets customers enrich their "identity graph," adding context about how users and other identities interact with AI services and informing access decisions and risk assessments.
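
A toy sketch of what such enrichment might enable: combining AI usage telemetry with what is already known about an identity's entitlements to weight risk. The record fields and scoring rule are assumptions for illustration, not SailPoint's identity graph schema.

```python
from dataclasses import dataclass

# Illustrative identity record; field names are assumptions, not the
# Identity Security Cloud schema.
@dataclass
class Identity:
    user: str
    sensitive_access: bool   # holds entitlements to sensitive data
    ai_uploads: int = 0      # enrichment from Shadow AI telemetry

def risk_score(identity: Identity) -> int:
    """Toy scoring rule: unsanctioned uploads weigh more when the user
    already has access to sensitive data."""
    weight = 3 if identity.sensitive_access else 1
    return identity.ai_uploads * weight

alice = Identity("alice", sensitive_access=True, ai_uploads=2)
bob = Identity("bob", sensitive_access=False, ai_uploads=2)
print(risk_score(alice), risk_score(bob))  # 6 2
```

The point of the identity context is visible even in this sketch: the same AI usage produces a different risk assessment depending on what the identity can already reach.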

SailPoint has increasingly emphasised non-human identities in its market messaging, including service accounts, bots, and machine identities. As AI agents become more common in enterprise environments, some organisations are treating them as identities with permissions and audit trails similar to those of human users.

SailPoint says Shadow AI Remediation links human and non-human identities with data and security intelligence, and views that linkage as central to managing AI-related risk across different tools and usage patterns.

Future updates are likely to focus on extending policy enforcement and identity context to new AI services as they emerge, and as enterprises formalise internal rules for staff use of generative AI.