SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers

UK firms face AI data exposure despite Copilot confidence

Tue, 21st Apr 2026

ShareGate has published research suggesting many organisations have experienced AI-driven data exposure incidents, even as AI adoption becomes widespread across Microsoft 365 environments.

The study surveyed more than 850 IT and security leaders across the US, Canada and Europe. It found that 29% said AI tools had surfaced sensitive data they should not have accessed, while 93% said they were confident their Microsoft 365 governance framework could support AI use responsibly.

UK responses suggest the market is ahead of the global average in Microsoft Copilot deployment, but is also already facing governance strain and data exposure. According to the survey, 94% of UK organisations had deployed Copilot to some degree, and 63% reported full deployment, compared with 56% globally.

Confidence levels were also high in the UK, where 97% of IT leaders said they were confident in their governance frameworks. Of those, 60% said they were very confident, compared with 51% globally.

That confidence sits alongside reported incidents. In the UK, 26% of IT leaders said AI tools had surfaced sensitive internal data, including internal documents for 36% of that group, customer data for 30%, and personal data for 29%.

Governance load

The findings suggest governance is becoming a central issue as companies seek a return on AI spending. Globally, more than 80% of respondents said they expected measurable return on investment from Microsoft 365 AI initiatives within 18 months.

UK organisations were more likely than the global average to say they were very confident Microsoft 365 AI projects would deliver measurable returns: 46% compared with 39% globally. At the same time, 55% said governance complexity was the main barrier to measuring that return.

Across the full sample, only 51% said they had completed an organisation-wide governance review after enabling Microsoft 365 AI tools, including Copilot. Respondents said exposed information included customer records, sensitive internal documents, personal data and personally identifiable information, HR records, financial data and proprietary intellectual property.

Pressure on IT and security teams has also increased. More than 70% of respondents globally said AI had increased their governance burden after those tools were enabled. Nearly eight in 10 said they were at least moderately concerned about AI accessing content whose permissions had not been reviewed recently.

The same pattern appeared in the UK data: 30% of UK organisations reported a significant increase in governance workload, compared with 24% globally. Meanwhile, 47% said they were very likely to bring in external partners to assess AI governance before expanding further.

Exposure risk

The research focused on Microsoft 365 environments, where many companies are testing or rolling out generative AI tools through Copilot. The results suggest the issue is not just whether companies adopt AI, but whether data permissions, content reviews and oversight processes are keeping pace with deployment.

Benjamin Niaulin, Vice President of Product at ShareGate, said the survey points to longstanding governance weaknesses rather than a problem created solely by new AI products.

"AI and Copilot didn't create the governance problem. They exposed it," Niaulin said. "IT teams have been papering over fragmented tools and blind spots for years. Now every oversharing group and forgotten permission is one Copilot prompt away from becoming a real incident. You can't govern what you can't see, and right now, most teams can't see it."

The survey was conducted by Centiment on behalf of ShareGate and covered IT and security leaders in the US, UK, Canada, France, Germany, the Netherlands and Ireland. Respondents worked across IT leadership, security leadership, data governance, compliance and digital workplace leadership.

The figures add to a broader debate over whether companies can expand AI use inside mainstream workplace software without first tightening control over access to internal content. In this survey, the gap between confidence in governance and reports of data exposure was one of the clearest signs of that tension.

Internal capacity also appears to be under strain. Globally, eight in 10 respondents said they were likely to bring in an external partner for an AI governance assessment before scaling further.