SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers

AI deployment delays rise as data security & accuracy issues grow

Thu, 9th Oct 2025

New research from AvePoint has found that artificial intelligence deployments are being delayed by up to 12 months, with 75% of organisations reporting AI-related security breaches as concerns around data security and accuracy continue to hinder progress.

The annual report, entitled The State of AI in 2025: Go Beyond the Hype to Navigate Trust, Security, and Value, examines the shifting landscape as companies transition from initial AI experimentation to broader, enterprise-wide deployment. The findings indicate that significant operational challenges persist, particularly in the areas of data security and data quality, resulting in considerable deployment slowdowns for the majority of businesses.

Deployment delays

The report states that the average AI deployment is delayed by nearly six months, with one in four organisations experiencing postponements of up to 12 months due to unresolved issues regarding data quality and security measures. The leading reasons for delaying generative AI assistant rollouts are inaccurate AI outputs and concerns about data security, cited by 68.7% and 68.5% of organisations, respectively. Additionally, nearly a third (32.5%) of those surveyed identified so-called 'AI hallucinations' (instances where AI systems produce plausible but incorrect information) as the most significant threat posed by these technologies.

Employee engagement has also emerged as a key barrier, with 64.2% of respondents noting a "lack of perceived value" among staff as a critical factor impeding successful AI integration. This points to the need for robust enablement and education programmes to articulate the value that AI technology can generate.

"We're seeing organisations treat AI governance as a checkbox exercise rather than an operational imperative," said Dana Simberkoff, Chief Risk, Privacy and Information Security Officer at AvePoint. "The gap between having policies and implementing them effectively is where most security incidents occur. This challenge becomes exponentially more critical as organisations move toward agentic AI systems that can act independently and make decisions without human oversight. Basic security measures cannot keep pace with the complexity and sprawl of AI-generated data, leaving organisations vulnerable unless they evolve their governance models to handle autonomous AI agents."

Governance paradox

The findings highlight a notable inconsistency between how ready organisations believe they are for AI and their actual operational preparedness. While 90.6% of organisations claim to have effective information management programmes in place, only 30.3% have effective data classification systems implemented. In the subgroup that self-identifies as having the highest level of information management effectiveness, more than three-quarters (77.2%) still reported data security incidents.

Ongoing development of AI policies is evident, with 43.4% of organisations actively working to formalise their approach. At the same time, the trend of unsanctioned AI use continues to rise, reflecting ongoing deficiencies in the monitoring and enforcement aspects of governance at many firms.

Data management pressures

Organisations are also confronting steep growth in data volumes, particularly as the proportion of AI-generated content rises. Nearly one in five companies predict that more than half of their data will originate from generative AI within twelve months. Data volumes are currently growing at 23.8%, a rate projected to reach 31.6% next year, while the prevalent use of multiple storage platforms (84.6%) continues to contribute to data sprawl. Data age is a further concern: 70.7% of respondents indicated that the majority of their organisational data is over five years old, potentially limiting its usefulness for AI training.

"The exponential growth in AI-generated content is fundamentally changing how organisations must approach data security and governance," said John Peluso, Chief Technology Officer at AvePoint. "We're seeing enterprises struggle not just with the volume of new data, but with maintaining data lineage and ensuring quality control when AI systems are both consuming and creating information at scale. The organisations succeeding in this environment are those building governance directly into their AI workflows rather than treating it as an afterthought."

Strategic responses

According to the report, organisations are attempting to address these challenges through increased investment in critical infrastructure. Survey results show 64.4% are expanding their investment in AI governance tools, while 54.5% are prioritising data security tools. Nearly all organisations surveyed (99.5%) are implementing AI literacy training initiatives, with 79.4% identifying role-based training as particularly effective. To monitor their AI programmes, 73.9% employ both quantitative and qualitative feedback mechanisms.

The AvePoint report is based on responses from 775 professionals across 26 countries, including the financial services, government, and healthcare sectors. The survey targeted individuals responsible for AI, information management, or data security at various organisational levels.
