Human-linked cyber incidents surge as AI use grows
KnowBe4 has reported a sharp rise in human-related cybersecurity incidents as artificial intelligence becomes more deeply embedded in workplace tools and processes.
The security training provider released new research that suggests many organisations are struggling to manage behavioural risk as staff and AI systems work more closely together.
The study, based on a survey of 700 cybersecurity leaders and 3,500 employees worldwide, found that incidents involving the human element rose by 90% over the past year.
The sample included respondents in Australia and New Zealand across sectors such as financial services, manufacturing, healthcare, retail, government and critical infrastructure.
KnowBe4 said human-related incidents covered a range of scenarios, including social engineering attacks such as phishing and business email compromise, risky or malicious behaviour, and employee mistakes.
The research paints a picture of a complex threat environment in which organisations must handle both long-standing attack methods and newer AI-driven risks.
According to the survey, 93% of cybersecurity leaders experienced incidents caused by cybercriminals exploiting employees. The report said most of these incidents involved attackers targeting staff through familiar communication channels.
Email remained the primary route for attacks. The data showed a 57% increase in email-related incidents, and 64% of organisations reported external attacks that exploited employees via email.
Human error continued to feature heavily in breach statistics. The study found that 90% of organisations experienced incidents caused by employee mistakes.
Insider threats also remained a concern. Malicious insiders were linked to incidents at 36% of organisations in the survey.
Almost all cybersecurity leaders said they felt under-resourced in this area. The research found that 97% of them saw a need for increased budget allocations for securing the human element in their organisations.
AI-driven risks
The report highlighted a rapid rise in incidents involving AI applications. These incidents increased by 43% over the past 12 months, which the study said was the second-largest increase across all attack channels.
Cybersecurity leaders identified AI-powered threats as their top security risk. The research found that 45% of them saw constantly evolving AI threats as the greatest challenge when dealing with behavioural risk.
Deepfake-related attacks also increased. The survey said 32% of organisations reported a rise in incidents that involved deepfake content.
Almost every organisation surveyed had taken steps to address AI-related cybersecurity risks. The report said 98% of cybersecurity leaders had initiated some measures in this area.
However, there was a clear disconnect between these programmes and employee sentiment. The study found that 56% of employees were unhappy with their company's approach to AI tools.
The research said this dissatisfaction could drive staff towards unsanctioned AI platforms. It described this trend as a source of "shadow AI" risk, where employees adopt external tools without formal oversight or controls.
Email still exposed
The authors of the study predicted that email would remain the most exposed channel for several years. They said attackers are increasingly combining email with other messaging platforms and voice channels.
The report pointed to a rise in multi-channel attacks. These include incidents that mix email, messaging applications and voice phishing, also known as vishing.
It said cybercriminals are making greater use of AI tools to create more sophisticated and scalable attacks. This includes more convincing phishing content and more realistic impersonation attempts.
Against this backdrop, KnowBe4 urged organisations to update their approaches to human risk. The company said security programmes must reflect both the behaviour of employees and the actions of AI agents that operate alongside them.
Javvad Malik, lead CISO advisor at KnowBe4, said organisations cannot ignore the productivity impact of AI, but must also plan for its security implications.
"The productivity gains from AI are too great to ignore, so the future of work requires seamless collaboration between humans and AI," he said. "Employees and AI agents will need to work in harmony, supported by a security program that proactively manages the risk of both. Human risk management must evolve to cover the AI layer before critical business activity migrates onto unmonitored, high-risk platforms."
The company said it expects the blend of human and AI-driven risk to remain a central issue for security leaders over the coming years.