AI cyber threats outpace staff readiness, report warns
Fortinet's latest global research finds organisations are paying more attention to AI-linked cyber risks, but many still doubt their employees can spot and respond to AI-based attacks.
For its 2025 Security Awareness and Training Global Research Report, Fortinet surveyed 1,850 senior IT and security leaders worldwide. The results point to a gap between growing awareness and day-to-day readiness, even as organisations increase spending on security awareness training.
Nearly nine in 10 organisations said attackers' use of AI has increased employee awareness of security risks. Yet only about 40% said employees are prepared to identify, avoid, and report AI-based threats. The findings suggest AI has raised cyber security's profile in many businesses, but skills and behaviour have not kept pace.
AI policies
As workplace use of AI expands, organisations report taking a range of steps. Many train staff on the proper use of generative AI tools. Others monitor or restrict the sharing of sensitive data. Respondents also pointed to formal AI security policies.
Nearly all participants said they already have security policies for AI and large language model tools, or are implementing them. This reflects broad agreement that governance must accompany adoption, particularly when staff can input company information into third-party systems.
Even so, the figures suggest ongoing exposure to human error. While employees may understand that AI has changed attacker tactics, leaders remain unconvinced that staff can consistently recognise new forms of phishing, social engineering, and fraud that use AI-generated content.
Insider concerns
External events remain the main trigger for deploying security awareness training: more than 40% of respondents cited external threats, previous breaches, or incidents in their industry as the primary reason for investing.
The research also shows rising attention to insider risk. More than a quarter of organisations cited insider risk as a reason for adopting security awareness training, which Fortinet said is a sharp increase from the prior year.
Training priorities appear to reflect these concerns. Data security and data privacy remain the most common topics covered in awareness programmes. AI-based tools and AI-driven threats also feature prominently.
This shift matters for risk teams because insider incidents span a wide range of scenarios, from inadvertent mishandling of information, unsafe software use, and poor password practices to deliberate misuse. Generative AI adds another path for data leakage if employees paste confidential material into prompts or share outputs without review.
Measurable impact
The report argues security awareness training is moving from a compliance exercise to a measurable control. Sixty-seven percent of organisations reported moderate or significant reductions in intrusions, incidents, and breaches after implementing awareness and training programmes.
Respondents also described more structured approaches to measuring outcomes. Common indicators include fewer security incidents, employee feedback, and security audits, suggesting organisations are relying more on operational signals than course completion alone.
Many organisations combine in-person training with computer-based learning, along with simulations, assessments, and reinforcement activities. The report links this with a shift away from one-off annual sessions towards ongoing training.
Completion gaps
Despite progress, completion and consistency remain weak points. Only a small percentage of organisations reported full training completion, and nearly seven in 10 leaders said employees still lack sufficient security awareness.
This creates a challenge for security leaders who need evidence of reduced risk. Training can function as a meaningful control, but only with broad participation, reinforcement, and regular content updates. The report notes that outdated material becomes less effective as attacker techniques change.
Fortinet points to steps to improve follow-through, including shorter and more frequent modules, clearer accountability for completion, and closer alignment between content and current threats. It also highlights visible leadership support and the growing role of regular micro-training as AI evolves.
Shared responsibility
The research also suggests a cultural shift in how organisations view cyber security. Most leaders said security awareness is a shared responsibility across the organisation, not limited to IT or security teams.
Nearly all respondents said they are open to using policy to curb high-risk behaviour, particularly when paired with training that explains the rationale. This points to an effort to combine rules, education, and accountability rather than relying solely on technical controls.
For security leaders planning the next phase of AI adoption, the report highlights tensions likely to persist. Attackers can use AI to increase the volume and plausibility of scams, while businesses continue rolling out AI tools at pace. At the same time, concerns about insider risk appear to be rising. Training may reduce incidents, but gaps in completion and readiness remain.
"To be effective, training has to be continuous, relevant, and treated as a core risk management control, not a side project."
The report suggests security awareness programmes will continue to evolve as AI use expands and governance frameworks mature.