AI ‘agentic’ tools to drive surge in cyber fraud by 2026
Fraud prevention specialists at SEON expect 2026 to mark a sharp escalation in AI-driven cybercrime, as autonomous “agentic” tools make online fraud more persistent and harder to detect, while also reshaping how businesses manage digital risk.
The company forecasts that criminals will rely more on AI systems that can plan and execute scams with limited human oversight. It also expects businesses to respond by combining human decision-making with automated systems across the customer lifecycle.
Industry forecasts suggest merchants face online payment fraud losses of more than US$360 billion between 2023 and 2028. Analysts also identify fraud detection as one of the most commercially significant applications of AI.
Autonomous fraud agents

Husnain Bajwa, SVP, Risk Solutions at SEON, said AI-driven crime is entering a new phase.
“AI has long been used on both sides of the fraud equation, powering defences and attacks, but in 2026, the balance will shift. We've entered an era of agentic and adversarial AI, meaning that systems can plan fraud, act and adapt without human input. What once took coordinated human effort can now be done by autonomous agents and this will really change the game.
What's more, until recently, AI-driven attacks could maintain a façade for a few minutes before breaking character. Now they can stay in character for hours, maintaining believable conversations across multiple platforms and channels. These systems learn as they go, probing defences, identifying thresholds and iterating in real time. The result is a new breed of fraud that is persistent, contextual and difficult to distinguish from legitimate activity,” said Bajwa.
SEON expects these systems to operate across messaging apps, email, and web interfaces. The tools can test banks’ and merchants’ defences, then adapt their behaviour based on responses.
Human and machine

Bajwa said AI now underpins many fraud controls, but that people still play a central role in major decisions.
“AI has become a permanent part of the fraud landscape, but not in the way many expected. AI has transformed how we detect and prevent fraud, from adaptive risk scoring to real-time data enrichment, but full autonomy remains out of reach. Fraud detection still depends on human judgment, such as weighing intent, interpreting ambiguity, and understanding context that no model can fully replicate.
Fraud prevention is a complex interplay of data, intent and context and that's where human reasoning continues to matter most. Analysts interpret ambiguity, weigh risk appetite, and understand social signals that no model can fully replicate. What AI can do is amplify that capability. It surfaces patterns, prioritises alerts and reduces manual work so teams can focus on what really matters.
In that sense, the future isn't human or machine, but human plus machine. AI becomes an enabler, not a replacement. The organisations that thrive will be the ones that design systems where humans and machines enhance each other's strengths, pairing computational scale with the intuition and ethical reasoning that only people can provide,” said Bajwa.
The comments reflect a wider shift in financial crime teams. Many banks and fintechs now position AI as a decision-support layer rather than a fully automated gatekeeper.
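The decision-support pattern described here is often implemented as a simple routing rule: the model scores each event, only the clear extremes are decided automatically, and the ambiguous middle band goes to a human analyst. A minimal sketch of that idea follows; the thresholds and labels are illustrative assumptions, not any vendor's actual configuration.

```python
# Sketch of AI as a decision-support layer rather than a fully automated
# gatekeeper: auto-decide only the clear cases, route ambiguity to humans.
# The 0.2 / 0.9 thresholds are hypothetical and would be tuned per business.

def route_decision(risk_score: float,
                   auto_approve_below: float = 0.2,
                   auto_decline_above: float = 0.9) -> str:
    """Map a model risk score in [0, 1] to an action."""
    if risk_score < auto_approve_below:
        return "approve"            # low risk: no friction for the customer
    if risk_score > auto_decline_above:
        return "decline"            # clear fraud signal: block automatically
    return "manual_review"          # ambiguous band: human judgment applies

for score in (0.05, 0.55, 0.95):
    print(score, "->", route_decision(score))
```

In practice the width of the manual-review band reflects the team's risk appetite: widening it trades analyst workload for fewer automated mistakes.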
From bad actors to good users

George Pace, Sr. Manager Product Marketing at SEON, said fraud teams are moving away from chasing anomalies and are focusing instead on detailed models of genuine behaviour.
“The implications of agentic and adversarial AI are significant. Traditional fraud prevention has been built around detection, which means spotting anomalies, scoring risk and identifying signals that don't fit the pattern. But as agentic AI reshapes how those patterns are forged, the industry's focus is starting to flip. 2026 won't be about finding the bad actors, it will be about understanding what good, genuine behaviour really looks like.
The boundary between genuine and synthetic activity is blurring. Generative AI can now simulate human interaction with high accuracy, including realistic typing rhythms, believable navigation flows, and deepfake biometrics that replicate natural variance. The traditional approach of searching for the red flags no longer works when those flags can be easily fabricated.
The next evolution in fraud detection will come from baselining legitimate human behaviour. By modelling how real users act over time and looking at their rhythms, routines and inconsistencies, we can identify the subtle deviations that synthetic agents struggle to mimic. It's the behavioural equivalent of knowing a familiar face in a crowd. Trust comes from recognition, not reaction,” said Pace.
This approach relies on data from multiple touchpoints, including onboarding flows, login sessions and payment activity. It also depends on long-term observation rather than single-event checks.
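One simple way to make this baselining idea concrete is to keep a running per-user statistical profile of a behavioural feature and flag sessions that deviate sharply from that user's own history. The sketch below uses Welford's online algorithm for the running mean and variance; the typing-interval feature and the 3-sigma threshold are illustrative assumptions, not a description of SEON's product.

```python
# Minimal behavioural-baselining sketch: maintain a running per-user
# mean/variance (Welford's algorithm) for one numeric feature, then flag
# observations far from that user's own baseline. Feature choice and the
# 3-sigma cutoff are hypothetical.
import math
from dataclasses import dataclass

@dataclass
class UserBaseline:
    n: int = 0
    mean: float = 0.0
    m2: float = 0.0   # running sum of squared deviations

    def update(self, x: float) -> None:
        """Fold one observation into the running statistics."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def zscore(self, x: float) -> float:
        """How many standard deviations x sits from this user's norm."""
        if self.n < 2:
            return 0.0   # not enough history to judge yet
        std = math.sqrt(self.m2 / (self.n - 1))
        return 0.0 if std == 0 else (x - self.mean) / std

def is_anomalous(baseline: UserBaseline, x: float, threshold: float = 3.0) -> bool:
    return abs(baseline.zscore(x)) > threshold

# Usage: a user who normally types with ~120 ms key intervals suddenly
# produces machine-steady 10 ms intervals -- far outside their baseline.
b = UserBaseline()
for interval_ms in (118, 125, 121, 119, 123, 120, 122):
    b.update(interval_ms)
print(is_anomalous(b, 10))    # deviates strongly from this user's rhythm
print(is_anomalous(b, 121))   # consistent with the established baseline
```

Real systems would track many such features across onboarding, login and payment sessions and combine them, but the principle is the same: the reference point is the user's own history, not a global fraud pattern.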
Fraud as strategy

SEON expects more companies to treat fraud controls as part of commercial strategy rather than purely as compliance.
“In 2026, companies that treat fraud prevention as a strategic competitive advantage rather than just a compliance requirement will be able to grow more efficiently and outperform competitors. By integrating systems across onboarding, account protection, transaction monitoring and identity verification, businesses can share and leverage data throughout the entire customer journey to gain a holistic, real-time view of risk.
This will empower organizations to offer more attractive customer incentives, speed up onboarding processes, and provide greater value. The companies that excel at weaving advanced security into their operations will be able to take more calculated risks and will consistently gain an edge over competitors who view fraud prevention as merely an operational concern,” said Pace.
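The holistic view described above can be sketched as a weighted blend of per-stage risk scores, so that a customer who looked clean at onboarding but behaves riskily at payment still surfaces with an elevated combined score. The stage names, example signals and weights below are hypothetical illustrations, not SEON's actual model.

```python
# Illustrative sketch of combining risk signals from different stages of
# the customer journey into one holistic score, instead of evaluating each
# event in isolation. All names and weights are assumptions for the sketch.

LIFECYCLE_WEIGHTS = {
    "onboarding": 0.25,   # e.g. disposable email, synthetic-identity signals
    "account": 0.25,      # e.g. new device, unusual login location
    "transaction": 0.50,  # e.g. amount vs. history, merchant category
}

def holistic_risk(stage_scores: dict) -> float:
    """Weighted blend of per-stage risk scores, each expected in [0, 1]."""
    total = sum(LIFECYCLE_WEIGHTS[stage] * stage_scores.get(stage, 0.0)
                for stage in LIFECYCLE_WEIGHTS)
    return round(total, 3)

# Clean onboarding and login, but risky payment behaviour: the combined
# score still rises well above the individual early-stage signals.
print(holistic_risk({"onboarding": 0.1, "account": 0.2, "transaction": 0.8}))
```

The design point is that sharing data across stages lets later signals be read in the context of earlier ones, which is what enables the faster onboarding and calculated risk-taking the article describes.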
SEON anticipates that organisations which combine AI tools, behavioural insights and human oversight will be better positioned as digital fraud techniques evolve in 2026.