Exclusive: Exabeam’s Steve Wilson on AI agents and the future of cybersecurity
Steve Wilson, Chief AI and Product Officer at Exabeam, says enterprises are facing growing pressure to deploy AI agents across their businesses - and security teams must adapt quickly.
Speaking to TechDay during a visit to Melbourne, Wilson warned that organisations can no longer rely on traditional defences.
"I've been deeply involved in thinking about the intersection of cybersecurity and this new generation of AI gizmos based on large language models since 2023," he said.
"We put out the OWASP Top 10 for large language model security, which became a very popular document."
Wilson explained that enterprises have shifted from blocking AI tools to actively rolling them out.
"Back then, security teams pushed back hard. They would take whatever defences they had and attempt to turn off the AI, keep it out," he said. "By and large, that's broken down. They know that they need to let some of this in."
The shift is being driven by hard business cases. "Their counterparts in the business are showing up, going, 'This is going to make me money or save me money. And I need you to figure out how we can do this.' So the pressure has gone up dramatically," Wilson added.
The challenge of visibility
Wilson said security visibility remains a persistent problem despite decades of investment.
"The whole security model was fundamentally based on the fact that you can keep people out. And it's no longer true. People don't work from the office. The servers are no longer in your data centre," he said.
This shift has forced enterprises to rethink how they monitor systems. "We've gone from those log files being gigabytes to being terabytes, and now petabytes," Wilson explained. "What used to work for visibility - putting them all in one place and having a search function - that's had to change."
AI is now being used to sift through that data, but attackers are also leveraging the technology.
"The usage of AI by the hacker community was largely theoretical up until 2025," he said. "Now these models are being used for zero-day vulnerability research, reconnaissance and to actively exploit vulnerabilities."
Smarter attackers, smarter defences
Wilson pointed to a significant breakthrough: "The ability for these large language models to write software has skyrocketed. They're 100 times better at it than they were nine months ago," he said.
"The overlap between what you need to do as a hacker and a software developer is very high."
Open-source, unguarded models have accelerated this trend. "What we saw in January this year is the release of DeepSeek from China," Wilson said. "We now have frontier-class models that are completely open source, completely downloadable and have no guardrails."
This has put additional pressure on enterprises to strengthen defences. "Cybersecurity is a game about speed," Wilson said. "If those metrics can be improved by the use of these agents, then the payoff is a more secure environment."
Beyond brittle rules
Traditional automation has its limits.
"The old approach created brittle logic loops that either fail to detect threats or generate too many false positives," Wilson explained. "The poor people working in the security operations centre are just swamped."
Exabeam has responded by adding machine learning and advanced reasoning agents to its platform.
"The people who are trying to run those investigations are three to five times faster than they used to be," he said.
These agents can run full investigations within seconds, presenting analysts with a clear summary rather than raw log data. "Instead of escalating cases to senior staff, level one analysts can ask the agent questions directly," Wilson explained.
"They get an answer, which means, one, they don't have to escalate it. And two, they actually learn from that."
Balancing humans and AI
Wilson stressed that humans remain essential in security operations. "There's a very clear line between what the AI is good at and what the humans are good at," he said. "The most effective thing you can do right now is team your humans with AI agents."
While AI excels at sifting through vast amounts of data, humans are still better at interpreting intent and context.
"Sometimes the right thing to do is have somebody pick up the phone and call another human," Wilson said.
Treating AI agents as insider threats
Looking ahead, Wilson sees enterprise adoption of AI agents as the biggest shift. "So many disciplines inside businesses are now getting their own specialised agents to accelerate them," he said.
He argued that these agents must be treated as users with identities, credentials and tool access - and therefore as potential risks.
"You need to start to think about these radically differently," he said. "We need to think about these new AI agents as the new insider threats."
Exabeam's upcoming collaboration with Google aims to address this issue.
"What we're announcing with them is that we're integrating those functionalities so that we're able to take that telemetry, bring it into our system and analyse the behaviour of those agents and those guardrails the same way that we do for humans," Wilson said.
For Wilson, this holistic approach is key.
"We're able to have that holistic view of your enterprise from a security perspective, and now adding, for the first time, those non-human, intelligent entities that we just call agents for short," he said.