Exclusive: Fastly’s Simon Wistow on AI bots changing the web
AI-powered bots are driving an unprecedented surge in non-human web traffic - accounting for as much as 80% of AI-related hits for some sites, according to Fastly co-founder and VP of Strategic Initiatives Simon Wistow.
In an exclusive interview with TechDay, Wistow revealed how large-scale AI crawlers and less scrupulous automated agents are reshaping the web's traffic patterns, creating dilemmas that threaten businesses, publishers, and the wider online ecosystem.
"The web is seeing a new intensity in automated traffic," he explained. "People building large models are crawling the web really thoroughly, hammering sites to track content changes over time."
This includes crawlers supporting retrieval-augmented generation (RAG), where models return repeatedly to live sites to supplement their data with up-to-the-minute content - a process that strains infrastructure more than static crawls ever did.
Wistow likened the moment to the "search engine wars" of the early 2000s, when web publishers were reliant on but wary of traffic from major platforms.
Today, though, the escalation of bot traffic risks overwhelming digital infrastructure. "Sites want AI bots to index them because that's how people find content. But the volume is now so large and so frequent that, for some sites, it's indistinguishable from a DDoS attack. For some sites, it already is."
Fastly's data shows that fetcher bots - the crawlers that retrieve live pages on demand for AI tools - can generate as many as 39,000 requests per minute, forcing even mid-sized businesses to defend against the kind of traffic surges that once terrified the biggest players. "It's worrying," Wistow said. "If only the largest companies can afford to scale, you lose diversity online."
The challenge is compounded by the blurred line between 'good' bots that identify themselves and follow rules, and 'bad' ones that ignore industry standards and threaten stability.
"That's the million-dollar question. We spend millions a year on this. You can look at IP ranges and known behaviours, but often you need more advanced behavioural analysis."
He noted that services working across many clients can detect problematic patterns at scale, allowing targeted interventions. But differentiating bots remains an unresolved challenge. Most crawling still relies on voluntary adherence to protocols like robots.txt, which bad actors can easily ignore.
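The voluntary nature of that protocol is easy to see in practice. A compliant crawler consults robots.txt before every fetch - a check sketched below in Python using only the standard library, with a hypothetical bot name and URL - while a non-compliant one simply never makes the call.

```python
# A minimal sketch of the voluntary robots.txt check a well-behaved crawler
# performs before fetching a page. The crawler name and URL are hypothetical.
# Nothing enforces this check: a crawler that skips it faces no technical
# barrier, which is the gap Wistow describes.
from urllib.parse import urlsplit
from urllib.robotparser import RobotFileParser

def may_fetch(user_agent: str, url: str) -> bool:
    """Return True if the site's robots.txt permits user_agent to fetch url."""
    parts = urlsplit(url)
    parser = RobotFileParser(f"{parts.scheme}://{parts.netloc}/robots.txt")
    parser.read()  # download and parse the site's robots.txt
    return parser.can_fetch(user_agent, url)

if __name__ == "__main__":
    print(may_fetch("ExampleAIBot", "https://example.com/articles/latest"))
```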
As a result, providers are investing in bot detection, AI-powered fingerprinting, and dialogue with AI firms to push for more responsible crawling.
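One established verification technique - the major search engines document it for confirming their own crawlers - is forward-confirmed reverse DNS: resolve the claimed bot's IP to a hostname, check that the hostname belongs to the operator's published domain, then resolve it forward again and confirm it maps back to the same IP. A rough sketch, with illustrative values rather than any operator's real list:

```python
# A rough sketch of forward-confirmed reverse DNS verification. The IP and
# domain suffixes in the example call are illustrative assumptions, not an
# authoritative list for any real crawler operator.
import socket

def verify_crawler_ip(ip: str, allowed_suffixes: tuple[str, ...]) -> bool:
    """Check that ip reverse-resolves into an allowed domain and that the
    hostname forward-resolves back to the same ip."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)          # reverse lookup
        if not hostname.endswith(allowed_suffixes):
            return False
        return ip in socket.gethostbyname_ex(hostname)[2]  # forward confirm
    except OSError:
        return False  # no PTR record or lookup failure: treat as unverified

# Example call (values are assumptions):
# verify_crawler_ip("66.249.66.1", (".googlebot.com", ".google.com"))
```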
"Doing this at scale is expensive for everyone: for those running websites and for those doing the crawls. That's why it's critical for the industry to agree standards for identification, verification, and rate limiting," Wistow argued.
The regional bias in AI crawler activity adds another layer of risk. Fastly's data shows around 90% of crawlers originate in North America, which Wistow warned could skew AI model quality.
"If you are only crawling particular types of content, you're going to get a one-sided view of the world. The inherent bias in large language models is a really big threat," he said.
That bias, combined with unresolved questions of intellectual property, is deepening concerns among publishers.
Asked if using web content for AI training is a form of theft, Wistow was cautious but clear about the moral ambiguity: "It feels like theft, like you are extracting content from people's sites and putting it on your own site. But there's a long tradition of artists learning through imitation. What it's going to come down to is: are the LLMs a net benefit to these content producers, or are they a net negative?"
He noted that the legal framework is poorly equipped to handle the issue.
"The laws at the moment don't really feel like they're adequate for this. I think it's up to politicians to figure stuff out, but there's nervousness about stifling innovation or losing out in a global AI race." Possible solutions range from new bot verification standards to licensing requirements or even encryption that blocks access until a micro-transaction is made. Still, Wistow warned such measures will mainly benefit large publishers: "If you are Bob's random cookery site, you're probably unlikely to get a meeting with OpenAI."
Meanwhile, the commercial incentive for AI crawlers to target high-value sectors remains strong. Commerce, media and technology businesses are hit hardest. "They are just the most valuable. Like the old adage: 'That's where the money is.' It's valuable content, so it gets targeted," Wistow explained.
In response, he urged digital businesses to invest in threat intelligence, follow emerging standards, and consider bot mitigation services. "There are companies providing bot protection. We are one of them, but there's plenty of open source and free software to help smaller sites. It's important to keep abreast of new standards, because this is only going to get bigger."
Reflecting on the stakes for content creators and publishers, Wistow captured both the uncertainty and scale of disruption caused by AI bot traffic.
"Nobody is sure what's going to happen, and I think it's going to be a really interesting year. A lot of publishers are already worried about the changing way people read news and content. It's a worrying time for them."