SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers
Smudge warns AI chatbots flatter users, not tell truth

Tue, 12th May 2026
Sofiah Nichole Salivio, News Editor

Smudge has warned that AI systems are designed to flatter users rather than tell them the truth, responding to debate over whether chatbots can appear conscious.

The Christchurch technology company weighed in after British evolutionary biologist Richard Dawkins said several days of conversation with Anthropic's Claude had led him to believe the system was sentient. His view has drawn criticism from AI researchers and added to broader scrutiny of how conversational AI responds on emotionally charged subjects.

Founder Reuben Bijl said the exchange illustrated a broader pattern in the design of large language models. He argued that these systems are trained to be engaging and agreeable, which can lead users to overestimate what they are experiencing.

According to Smudge, Anthropic's analysis of one million conversations found its model excessively validated users' perspectives rather than pushing back. The issue was particularly notable in discussions of spirituality and consciousness, where validation appeared in nearly 40 per cent of conversations.

Bijl said that made Dawkins's conclusion unsurprising.

"First and foremost, I feel for Dawkins, who was tricked by a system working exactly as designed," Bijl said.

He said large language models rely on vast amounts of human-written text and predict likely word sequences rather than demonstrate awareness or inner experience. In his view, that makes them sophisticated pattern-matching systems whose outputs can still feel persuasive and personal.

Bijl also pointed to the commercial logic behind highly agreeable chatbots, arguing that companies have a clear incentive to build systems that encourage repeat use and stronger user attachment.

"AI models are built to be engaging, to reflect your ideas back at you intelligently, and to make you feel heard. It's not consciousness but flattery at scale.

"And the irony is that Anthropic's own data shows the model validates users in nearly 40 per cent of conversations about spirituality and consciousness. Dawkins had a deep, searching conversation with a system that's been taught to agree with people about deep, searching things four times out of ten. Then he concluded the system was conscious. The technology did what it was built to do," he said.

Commercial pull

Smudge's comments come as AI developers face growing questions over safety, transparency and the social effects of chatbot design. Products that present themselves as helpful, empathetic and conversational have spread rapidly across consumer and workplace settings, raising concern about how users interpret machine responses.

For businesses, the issue goes beyond philosophical debate about machine consciousness. If users misplace trust in a chatbot because it sounds understanding or affirming, the consequences can affect decision-making, wellbeing and reliance on automated advice.

Bijl said the incentives shaping chatbot behaviour deserve closer attention. In his view, systems that praise users, support their assumptions and respond warmly are more likely to keep them engaged and paying.

"What concerns me is the commercial incentive at work. AI companies benefit when users form emotional attachments to their products. A chatbot that flatters you, agrees with you, and tells you your jokes are funny is a chatbot you'll keep coming back to. That's good for subscriptions, but it can come at the expense of accuracy.

"The real questions we should be asking about AI are: what are these tools actually doing, who controls them, and how are they shaping how we see ourselves and the world?" he said.

AI literacy

Smudge works with organisations in New Zealand and Australia on software projects that include AI-integrated systems. It said that experience has reinforced the need for stronger AI literacy as the technology becomes more embedded across industries.

Bijl said that includes helping organisations and individuals understand both the usefulness and the limits of generative AI. While such tools can summarise, draft and converse fluently, he said they can also create a false impression of understanding because they are designed to sound coherent and responsive.

The Christchurch firm, founded in 2008, said it has built its work around close engagement with end users rather than relying solely on what software can measure. Bijl said that human-centred approach remains important as AI systems become more prominent in product design and customer interaction.

"We've spent two decades learning that the part of a customer that matters most is the part the tools can't see. AI hasn't changed that. If anything, it's made it easier to lose," Bijl said.