SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers

Businesses warned on risks as 'vibe coding' AI tools surge

Thu, 25th Sep 2025

AI-generated code created through 'vibe coding' is coming under scrutiny as businesses increasingly adopt the practice in mainstream workflows.

Vibe coding, characterised by the use of natural language prompts to create code without a full understanding of the underlying structure, is finding favour across various industries due to its perceived speed and ease of use. Software such as Cursor, Lovable, and Memex allows users, including those with limited technical backgrounds, to quickly generate working applications.

Chris Weston, Senior Technology Consultant at NashTech, has raised concerns regarding the unchecked deployment of AI-generated code. He argues that while the method may reduce barriers to entry and speed up prototyping, there is a real danger of security flaws, inefficiencies, and reputational risks if such code is used in live business environments without rigorous review.

"AI assistants don't have a reputation to lose, but businesses do. Companies putting AI-generated code into live environments without scrutiny are effectively trusting their future to a black box with no skin in the game," he said.

Weston points out that the transition of vibe coding from specialist curiosity to mainstream practice has been rapid. The accessibility of these tools appeals to non-technical professionals keen to create prototypes and demonstrations that give life to new ideas more quickly than was previously possible. However, the temptation to advance these applications into production carries significant risk.

There have already been high-profile issues reported in 2025. Users of code development platform Replit described a case where an autonomous AI agent deleted an entire codebase. Such incidents, widely discussed on social media, underscore the potential consequences of relying heavily on AI-generated code without sufficient oversight.

Weston said, "These tools are very impressive and can be helpful to a developer in many circumstances. At NashTech we have been researching AI tools in our development teams for nearly two years, and have found benefits in certain use cases. But to an amateur, an application that seems to work fine on the surface can be hiding enormous inefficiencies and security problems that expose you to risk of failure and costly remediation work."

Weston acknowledges that vibe coding lowers traditional obstacles. It allows for faster development of proof-of-concept applications, boosting stakeholder engagement and facilitating innovation. This ease of access also encourages a willingness to experiment with new ideas across organisations.

Hidden costs and risks

Despite these advantages, Weston emphasises several hidden costs associated with AI-generated code. Security is a primary concern, as such applications may inadvertently expose personal data or otherwise fall short of compliance requirements.
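A hedged illustration of the kind of hidden flaw Weston describes (this is a hypothetical sketch, not code from any of the tools named in the article): building a database query by string interpolation is a pattern that looks correct in a demo but leaves an application open to SQL injection, while the parameterised version treats user input strictly as data.

```python
import sqlite3

def find_user_unsafe(conn, name):
    # String interpolation places untrusted input directly into the query text,
    # so specially crafted input can change the query's logic.
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn, name):
    # Parameterised query: the driver binds the input as a value, not as SQL.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

# Minimal in-memory database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

malicious = "x' OR '1'='1"
rows_unsafe = find_user_unsafe(conn, malicious)  # injection returns every row
rows_safe = find_user_safe(conn, malicious)      # input matched literally: no rows
```

Both functions behave identically on ordinary input, which is exactly why an application that "seems to work fine on the surface" can pass a casual demo while hiding a data-exposure flaw.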

Operational efficiency is another issue. Code generated without standards or careful review can make inefficient use of computing resources, inflating cloud expenditure and degrading the user experience. Reputational damage is also a significant risk: a single incident stemming from unvetted AI-generated code could undermine customer trust and investor confidence.

Industry research reflects these reservations. According to Stack Overflow's most recent developer survey, 84% of developers are now using or considering AI tools. However, 46% report concerns with the accuracy of these tools and state that any perceived time savings are often offset by the need to manually refine or correct outputs.

A balanced approach

Weston advises that businesses treat vibe coding as a tool to enable development, not as a shortcut to bypass established review and quality procedures. He notes that while responsible use can indeed facilitate innovation, the lack of proper oversight creates a risk of building up technical debt, a burden that can prove expensive and damaging in the long run.

Weston recommends a method that integrates innovation with a disciplined approach, balancing the use of AI-generated assistance with human quality controls and established processes.

He said, "The innovation potential is huge, but speed of delivery doesn't outweigh long-term resilience. The winners will be those who combine AI-enabled agility with human oversight and robust review processes. Independent studies show how easy it is to be caught out. Even experienced developers are introducing more errors and spending longer debugging when they rely too heavily on AI. That should be a warning signal that no organisation is immune to the risks."

"We've seen this pattern before with other technologies: the hype comes first, the scars come later. The smart businesses are the ones that explore these tools with guardrails, so they can innovate without exposing themselves to unnecessary risk. This is the start of a global experiment in how software is built. The businesses that succeed will be those that explore the possibilities of vibe coding while keeping security, efficiency and sustainability front of mind."

Weston's comments reflect wider industry debate about the advantages and vulnerabilities of rapidly advancing AI-powered coding tools. As businesses consider integrating them, his advice underlines the importance of pairing technological innovation with sound oversight and structured review procedures.