Exclusive: ISACA’s Chirag Joshi warns of 'shadow AI' crisis without risk management
Artificial intelligence is becoming indispensable in global workplaces, but ISACA Sydney Chapter President Chirag Joshi is sounding the alarm: while organisations rush to deploy AI, few are actually managing the risks.
In an exclusive interview, Joshi, a leading cybersecurity and AI governance expert who also serves as founder and CISO of 7 Rules Cyber, urged organisations to close the widening gap between AI adoption and governance maturity.
"We're driving a powerful machine without training, without visibility, and without a license," he said.
"Organisations are enabling AI use without clear guardrails, visibility into data flows, or understanding of downstream impacts. That's a recipe for operational, legal, and reputational risk."
ISACA's newly released 2025 'Pulse of AI' report underscores the urgency.
While 81% of organisations report using AI in some capacity, only 28% have a formal policy governing its use. Even more troubling, just 22% are training all employees on AI, despite 89% of digital trust professionals acknowledging they'll soon need such knowledge to stay relevant, or even employed.
"AI has reached an inflection point," Joshi said. "It is no longer just a domain for technical teams. Every business function will be impacted, and organisations must uplift their people, not just their platforms."
The statistics paint a jarring picture: 59% of professionals say AI-powered phishing and social engineering attacks are harder to detect. Yet only 21% of organisations are actively investing in tools to combat deepfake threats. Despite 80% listing misinformation and disinformation as top concerns, just 30% feel confident in detecting AI-driven manipulation.
"We're seeing the rise of 'shadow AI' - the unsanctioned, untracked use of AI tools by employees," Joshi explained. "It's a massive blind spot. You wouldn't let every staff member install and run software at will, but that's exactly what's happening with generative AI. And the risk exposure is exponentially greater."
As both ISACA Sydney Chapter President and founder of 7 Rules Cyber, Joshi has a front-row view of the accelerating threat landscape.
"There's a misconception that AI risk is purely technical. It's not. It's legal, financial, operational. Boards need to understand that AI is a business risk - just like cyber became one in the last decade."
Only 7% of organisations in ISACA's survey feel "very prepared" to manage AI-related risks.
The unpreparedness isn't for want of awareness: 66% expect deepfake cyberthreats to grow more sophisticated over the next year, and 61% are very or extremely worried that generative AI will be exploited by bad actors. Yet that awareness hasn't translated into action. Despite major year-on-year gains, such as a near-doubling of organisations with comprehensive AI policies (from 15% to 28%), most are still falling short.
For example, while 31% of organisations are actively hiring for AI roles, most existing professionals rate their AI skills as beginner or intermediate. Joshi believes the fix starts with visibility.
"If you don't know what AI tools are being used, or where your data is flowing, you can't govern it," he said. "Inventory your systems. Build policies. Train not just IT, but every employee. Because the people most at risk from AI misuse are the ones furthest from the tech."
He also stressed the legal dimension of AI risk. "Much of this is legal risk: copyright, data privacy, IP theft. Boards need to treat AI as a strategic governance issue, not an innovation or technology sideshow."
With AI now being used for everything from generating content (52%) and increasing productivity (51%) to automating tasks (40%) and enhancing cybersecurity (26%), the stakes are only getting higher.
"Organisations that delay building AI governance structures risk facing avoidable legal, operational, and reputational fallout. Now is the time to move from experimentation to accountability."