SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers

Trend Micro warns firms on hidden risks of AI models

Thu, 22nd Jan 2026

Trend Micro has published research warning that unmanaged reliance on large language models can create legal, financial and reputational risk for organisations.

The study examined how models respond differently when asked the same questions in different places and languages. It also tested how answers change over repeated interactions with the same system. Trend Micro said the results challenge assumptions that generative AI behaves like predictable software.

Trend Micro Forward Threat Researchers ran more than 800 targeted prompts across more than 100 AI models. The prompts tested bias, political and cultural awareness, geofencing behaviour, data sovereignty signals and contextual limitations. The researchers analysed more than 60 million input tokens and more than 500 million output tokens.

The company said it saw frequent variation in responses to identical prompts. It said geography, language, model design and embedded controls stood out as common factors behind differing outputs. Trend Micro said some responses also appeared inaccurate or out of date.

Regional variation

Trend Micro said the models returned materially different outputs in politically sensitive scenarios. It highlighted disputed territories and questions of national identity as areas where models showed "clear regional alignment differences". The initial prompts in the research included geopolitical questions such as where Crimea lies and which flag relates to Taiwan, as well as cultural etiquette questions such as whether it is appropriate to smile at a stranger.

Trend Micro said the differences matter when organisations use AI systems in customer-facing roles or in decision-support settings. The company said inconsistent answers can undermine trust and create mismatches with local norms. It also said the same behaviour can raise compliance issues for multinational organisations operating under different regulatory regimes.

Precision limits

The report also flagged areas where organisations expect precision from automated systems. Trend Micro said some models returned inconsistent or outdated results in tests involving financial calculations and time-sensitive information.
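One mitigation implied by this finding is to recompute any figure a model quotes rather than trust its arithmetic. The sketch below is a hypothetical illustration, not part of Trend Micro's research: a deterministic compound-interest calculation used to cross-check a model-supplied answer. The function names and tolerance are assumptions for the example.

```python
from decimal import Decimal, ROUND_HALF_UP

def compound_interest(principal, annual_rate, years):
    """Recompute compound interest deterministically so a model-quoted
    figure can be cross-checked instead of trusted as-is."""
    p = Decimal(str(principal))
    r = Decimal(str(annual_rate))
    total = p * (Decimal("1") + r) ** years
    return total.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

def verify_model_figure(model_answer: str, expected: Decimal,
                        tolerance: str = "0.01") -> bool:
    """Flag a model-produced figure that drifts from the deterministic result.

    model_answer is the raw string the model returned, e.g. "$1,157.63".
    """
    try:
        quoted = Decimal(model_answer.replace(",", "").replace("$", ""))
    except Exception:
        return False  # unparseable answers fail verification outright
    return abs(quoted - expected) <= Decimal(tolerance)

expected = compound_interest(1000, 0.05, 3)
print(expected)                                   # 1157.63
print(verify_model_figure("$1,157.63", expected)) # True
print(verify_model_figure("1160.00", expected))   # False
```

The point of the pattern is that the authoritative number comes from deterministic code; the model's output is only accepted if it matches.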

The company positioned these findings as a risk for businesses that integrate AI outputs directly into workflows without verification. Trend Micro said some enterprises treat AI tools as deterministic systems, which could lead to avoidable errors if outputs feed into customer journeys or internal decisions.
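The non-determinism Trend Micro describes can be surfaced with a simple pre-deployment check: send the same prompt several times and measure how much the answers vary before any output reaches a customer. The sketch below is a minimal illustration, assuming `ask_model` is a hypothetical stand-in for a real model call; it is not a method from the report itself.

```python
from collections import Counter
from typing import Callable

def consistency_check(ask_model: Callable[[str], str], prompt: str,
                      runs: int = 5) -> dict:
    """Send the same prompt repeatedly and summarise how much the answers vary.

    ask_model is any callable taking a prompt string and returning a
    response string (e.g. a wrapper around a hosted LLM API).
    """
    answers = [ask_model(prompt) for _ in range(runs)]
    counts = Counter(answers)
    top_answer, freq = counts.most_common(1)[0]
    return {
        "distinct_answers": len(counts),
        "agreement": freq / runs,   # share of runs matching the top answer
        "top_answer": top_answer,
        "consistent": len(counts) == 1,
    }

# Hypothetical stub standing in for a real model call; a production check
# would wrap an actual LLM client here.
def stub_model(prompt: str) -> str:
    return "Paris" if "capital of France" in prompt else "unsure"

report = consistency_check(stub_model, "What is the capital of France?")
print(report["consistent"], report["agreement"])  # True 1.0
```

A real model sampled at a non-zero temperature would typically show `distinct_answers` above one, which is exactly the signal that a human review gate is needed before the output feeds a customer journey.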

Robert McArdle, Director of Cybersecurity Research at Trend Micro, said organisations often misjudge how these systems behave.

"Many organisations assume AI behaves like traditional software, where the same input reliably produces the same output," said Robert McArdle, Director of Cybersecurity Research at Trend Micro. "Our research shows that this assumption does not hold true. LLMs can shift their answers based on region, language and guardrails, and those answers can change from one interaction to the next. When AI outputs are used directly in customer journeys or business decisions, organisations risk losing control of brand voice, compliance posture and cultural alignment."

Governance focus

Trend Micro said the risks increase for global and distributed organisations that deploy AI services across borders. It pointed to variation in legal frameworks and political sensitivities between markets. It also cited societal expectations as another source of potential friction when AI generates text that users may view as inappropriate, biased or inconsistent with local context.

The company also raised concerns about public sector adoption. It said AI-generated outputs can be perceived as official guidance. It also said the use of non-localised models can create sovereignty and accessibility risks.

McArdle said organisations should adopt governance and oversight processes when they deploy large language models in user-facing roles. "AI should not be treated as a plug-and-play productivity tool," McArdle added. "Organisations need to approach it as a high-risk dependency, with clear governance, defined accountability, and human verification for any user-facing outputs. That also means demanding transparency from AI providers around how models behave, what data they rely on, and where guardrails are applied.

"AI can absolutely drive innovation and efficiency, but only when it is deployed with a clear understanding of its limitations and with controls that reflect how these systems behave in real-world environments," said McArdle.