SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers
How enterprises can avoid missing out and messing up on AI
Mon, 26th Feb 2024

Few technologies have captured the attention and the imagination of enterprise business leaders in the Asia-Pacific (APAC) region as quickly and completely as Generative AI. The potential applications seem endless, the efficiency benefits could be massive, and the creative possibilities are almost limitless. 

What’s not to like? 

For all its potential, there is unease among IT leaders about the adoption of AI. A recent Juniper Networks and Wakefield Research survey of 1,000 executives who are involved in AI/machine learning implementation revealed that many IT leaders worry that their organisations might be moving too fast, and many might not fully understand what they’re signing up for. 

IT leaders are struggling with a new quandary: how to strike a balance between the fear of missing out and the fear of messing up. But the implementation of AI doesn’t need to be difficult or complicated, and there are many solutions to help navigate these uncertainties. 

Generative AI is a game changer, and APAC is leading the pack. Our survey found that 73% of respondents in the Asia-Pacific region have either “mostly” or “fully” implemented AI (compared to 70% globally). The numbers are even higher in some markets like Australia and New Zealand (80%), Japan (80%) and Bangladesh (93%).     

Striking a Balance Between Caution and Competition 

While the adoption of AI is faster in APAC, it comes with considerable unease about how it can be implemented cautiously and accurately without falling behind the competition. In fact, 10% of respondents in the region felt their organisation was “not very” or “not at all prepared” for the adoption of AI (compared to 6% globally). And 79% felt pressure “to quickly implement AI technologies and keep up with competitive trends”, while 81% felt departments other than their own “are rushing to implement and use AI without understanding how to properly use it.” 
In Australia and New Zealand: 

  • 98% of respondents say it may not be possible to know if their company’s AI output is accurate (vs. 87% globally and 83% in APAC).
  • 98% say employees trust AI more than they should (vs. 89% globally and 85% in APAC).
  • 96% of respondents were concerned their organisations were rushing to implement the technology (vs. 87% globally and 81% in APAC).
  • Only 32% believe their company’s policies are keeping pace with AI innovation. 

Cultivating a Responsible Approach to AI 

These findings might sound like a withering criticism of AI from the people who know it best. In fact, I would argue that a strong consensus on the possible pitfalls of AI is a clear indication that IT leaders understand its limitations. Armed with this knowledge, they can set policies to minimise risks and maximise the benefits. It is both possible and desirable to take a cautious approach. Businesses should be thinking about where AI makes sense and start with high-impact, low-risk areas to lay the foundation for ongoing success with AI. 

Equally, it is possible to put up guardrails to protect against bias. A virtually unanimous 99% believe that “some, most, or all” of their AI outputs are impacted by bias. For generative AI solutions that rely on scraped data, leaders must look at how the data will be ingested into the model, how the AI is tested, and who will use the models once they are complete. Techniques like retrieval-augmented generation (RAG) ground the model’s answers in trusted source material, so responses can be traced back to verified information. Human-in-the-loop review can also help ensure accuracy. 
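To make the retrieval-augmented generation idea concrete, here is a minimal, illustrative sketch. The document store, keyword-overlap scoring, and prompt template are all hypothetical simplifications: a production RAG system would use a vector database for retrieval and pass the grounded prompt to an actual language model.

```python
# Minimal RAG sketch (illustrative only): retrieve trusted context,
# then constrain the model to answer from that context alone.

def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the query (a stand-in
    for the vector-similarity search a real system would use)."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Instruct the model to answer only from the retrieved context,
    so its output can be traced back to trusted source material."""
    context = "\n".join(retrieve(query, documents))
    return (
        "Answer using ONLY the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

# Hypothetical trusted document store.
docs = [
    "The survey found 73% of APAC respondents have mostly or fully implemented AI.",
    "Retrieval-augmented generation grounds model answers in trusted source data.",
]
prompt = build_grounded_prompt("What share of APAC respondents implemented AI?", docs)
```

The key design point is the final prompt: because the model is told to answer only from retrieved, vetted material, its output can be checked against that material rather than trusted blindly.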

Specialised AI and machine learning solutions that depend on more reliable data are a different matter. In Juniper’s AI-Native Network, for example, we rely on data that comes from ongoing operations, so we know it’s accurate and arguably biased in a good way.  

Employee trust in AI outputs 

In addition to addressing the challenges of the technology itself, it’s equally important to address human challenges. APAC respondents were concerned “employees trust AI without understanding it” (91%) and also “trust AI more than they should” (85%). 

But there is an obvious solution: training and education. Across most organisations, there’s clear demand, with 82% of APAC respondents saying their organisations should increase the AI training provided. That’s not surprising, given a strong majority (96%) think it affects their career progression. 

Setting clear policies for the use of AI will be equally important. Most companies already have policies on what data employees can and cannot share with third parties; it may be possible to simply extend those policies to cover external generative AI tools. Software purchasing policies should also be reviewed, with addenda requiring additional scrutiny of any solutions with embedded AI. 

IT leaders unanimously anticipate that AI will be adopted by departments outside of IT, and an equal number think those departments would benefit from a better understanding of the security risks associated with using AI. It’s important to ask the right questions and rely on the right people for specific use cases, especially if it involves sensitive information. 

With an AI-driven HR application, for example, you would need to go through a bias exercise based on any applicable laws. It would be important to consult legal and HR experts to ensure the solution is up to standards. It’s also especially important to consider vendor sophistication, including their approach to AI and their track record with previous solutions. 

The promise of AI is huge. The rapid adoption of AI models is clear evidence that businesses everywhere see their value. With the right policies in place, well-prepared organisations will not miss out, nor will they mess up.