SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers

Private vs Public AI: Appian urges companies to know the difference

Tue, 27th Aug 2024

In the rapidly evolving world of artificial intelligence (AI), understanding the difference between private and public AI is key for enterprises across various industries, according to Appian's Adam Glaser.

This distinction is not merely technical; it carries significant implications for data security, operational efficiency, and competitive advantage.

As Glaser, Senior Vice President of Product Management at Appian, explained: "Private AI is an approach to using artificial intelligence in any industry, especially in enterprises, where you have the guarantees that your data is secure."

"AI is very dependent on data," he explained to TechDay during an exclusive interview.  "Your data is not being used to improve foundational models or train models that could benefit your competitors." This contrasts sharply with public AI, where data can be utilised to enhance the collective experience of AI users, potentially at the cost of exposing sensitive information.

The implications for Australian companies are clear. As businesses increasingly explore AI's potential, they must also navigate the risks associated with public AI models. Glaser highlighted a critical concern: "If you're going to have your legal department send contracts to ChatGPT for a summary, you're giving the internet some of your most important data." The inherent risks of public AI models are not to be taken lightly, especially when data security is at stake.

One of the primary risks associated with public AI is the lack of control over data once it enters the model.

"The clear and present risk is that your data isn't private the minute it is sent into a public AI algorithm," Glaser explained.

This lack of privacy can lead to unintended consequences, including the potential for competitors or malicious actors to access proprietary information. The dangers are particularly acute in industries where data security is paramount, such as life sciences, financial services, and insurance.

Glaser emphasised that while every industry should be cautious about using public AI, these highly regulated sectors must be especially vigilant. "The industries that we tend to see the most conservatism in tend to be the most highly regulated," he said.

These industries require robust guarantees that their data remains secure, making private AI an attractive alternative.

Private AI offers a solution by ensuring that data used in AI models remains under the control of the enterprise. As Glaser described, "Platforms like Appian provide an opportunity to use AI in a private setting." This can be achieved in several ways, including deploying dedicated models tailored to the enterprise or training AI models on proprietary data that remains under the company's control.

Australian companies can learn valuable lessons from the challenges faced by global tech giants like Facebook and Adobe, which have encountered problems with using customer data for AI training. "Companies like Facebook have a slanted interest in using data to improve their ability to attract and provide targeted advertisements," Glaser noted. The misalignment of incentives in these cases underscores the importance of ensuring that data used in AI models is not repurposed for unintended uses.

Despite the potential pitfalls, the benefits of AI are undeniable. "AI is certainly disrupting the market," Glaser acknowledged, pointing to examples where enterprises have successfully leveraged AI to enhance productivity and efficiency. One such example is the Australian company Netwealth, which used a private AI model to automate email classification, achieving a 98% accuracy rate.

This improvement allowed the company to reallocate resources to more value-added tasks, demonstrating the tangible advantages of AI when implemented correctly.
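To make the idea concrete, here is a minimal, purely illustrative sketch of the private-AI pattern Netwealth's example points to: a simple email classifier trained entirely on proprietary data that never leaves company infrastructure. This is not Netwealth's or Appian's actual system; the categories, training data, and naive bag-of-words approach are assumptions for illustration only.

```python
from collections import Counter, defaultdict
import math

# Illustrative only: a toy in-house email classifier. It demonstrates the
# private-AI principle of training on proprietary data that stays within
# the company's own environment, not any vendor's real implementation.

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (email_text, label) pairs.
    Returns per-label word counts built from proprietary data."""
    counts = defaultdict(Counter)
    for text, label in examples:
        counts[label].update(tokenize(text))
    return counts

def classify(counts, text):
    """Score each label by the smoothed log-likelihood of the email's words
    (add-one smoothing over the shared vocabulary)."""
    vocab = {w for c in counts.values() for w in c}
    best, best_score = None, float("-inf")
    for label, c in counts.items():
        total = sum(c.values())
        score = sum(
            math.log((c[w] + 1) / (total + len(vocab)))
            for w in tokenize(text)
        )
        if score > best_score:
            best, best_score = label, score
    return best

# Hypothetical proprietary training data, kept on company infrastructure.
model = train([
    ("please update my account balance", "account_admin"),
    ("what is my portfolio performance", "investment_query"),
    ("change my account address details", "account_admin"),
    ("advice on portfolio allocation", "investment_query"),
])
print(classify(model, "update account details"))  # account_admin
```

The point of the sketch is architectural rather than algorithmic: because both training and inference run locally, no email content is ever sent to a public AI service.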

Glaser also shared a positive example from the United States, where a large university deployed generative AI to assist student advisors. By automating routine tasks, the AI helped advisors focus more on students, potentially increasing graduation rates.

"AI can actually help with that," Glaser said, "helping them quickly come up to speed on what's going on and generate a good agenda for the upcoming meeting."

The key to harnessing AI's potential while mitigating risks lies in choosing the right technology providers and platforms. Glaser advised companies to avoid going it alone and instead work with technology partners who can provide the necessary expertise and security guarantees.

"Pick the right technology provider," he urged. "You don't have to guess at which ones are safe."

Ultimately, the message for Australian companies is clear: the benefits of AI are within reach, but only if they approach it with the right tools and knowledge. By understanding the difference between private and public AI and working with trusted partners, companies can unlock AI's potential without compromising their most valuable asset: their data.

Glaser's advice highlights the cautious optimism that should guide AI adoption: "Enterprises have the responsibility of understanding first before they dive into it."

This understanding is not just about technology but about ensuring that AI serves the company's best interests without exposing it to unnecessary risks.
