SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers
Australian firms warned of AI legal implications in chatbot fiasco
Wed, 28th Feb 2024

As technology continues to permeate all aspects of business, Australian firms are being advised to consider the legal implications of using artificial intelligence (AI) to interact with customers. Recent examples like the Air Canada incident, where the airline was ordered to compensate a passenger due to misleading information provided by a chatbot, draw attention to the importance of such cautionary measures.

David Fischl, Legal Digital Transformation Lead Partner at Hicksons Lawyers, provided insight into the growing trend of companies employing generative AI technology, such as chatbots, to streamline customer service and other business processes. He noted that while this adoption of technology offers significant benefits, including cost savings and increased productivity, it also presents new legal quandaries that must be addressed.

In his commentary, Fischl expressed concern over companies rushing to capitalise on the AI boom without being fully cognisant of the potential pitfalls. His warnings come in light of increased legal actions in cases where AI technology has failed or misled consumers. The notable case of Air Canada serves as evidence of this risk, reflecting the liabilities a company can face if its AI tool disseminates false or misleading information.

Fischl explained that the use of generative AI for customer communication is on the rise, with businesses employing chatbots to describe products and explain terms and conditions. However, this comes with the risk of 'mistakes and hallucinations', where the AI misinterprets a query or provides inaccurate information, leading to serious consequences for businesses. Such errors are not isolated to Air Canada; similar incidents have involved big-name companies like Chevrolet.

He highlighted that under Australian Consumer Law, companies could potentially face severe penalties, including substantial financial damages, if their chatbot makes a misrepresentation. Yet, while the risks are significant, he also emphasised the potential for businesses to take positive measures to avoid litigation and reputational damage.

Part of the mitigation strategy involves taking a proactive stance to ensure chatbots comply fully with Australian Consumer Law. Businesses need to thoroughly test and revise their AI technology and employ robust technical checks to ensure the accuracy of the information it delivers.

As businesses strive to stay competitive and capitalise on the benefits of AI technology, they must not overlook the potential legal implications. It is critical to strike a balance between optimising productivity through AI and safeguarding customer interactions to avoid potential liabilities and uphold public trust.