The annual Risk Outlook 2024 report, released by International SOS, a leading health and security services firm, forecasts the chief safety and security concerns organisations should prepare for in 2024. The report bases its predictions on surveys of senior risk professionals worldwide, analysis of global medical and security risk ratings, and on-the-ground health and security intelligence.
A key issue highlighted in this year's report is the rapid rise of artificial intelligence (AI), which makes it harder for businesses to separate credible information from misinformation and deliberate disinformation. The damage AI can do was illustrated in December 2023, when AI-manipulated deepfake videos of PM Lee Hsien Loong and DPM Lawrence Wong appearing to endorse investment scams circulated on social media platforms in Singapore.
AI's potential to drive a new industrial revolution is widely recognised, with capabilities ranging from automation to handling heavy data loads and enabling faster decision-making. This momentum prompted Deputy Prime Minister Lawrence Wong to announce, on 4 December 2023, a plan to triple Singapore's AI talent pool to 15,000 professionals. A concurrent $70 million initiative was also unveiled to build the first large language AI model tailored to a Southeast Asian context.
Nevertheless, AI is proving to be a double-edged sword: the growing flood of AI-generated misinformation and calculated disinformation makes reliable information harder to extract, particularly for businesses.
So, how can organisations ensure access to accurate information in a world susceptible to AI-induced misinformation and disinformation? The International SOS Risk Outlook 2024, drawing on its survey of 675 senior risk professionals worldwide, found that over 40% are concerned about the impact of medical misinformation and disinformation, while 60% worry about inaccurate political information. Both rank among the top five issues expected to affect organisations and their business continuity in 2024.
Organisations can take a multifaceted approach to ensuring access to accurate information in the face of AI-induced misinformation and disinformation. First, investing in AI tools for content verification and source authentication can help distinguish credible information from deceptive content. Partnering with AI experts and cybersecurity firms, and adopting up-to-date detection technologies, can strengthen defences against malicious manipulation.
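As a minimal illustration of what source authentication can mean in practice, the sketch below checks whether a link's host belongs to a domain an organisation has vetted. The `TRUSTED_DOMAINS` list and the function name are hypothetical, not from the report; real deployments would combine such checks with content analysis and a managed registry of sources.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of sources the organisation has vetted.
# In practice this would live in a managed registry, not in code.
TRUSTED_DOMAINS = {"gov.sg", "who.int", "reuters.com"}

def is_trusted_source(url: str) -> bool:
    """Return True if the URL's host is a vetted domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(is_trusted_source("https://www.reuters.com/article/x"))      # True
print(is_trusted_source("https://reuters.com.scam-site.io/x"))     # False: lookalike host
```

Note the second case: checking for a full-domain suffix match (rather than a plain substring) is what catches lookalike hosts that embed a trusted name.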
Education and training programmes for employees are crucial to enhancing digital literacy and critical thinking, empowering individuals to question and verify information sources. Establishing robust internal communication protocols and fact-checking mechanisms can further protect information integrity.
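One simple form such an internal fact-checking mechanism could take is a rule that a claim needs a minimum number of independent confirmations before it is circulated. The sketch below is a hypothetical illustration of that idea, not a mechanism described in the report; the class names and threshold are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A piece of incoming information awaiting internal verification."""
    text: str
    confirmations: set = field(default_factory=set)  # independent sources that verified it

    def confirm(self, source: str) -> None:
        self.confirmations.add(source)

    def cleared(self, threshold: int = 2) -> bool:
        # Require at least `threshold` independent confirmations before
        # the claim may be used in internal communications.
        return len(self.confirmations) >= threshold

claim = Claim("Advisory affecting travel to region X")
claim.confirm("Government advisory")
print(claim.cleared())   # False: only one independent source so far
claim.confirm("Security provider alert")
print(claim.cleared())   # True
```

The value of the rule is less in the code than in the protocol: no single source, however plausible, is sufficient on its own.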
Additionally, fostering collaboration with government agencies, cybersecurity authorities, and industry peers can create a collective defence against AI-driven misinformation campaigns. Developing and adhering to industry-wide standards for responsible AI use can contribute to building a trustworthy AI ecosystem.
The International SOS Risk Outlook 2024 emphasises the need for proactive risk management strategies, including integrating AI safeguards, to mitigate the impact of misinformation on business continuity. In an era where AI wields significant influence, organisations must remain vigilant, adaptive, and collaborative to safeguard the integrity of information in the evolving digital landscape.