SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers
Tech giants to fight AI deception in global elections
Tue, 20th Feb 2024

At the recent Munich Security Conference (MSC), leading technology companies from around the world, including Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, TikTok and X, pledged to combat the deceptive use of artificial intelligence (AI) in this year's elections, in which more than four billion people across more than forty countries are expected to vote.

Announced as the 'Tech Accord to Combat Deceptive Use of AI in 2024 Elections', the agreement is a commitment to use technology to counter harmful AI-generated content intended to deceive voters. The accord aims to improve the detection of such AI content, address its online distribution, promote educational initiatives and provide transparency.

Ambassador Christoph Heusgen, Munich Security Conference Chairman, said, "The Tech Accord to Combat Deceptive Use of AI in 2024 elections is a crucial step in advancing election integrity, increasing societal resilience, and creating trustworthy tech practices." The accord will help combat AI-generated audio, video, and images that falsely represent political candidates, election officials and other key democratic stakeholders, or that misinform voters about election procedures.

Dana Rao, General Counsel and Chief Trust Officer at Adobe, emphasised the importance of transparency, stating, "That's why we're excited to see this effort to build the infrastructure we need to provide context for the content consumers are seeing online." Rao added that with elections happening globally this year, it is crucial to invest in media literacy campaigns to educate the public about the reliability of online content.

The signatories of the accord have agreed to eight specific commitments, which include developing and implementing technology to mitigate risks related to deceptive AI election content, establishing models to understand the associated risks, detecting and addressing such content on their platforms, fostering cross-industry resilience, and providing transparency to the public about how each company addresses the issue. The commitments also stipulate continued engagement with civil society organisations and academics, as well as fostering public awareness to build resilience.

Kent Walker, President, Global Affairs at Google, highlighted the necessity of this accord, stating, "Google has been supporting election integrity for years, and today's accord reflects an industry-side commitment against AI-generated election misinformation that erodes trust." Walker reiterated that this commitment is crucial to prevent digital abuse from threatening AI's potential in improving economies, creating new jobs and advancing health and science.

Speaking on the initiative, Christina Montgomery, Vice President and Chief Privacy & Trust Officer at IBM, acknowledged the magnitude of this year's elections and the heightened risks posed by AI-generated deceptive content. Echoing her sentiments, Nick Clegg, President, Global Affairs at Meta, underscored the collective effort required to combat deceptive artificial intelligence and expressed hope that the accord would serve as a significant step from the industry in response to this challenge.