AI deepfakes threaten public trust, warns Media Medic

Mon, 11th Nov 2024

Media Medic warns that AI deepfakes pose a substantial risk to public trust, with the capacity to disrupt legal proceedings, defame individuals, and spread disinformation that could incite violence.

Ben Clayton, Chief Executive Officer of Media Medic, expressed concern over the escalating sophistication of AI technologies, noting their impact on the legal and public sectors. He commented, "AI deepfakes have evolved beyond simple internet hoaxes. They now represent a profound risk to public trust, capable of disrupting legal cases, defaming individuals, and spreading disinformation with lasting impact."

The company reports a notable surge in politically targeted AI-generated content. Clayton explained, "In recent months we have seen a spike in AI-driven content aimed at influencing public opinion and discrediting political figures. The ability of deepfakes to mimic real people with uncanny accuracy has created fertile ground for disinformation campaigns that can mislead voters and stoke social tensions. Media Medic has been fielding numerous calls from legal firms and public advocacy groups concerned about identifying and countering these threats."

Media Medic describes deepfakes as increasingly sophisticated tools in disinformation efforts capable of inciting social and political unrest. According to Clayton, "Deepfakes are increasingly being used as powerful tools for disinformation that can incite chaos and hatred. Fabricated videos or audio clips that falsely depict inflammatory statements or actions have the potential to trigger unrest, particularly in already tense environments. When manipulated media circulates widely on social media or within specific communities, it can amplify anger and provoke violence in real life."

As the technology advances rapidly, legal analysts face growing challenges in verifying the authenticity of digital content. Clayton predicted, "If deepfake technology keeps advancing without any checks, we could soon find ourselves in a world where telling what's real from what's fake becomes almost impossible for most people."

This development raises profound concerns about the erosion of trust in media and communication systems. Clayton said, "This loss of trust in media, public figures, and even basic communication could throw society into turmoil, with everyone doubting what they see and hear online. Important messages and public figures could constantly be questioned, leading to frustration and less faith in leaders and justice systems."

He added a warning, "If we don't take action now, we'll see more disinformation campaigns that stir up violence and social unrest. Deepfakes could easily become a go-to weapon for bad actors looking to create chaos and discredit people."

In response to these threats, Media Medic is enhancing its forensic analysis capabilities. Clayton stressed the urgency for legal sectors to remain vigilant, saying, "Legal firms can't afford to be complacent. The stakes are too high. We need to recognise the threat AI deepfakes pose and take immediate steps to ensure that justice isn't compromised by this digital deception."

To help industries identify AI-generated content early, Media Medic recommends three tactics: examining unusual artefacts, cross-referencing with known data, and using advanced AI detection tools. These methods can help detect the subtle glitches and inconsistencies typical of AI-created media, safeguarding against the potential harm of misinformation. A simple illustration of the first tactic follows.
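Media Medic's own tooling is not public, so the sketch below is only a minimal Python illustration of the "unusual artefacts" tactic, not the company's method. It draws on a published observation that AI-generated images often deviate from the smooth spectral falloff of natural photographs. The file name, bin count, and the quarter-spectrum cut-off are illustrative assumptions.

```python
# Minimal sketch of the "unusual artefacts" tactic: a frequency-domain check.
# Generated images often show anomalies in their high-frequency spectrum.
# The input file and thresholds below are hypothetical, for demonstration only.

import numpy as np
from PIL import Image


def azimuthal_power_spectrum(gray: np.ndarray, bins: int = 64) -> np.ndarray:
    """Radially averaged power spectrum of a 2D grayscale image."""
    fft = np.fft.fftshift(np.fft.fft2(gray))
    power = np.abs(fft) ** 2
    h, w = gray.shape
    y, x = np.indices((h, w))
    r = np.hypot(x - w / 2, y - h / 2)
    r_norm = r / r.max()  # normalise radius to [0, 1]
    profile, _ = np.histogram(r_norm, bins=bins, weights=power)
    counts, _ = np.histogram(r_norm, bins=bins)
    return profile / np.maximum(counts, 1)  # mean power per radial band


def high_frequency_ratio(path: str) -> float:
    """Share of spectral energy in the top quarter of spatial frequencies."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = azimuthal_power_spectrum(gray)
    return spectrum[-len(spectrum) // 4:].sum() / spectrum.sum()


if __name__ == "__main__":
    ratio = high_frequency_ratio("suspect_frame.png")  # hypothetical input
    # Natural photos typically show a smooth power-law falloff; a ratio far
    # outside the range measured on known-genuine footage warrants review.
    print(f"High-frequency energy share: {ratio:.4f}")
```

In practice such a score is only a triage signal: a sensible workflow compares it against a baseline built from known-genuine footage (the second tactic, cross-referencing with known data) before escalating to dedicated detection models.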
