SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers
Story image

Australian firms urged to bolster defences against AI risks

Mon, 14th Oct 2024

Concerns over cybersecurity and artificial intelligence (AI) are rising among Australian organisations, with commentary emerging on how to navigate these challenges.

As Australia marks its annual 'AI Month', the intersection of AI advancements and cybersecurity risks has drawn attention. Reuben Koh, Director of Security Strategy for Asia Pacific & Japan at Akamai Technologies, has provided insight into the measures organisations can take to protect themselves against AI-related threats.

Reuben Koh stated, "Artificial Intelligence (AI) is reshaping the way we live, work and play, and its impact on cybersecurity is profound and complex. It has become more important than ever for organisations now to update their security strategy to include protecting AI systems, countering AI-driven attacks and managing data security used for AI-Ops."

Koh identified two main areas of focus at the intersection of cybersecurity and AI. The first area concerns protecting AI systems themselves. The widespread adoption of AI across sectors brings new security challenges, prompting a need for enhanced security measures.

He outlined three major security risks associated with AI systems. The first is AI data poisoning, where attackers deliberately corrupt the AI's training data, potentially leading to biased or dangerous outcomes.

The second risk is prompt injection. This occurs when attackers manipulate AI models like chatbots to perform unintended actions by inputting crafted data, potentially bypassing safety measures.

The third risk concerns data privacy. AI models require extensive data, including sensitive information, which raises concerns about data protection, misconfiguration, and unauthorised access throughout AI processes.

The second area of focus is protecting against AI-powered attacks. As cybercriminals leverage AI to enhance their tactics, organisations must prepare to defend their systems from increasingly sophisticated AI-driven threats.

Koh explained that AI-enhanced malware is becoming harder to detect and stop as it grows more sophisticated and evasive.

Additionally, AI-powered social engineering poses a threat, with AI tools producing highly realistic phishing scams, including deepfake videos and voices, thereby increasing the danger of such attacks.

AI automation of large segments of the attack chain further enhances the efficiency of cyberattacks, offering cybercriminals reduced time to exploit vulnerabilities, which in turn shortens the window for incident response by defenders.

Koh emphasised, "The adoption of AI can and will inevitably lead to additional security risks. At the same time, the volume of AI-driven attacks is increasing as the technology matures and becomes more mainstream. As AI becomes more and more ubiquitous, understanding how to defend against AI threats and securing AI systems must now become a top priority for organisations."

Follow us on:
Follow us on LinkedIn Follow us on X
Share on:
Share on LinkedIn Share on X