
When AI goes rogue - a look into its possible futures

28 May 2018

What happens when artificial intelligence (AI) goes bad? According to the Electronic Frontier Foundation, AI and machine learning will bring benefits in diverse areas such as transport, health, art and science, but we’ve already seen things go horribly wrong.

Today’s computers are inherently insecure, so they’re a poor choice for high-stakes machine learning systems and AI – and according to the Electronic Frontier Foundation, we need to consider the implications these new technologies may have for computer security.

Earlier this year the Electronic Frontier Foundation was one of the institutions behind a report called The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.

Also involved in the report were the Future of Humanity Institute at the University of Oxford, the Centre for the Study of Existential Risk at the University of Cambridge, the Center for a New American Security, and OpenAI.

The report looks at AI’s potential impact on digital security, physical security, and political security.

It says AI has specific security-relevant properties, including its dual use for civilian and military purposes; its scalability; the ease with which its algorithms can be rapidly distributed; and its ability to exceed human capabilities.

These properties can expand existing threats, introduce new threats, and alter the typical character of threats, allowing attacks to be more versatile, effective, and targeted.

In terms of digital security, AI could make email attacks such as spear phishing more automated – and it could even eliminate the need for the attacker to speak the same language as the target.

“Many important IT systems have evolved over time to be sprawling behemoths, cobbled together from multiple different systems, under-maintained and — as a consequence — insecure,” the report notes, adding that cybersecurity today is largely labour-constrained.

AI could also drive malware behaviour that is impossible for humans to control manually. The Stuxnet malware is a clear example of malware that cannot receive commands from its operators once it has infected its target computers.

In addition to automating social engineering attacks, AI could automatically discover vulnerabilities, automate hacking by evading detection and responding to behavioural changes from the target, mimic human behaviour in denial-of-service attacks, and exploit legitimate AI systems themselves.

Although offensive use of AI has only been publicly disclosed through experiments by white hat hackers, the report says it’s only a matter of time before it is used maliciously – if it is not already happening.

AI could disrupt physical security by repurposing commercial AI systems for terrorism – for example using autonomous vehicles to cause crashes. It could enable distributed swarming attacks for surveillance, and it could increase the scale of attacks.

AI could also affect political security by allowing states to automate surveillance platforms.

“State surveillance powers of nations are extended by automating image and audio processing, permitting the collection, processing, and exploitation of intelligence information at massive scales for myriad purposes, including the suppression of debate,” the report says.

It could also be used to create highly realistic fake videos to support fake news reports, manipulate information availability, automate influence campaigns, and hyper-personalise disinformation campaigns.

The report recommends four approaches to responsible AI use:

1. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.

2. Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.

3. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.

4. The range of stakeholders and domain experts involved in discussions of these challenges should be actively expanded.
