Integrity360: IT decision-makers concerned by AI advances
Integrity360, the cyber security specialist, has announced the findings of independent research into the risks and benefits of AI in cyber security.
The survey of 205 IT security decision-makers was conducted between 9th and 14th August 2023. It highlights mounting concern over the use of AI, particularly deepfakes, with 68% of respondents worried about cybercriminals using deepfakes to target their organisations.
Brian Martin, Head of Product Development, Innovation and Strategy at Integrity360, says: "The use of AI for cyber attacks is already a threat to businesses, but recognising the future potential and the impact this can have is just the start."
"We've already seen the potential for deepfake technology with the video of Volodymr Zelensky telling Ukrainians to put down their weapons and spreading disinformation."
"This is just one example of the nefarious means in which it can be used, and businesses need to be prepared for how to defend against this and discern what is and isn't real to avoid falling victim to an attack," says Martin.
A majority (59%) of respondents also agree that AI is increasing the number of cyber security attacks. This aligns with the shift in attacks observed over the past year, with 'offensive AI' being used for tasks such as malware creation. It is also being used to generate phishing messages whose content closely mimics the language, tone, and design of legitimate emails.
In line with this, the survey also indicates that businesses recognise the impact AI will have on cyber security, with 46% of respondents disagreeing with the statement that they do not understand the impact of AI on cyber security.
However, when the findings are broken down by job role, the survey suggests that CIOs have the weakest understanding of AI's impact on cyber security, with only 42% disagreeing with the statement.
This highlights a potential knowledge gap among C-level executives, which may have implications for organisations' cyber security strategies and underlines the importance of educational efforts to ensure executives are well informed about AI's role in cyber security.
Furthermore, 61% of respondents expressed apprehension about the growing use of AI, indicating that this is an area of concern across the industry.
Martin says: "AI's role in cyber security is not only a matter of perception but a tangible reality. Conventional cyberattacks will ultimately become obsolete as AI technologies become increasingly available and more appealing and accessible as attackers look to expand their use for AI-enabled cyberattacks."
"As an MSSP, it's essential to ensure businesses are considering how this can be used against them and putting processes in place to protect against these growing threats."
Despite concerns, most respondents (73%) agree that AI is becoming an increasingly important tool for security operations and incident response. This reflects the industry's growing recognition of AI's potential to enhance security practices and the perception that AI can be used defensively and offensively in cyber security.
Moreover, 71% of respondents agree that AI is improving the speed and accuracy of incident response. This is likely due to AI's ability to analyse vast amounts of data and identify threats in real time, which contributes to its effectiveness in incident response.
More than two-thirds (67%) of respondents also believe that using AI improves the efficiency of cyber security operations. AI can automate routine tasks, allowing cyber security professionals to focus on more complex and strategic aspects of their work.
"As AI technologies continue to evolve, their integration into cyber security will follow. Organisations must remain proactive in embracing AI while also addressing the challenges it presents, ensuring that their cyber security defences keep pace," adds Martin.