Cybersecurity teams' preference for human results shows mistrust in AI
Mistrust in artificial intelligence (AI) continues to manifest itself as the emerging technology spreads to industries spanning the tech world, with cybersecurity being no exception.
A report released today by WhiteHat Security has revealed that while over half of surveyed organisations use artificial intelligence (AI) or machine learning in their security stack, nearly 60% are still more confident in cyberthreat findings verified by humans than by AI.
The research is based on a survey of 102 industry professionals at RSA Conference 2020.
The survey also suggested 75% of respondents use application security tools as part of their security infrastructure, and 40% of these applications use a hybrid AI and human-based verification system.
WhiteHat says the combination of advancing, increasingly numerous security threats and the technology talent gap has made AI and machine learning tools essential to security protocols.
This is somewhat backed up by the research, which found 45% of respondents’ companies lacked a sufficiently staffed cybersecurity team.
More than 70% of respondents agreed that AI-based tools made their cybersecurity teams more efficient by eliminating over 55% of mundane tasks.
AI and ML can be a powerful stress reliever too: nearly 40% of respondents also feel their stress levels have decreased since incorporating AI tools into their security stack.
Of this number, 65% claim AI tools allow them to focus more closely on cyberattack mitigation and preventive measures than before.
But the perceived gulf between human and machine intuition is proving to be the primary barrier to more widespread adoption, with a majority of respondents emphasising skills that the human element provides that AI and machine learning simply cannot match.
Despite the many advantages AI-based technologies offer, respondents also reflected on the benefits the human element brings to cybersecurity teams.
30% of respondents cited intuition as the most important human element, 21% emphasised the importance of creativity, and nearly 20% agreed that previous experience and frame of reference are the most critical human advantage.
“With the growing cyberthreat landscape, it is imperative for security tools and organisations to have a combination of both AI and the human element so there can be continuous risk evaluation with verified results,” says WhiteHat Security chief technology officer Anthony Bettini.
“For all its advantages, AI is still heavily reliant on humans to be successful. Human monitoring and continuous input are required if AI software is to successfully learn and adapt.
“This is why the human element will never be completely eradicated from the security process.”