The Equifax breach has underlined the problem with security testing, whether it is penetration testing, vulnerability scanning, bug hunting, source code analysis or another method.
As John Chambers, CEO of Cisco, once said: “There are two types of companies: those that have been hacked, and those that don't know they have been hacked.” Yet, despite being a multibillion-dollar company with a no doubt significant security budget, Equifax has become the latest company to know it has been hacked.
The details of how this happened are still emerging. However, what we do know at this stage is that the breach occurred in July through a vulnerability in Equifax's website. The vulnerability was in Apache Struts, a third-party web application framework that developers use to build web applications.
There is speculation that the flaw exploited was a critical remote code execution vulnerability, which allowed a hacker to take control of a server remotely. The vulnerability, and a patch for it, were disclosed in March 2017.
In 2016 alone, Equifax spent US$173 million on capital expenditure for “new products and technology infrastructure”, including system security. The global benchmark for cyber security expenditure is seven percent of infrastructure spend, so Equifax potentially spends in the order of US$12 million annually. Despite this spend, it was unable to secure its data.
Equifax hires external security experts to do penetration testing, vulnerability scanning and on-site audits. With all these different methods of testing its security, why was the attack not prevented?
The problem with security testing
Expert penetration testing is the most accurate way to find security vulnerabilities. The challenge, however, is that penetration testing is a manual process, which doesn't scale with the speed of technology development. It is also solely a point-in-time exercise, becoming obsolete as soon as the system changes or a new vulnerability is disclosed. Moreover, manual penetration testing is very time-consuming, taking anywhere from two to four weeks to complete.
Recognising these limitations, the cyber security testing industry has turned to other solutions, including vulnerability scanning and bug bounties. Although these solutions do scale, they are noisy: they produce inaccurate results in the form of false positives and excessive alerts, which lead to alert fatigue and decreased attention.
We have reached a tipping point whereby the increasing volume of security data can no longer be handled effectively by humans. If an organisation with the resources of Equifax can be hacked, what hope do smaller businesses have?
Artificial Intelligence may hold the answer
The cyber security community is increasingly turning to AI and machine learning in its search to improve threat detection and response capabilities.
Cyber-attackers are already one step ahead, leveraging automation to launch attacks, while many organisations are still relying on manual efforts to identify security threats. These traditional methods can take weeks or months to detect problems, during which time attackers can exploit vulnerabilities to compromise systems and extract data.
To address these challenges, and rebalance the playing field, many researchers and technologists are exploring how artificial intelligence (AI) can supercharge day-to-day cyber risk management operations.
While it is said to take 10,000 hours for a human to become an expert at a particular skill, machine learning algorithms can accumulate a lifetime of experience within days of training. Leveraging these algorithms to automate complex cyber security tasks promises to uncover vulnerabilities without requiring additional headcount or improved skills. It may be the only way that organisations both big and small can hope to avoid becoming the next Equifax.
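To make the idea of automated detection concrete, here is a deliberately minimal sketch of the kind of statistical baselining that underpins many automated monitoring tools: learn what “normal” traffic looks like, then flag deviations without a human reviewing every log line. This is an illustrative toy, not Equifax's system or any particular vendor's product, and the data and function names are hypothetical.

```python
import statistics

def flag_anomalies(counts, threshold=3.0):
    """Return the indices of time buckets whose request count deviates
    from the mean by more than `threshold` standard deviations
    (a simple z-score test over the observed baseline)."""
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        # Perfectly uniform traffic: nothing stands out.
        return []
    return [i for i, count in enumerate(counts)
            if abs(count - mean) / stdev > threshold]

# Hourly request counts for a hypothetical endpoint; hour 5 spikes,
# as it might during automated exploitation or data exfiltration.
hourly_requests = [120, 118, 125, 119, 122, 940, 121, 117]
print(flag_anomalies(hourly_requests, threshold=2.0))  # → [5]
```

Real systems replace the z-score with trained models over many features, but the principle is the same: the machine watches continuously and at scale, and humans investigate only the handful of flagged events.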
Article by Daniel Johns, CTO, NIMIS Cybersecurity.