
A deeper dive into the false positive quandary

Mon, 2nd May 2022

False positives are often used as a key metric by decision-makers trying to gauge the effectiveness of a cybersecurity solution. The reasons for this are understandable: organisations are struggling with overstretched, burnt-out teams facing ever-increasing workloads, and they know the pain involved in dealing with thousands of alerts each day.

With many traditional security protections, there never seem to be enough people or hours in the day to go through every alert and determine whether it is a genuine threat or can simply be dismissed.

Disruptive false positives

False positives may arise from a binary approach to security that focuses exclusively on matching a signature, or on identifying something coming from infrastructure that has been deemed malicious.

In the case of 'bad' public IP addresses, attackers often burn (abandon) these once they have been identified, and the addresses are later acquired by legitimate businesses. The ownership changes, so the address is no longer 'bad', but it may take days or weeks for your threat feeds to tell you so.
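A minimal sketch of that lag, assuming a hypothetical threat feed that records when each indicator was first listed: without an age check, an address that changed hands long ago still raises an alert.

```python
from datetime import datetime, timedelta

# Hypothetical threat-feed entries: indicator -> date it was first listed.
# Real feeds vary in format; this is purely illustrative.
THREAT_FEED = {
    "203.0.113.17": datetime(2021, 11, 2),   # burned by attackers months ago
    "198.51.100.9": datetime(2022, 4, 28),   # recently observed
}

MAX_INDICATOR_AGE = timedelta(days=30)  # assumed re-verification window

def check_ip(ip: str, now: datetime) -> str:
    """Naive signature match versus an age-aware check."""
    listed = THREAT_FEED.get(ip)
    if listed is None:
        return "clean"
    if now - listed > MAX_INDICATOR_AGE:
        # Ownership may have changed; treat as a candidate false positive
        # and queue for re-verification rather than alerting outright.
        return "stale - re-verify before alerting"
    return "malicious"

print(check_ip("203.0.113.17", datetime(2022, 5, 2)))  # stale - re-verify before alerting
print(check_ip("198.51.100.9", datetime(2022, 5, 2)))  # malicious
```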

This is the type of false positive organisations should worry about: not only does it create overhead for security teams (who then have to perform manual checks), but it causes business disruption at a critical time, when new relationships are being formed with partners, suppliers or clients.

Interesting false positives

Compare this with an approach that uses AI to understand how every device normally behaves in order to identify deviations that indicate a threat. Unusual activity may not always indicate a live threat, but can point to interesting findings from a compliance perspective.
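As a simplified illustration of the baselining idea (not Darktrace's actual models), a per-device statistical baseline can flag activity that deviates sharply from that device's own history:

```python
import statistics

# Hypothetical hourly outbound connection counts for one device.
history = [12, 15, 11, 14, 13, 16, 12, 14, 15, 13, 12, 14]

def is_anomalous(observed: int, history: list, threshold: float = 3.0) -> bool:
    """Flag values more than `threshold` standard deviations from this device's own mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # guard against a zero-variance baseline
    return abs(observed - mean) / stdev > threshold

print(is_anomalous(14, history))   # False: within the normal range for this device
print(is_anomalous(240, history))  # True: a sharp deviation worth investigating
```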

Having passwords stored in plain text, using insecure protocols, DNS being tunnelled over HTTPS, and even the use of shadow IT in the cloud are all activities a security team should have visibility over. Many of these findings come as a surprise, since an organisation's existing security stack often doesn't provide this level of granularity.
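A toy rule pass over hypothetical flow records shows the kind of hygiene findings meant here; the protocol names, resolver IPs and hostnames below are illustrative assumptions, not a complete policy:

```python
# Illustrative only: flag hygiene issues in hypothetical flow records.
CLEARTEXT_PROTOCOLS = {"telnet", "ftp", "http"}
PUBLIC_DOH_RESOLVERS = {"1.1.1.1", "8.8.8.8", "9.9.9.9"}  # public resolvers that also serve DoH
SANCTIONED_SAAS = {"approved-storage.example.com"}

flows = [
    {"device": "ws-042", "proto": "telnet", "dst": "10.0.8.3", "host": None},
    {"device": "ws-017", "proto": "https", "dst": "1.1.1.1", "host": None},
    {"device": "ws-101", "proto": "https", "dst": "203.0.113.80",
     "host": "unapproved-storage.example.net"},
]

for f in flows:
    if f["proto"] in CLEARTEXT_PROTOCOLS:
        print(f"{f['device']}: insecure cleartext protocol ({f['proto']})")
    if f["proto"] == "https" and f["dst"] in PUBLIC_DOH_RESOLVERS:
        print(f"{f['device']}: possible DNS tunnelled over HTTPS, bypassing internal DNS")
    if f["host"] and f["host"] not in SANCTIONED_SAAS:
        print(f"{f['device']}: potential shadow IT ({f['host']})")
```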

How AI filters itself

Managing false positives is time-consuming. For every red flag, security teams must correlate various alerts and build out a timeline of an attack to identify if and how an environment was infiltrated and what happened thereafter. This process can take hours with traditional tools.

With AI, that situation changes dramatically. Alerts are prioritised according to severity, and human operators can easily filter out lower-severity alerts that point to compliance issues or other interesting-but-not-immediately-threatening events. On top of that, the AI performs another level of analysis, asking additional questions about individual alerts to work out whether they are part of a broader attack, and stitching multiple alerts together into an overarching incident for a human to review.
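As a rough sketch of the stitching idea (not any vendor's actual algorithm), alerts can be grouped by device and chained within a time window into a single incident for review:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical low-level alerts, each already scored for severity.
alerts = [
    {"device": "srv-db-01", "time": datetime(2022, 5, 2, 9, 1), "type": "unusual admin login", "severity": 0.4},
    {"device": "srv-db-01", "time": datetime(2022, 5, 2, 9, 7), "type": "internal port scan", "severity": 0.5},
    {"device": "srv-db-01", "time": datetime(2022, 5, 2, 9, 20), "type": "bulk file encryption", "severity": 0.9},
    {"device": "ws-210", "time": datetime(2022, 5, 2, 11, 0), "type": "plaintext credentials", "severity": 0.2},
]

WINDOW = timedelta(hours=1)  # assumed correlation window

def stitch(alerts):
    """Group alerts by device, then chain those within WINDOW into one incident."""
    by_device = defaultdict(list)
    for a in sorted(alerts, key=lambda a: a["time"]):
        by_device[a["device"]].append(a)
    incidents = []
    for items in by_device.values():
        current = [items[0]]
        for a in items[1:]:
            if a["time"] - current[-1]["time"] <= WINDOW:
                current.append(a)
            else:
                incidents.append(current)
                current = [a]
        incidents.append(current)
    return incidents

for inc in stitch(alerts):
    score = max(a["severity"] for a in inc)
    steps = " -> ".join(a["type"] for a in inc)
    print(f"{inc[0]['device']} (severity {score}): {steps}")
```

Three related alerts on srv-db-01 collapse into one high-severity incident with its timeline already assembled, while the lone low-severity finding on ws-210 can be filtered out.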

So instead of dealing with hundreds of alerts, an analyst may see just 5-10 a day, with the investigation already done and all the context immediately available.

For example, in the case of ransomware, a single point-in-time analysis may result in a tool detecting encryption and raising an alert to the security team. But ransomware is a multi-stage attack, and encryption on its own is not necessarily malicious. An AI-powered investigation will ask additional questions (sketched in code after the list) such as:

  • How many files were encrypted? 
  • Who initiated the file encryption?
  • How quickly did this happen? 
  • What was seen immediately before and after this activity? 

Why catch rates are misleading

The notion of false positives has become such an issue in the cybersecurity industry that security solutions are often tested independently and assigned a 'catch rate' score, measuring what percentage of the alerts generated are 'real threats' as defined by their appearance on threat-intelligence lists. Solutions with a higher catch rate are often viewed more favourably than others, but this overlooks the fact that there is far more to establishing whether something is malicious than whether a specific signature exists in a database. What about zero-day threats? If their signatures have not yet been pushed down to security solutions, how will those solutions identify them?
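A quick back-of-the-envelope calculation, with made-up numbers, shows why a catch rate measured only against known signatures can flatter a tool:

```python
# Illustrative figures only: a 'catch rate' computed against a list of
# known signatures says nothing about threats absent from that list.
known_threats = 200      # samples present in threat-intelligence lists
known_caught = 196       # flagged by a signature-based tool
zero_days = 15           # live threats with no published signature yet
zero_days_caught = 0     # invisible to a purely signature-based approach

catch_rate = known_caught / known_threats
print(f"Published catch rate: {catch_rate:.0%}")       # 98%

actual_rate = (known_caught + zero_days_caught) / (known_threats + zero_days)
print(f"Rate including zero-days: {actual_rate:.0%}")  # 91%
```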

Summary

It's understandable that false positives play such an important role in the decision-making process, but metrics must be carefully considered in the wider context of what they represent. Not every false positive is the same: a useful insight into something that isn't a live threat but may represent a significant hygiene issue should not be judged by the same measure as something that is disruptive and time-consuming to manage and fix.

The best way to judge a technology is to see how it works in your own environment, and how it copes with the rich and complex datasets that arise in dynamic organisations.

Find more insights and solutions from Darktrace here.
