The call is coming from inside the house
Fri, 1st Apr 2022

Security and privacy breaches are accelerating, and the external controls we have been applying to slow the rate are hardly making an impact. The traditional treatment of risk – trying to reduce the likelihood of a breach – can never be the complete answer.

Some breaches are the result of targeted attacks by bad actors, but most originate with ‘trusted insiders'. In some cases, insiders are compromised by bad actors. In many more, they are reckless, or feckless, and spill data by accident. We can reduce the likelihood with careful hiring, training, and observation. But we can't eliminate it.

Risk is the product of likelihood and impact. Something can have an extremely low likelihood of happening, like your parachute failing. But if the impact of that failure is catastrophic, then the risk itself is not low.
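
To make that arithmetic concrete, here is a minimal sketch in Python. The scales and figures are invented for illustration – real frameworks score likelihood and impact against defined matrices rather than multiplying raw numbers – but it shows why a rare, catastrophic event can outrank a frequent, trivial one.

```python
# Illustrative only: risk scored as likelihood x impact, on invented scales
# (likelihood 0-1, impact 0-5). Real frameworks use defined risk matrices,
# but the arithmetic makes the parachute example concrete.

def risk_score(likelihood: float, impact: float) -> float:
    """Risk is the product of likelihood and impact."""
    return likelihood * impact

# A near-impossible event with a catastrophic impact...
parachute_failure = risk_score(likelihood=0.0001, impact=5.0)

# ...still outranks a frequent event with a trivial impact.
paper_cut = risk_score(likelihood=0.9, impact=0.0001)

print(f"parachute failure: {parachute_failure:.4f}")  # 0.0005
print(f"paper cut:         {paper_cut:.5f}")          # 0.00009
```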

We can vet and monitor staff so that there is a low likelihood they will breach our security. But if that once-in-a-blue-moon spill involves our most sensitive data, the impact will be disastrous. So even with the best personnel and perimeter security – both more challenging than ever post-pandemic – we can't manage the risk to an acceptable level.

What we must do, and can now do using artificial intelligence, is reduce the impact.

That means knowing where our highest-risk data is and who is doing what to it. We need to allow people to work and collaborate effectively, but limit their access to the riskiest data. ‘Risk' can be many things, not just security-classified data or Personally Identifiable Information, and to date, it has been hard to quantify. We have relied on individual staff to understand what ‘risky data' looks like and to mark or label it everywhere it appears. But AI is changing this, as the examples below show (a simple sketch of content-based detection follows them).

  • One (unclassified) federal department has identified a range of specific topics in its business that would have adverse outcomes – for international relations, for example – if spilled into the public domain. It has used AI to automatically detect any instance of those topics across its network.
  • One state government department has used AI to find everything related to sexual assault across its legacy child protection databases, raising 60,000 flags in previously unsearchable systems so that those records can be preserved and properly protected.
  • One university has used AI to map its secrecy obligations under Acts and Regulations to identify which data would attract civil or criminal penalties for unauthorised disclosure.
  • Many councils, regulators, and critical industry providers are now using AI to identify spills specific to their risk context so that they can be immediately treated.
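
What these projects share is scanning content rather than trusting labels. The sketch below is a deliberately crude, hypothetical version of that idea in Python – a keyword scan over a file share, where real tools use trained classifiers – but the shape is the same: define the risk topics for your own context, then flag every document that matches.

```python
# A deliberately simple, hypothetical sketch of content-based risk detection:
# scan documents for organisation-specific risk topics instead of trusting
# manual labels. Production tools use trained classifiers, not keyword lists.
import re
from pathlib import Path

# Hypothetical topics an organisation might define for its own risk context.
RISK_TOPICS = {
    "international-relations": re.compile(r"diplomatic cable|bilateral negotiation", re.I),
    "child-protection": re.compile(r"sexual assault|abuse report", re.I),
    "statutory-secrecy": re.compile(r"protected information|secrecy provision", re.I),
}

def scan(root: str) -> list[tuple[str, str]]:
    """Return (path, topic) pairs for every document that matches a risk topic."""
    flags = []
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        for topic, pattern in RISK_TOPICS.items():
            if pattern.search(text):
                flags.append((str(path), topic))
    return flags

if __name__ == "__main__":
    # "/data/shares" is a hypothetical network share.
    for path, topic in scan("/data/shares"):
        print(f"{topic}: {path}")
```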

With AI, we can now know what we have, where it is, and its inherent risk – based on its content, not just a label. We can know who is interacting with it and, importantly, what business and regulatory rules apply to it (and whether they are being met). And we can do all of this automatically, invisibly, across the enterprise.
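
Once data is classified by content, checking it against rules can be made mechanical. The sketch below is a hypothetical illustration: the Record fields and the three rules are invented, but they show how an obligation can be expressed as a pair of questions – does this rule apply to this record, and is it being met?

```python
# A hypothetical sketch of mapping rules to data and checking compliance.
# The Record fields and the rules are invented; real deployments encode
# actual obligations (encryption, retention, preservation) per regulation.
from dataclasses import dataclass

@dataclass
class Record:
    path: str
    topic: str        # assigned by content classification, not a manual label
    encrypted: bool
    age_days: int

# Each rule is a pair of predicates: does it apply, and is it being met?
RULES = {
    "sensitive content must be encrypted at rest":
        (lambda r: r.topic != "none", lambda r: r.encrypted),
    "child-protection records must be preserved":
        (lambda r: r.topic == "child-protection", lambda r: True),
    "sensitive data older than a year must be reviewed for disposal":
        (lambda r: r.topic != "none", lambda r: r.age_days < 365),
}

def violations(record: Record) -> list[str]:
    """Rules that apply to this record but are not being met."""
    return [name for name, (applies, met) in RULES.items()
            if applies(record) and not met(record)]

r = Record("/data/shares/case-041.txt", "child-protection",
           encrypted=False, age_days=900)
print(violations(r))
# ['sensitive content must be encrypted at rest',
#  'sensitive data older than a year must be reviewed for disposal']
```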

Ultimately, it means we can harden (or dispose of) the riskiest data, significantly reducing the impact of an inevitable breach.