Flipping the traditional security model on its head: taking a data-first approach to cybersecurity
Tue, 30th Nov 2021

As organisations become more data-driven, they store more data in on-prem and cloud stores that employees can access from anywhere with phones, tablets and laptops. The security perimeter is much less defined, and endpoints are fungible — very little data “lives” only on your phone or laptop these days.

This digital transformation has flipped the traditional security model, which focused on the perimeter and the endpoint, on its head. Instead of focusing from the outside in, organisations are starting to think from the inside out: data-first security.

Data protection is intuitively simple but immensely complex in practice. Why intuitively simple? Because if you can answer “yes” to the following three questions, and keep answering yes, then your data is safe:

  1. Do you know where your important data is stored?
  2. Do you know that only the right people have access to it?
  3. Do you know that they're using data correctly?

These are the three fundamental dimensions of data protection — importance, accessibility and usage.

If you work in IT or IT security, you know that understanding these dimensions isn't simple at all.
You probably also know that if you can't answer yes, these questions lead to others with urgent ramifications for CISOs, compliance personnel, boardrooms and shareholders. Questions such as “where is our sensitive and regulated data?” and “where is it overly accessible and most at risk?”

The answers don't come any easier as data keeps growing on-prem and in the cloud, across applications and data stores that each have their own security models.

Where is our data supposed to be?

The number of places we can put data has exploded over the past few years, and it's common for users to access their data across multiple devices and endpoints.

Most organisations now rely on a combination of cloud applications and cloud infrastructure, alongside their on-premises systems, to function.

Where is the important data?

Even in the realm of these sanctioned applications, the attack surface is large and difficult to visualise and assess in terms of risk. As a result, some organisations choose to focus their efforts by asking employees to tag files or by using automation to identify or classify regulated or sensitive data with the hope of being able to prioritise data protection efforts.
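
As a rough illustration of what that automated identification can look like, here is a minimal Python sketch of pattern-based classification. The patterns, labels and scan path are assumptions made for this example; production classifiers use far larger rule sets, validation such as checksums and keyword proximity, and support for file formats beyond plain text.

import re
from pathlib import Path

# Hypothetical regex rules for two common data types; real classifiers use
# far larger rule sets plus validation (checksums, keyword proximity, OCR).
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "tax_file_number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
}

def classify_file(path: Path) -> set[str]:
    """Return the set of sensitivity labels whose patterns match the file."""
    text = path.read_text(errors="ignore")
    return {label for label, rx in PATTERNS.items() if rx.search(text)}

def scan(root: str) -> dict[str, set[str]]:
    """Walk a directory tree and tag every file that matches a pattern."""
    return {
        str(p): labels
        for p in Path(root).rglob("*")
        if p.is_file() and (labels := classify_file(p))
    }

if __name__ == "__main__":
    # "/data/shares" is a placeholder path for the example.
    for path, labels in sorted(scan("/data/shares").items()):
        print(path, sorted(labels))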

It certainly makes sense to break an enormous problem down into smaller pieces, but the problem has become so large that even the smaller pieces can be overwhelming. Most organisations are surprised at the number of sensitive files and records they find. Thousands of files here, tens of thousands there, and the list will be different tomorrow and the next day.

Those that arrive at this point without a clear plan of action can get stuck over what to do next. Some consider a brute-force approach: moving everything they find somewhere else, deleting whatever can be deleted, or encrypting everything and restricting access to a small group of employees, who then inherit a large problem.

These approaches don't solve the core issue: ensuring that data is accessible only to the right people, also known as the principle of least privilege and a central tenet of Zero Trust.

To make sure access is correct for any data, sensitive or otherwise, you need to see who has access to it in the first place, and that is almost always harder than people realise, especially in the cloud.

Who has access to our important data? Who should have access?

Too often, organisations make decisions about access without the critical context of knowing which data is regulated or sensitive. What often comes as a surprise is how hard it is to see who has access in the first place.

Access to data is handled by permissions or access control lists. While the logic is fairly uniform across applications and data stores, the implementations are all different. The specific actions users can take are described differently in each application, even though they mostly fall into the categories of creating, reading, updating, deleting, or sharing data.
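
To illustrate how differently the same operations can be named, the sketch below maps a handful of store-specific action labels onto the common create/read/update/delete/share categories. The entries are illustrative assumptions rather than an authoritative catalogue of any product's permission model.

# Hypothetical action names from three kinds of data store, mapped onto the
# five common categories; real products use many more actions and far finer
# granularity than this.
NORMALISED_ACTIONS = {
    ("sharepoint", "AddItems"): "create",
    ("sharepoint", "ViewItems"): "read",
    ("s3", "GetObject"): "read",
    ("s3", "PutObject"): "update",
    ("s3", "DeleteObject"): "delete",
    ("gdrive", "reader"): "read",
    ("gdrive", "writer"): "update",
    ("gdrive", "can_share"): "share",
}

def normalise(source: str, action: str) -> str:
    """Translate a store-specific action into a common category."""
    return NORMALISED_ACTIONS.get((source, action), "unknown")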

On top of these differences, calculating effective rights for a given object or user can be very complex and varies greatly between data stores. Attributes such as object-specific permissions, group relationships, hierarchical inheritance, roles and role hierarchies, and system-wide settings all add to the complexity.

To correctly understand each user's access permissions, all these attributes and functional relationships must be normalised across data stores and applications, which is only practical with automation. Without it, determining who has access to data is an impossibly time-consuming task, and the same gap impairs other day-to-day work such as incident response, troubleshooting and audit reporting.
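
As a sketch of why effective access is so hard to work out by hand, the following resolves a user's permissions on an object by expanding nested group membership and walking folder inheritance. The data model is invented for illustration; real stores layer roles, deny rules, sharing links and tenant-wide settings on top of this.

from dataclasses import dataclass, field

@dataclass
class Principal:
    name: str
    member_of: set[str] = field(default_factory=set)        # direct group memberships

@dataclass
class Node:
    name: str
    parent: "Node | None" = None
    inherits: bool = True                                    # inherit the parent's ACL?
    acl: dict[str, set[str]] = field(default_factory=dict)   # principal -> granted actions

# Group nesting: each group maps to the groups it is itself a member of.
GROUPS: dict[str, set[str]] = {
    "Finance": {"AllStaff"},
    "AllStaff": set(),
}

def expand_identities(principal: Principal) -> set[str]:
    """Return the principal plus every group reachable through nesting."""
    seen, queue = {principal.name}, list(principal.member_of)
    while queue:
        group = queue.pop()
        if group not in seen:
            seen.add(group)
            queue.extend(GROUPS.get(group, set()))
    return seen

def effective_actions(principal: Principal, node: Node) -> set[str]:
    """Union of actions granted anywhere up the inheritance chain."""
    identities = expand_identities(principal)
    actions: set[str] = set()
    current = node
    while current is not None:
        for who, granted in current.acl.items():
            if who in identities:
                actions |= granted
        current = current.parent if current.inherits else None
    return actions

Even in this simplified model, the answer for a single user and folder depends on nested groups and every ancestor's ACL, which is why working it out manually across thousands of objects simply doesn't scale.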

Is understanding access activity any easier than understanding permissions?

It is not. When considering data security, there are several types of events that pertain directly to data protection, including:

  • Data access events – users creating, reading, changing/updating, deleting or sharing data.
  • Access control and configuration changes – which affect the accessibility of data.
  • Authentication events – which users connected to the data store, from where, and with what kind of authentication (e.g. single or multi-factor).
  • Perimeter events – signals from DNS, VPN gateways and proxies that provide insight into unusual connections.

Because data stores and applications describe these events so differently, it is very difficult to answer questions across them.
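
One common way to make cross-store questions tractable is to translate every source's events into a single normalised schema before analysing them. Below is a minimal sketch; the field names, source labels and the example raw record are assumptions, not any particular product's audit format.

from __future__ import annotations
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class DataEvent:
    """One normalised record that all four event types above can be mapped into."""
    timestamp: datetime
    source: str              # e.g. "onedrive", "file_server", "vpn_gateway"
    event_type: str          # "data_access" | "acl_change" | "authentication" | "perimeter"
    actor: str               # user or service principal
    action: str              # normalised verb: create/read/update/delete/share/login/...
    target: str | None       # object or resource affected, if any
    detail: dict | None = None   # raw, source-specific attributes kept for forensics

# Example: a made-up cloud-storage audit record translated into the common shape.
raw = {"user": "alice@example.com", "op": "FileDownloaded", "item": "/Finance/payroll.xlsx"}
event = DataEvent(
    timestamp=datetime.now(timezone.utc),
    source="onedrive",
    event_type="data_access",
    actor=raw["user"],
    action="read",           # the source-specific "FileDownloaded" normalised to "read"
    target=raw["item"],
    detail=raw,
)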

If any of the three dimensions – sensitivity, permissions and activity – is missing, your security falls flat.

Say you only have the activity dimension and not the sensitivity or permissions dimensions. You might be able to see what data has been taken after a breach, but you won't know how sensitive that data was, who else was in a position to access it, or whether it's been incorrectly exposed to everyone on the internet in the first place.
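
To make that concrete, here is a toy sketch of how the three dimensions might be combined to decide where to act first. The scoring weights and the folder data are invented purely for illustration.

from dataclasses import dataclass

@dataclass
class Folder:
    path: str
    sensitive_records: int    # sensitivity: classified records found inside
    open_to_everyone: bool    # permissions: exposed by an "Everyone"-style grant
    stale_days: int           # activity: days since the data was last used

def risk_score(f: Folder) -> float:
    """Toy prioritisation: sensitive + widely accessible + unused = fix first."""
    score = float(f.sensitive_records)
    if f.open_to_everyone:
        score *= 10           # broad access multiplies the exposure
    if f.stale_days > 90:
        score *= 2            # stale data is a candidate for archiving or lockdown
    return score

folders = [
    Folder("/shares/finance", sensitive_records=4200, open_to_everyone=True, stale_days=120),
    Folder("/shares/marketing", sensitive_records=30, open_to_everyone=False, stale_days=5),
]
for f in sorted(folders, key=risk_score, reverse=True):
    print(f"{risk_score(f):>10.0f}  {f.path}")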

When it comes to data protection, each of these dimensions is needed to ensure robust security.