SecurityBrief Australia - Technology news for CISOs & cybersecurity decision-makers
Technology should enable people & improve processes
Tue, 2nd Aug 2016

The security industry and security practices in general are obviously far more mature than they were just five years ago, let alone ten. However, whether buying, building or deploying, the ability to rely upon security technologies isn't as mature as we'd all like. While the security industry preaches a combination of people, process and technology, the focus remains on technology, and yet technology alone has been proven time and again not to be enough.

Too often, when a technologist or company looks to solve the next big security problem, the people and processes are an afterthought. Many long-time security practitioners have learned this critical lesson the hard way, and many seasoned CISOs have as well. As the severity of security problems has escalated, and recruiting and keeping skilled staff has become more difficult, many have found that they don't exactly have a CapEx problem, but instead have a head count problem! If this new reality is even half true, how do we leverage technology to truly empower people and improve process, rather than relying on the old crutch of a technology-driven silver bullet?

Australian organisations not only need a detection stack, but also a response stack with perhaps equal investment. The response is as much about people and process as it is technology. As an industry, we have not recognised (or listened to our collective customers telling us) that workflow and usability are every bit as important as the latest threat detection bell or whistle. Certainly that is not as sexy or exciting to talk about, but it is far more valuable to our customers. It's time to realise that the shiniest, most amazing toy is useless if nobody knows how to play with it.

Start At The Network

It's not that the network is superior to an endpoint or other approaches; they all have their pros and cons. However, the network enables insight and inspires action. From the network, you can see all communications immediately and everywhere. You can look behaviourally and with pattern matching, you can look back in time, you can interrupt command and control, and you can perform tests.

No matter how dramatically network infrastructure has changed, all roads lead back to the network. Whether we are talking virtual, wireless, cloud or otherwise, the network is the one place where you can see it all happen. Given the conglomeration of third-party cloud and infrastructure partners, the Internet is the enterprise network today.

Now more than ever, it is critical to be thinking about how your network interacts with the rest of the Internet. For instance, are you hosting P2P or Tor nodes eating up precious bandwidth, or even worse, hosting botnet command and control, allowing for possible data exfiltration or even DDoS attacks originating from your network? Understanding your environment's internal and external relationships can be one of the most beneficial aspects of preparing for both security and availability threats.
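
As a minimal sketch of that kind of relationship check, outbound flow records can be matched against a threat-intelligence list of known Tor relays or botnet command-and-control addresses. The flow record shape and the intel set below are illustrative assumptions, not a real feed format:

```python
# Hypothetical flow records and intel list, for illustration only.
KNOWN_BAD = {"203.0.113.7", "198.51.100.23"}  # placeholder C2/Tor addresses

flows = [
    {"src": "10.0.0.5", "dst": "93.184.216.34", "dst_port": 443},
    {"src": "10.0.0.9", "dst": "203.0.113.7",   "dst_port": 9001},
]

def suspicious_flows(flows, bad_ips):
    """Return outbound flows whose destination matches threat intel."""
    return [f for f in flows if f["dst"] in bad_ips]

for f in suspicious_flows(flows, KNOWN_BAD):
    print(f"{f['src']} -> {f['dst']}:{f['dst_port']} matches threat intel")
```

Real deployments would feed this from NetFlow/IPFIX exports and a live intel feed, but the principle is the same: the network is where these relationships become visible.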

Another great aspect of networks is that there are so many layers to monitor, and so many different deployment models depending on which layer you are interested in. Whether you are using NetFlow to monitor a heavy SSL environment, monitoring from a span port, capturing packets in real time, or running a traditional in-line detection/prevention model, the network provides a diverse environment for customising monitoring and response.
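
To illustrate why flow-level monitoring still pays off in a heavy SSL environment, the sketch below aggregates bytes per destination port from simplified flow records (the tuple layout is an assumption; real NetFlow/IPFIX exports carry many more fields). Even without decrypting anything, the metadata shows where the traffic is going:

```python
from collections import Counter

# Illustrative flow records: (src, dst, dst_port, bytes).
flows = [
    ("10.0.0.5", "93.184.216.34", 443, 120_000),
    ("10.0.0.5", "198.51.100.10", 443, 80_000),
    ("10.0.0.9", "192.0.2.50",     53,  2_000),
]

def bytes_by_port(flows):
    """Sum transferred bytes per destination port."""
    totals = Counter()
    for _src, _dst, port, nbytes in flows:
        totals[port] += nbytes
    return totals

totals = bytes_by_port(flows)
print(totals.most_common(1))  # the dominant port in this toy data is 443
```

The same aggregation could just as easily be keyed by destination address or internal host, which is exactly the kind of pivot a span port or flow collector makes cheap.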

A Campaign Approach Rather Than a Malware Sample Approach

Some in the industry will tell you vulnerabilities aren't necessary for malware to be successful. Others will tell you malware isn't needed for a breach to be successful. What I would tell you is that regardless, an orchestrated attack will not only have a motive, tactics and procedures and all the other kill chain lingo, but will commonly have a way for information to get back to the attacker(s). Yes, there are many infection vectors, but attackers still need to be able to get information back to themselves, and this is almost always done via the network. As I mentioned earlier, one of the most beneficial aspects of the network is being able to take quick and decisive action based on clear insight into network activity.
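
One concrete way that call-home traffic betrays itself on the network is timing: command-and-control beacons often phone home at near-regular intervals, while human-driven traffic is bursty. A crude, hypothetical scoring of that regularity (the timestamps below are invented) might look like:

```python
from statistics import pstdev

def beacon_score(timestamps):
    """Population std dev of inter-arrival gaps; lower = more beacon-like.
    Expects at least three connection timestamps (in seconds)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps)

regular = [0, 60, 120, 181, 240]   # ~60s apart: beacon-like callbacks
bursty  = [0, 5, 300, 310, 900]    # irregular, human-driven browsing

assert beacon_score(regular) < beacon_score(bursty)
```

Production beacon detection is far more involved (jitter, sleep intervals, proxied traffic), but the underlying signal is only observable where the callbacks must travel: the network.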

Many vendors and technologies are focused on seeing a piece of malware and alerting on it. While this is a tried-and-true approach, it has created some problems of its own:

  1. Many of the events are unknowns, or are labelled as generic Trojans, leaving analysts and responders with no sense of the priority behind the alert.
  2. The sheer number of events, with a simple ranking of Low, Medium or High, has left most Australian organisations with an unmanageable number of events to investigate or respond to. This is common messaging from many vendors right now, as it's a serious pain point for many organisations.
  3. If one is dependent on endpoint detection then the challenge becomes covering every endpoint, a potentially insurmountable task.
  4. If one relies upon common network detection, generally via a proxy or slower line speed, the challenge is watching only a limited part of the network.
  5. The primary problem with this model is that it only detects a malware sample at a certain point in time. Whether that sample is important to you can be lost amid so many generic events, and to make matters worse, you have little clue as to whether the command and control, or the originator of the malware, is even still active.
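
The campaign-centric alternative can be sketched simply: instead of treating each sample hash as its own alert, pivot on shared infrastructure. The alert records below are hypothetical, but they show how two "generic Trojan" events collapse into one prioritised campaign when grouped by their command-and-control address:

```python
from collections import defaultdict

# Hypothetical per-sample alerts that look generic in isolation.
alerts = [
    {"host": "pc-12", "sample": "a1f3", "c2": "203.0.113.7"},
    {"host": "pc-31", "sample": "9bd0", "c2": "203.0.113.7"},
    {"host": "pc-07", "sample": "c7e2", "c2": "198.51.100.9"},
]

def group_by_campaign(alerts):
    """Pivot from individual malware samples to shared C2 infrastructure."""
    campaigns = defaultdict(list)
    for a in alerts:
        campaigns[a["c2"]].append(a)
    return campaigns

for c2, members in sorted(group_by_campaign(alerts).items(),
                          key=lambda kv: -len(kv[1])):
    print(f"{c2}: {len(members)} host(s) affected")
```

Ranking by hosts-per-campaign rather than per-sample severity gives responders a priority queue grounded in actual attacker infrastructure, which is the point of the campaign approach.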

With so much focus on detection, storage and sifting of events, response hasn't had nearly enough attention. Thankfully the tide is turning, but again, only because of so many epic failures on the part of security vendors and practitioners along the way. Almost every organisation, regardless of size, seems to have head count problems. It's high time we used technology to make our people and processes more efficient.