The internet has come a long way since it first entered our lives. In this digital-first era, the explosion in the number of devices and websites has brought with it an unprecedented number of cyber-security attacks. The question now is how experts can decipher which websites are good, which are bad, and which are neither, falling somewhere in between. Even more importantly, they need to consider the best approach to maintaining the security of their own domains.
To put things into perspective, when the internet first sprang into existence some 35 years ago, the IPv4 address space allowed for a mere 4.29 billion addresses, and it would have been unfathomable to imagine that by 2018, Internet Protocol version 6 (IPv6) would be capable of supporting roughly 340 undecillion (about 3.4 x 10^38) addresses.
In the current environment, where DDoS attacks on DNS systems have skyrocketed in the past year, these considerations around good and bad domains are crucial. According to a recent Neustar survey, 82% of APAC-based organisations reported experiencing at least two DDoS attacks within the past 12 months, with nearly 45% attacked more than five times.
This is further supported by data from Arbor Networks, which reported that over 2.25 million DDoS attacks hit APAC organisations in 2017. This frequency and intensity of DDoS attacks highlights the volatility of cyber-attackers and their willingness to wreak havoc on an organisation's DNS.
The current scenario
In the past, the most effective way to protect an organisation's online presence was to separate all web traffic into two categories, 'good' and 'bad', through the practice of whitelisting and blacklisting. Blacklisting works by denying known malicious sources access to an organisation's computer systems or networks.
It is traditionally considered inferior to whitelisting, which enables an organisation to select and approve trusted processes and sources. Many cyber-security experts recommend whitelisting as the best approach to keeping an organisation safe from malware and other cyber-threats.
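The contrast between the two models can be sketched in a few lines; the domain names and list contents below are hypothetical placeholders, not real threat-intelligence data:

```python
# Hypothetical example lists: known-bad and explicitly approved domains.
BLACKLIST = {"malware.example", "phish.example"}
WHITELIST = {"intranet.example", "partner.example"}

def blacklist_allows(domain: str) -> bool:
    """Blacklisting: allow everything except known-bad domains (default-allow)."""
    return domain not in BLACKLIST

def whitelist_allows(domain: str) -> bool:
    """Whitelisting: allow only approved domains (default-deny)."""
    return domain in WHITELIST

# An unknown 'grey' domain slips past a blacklist but is stopped by a whitelist.
print(blacklist_allows("unknown.example"))  # True
print(whitelist_allows("unknown.example"))  # False
```

The default-deny stance is why experts consider whitelisting stronger: a domain that appears in neither list is blocked rather than waved through.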
However, both practices raise the question: where does this leave websites that fall into neither category? Furthermore, how do we know how to protect against these types of sites? These 'grey websites', as they are commonly known, are becoming increasingly prominent as a result of the adoption of 'cloud-first' strategies and the strong push towards digitalisation. While this is broadly seen as a shift in the right direction, it reinforces the vital importance of protecting the business's Domain Name System (DNS).
Being able to identify these grey entities is the first and most important step in ensuring the security of your DNS. It is, therefore, vital that an organisation takes a holistic approach to monitoring its inbound traffic. In addition, it needs processes in place to monitor all web activity and build a comprehensive database of DNS names, IP addresses and timestamps, which can then be used to automatically determine whether web traffic is legitimate or suspect.
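The triage described above can be sketched as a simple lookup over logged DNS records. This is a minimal illustration, not a production classifier: the record structure, list contents, and domain names are assumptions for the example (the IPs come from the reserved documentation ranges):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DnsRecord:
    """One logged observation: DNS name, resolved IP, and timestamp."""
    domain: str
    ip: str
    ts: datetime

# Hypothetical reputation lists built from monitored traffic.
KNOWN_GOOD = {"partner.example"}
KNOWN_BAD = {"phish.example"}

def triage(record: DnsRecord) -> str:
    """Classify a record as 'good', 'bad', or 'grey' (unknown, flag for review)."""
    if record.domain in KNOWN_BAD:
        return "bad"
    if record.domain in KNOWN_GOOD:
        return "good"
    return "grey"

log = [
    DnsRecord("partner.example", "203.0.113.10", datetime.now(timezone.utc)),
    DnsRecord("cdn-new.example", "198.51.100.7", datetime.now(timezone.utc)),
]
for rec in log:
    print(rec.domain, triage(rec))  # partner.example good / cdn-new.example grey
```

The point of the 'grey' bucket is operational: rather than silently allowing or blocking unknown domains, they are surfaced for further analysis.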
In light of this, organisations should adopt not only a multifaceted approach to DNS protection but also, in most cases, layered defences, including protections against both network-layer and application-layer attacks. A combination of hardware and cloud-based mitigation offers better protection from all angles.
Ultimately, the ability to improve decision-making about DNS activities is only one of many tools that a cyber-security professional can use to secure their organisation's DNS. As the cyber landscape continues to evolve and the number of websites and devices keeps increasing, being able to identify and handle black, white and shades of grey will be imperative to maintaining the security of an organisation's DNS and online assets.