Snowden: 4 big security and privacy assumptions he undermined
Tue, 20th Sep 2016

I am eager to see Snowden, the movie, for a number of reasons, not least of which is the fact that I have enjoyed some of Oliver Stone's other films and respect his military service. So, I promise to review the movie as soon as I get to see it, but right now I want to discuss four significant security and privacy assumptions that Snowden's actions, and the resulting revelations, have undermined.

Assumption 1: Organisations can keep secrets digitally

Before June 2013, most Americans who had heard of the NSA knew it was very secretive. Accordingly, most assumed that it was a very secure organisation, known for hiring the brightest minds in cryptography and other security disciplines.

Indeed, keeping secrets was implied right there in the mission statement: “to protect US national security systems and to produce foreign signals intelligence” (as recorded in an archive.org snapshot of the nsa.gov homepage taken in May 2013). The agency refers to itself as the National Security Agency / Central Security Service (NSA/CSS). In other words, it is upfront about its responsibility to be a source, if not the source, of cybersecurity for the US government and the nation. Hence the tagline: “Defending our Nation. Securing the Future.”

Consequently, it came as a shock to a lot of people that Snowden, an IT professional working for the NSA as an outside contractor, was able to gather and exfiltrate huge amounts of very secret information from his desk within the agency's Hawaii facility. How was this even possible?

Well, Snowden was a trusted user, with extensive access to sensitive systems, and it is hard for any organisation to keep secrets when a trusted insider decides to expose them. Yet this challenge is not new; it predates computers and network connectivity. What the Snowden breach showed us is how much technology has increased the difficulty of meeting this challenge: by many orders of magnitude.

Just think about the time, space, and resources required to copy, store, and move 400,000 pieces of paper from Hawaii to Hong Kong versus doing the same with 400,000 pages of electronic documentation.
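To make that contrast concrete, here is a rough back-of-envelope calculation in Python. The per-page figures (about 5 grams per sheet of paper and about 100 KB per page as a compressed document) are illustrative assumptions, not figures from the actual breach:

# Rough comparison: moving 400,000 pages on paper vs. on disk.
# All per-page figures are illustrative assumptions.
PAGES = 400_000

# Paper: a sheet of standard office paper weighs roughly 5 grams.
GRAMS_PER_SHEET = 5
paper_kg = PAGES * GRAMS_PER_SHEET / 1000
print(f"Paper: about {paper_kg:,.0f} kg ({paper_kg / 1000:.1f} metric tons) to ship")

# Digital: assume roughly 100 KB per page as a compressed document.
KB_PER_PAGE = 100
digital_gb = PAGES * KB_PER_PAGE / 1_000_000
print(f"Digital: about {digital_gb:.0f} GB, small enough for a single thumb drive")

On those assumptions, the paper version is roughly two metric tons of freight; the digital version fits in a pocket.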

Unfortunately, I'm not sure that all organisations have factored the full implications of this massive digital transformation into the way they do business. I would argue that whenever any part of your operations or business strategy depends upon maintaining digital secrecy, there is a serious risk. This could be something as seemingly trivial as the use of email to discuss clients, if exposure of those emails could jeopardise the business. And I would hope that the Snowden breach has deterred companies from storing details of questionable business practices in PowerPoint slides.

The point is that companies need to make sure that their risk analysis of business decisions is both thorough and realistic when it comes to the possibility of digital compromise. The ease with which digital copies of information can be made and disseminated – even by relatively unskilled individuals (think of Manning copying hundreds of thousands of classified documents onto a CD labelled Lady Gaga) – has created an entirely different reality from the analogue world we worked in just a few decades ago. The list of options an insider can choose from if he or she decides to share digital secrets with the outside world is long and continues to grow.

Assumption 2: External attackers are the biggest threat

These days, any computer system attached to the Internet is subject to external attack. For example, if you put up a website in the US, it is usually just a matter of minutes until it is hit with unauthorised attempts to access the server on which it is running.

These attempts often come from IP addresses registered in distant lands, like China and Ukraine, and represent automated scans initiated by folks who want to gain access to other people's computers for nefarious purposes. That is why the external threat to an organisation's data security, the threat from the outsider, often appears to be greater than the threat from the insider. Or at least, that has been the case ever since companies started to connect their systems to the Internet in large numbers. But it was not always so.
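You can observe this background noise on almost any Internet-facing Linux server. Here is a minimal sketch that counts failed SSH login attempts by source IP; it assumes a Debian-style /var/log/auth.log and the standard sshd “Failed password” message format, so the path and pattern may need adjusting for your system:

# Count failed SSH login attempts by source IP, a crude measure of the
# automated scanning that hits any Internet-facing server.
# Assumes a Debian-style /var/log/auth.log (reading it usually requires root).
import re
from collections import Counter

LOG_PATH = "/var/log/auth.log"  # adjust for your distribution
PATTERN = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

attempts = Counter()
with open(LOG_PATH) as log:
    for line in log:
        match = PATTERN.search(line)
        if match:
            attempts[match.group(1)] += 1

for ip, count in attempts.most_common(10):
    print(f"{count:6d} failed attempts from {ip}")

Run something like this on a freshly provisioned server and the counters typically start climbing within minutes, which is exactly the pattern described above.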

If you've followed the history of information security you know that some of the very first “computer crimes” were committed by insiders (there are quotes around computer crimes because some of them were committed before computer crime laws were in place).

The first person convicted for damaging data with malicious code was an insider, a programmer at an insurance company who in 1985 wrote and deployed a logic bomb that destroyed 168,000 records after he was fired. Computer security history buffs will also be aware of the longstanding insider/outsider debate.

As best I can tell from my research, this started with a comment by someone in law enforcement back in the 1980s who applied the Pareto principle to what was then called computer fraud and abuse, saying that 80% of it was down to insiders and only 20% to outsiders.

When the Computer Security Institute (CSI) started conducting an annual ‘Computer Crime and Security Survey' in 1996, a recurring theme was this ratio between internal and external threat. When would external overtake internal? As internal systems were increasingly exposed to the outside world and reports of external attacks – hackers breaking into company computers – began to multiply, security professionals were caught in a dilemma that persists to this day: how to get organisations to ramp up defences against external attackers while not losing sight of the internal threat.

The dilemma is still with us, and Snowden reminded us all that we neglect the insider threat at our peril.

Assumption 3: Digital communications are private and secure

Some of my oldest friends in the information security community have always operated under the assumption that anything they communicated digitally could be intercepted.

Personally, I recall my frequently reiterated advice to readers back in the early days of email: never send an email that you wouldn't want your mother to read. My thinking at the time was influenced more by the unreliability of email systems and email users than by the machinery of state surveillance.

But in the late 1980s I had studied up, as far as possible, on the activities of the NSA and GCHQ as I researched my first computer security book (I remember naively calling GCHQ to ask for information about TEMPEST and getting this response: “There is no such thing sir and what did you say your name was?”). I read Bamford's The Puzzle Palace and Schwartau's Information Warfare, and anything I could find about Echelon. But I quickly learned that most citizens of the US and the UK were not ready to hear that their government was eavesdropping on them at scale.

And so in some circles there was quite a lot of “I told you so” going on after Mr. Snowden started releasing internal NSA and GCHQ documents to the press. Here at last was solid confirmation of suspicions that many had kept to themselves for fear of being dismissed as paranoid. Yes, parts of our governments really were trying to monitor all digital communications, seeking to “collect it all” as then-head of NSA/CSS Gen. Keith Alexander put it.

As the revelations of secret mass surveillance rolled on throughout the second half of 2013, there was quite a lot of reaction that can be summed up with this phrase: “If you've done nothing wrong, you have nothing to worry about.” That statement is unhelpful in too many ways to count here, but consider just one: accumulating information about people is a risky business if you're doing it digitally, even when you're doing it legitimately; just ask data breach victims. How do we know it is not going to be compromised?

Assumption 4: Technological innovation is bound to produce solutions to these problems

The Snowden revelations are ongoing, as is the NSA's struggle to keep its own secrets, including its efforts to covertly subvert commercial products to achieve its goals. Recently we have seen some of its hacking tools exposed, reminding us that not all malware comes from criminal gangs in foreign lands.

Our government writes and deploys code designed to gain unauthorised access to systems – something that criminals also do – and code like this doesn't magically become benign just because you think you have the right to gain unauthorised access.

“Righteous malware” is an intentionally oxymoronic term meant to capture the reality that righteousness is in the eye of the beholder, and the beholder could be either the writer of the code or the owner of the system that is under attack.

And that may be one of the biggest lessons to be learned from the Snowden revelations: it is a mistake to assume that good people will do the right thing, in the right way, with no unwanted consequences, if you just give them the authority to proceed in secret, along with a massive budget. Regardless of whether or not any laws were broken, I would argue that the NSA and GCHQ ended up contributing to a perceptible erosion of trust in the privacy and security of digital technology, a phenomenon that threatens to undermine hopes of a better digital tomorrow.

Add in the problem of cybercrime and the current shortage of cyber-skilled workers and what do you get? Some rather dire scenarios start to look plausible. Sure, the world may just limp along, endlessly frustrated by successive generations of flawed technologies that are routinely abused by opportunistic cybercriminals.

Or the world's economy could become mired in endless recession because its citizens have turned their collective back on the productivity promised by digital technologies, the benefits of which were finally eroded past the tipping point by rampant criminal abuse and unfettered government surveillance.