
Overcoming deepfakes with deep learning

04 Jul 2019

Article by Deep Instinct Australia and New Zealand regional sales manager Christopher Brown

If the public outcry against fake news wasn’t enough to destabilise the media, business and our democratic systems, and to sow deep mistrust of them, there are now deepfakes.

In the past couple of weeks, there have been high-profile examples of deepfakes, in which deep learning, an advanced subset of AI, is used to manipulate videos into fakes that look authentic.

Most notable was the video of Facebook CEO Mark Zuckerberg declaring, “whoever controls the data, controls the truth”.

This video was produced in response to a prior doctored video of Nancy Pelosi, in which the House Speaker was made to appear drunk by having her speech slurred. Facebook refused to remove the video.

To test Facebook’s resolve not to remove posts despite the misinformation, the Zuckerberg video was created, in which he appears to be talking about Facebook’s plans for world domination.

In its most sophisticated form, deepfake refers to the AI rendering of fake videos that can’t be detected by the naked eye, because the videos look entirely authentic.

This is a real source of concern, because if you can’t distinguish between real videos and AI-generated ones, how can you know whether anything you watch is real?

As the technology improves, this risk grows, because differences that are currently discernible from the original footage will become indistinguishable.

The methodology for creating deepfake videos varies considerably.

Some are trivial: the video of Nancy Pelosi, for instance, achieved its slurred-speech effect simply by slowing the video to 70% of its original speed.

However, the most sophisticated deepfakes use deep learning.

For instance, one method involves an autoencoder architecture that “compresses” an image to its basic features (the latent vector) and then “decompresses” it back using the basic features of the different face you want to paste onto the original image.
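The structure of that method can be sketched in a few lines. This is a toy illustration only, with made-up dimensions and random (untrained) linear maps standing in for the deep convolutional networks a real deepfake pipeline would train on thousands of frames; what it shows is the architecture: one shared encoder, and a separate decoder per identity, so that encoding person A and decoding with person B’s decoder pastes B’s features onto A’s frame.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a 64-dimensional "image" compressed to an 8-dimensional latent vector.
IMG_DIM, LATENT_DIM = 64, 8

# One shared encoder learns the common structure of faces...
encoder = rng.standard_normal((LATENT_DIM, IMG_DIM)) / np.sqrt(IMG_DIM)

# ...while each identity gets its own decoder that reconstructs that person's features.
decoder_a = rng.standard_normal((IMG_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)
decoder_b = rng.standard_normal((IMG_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)

def encode(image: np.ndarray) -> np.ndarray:
    """'Compress' an image down to its basic features (the latent vector)."""
    return encoder @ image

def decode(latent: np.ndarray, decoder: np.ndarray) -> np.ndarray:
    """'Decompress' a latent vector back into an image via a given decoder."""
    return decoder @ latent

# The swap: take a frame of person A, compress it, then decompress it with
# person B's decoder, rendering B's features in A's pose and expression.
frame_of_a = rng.standard_normal(IMG_DIM)
swapped = decode(encode(frame_of_a), decoder_b)

print(swapped.shape)  # same shape as the input frame: a drop-in fake
```

In practice both decoders are trained against the one shared encoder, which is what forces the latent vector to capture pose and expression rather than identity.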

The threat potential

Deepfakes have the potential to be used for both good and bad.

People’s increasing dependence on social and digital media as their only source of information about the “outside world” can cause the political effect of such videos to increase exponentially.

If you can generate a video of Barack Obama cursing Donald Trump, what stops you from generating a video of a politician disparaging minority groups before an election?

Fake news is effective even after it is proven false.

Thus, the public stain might never be cleansed completely. But politicians aren’t the only ones who should be afraid: what happens if your ex-boyfriend or ex-girlfriend decides to put your face in a revenge porn scene, as was done to Gal Gadot?

The use of deep learning in day-to-day tasks is growing exponentially, and it will be much harder to regulate.

Paving the way forward

Distinguishing between real and fake footage requires analysing the output footage for inconsistencies in its low-level features.

Deep learning classifiers fit this task, due to their ability to inspect the raw features of the image to detect tell-tale signs of fake images or videos.

For instance, in the autoencoder method mentioned above, the decompressed image is of poorer quality (e.g., some pixels lose their actual colour). These differences create inconsistencies that, while invisible to the naked eye, can be detected by a deep learning classifier.
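The intuition can be made concrete with a toy experiment. A real detector is a trained deep classifier; here a simple hand-written statistic stands in for it, and a coarse colour quantisation stands in for the lossy decompression step. Both are illustrative assumptions, not the actual methods. The point is that the fake differs from the real frame by less than the eye can see, yet a low-level measurement separates the two cleanly.

```python
import numpy as np

rng = np.random.default_rng(1)

def lossy_roundtrip(image: np.ndarray, levels: int = 16) -> np.ndarray:
    """Simulate a deepfake's decompression step: some pixels 'lose their
    actual colour' because the decoder can only output a coarse palette."""
    return np.round(image * levels) / levels

def artifact_score(image: np.ndarray, levels: int = 16) -> float:
    """Stand-in for a learned classifier: measure how tightly pixel values
    snap to a coarse colour grid. Decompressed images score near zero."""
    return float(np.mean(np.abs(image * levels - np.round(image * levels))))

real = rng.uniform(0.0, 1.0, size=(32, 32))  # toy "camera" frame
fake = lossy_roundtrip(real)                 # toy "deepfake" frame

# Each pixel changed by at most 1/32 -- invisible to the naked eye --
# yet the low-level statistic separates the two frames cleanly.
print(artifact_score(real) > 0.1)   # True: real pixel values fall anywhere
print(artifact_score(fake) < 1e-9)  # True: fake pixel values sit on the grid
```

A deep learning classifier does the same thing without being told which statistic to compute: trained on examples of both classes, it learns whichever low-level regularities best betray the generator.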

A research group from Adobe and UC Berkeley trained a convolutional neural network classifier to distinguish between real and fake images with 99% accuracy, as opposed to humans, who achieved only 53% accuracy.

However, as in the cybersecurity domain, the first step towards a solution is understanding the problem and its potential impact.

Once the risk of deepfakes is as well known as the risk of malware, one can look to deep learning experts to help implement solutions that outperform human capability and solve this challenge.
