Deepfake Videos: When Good Technology Turns Bad

Ben Lorica

More than a decade ago, leading UK investigative journalist Nick Davies published Flat Earth News, an exposé of how the mass media had abdicated its responsibility to the truth. Davies argued that newsroom pressure to publish more stories faster than their competitors had led to journalists becoming mere “churnalists”, rushing out articles so fast that they could never check on the truth of what they were reporting, writes Ben Lorica.

Shocking as Davies’ revelations seemed to be back in 2008, they do appear pretty tame by today’s standards. We now live in a post-truth world of fake news and ‘alternative facts’ where activists don’t just seek to manipulate the news agenda with PR, but also employ advanced technology to fake images and footage.

A particularly troubling aspect of these ‘deepfake’ videos is their use of Artificial Intelligence (AI) to fabricate people saying or doing things with almost undetectable accuracy.

The end result is that publishers risk running completely erroneous stories – as inaccurate as stating that the world is flat – with little or no ability to check their source material and confirm whether it’s genuine. The rise of unchecked ‘fakery’ has serious implications for our liberal democracy and our ability to understand what’s truly going on in the world. While technology has an important role to play in defeating deepfake videos, we all have a responsibility to change the way we engage with the ‘facts’ we encounter online.

Faking the news 

The technology to manipulate imagery has come a long way since Stalin had people airbrushed from history. Creating convincing yet fake digital content no longer requires advanced skills or a well-resourced (mis)information bureau. Anyone with a degree of technical proficiency can create content that will fool even the experts.

Take the faked footage of Nancy Pelosi earlier this year, which was doctored to make her look incoherent and was viewed two-and-a-half million times before Facebook took it down. This story shows how social media is giving new life to the old aphorism that “a lie can go halfway around the world before the truth has a chance to put its boots on”.

The propagation of lies and misinformation is immeasurably enhanced by platforms like Twitter and Facebook that enable virality. What’s more, the incentives for creating fake content now favour malicious actors, with clear economic and political advantages for disseminating false footage. Put simply, the more shocking or extreme the content, the more people will share it and the longer they will stay on the platform.

Meanwhile, counterfeiters can manipulate the very tools being developed to detect and mitigate deepfake content. Just as the security industry inadvertently supplies software that can be misused for cyber crime, so we also risk the emergence of a parallel media industry – one focused on obfuscation and lies.

Combating the counterfeiters

None of this is inevitable. There are plenty of advanced tools for detecting faked content, including machine learning algorithms that analyse an individual’s style of speech and movement (known as a “soft biometric signature”). Researchers from UC Berkeley and the University of Southern California used this technique to identify deepfakes – including face swaps and ‘digital puppets’ – with at least 92% accuracy.
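
For a feel of how such a detector might be wired together, here is a minimal, purely illustrative sketch in Python. It assumes the behavioural features (blink rate, head-pose variance, mouth-movement statistics and so on) have already been extracted to hypothetical files, and it uses a standard off-the-shelf classifier rather than the researchers’ actual pipeline.

# Illustrative sketch only: trains a simple classifier to separate genuine
# from manipulated clips using pre-extracted "soft biometric" features.
# The feature files and their contents are assumptions, not the UC Berkeley /
# USC researchers' actual method.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Hypothetical inputs: one row per clip, columns are behavioural features
# such as blink rate, head-pose variance and mouth-movement statistics.
X = np.load("clip_features.npy")   # shape: (n_clips, n_features)
y = np.load("clip_labels.npy")     # 1 = genuine, 0 = deepfake

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# A plain two-class SVM keeps the sketch simple; the published work builds
# per-individual models rather than a single generic classifier.
clf = SVC(kernel="rbf")
clf.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))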

Technologies such as AI, machine learning and generative adversarial networks are, of course, crucial in the fight against deepfakes, but it’s just as important that we all learn to think critically about the content we view.

Sadly (but necessarily), we all need to become better at questioning the provenance of videos, articles and imagery. In many cases, this can be as simple as never sharing content that we haven’t actually read or watched ourselves – something that six in ten of us do. We also need to interrogate what we consume, for example by investigating metadata. If you watch a video titled “Boris Johnson in Sri Lanka, June 2014”, it’s worth checking whether the Prime Minister really was there during that month.
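
By way of illustration, here is a hedged example of pulling a clip’s embedded metadata with FFmpeg’s ffprobe tool. The filename is hypothetical, and embedded timestamps can be stripped or forged, so treat any match (or mismatch) as one clue among many rather than proof.

# Inspect container metadata with ffprobe (part of FFmpeg).
# "suspect_clip.mp4" is a hypothetical filename used for illustration.
import json
import subprocess

result = subprocess.run(
    ["ffprobe", "-v", "quiet", "-print_format", "json",
     "-show_format", "-show_streams", "suspect_clip.mp4"],
    capture_output=True, text=True, check=True,
)
info = json.loads(result.stdout)

# Container-level tags often include a creation timestamp and encoder string,
# which can be compared against the claims made in the video's title.
tags = info.get("format", {}).get("tags", {})
print("creation_time:", tags.get("creation_time", "not present"))
print("encoder:      ", tags.get("encoder", "not present"))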

Similarly, snippets can be misleading. It’s always worth watching extended or complete segments, as the Lincoln Memorial video incident shows. Always resist the temptation to pile on when something goes viral: taking the time to investigate content properly can save you from serious embarrassment or even a defamation claim.

Sophisticated as deepfake technology has become, there are still some tell-tale signs that the vigilant viewer can use to identify footage that has been manipulated. These include infrequent or entirely absent eye-blinking, odd-looking lighting and shadows, discolouration, blurriness and distortion, and audio that fails to sync perfectly with lip movements (“read my lips”).
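
As a rough illustration of the blink check, the sketch below counts blinks from a per-frame eye-aspect-ratio series. It assumes that series has already been produced by a facial-landmark library (here it is simulated), and the threshold and the “normal” blink rate are ballpark assumptions for illustration rather than established detection parameters.

# Count blinks from a per-frame eye-aspect-ratio (EAR) series and flag clips
# with a suspiciously low blink rate. The EAR series is simulated here; in
# practice it would come from facial-landmark tracking. Thresholds are rough
# illustrative assumptions.
import numpy as np

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks as runs of consecutive frames where EAR dips below a threshold."""
    below = ear_series < threshold
    blinks, run = 0, 0
    for frame_is_closed in below:
        if frame_is_closed:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

fps, duration_s = 30, 60
ear = 0.3 + 0.02 * np.random.randn(fps * duration_s)  # simulated open-eye signal
blinks_per_minute = count_blinks(ear) / (duration_s / 60)

# People typically blink roughly 15-20 times a minute; a rate near zero in a
# long talking-head clip is a reason to look more closely.
if blinks_per_minute < 5:
    print(f"Only {blinks_per_minute:.0f} blinks/minute - worth closer inspection")
else:
    print(f"{blinks_per_minute:.0f} blinks/minute - within a normal range")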

Unite for truth 

It may be everybody’s responsibility to check content before we share it widely on social platforms, but it will take much more than individual effort if we’re to stamp out the scourge of deepfakes.

At the moment, the odds are stacked firmly in favour of the fakers. As digital forensics expert Hany Farid has pointed out, for every person working to detect misleading content there are another 100 creating it.

Improved regulation of media platforms and other publishers will be key, backed by meaningful sanctions for those with a record of creating and disseminating fake content. We’re seeing regulators beginning to grope towards a solution, for example with the proposed Deepfakes Accountability Act in the US. Problematic as that legislation may be, it at least shows that lawmakers are aware of the problem and are committed to tackling it.

To counter the threat of deepfakes effectively, we need to see much better data sharing so that regulators and researchers can fully understand the nature of the challenge, build better solutions and craft truly effective regulations.

This battle will not be won quickly or easily, but it’s one that we all need to fight. Everyone can do their bit by remaining vigilant to the threat of faked content. If we train our brains to think critically about everything we consume online, we can all help to minimise our involvement in sharing counterfeit content.

The next time you see a video that shocks or surprises you, do a little background digging and watch out for the signs of ‘fakery’ before you share it – and tell your friends and followers whenever you find content that’s as fake as the claim that the world is flat.

Ben Lorica is Chief Data Scientist at O’Reilly
