‘Everything you see on the internet is not true’ just got a new meaning with deepfakes.
The spread of misinformation on the internet is no longer limited to text on websites. Now you can’t even trust the audio and video clips out there. The proliferation of deepfake technology has enabled people to alter audio and video footage using advanced AI in just a few clicks.
Just think about it: you can make people appear to say or do things they never did. It is, if you will, the most powerful weapon of misinformation.
What is Deepfake?
In simple language, a deepfake is the 21st century’s version of photoshopping: misinformation on steroids.
You must have seen videos where Barack Obama calls Donald Trump a “complete dipshit”, or Mark Zuckerberg boasts about having “total control of billions of people’s stolen data.” At first, we all thought they were real, but they turned out to be fake. That is what a deepfake does; it makes you believe things that are not real by altering information. Using this, you can literally put words into people’s mouths.
Not just that, you can even create fictional people from scratch using deepfake technology.
Maisy Kinsley and Katie Jones are examples of how convincing fictional characters can be created with deepfake technology. The former is a non-existent Bloomberg journalist, while the latter is a made-up Center for Strategic and International Studies employee.
Sounds crazy? It gets crazier…
You can even deepfake voices by creating voice skins and voice clones. A voice skin converts the sound picked up by a mic into another person’s voice in real time. Voice cloning, meanwhile, takes bits of recorded speech and applies artificial intelligence to learn the speaker’s patterns from the sample, so that new audio can be generated in their voice.
The chief of a UK subsidiary of a German energy firm paid nearly £200,000 into a Hungarian bank account after being phoned by a fraudster who mimicked the German CEO’s voice.
Isn’t this a technology that has existed since the 1990s? Why has it become ‘the problem’ now?
Yes, deepfakes have been used in movies and conferences for a while now. But access has made this a bigger concern today. A technology that was once affordable only to a niche audience has become software that anybody can use.
What Is the Deepfake Challenge?
The increasing use of this technology to make people say things they never would has a very insidious impact.
It can lead to a zero-trust society where people can no longer distinguish truth from falsity, which can further translate into fake events swaying stock prices, influencing voters, and inciting religious tensions. Deepfakes also pose a security threat, with the ability to deceive systems that rely on facial and voice recognition.
Realizing how daunting the effects of such a technology could be, Facebook came up with a challenge to solve this — The Deepfake Challenge.
Why the deepfake challenge? It’s not that we had no technologies for spotting fake videos before, but they worked only on videos modified using celebrity footage, because of the massive amount of material available to train the algorithms. Blockchains have also been used to trace the provenance of videos, pictures, and audio.
However, this open challenge aims to create a technology that everyone can use to spot AI-manipulated videos. Spotting a fake video should be as easy as creating one.
Coming back, Facebook hired 3,500 actors to record 100,000 videos, some of which were altered by swapping in other actors’ faces. The data set was intentionally made more inclusive than other available data sets, because an AI is only as good as the data it’s fed.
Researchers were then given access to this data set to train their algorithms. Impressively, the best models achieved an accuracy rate as high as 82.56% on this data set. But when the same models were tested against unseen footage, they performed poorly, with an average accuracy of 65.18%.
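That gap between seen and unseen accuracy is a classic generalization problem, and a toy sketch can show why it happens. The snippet below is not Facebook’s actual method; the “artifact score” feature and its distributions are invented purely for illustration. A detector tuned to the strong artifacts of familiar fakes loses accuracy when a newer manipulation method leaves fainter traces.

```python
import random

random.seed(0)

def make_dataset(n, fake_shift):
    # Each sample is one hypothetical "artifact score" a detector might
    # extract from a video frame. Real clips cluster near 0.3; fakes sit
    # higher by fake_shift (invented numbers, for illustration only).
    data = []
    for _ in range(n):
        data.append((random.gauss(0.3, 0.1), 0))               # real
        data.append((random.gauss(0.3 + fake_shift, 0.1), 1))  # fake
    return data

def accuracy(data, threshold):
    # Classify as fake when the score exceeds the threshold.
    correct = sum(1 for score, label in data
                  if (score > threshold) == bool(label))
    return correct / len(data)

# "Seen" data: fakes made by a familiar method with strong artifacts.
seen = make_dataset(5000, fake_shift=0.4)
# "Unseen" footage: a newer method leaves much fainter artifacts.
unseen = make_dataset(5000, fake_shift=0.1)

# Pick the threshold that scores best on the seen data.
threshold = max((t / 100 for t in range(100)),
                key=lambda t: accuracy(seen, t))

print(f"seen accuracy:   {accuracy(seen, threshold):.2%}")
print(f"unseen accuracy: {accuracy(unseen, threshold):.2%}")
```

The threshold learned on the seen data separates its fakes almost perfectly, but the same threshold misses most of the subtler unseen fakes, mirroring the drop the challenge results showed.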
What’s Next?
Over 35,000 detection algorithms were submitted for the challenge, and the top-performing ones will be released as open-source code for researchers to build on. With the insights from this challenge, Facebook is also developing its own deepfake detection technology.
But for now, Deepfake is still an ‘unsolved problem.’
However, the good part is we are getting there. This challenge has shown that a problem like deepfakes can be solved, and the level of participation indicates real interest in solving it. So it’s only a matter of time before we get better at detecting deepfakes.
At the same time, knowing that a technology as novel as AI could foster a trustless society screams for regulation.