Technology is powerful: it can make you believe in something that doesn't even exist.
If you thought technology was all about the positives, welcome to its darker side. Manipulated videos and digital representations, backed by sophisticated artificial intelligence, can fabricate scenes that appear insanely real and serve frightening, malicious purposes.
Deepfake technology is built on generative adversarial networks (GANs), a class of machine learning frameworks introduced in 2014 by Ian Goodfellow and his colleagues. Break "deepfake" into "deep" and "fake" and the meaning is plain: a model that uses deep learning (a branch of machine learning), applying neural networks to massive data sets, to create fake audio, video, or images from the real ones supplied as input.
So how exactly does a deepfake model work? A GAN pits two artificial intelligence algorithms against each other. The first, the generator, is fed random noise and turns it into an image. This synthetic image is mixed into a stream of real images fed to the second algorithm, the discriminator, which tries to tell real from fake. The early synthetic images look nothing like faces, but as the two algorithms are iterated countless times, both the discriminator and the generator improve. After many cycles of feedback, the generator starts producing persuasive counterfeits.
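The adversarial loop described above can be sketched in a few lines. The toy below is a deliberate simplification (my own illustration, not a real deepfake pipeline): instead of images, the generator learns to mimic samples from a 1-D Gaussian, and both "networks" are single-parameter models with hand-derived gradients. Real deepfake GANs use deep convolutional networks, but the generator-versus-discriminator structure is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D GAN: generator g(z) = a*z + b tries to mimic samples from N(3, 0.5);
# discriminator D(x) = sigmoid(w*x + c) tries to tell real samples from fakes.

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr, batch = 0.03, 64

for step in range(3000):
    # --- discriminator step: maximise log D(real) + log(1 - D(fake)) ---
    real = rng.normal(3.0, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    # gradient ascent on the discriminator objective
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # --- generator step: maximise log D(fake) (non-saturating loss) ---
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    dx = (1 - d_fake) * w            # gradient flowing back into each fake sample
    a += lr * np.mean(dx * z)        # fake = a*z + b, so chain rule through a, b
    b += lr * np.mean(dx)

print(f"generator now samples around {b:.2f} (real data is centred at 3.0)")
```

After training, the generator's output distribution has drifted from its starting point (centred at 0) toward the real data (centred at 3), purely because the discriminator's feedback told it which samples looked "real". That feedback loop, scaled up to faces and video frames, is what produces a deepfake.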
Voila, you got a Deepfake!
Dangerous and Fraudulent
How dangerous is a deepfake? Technology experts believe deepfakes could become a lethal weapon for fake-news purveyors with malicious objectives, from influencing stock prices to swaying elections.
Deepfake technology has been weaponised to create voice clones, or voice skins, of public figures. Who can forget the transfer of £200,000 into a Hungarian bank account in 2019? The CEO of a UK-based energy firm thought the person on the other end of the call was his boss, the chief executive of the firm's parent company. The imposter, using a cloned voice, urgently demanded that the funds be sent to a Hungarian supplier within the hour. Scams powered by AI are a new class of threat facing mankind.
Even a former president of the USA fell prey to this malicious technology. Deepfake algorithms pasted the mouth of American actor Jordan Peele over Barack Obama's, replacing the former president's jawline with one that followed Peele's mouth movements. The model was then refined in FakeApp through more than 50 hours of automatic processing, and the results astonished the world.
In another deepfake stunt, Bill Posters and Daniel Howe, in partnership with the advertising company Canny, created a sinister video of Facebook founder Mark Zuckerberg. Uploaded to Instagram, it showed Zuckerberg saying "whoever controls the data, controls the future", words that were entirely fabricated.
How to Spot a Deepfake?
Creating a realistic fake video once demanded considerable skill; deep learning models powered by AI have removed that barrier. Unfortunately, this means fraudulent video and audio can now be produced by anyone, from amateur enthusiasts to academic and industrial researchers, and used to cause mayhem.
In 2018, US researchers discovered that deepfake faces don't blink the way a normal human eye does. The finding initially looked like a triumph, but no sooner was it published than deepfakes with blinking eyes appeared, and the cat-and-mouse game only gets worse.
Poor-quality deepfakes are still easy to spot: bad lip-synching, an irregular jawline, patchy skin tones. As the technology progresses, however, deepfakes are shedding these telltale flaws and becoming far harder to detect.
AI can make deepfakes and detect them too. It’s a two-sided coin!
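As a toy illustration of one detection heuristic mentioned above, the blink check can be sketched as follows. The sketch below is my own simplified illustration: it assumes per-frame eye landmarks are already available (in practice they would come from a face landmark detector such as dlib or MediaPipe), computes the eye aspect ratio (EAR) from them, and counts sustained dips below a threshold as blinks. A long clip of a talking face with zero blinks would be a warning sign.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR for one eye, given 6 (x, y) landmarks as a (6, 2) array:
    the ratio of the eye's vertical openings to its horizontal width.
    It is high while the eye is open and drops sharply during a blink."""
    v1 = np.linalg.norm(eye[1] - eye[5])   # first vertical distance
    v2 = np.linalg.norm(eye[2] - eye[4])   # second vertical distance
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count dips of the EAR below `threshold` lasting >= `min_frames`."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

# An open eye sketched as 6 landmarks: wide horizontally, open vertically.
open_eye = np.array([[0, 0], [1, 1], [2, 1], [3, 0], [2, -1], [1, -1]], float)
print(round(eye_aspect_ratio(open_eye), 2))   # -> 0.67

# Synthetic per-frame EAR trace: mostly open eyes (~0.3) with two short blinks.
trace = [0.31] * 20 + [0.12, 0.10, 0.11] + [0.30] * 20 + [0.09, 0.08] + [0.29] * 10
print(count_blinks(trace))                    # -> 2
```

The threshold and minimum-frame values here are illustrative; real detectors tune them per frame rate and combine the blink signal with many other cues, since, as noted above, newer deepfakes have learned to blink.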
Large corporations have pledged their support for detecting and removing deepfake videos. Facebook, AWS, Microsoft, and academics from Cornell Tech, MIT, the University of Oxford, UC Berkeley, the University of Maryland, College Park, and the University at Albany-SUNY collaborated to launch the Deepfake Detection Challenge (DFDC). Its goal is to spur researchers around the world to invent new technologies that can detect deepfakes and other manipulated media.
The urgent need of the hour is to hold deepfake makers and promoters accountable. A thin line separates the good uses from the bad; policymakers and technology honchos must ensure there are enough positive use cases to outweigh the negatives. Otherwise, we may find ourselves dragged into a cyberwar started by an unknown hacker armed with a fabricated AI. Are we prepared to handle that?