Experts predict that deepfake videos will be the newest way false information is spread. Some researchers even have a wager on whether they will affect the upcoming elections.
What is a Deepfake?
Deepfakes are a new breed of fake videos that use artificial intelligence (AI) to make falsified footage virtually undetectable by swapping out someone's face and voice for an imposter's. In short, a computer program finds common ground between two faces and stitches one over the other. If the source footage is good enough, the transformation is nearly seamless. Developers aren't deterred by the controversy surrounding deepfakes: videos in which people's faces are digitally pasted onto the bodies of adult-film performers using machine-learning software. Sure, randomly inserting Nicolas Cage's face into movie scenes is pretty funny. But merging the faces of celebrities, politicians, or ex-partners onto the bodies of porn actresses? Not so much.
In other words, the content transferred from one video to another relies not only on mapping the space but also on the order of the frames, to make sure both stay in sync. The researchers use the comedians Stephen Colbert and John Oliver as an example: Colbert is made to look like he is delivering the same speech as Oliver, as his face is used to mimic the small movements of Oliver's head nodding or his mouth speaking.
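The idea of checking both the spatial mapping and the frame order can be sketched in miniature. The snippet below is purely illustrative: the three functions are made-up arithmetic stand-ins for the networks such a system would learn, not the actual method. The "recycle" check translates two source frames into the target domain, predicts the next target frame, maps it back, and compares it with the real next source frame.

```python
# Conceptual sketch of a temporal-consistency ("recycle") check for video
# retargeting. All mappings here are toy arithmetic stand-ins, not real
# learned networks.

def map_to_target(x):
    """Stand-in for the generator that maps a source frame to the target domain."""
    return 2 * x  # toy mapping

def map_to_source(y):
    """Stand-in for the reverse generator (target domain back to source)."""
    return y / 2  # toy inverse of the mapping above

def predict_next(y_prev, y_curr):
    """Stand-in for the temporal predictor: guesses the next target-domain frame."""
    return y_curr + (y_curr - y_prev)  # naive linear extrapolation

def recycle_loss(frames):
    """Penalize mappings that break frame order: map two source frames forward,
    predict the next target frame, map it back, and compare with the actual
    next source frame."""
    total = 0.0
    for t in range(len(frames) - 2):
        y_prev = map_to_target(frames[t])
        y_curr = map_to_target(frames[t + 1])
        y_next_pred = predict_next(y_prev, y_curr)
        x_next_recovered = map_to_source(y_next_pred)
        total += abs(frames[t + 2] - x_next_recovered)
    return total
```

With a perfectly smooth toy sequence like [1, 2, 3, 4] the loss is zero; a sequence whose motion the predictor can't follow accumulates a penalty, which is the pressure that keeps the retargeted video in sync.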
Despite all this, many are still pushing for new algorithms that create even more lifelike fake videos. Researchers from Carnegie Mellon University and Facebook Reality Lab are presenting Recycle-GAN, a generative adversarial system for "unsupervised video retargeting," this week at the European Conference on Computer Vision (ECCV) in Germany.
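At the core of any generative adversarial system is a two-player objective: a generator tries to fool a discriminator that scores samples as real or fake. A minimal sketch of that objective, using illustrative probabilities rather than the outputs of a real network, might look like:

```python
import math

# Minimal sketch of the standard adversarial (GAN) objective.
# p_real / p_fake are the discriminator's probability estimates
# that a sample is real; the values used are illustrative only.

def discriminator_loss(p_real, p_fake):
    """Discriminator wants real samples scored near 1 and fakes near 0."""
    return -math.log(p_real) - math.log(1 - p_fake)

def generator_loss(p_fake):
    """Generator wants the discriminator to score its fakes as real."""
    return -math.log(p_fake)
```

As the generator improves, the discriminator's score on fakes rises, the generator's loss falls, and the discriminator's loss climbs; each side's progress is the other side's training signal.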
Impact of Deepfakes on Our Society
The consensus among researchers is that deepfakes will eventually be used to impact a political election, whether this year or in the near future.
This is much more than a Photoshopped meme or a fake news story. With deepfake videos, algorithms learn the actual audio and visual characteristics of a person, and then, just as with a fake photo, real footage of that person is doctored to replace what they really said or did with a fabricated clip that perfectly mimics them. It's nearly impossible to tell that the video isn't real.
Social media platforms such as Facebook, Twitter, YouTube, and Reddit are prime candidates for deepfake creators to target.
It's such a concern that the September congressional hearings with Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey included questions about deepfake videos, how they manipulate the public, and what the companies are doing about it.
The threat even led the Defense Advanced Research Projects Agency (DARPA) at the Pentagon to embark upon a Media Forensics project to identify deepfakes and other deceptive images.
Deepfakes gained attention earlier this year when BuzzFeed created a video that supposedly showed Obama mocking Trump. The truth was that deepfakes technology was used to superimpose Obama's face onto footage of Hollywood filmmaker Jordan Peele.
While deepfakes began as a way to clumsily misrepresent celebrities in spoofs and sexually explicit videos, creating an undetectable deepfake video is actually very complicated.
Only a few labs around the world have the capacity, because the tools to create deepfake videos are expensive, though much less so than in the past.
Despite this, some researchers have a friendly wager on whether deepfakes will have an impact by the end of this year, with a political candidate being the subject of a deepfake video that receives more than 2 million views before it's determined that it's not real.
Tim Hwang, director of the Ethics and Governance of AI Initiative at the Harvard Berkman-Klein Center and the MIT Media Lab, started the wager to begin a debate to see if his colleagues believed deepfakes would become a threat before the end of 2018, and possibly impact the midterm elections. Hwang said he is in the camp that doesn't believe deepfakes will cause a huge impact before the end of the year.
"It's not ready for primetime yet," Hwang said of deepfakes. "I think people who want to spread disinformation are pragmatic in what's the easiest way to have the biggest effect. And right now, machine learning isn't like that."
The creators of deepfakes use adversarial training to learn how to beat the fake detector techniques, said Paul Resnick, founder and acting director of the Center for Social Media Responsibility at the University of Michigan.
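That cat-and-mouse dynamic can be caricatured in a few lines. Everything here (the threshold, the artifact scores, the step size) is invented for illustration: a detector flags videos whose telltale-artifact score is too high, and a forger uses that feedback to refine its fakes until they slip under the threshold.

```python
# Toy sketch of adversarial training against a fake detector.
# Artifact scores are on an invented 0-100 scale; the threshold and
# step size are hypothetical values chosen for illustration.

DETECTOR_THRESHOLD = 30  # hypothetical score above which a video is flagged

def detector_flags(artifact_level):
    """Flag a video whose artifact score exceeds the detector's threshold."""
    return artifact_level > DETECTOR_THRESHOLD

def adversarial_refine(artifact_level, step=10, max_rounds=20):
    """Simulate the forger using the detector's feedback: each round of
    training reduces telltale artifacts until the fake is no longer flagged."""
    rounds = 0
    while detector_flags(artifact_level) and rounds < max_rounds:
        artifact_level -= step
        rounds += 1
    return artifact_level, rounds
```

The uncomfortable implication is that any published detector becomes a training signal for better forgeries, which is why detection research is a moving target rather than a one-time fix.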
But even if no one could change the masses' minds about a video's veracity, it's important that the people making political and legal decisions (about who's moving missiles or murdering someone) find a way to tell the difference between waking reality and an AI dream.