Deepfakes-as-a-service offerings from certain vendors pose a serious danger in the technology era. The impact of fake video and audio stretches beyond propaganda, as cybercriminals leverage deepfake-as-a-service toolkits to wage disinformation campaigns against corporations and, worse, to power sophisticated phishing attacks. The challenge is how to deal with deepfake attacks: they can ruin a reputation, and no one yet knows how to stop them.
A report says TikTok has built technology to let users insert their face into videos starring someone else. Its parent company, ByteDance, has developed an unreleased feature using lifelike deepfake technology that the app’s code refers to as Face Swap. Code in both TikTok and its Chinese sister app, Douyin, asks users to take a multi-angle biometric scan of their face, then choose from a selection of videos they want to add their face to and share.
As per the MIT Technology Review, “we are witnessing an arms race between digital manipulations and the ability to detect those, and the advancements of AI-based algorithms are catalyzing both sides,” says Hao Li, a professor at the University of Southern California and the CEO of Pinscreen, who helped develop the new technique.
Hao Li says it will be particularly difficult for deepfake-makers to adapt to the new technique, but he concedes that they probably will eventually. “The next step to go around this form of detection would be to synthesize motions and behaviors based on prior observations of this particular person,” he says. Li also says that as deepfakes get easier to use and more powerful, it may become necessary for everyone to consider protecting themselves. “Celebrities and political figures have been the main targets so far.”
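The detection approach Li describes compares a clip against the mannerisms of the real person. As a toy illustration of that idea (not Li’s actual method), one could summarize a clip as a numeric motion signature and compare it to a profile built from verified footage; the feature vectors, values, and threshold below are entirely hypothetical:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def matches_profile(candidate, reference, threshold=0.9):
    """Flag a clip as authentic only when its motion signature stays
    close to the reference profile built from verified footage."""
    return cosine_similarity(candidate, reference) >= threshold

# Hypothetical motion signatures (e.g. averaged head-pose and gesture statistics)
reference = [0.8, 0.1, 0.3, 0.5]     # profile from authentic videos of the person
genuine   = [0.79, 0.12, 0.31, 0.48] # new clip with matching mannerisms
forged    = [0.1, 0.9, 0.7, 0.2]     # deepfake with divergent motion patterns

print(matches_profile(genuine, reference))  # True
print(matches_profile(forged, reference))   # False
```

Li’s point is precisely that this kind of defense erodes once attackers learn to synthesize the target’s motion patterns as well, not just their face.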
Enterprises should understand the various dangers of deepfake technologies, including reputation damage and new security vulnerabilities. According to a recent Kaspersky study around the Grammy 2020 awards, cybercriminals are actively abusing the names of artists and songs nominated for a Grammy 2020 award in order to spread malware. Kaspersky detected a 39% rise in attacks (attempts to download or run malicious files) under the guise of nominees’ work in 2019, compared to 2018. Ariana Grande, Taylor Swift and Post Malone were attackers’ favorites, with these nominees’ names used most often in 2019 as a disguise for malware. Criminals use popular artists’ names to spread malware hidden in music tracks or video clips.
The connection between rising popularity and malicious activity is very evident in the case of newer artists such as Billie Eilish. The teenage singer became hugely popular in 2019, and the number of users who downloaded malicious files bearing her name rose almost tenfold compared to 2018, from 254 to 2,171, while the number of unique malicious files distributed rose from 221 to 1,556.
“Cybercriminals understand what is popular and always strive to capitalise on that. Music, alongside TV shows, is one of the most popular types of entertainment and, as a result, an attractive means to spread malware, which criminals readily use. However, as we see more and more users subscribe to streaming platforms, which do not require file download in order to listen to music, we expect that malicious activity related to this type of content will decrease,” comments Anton Ivanov, Kaspersky security analyst. The underlying AI technology that enables deepfakes will certainly facilitate more intensive cyber-attacks.
Experts fear deepfake technology could be used to damage brands, create stock scares or sully the reputation of executives, and it could also be used to compromise cybersecurity. Combatting deepfakes won’t be easy, they warn: although tools for detecting deepfakes are getting better, so is the deepfake technology itself. In the long run, the best protection against deepfake problems is likely to be ongoing monitoring of business processes.
AI software creates deepfakes of people - often politicians or celebrities - by merging, replacing, or superimposing content onto a video in a way that makes it look real. These computer-generated clips distort reality and present a “significant challenge” for the technology industry. While deepfakes are still relatively uncommon on the internet, they are becoming more prevalent.
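Real deepfake pipelines rely on neural networks trained to synthesize faces; the superimposition step itself, though, comes down to blending pixel values from two sources. A minimal toy sketch of that blending (alpha compositing over hypothetical pixel data, not an actual deepfake algorithm):

```python
def superimpose(base, overlay, alpha=0.5):
    """Blend overlay pixels onto base pixels.
    Each image is a list of (r, g, b) tuples; alpha=1.0 fully
    replaces the base with the overlay, alpha=0.0 keeps the base."""
    blended = []
    for (r1, g1, b1), (r2, g2, b2) in zip(base, overlay):
        blended.append((
            round(r1 * (1 - alpha) + r2 * alpha),
            round(g1 * (1 - alpha) + g2 * alpha),
            round(b1 * (1 - alpha) + b2 * alpha),
        ))
    return blended

frame = [(200, 180, 160), (10, 20, 30)]     # hypothetical original frame pixels
face  = [(100, 100, 100), (250, 240, 230)]  # hypothetical synthesized face pixels
print(superimpose(frame, face, alpha=0.5))
# [(150, 140, 130), (130, 130, 130)]
```

What makes deepfakes hard to detect is not this compositing step but the generative models that produce an overlay matching the target’s lighting, pose, and expression frame by frame.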
Facebook has banned deepfake videos ahead of the 2020 election, announcing it will remove videos modified by artificial intelligence, known as deepfakes, from its platform. Other platforms were also caught in the crossfire following the Pelosi video, including Twitter, which announced that it would be developing a deepfake policy and asked users to help make its final decision on the new rule.
Twitter didn’t have a deepfake policy when the Pelosi video went viral, and now it looks like the company is moving to change that. On November 27th, Twitter will close its commenting period, and it will announce an official policy 30 days before the policy rolls out. The deepfake bans come after an altered video of House Speaker Nancy Pelosi (D-CA) went viral on social media platforms last summer. The video was widely viewed on Facebook, and the company said at the time that it did not violate any of its policies. That video wasn’t created by AI, but was likely edited using readily available software to slur her speech.
Facebook announced it was contributing $10m (£7.6m) to a fund to improve deepfake detection technologies. Mark Zuckerberg, Facebook's chief executive, has himself featured in a deepfake video. The clip featured a computer-generated version of Zuckerberg crediting a secretive organisation for the success of the social network. Facebook said it plans to work with academia, government, and businesses to expose the people behind deepfakes.
William Tunstall-Pedoe, a computer scientist who sold his AI company to Amazon, told BBC News that Facebook deserved credit for trying to tackle the "difficult area". Other tech giants including Google and Microsoft are also trying to combat deepfakes.
Cybersecurity is another area that must adapt to the rising use of deepfake technology. The biggest danger of deepfakes is that, as they become indistinguishable from real video and images, society will no longer be able to trust the authenticity of the video and images it sees. Enterprises must also protect themselves from being identified as originators of deepfakes.
In summary, technologically speaking, there is little that can be done. The technology is out there, and people can use it in whatever way they can.