Deepfake porn videos are now being used to publicly harass ordinary people
To tell what’s real from what’s bogus, we typically rely on concrete evidence, such as a video, to prove that something was actually said or done. But thanks to the relentless march of technology, even videos can now be faked convincingly.
With emerging and terrifyingly advanced face tracking and video manipulation techniques, the age of “Deepfake” videos is upon us.
It turns out, beyond celebrities and political figures, Deepfake videos are now being used to harass ordinary people for profit.
What is Deepfake technology?
Deepfake technology is an emerging technique that uses facial mapping, artificial intelligence and deep learning to create ultra-realistic fake videos of people saying and doing things they never actually said or did.
And the scary part? The technology is improving at such a rapid pace that it’s getting increasingly difficult to tell what’s fake.
Now, with the use of deep learning, all it takes is a collection of images and videos of a certain person. Deepfake software processes this material and learns to mimic the target’s voice, facial expressions and even individual mannerisms. In time, these Deepfake videos may become indistinguishable from the real deal without forensic analysis.
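For the technically curious, the core trick behind many early Deepfake tools is surprisingly simple: two autoencoders that share one encoder, with each decoder trained to reconstruct a different person’s face. Here is a minimal, purely illustrative PyTorch sketch of that idea; the network sizes, toy random data and training loop are our own assumptions, not any real tool’s code:

```python
# Toy sketch of the shared-encoder / two-decoder design behind early
# face-swap tools. All sizes and data here are illustrative stand-ins.
import torch
import torch.nn as nn

IMG = 64 * 64 * 3  # tiny 64x64 RGB face crops, flattened

# One encoder shared by both identities, one decoder per identity.
encoder = nn.Sequential(nn.Linear(IMG, 1024), nn.ReLU(), nn.Linear(1024, 256))
decoder_a = nn.Sequential(nn.Linear(256, 1024), nn.ReLU(), nn.Linear(1024, IMG), nn.Sigmoid())
decoder_b = nn.Sequential(nn.Linear(256, 1024), nn.ReLU(), nn.Linear(1024, IMG), nn.Sigmoid())

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.MSELoss()

# Stand-ins for real datasets: batches of face crops of person A and person B.
faces_a = torch.rand(32, IMG)
faces_b = torch.rand(32, IMG)

for step in range(100):
    opt.zero_grad()
    # Each decoder learns to rebuild its own person from the shared code.
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The "swap": encode person A's face, then decode it as person B.
with torch.no_grad():
    fake_b = decoder_b(encoder(faces_a))
```

Because both faces pass through the same encoder, the shared representation captures pose and expression, while each decoder fills in its own person’s likeness. That is why a flood of photos of the target makes the fake more convincing.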
Deepfakes are now being used for profit
You may have seen a few Deepfake videos of celebrities, but these types of videos are now reportedly being used against ordinary women, too.
These non-celebrity Deepfakes are being posted on popular porn sites as a means to humiliate and shame specific targets. Even worse, since creating these videos is now relatively easy, they are being made for as little as $20 per clip.
How are these Deepfakes created? Well, since the technology requires multiple photos and videos of the intended target, images are lifted off social media sites like Facebook and then sent to Deepfake creators who peddle their services on web discussion boards and private chats. The more images there are, the more accurate the Deepfake likeness will be.
So if you’re a fan of selfies and post regularly on social media, watch out! An abusive partner, a co-worker or social media contact can download all your images and send them to an enterprising Deepfake creator to fabricate your very own pornographic video.
The mass accessibility of Deepfake software has other worrying implications that are hard to ignore, too. Now, even a regular Joe can create realistic fake videos of anyone saying whatever the creator wants.
With this technology in everyone’s hands, it will become increasingly difficult to separate truth from lies.
And it’s not just misinformation we need to worry about. Realistic Deepfake videos can also be used for blackmail, phishing schemes and extortion scams.
What is being done to combat Deepfakes?
The U.S. government is already knee-deep in developing technologies that can detect Deepfake videos.
For example, the U.S. Defense Advanced Research Projects Agency (DARPA) is already two years into its four-year Media Forensics (MediFor) program, which is developing an automated system that can detect fake videos and images. Another program, under development at Los Alamos National Lab, tackles the issue at the pixel level.
Google is also taking steps in the fight against Deepfakes by banning “involuntary synthetic pornographic imagery” from its search results.
On the legal side of things, although Deepfakes have not yet been tested in court, legal experts say that if these fake videos are built from publicly accessible photos, they may be protected by the First Amendment.
However, these forged videos could also be treated as defamation or identity fraud, or fall under laws against online harassment, cyberstalking and revenge porn.
How to spot Deepfakes
Fortunately, Deepfake technology is not yet perfect in its current form. There are still tell-tale signs, such as lifeless, unblinking eyes and jerky facial movements.
In short, they still look unnatural and, like most CGI, remain stuck in “uncanny valley” territory. Unnatural blinking, in particular, is a good indicator of a Deepfake video.
This quirk says a lot about how the technology works. Do a Google Image search for a person and you’ll rarely, if ever, find photos of them with their eyes closed. If you’ve ever taken a portrait or a selfie, you know that a shot of the subject with their eyes closed is a big no-no and is usually marked for deletion. Because Deepfake software learns from those open-eyed photos, it struggles to render natural blinking.
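For readers who want to poke at this themselves, here is a rough Python sketch of how researchers quantify that tell, using the well-known eye-aspect-ratio measurement with OpenCV and dlib. The video filename, the 0.2 threshold and the landmark model path below are assumptions to tune, not a vetted forensic tool:

```python
# Count closed-eye frames in a clip via the eye aspect ratio (EAR),
# computed from dlib's 68-point facial landmarks.
import cv2
import dlib
from scipy.spatial import distance

def eye_aspect_ratio(eye):
    # EAR = (vertical distances) / (2 * horizontal distance); it drops
    # toward zero when the eye closes.
    a = distance.euclidean(eye[1], eye[5])
    b = distance.euclidean(eye[2], eye[4])
    c = distance.euclidean(eye[0], eye[3])
    return (a + b) / (2.0 * c)

detector = dlib.get_frontal_face_detector()
# Pre-trained landmark model, downloadable from dlib.net
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

cap = cv2.VideoCapture("suspect_clip.mp4")  # hypothetical input file
closed_frames, total_frames = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    total_frames += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        pts = predictor(gray, face)
        # dlib's 68-point scheme: right eye is points 36-41, left eye 42-47.
        right = [(pts.part(i).x, pts.part(i).y) for i in range(36, 42)]
        left = [(pts.part(i).x, pts.part(i).y) for i in range(42, 48)]
        ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
        if ear < 0.2:  # assumed "eyes closed" threshold
            closed_frames += 1
cap.release()

print(f"{closed_frames} closed-eye frames out of {total_frames}")
# A long clip with virtually no closed-eye frames is a red flag.
```

People typically blink every few seconds, so a multi-minute video where the eye aspect ratio never dips is worth a second look.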
Another factor behind Deepfake rendering flaws is human psychology itself. As with other animation programs, you can’t simply cobble together a large number of snapshots and have software and artificial intelligence perfectly mimic the personality and individual idiosyncrasies of a human being.
As we enter this new era of video misinformation, perhaps our biggest weapon against Deepfakes is awareness.
Once we recognize that powerful video manipulation is now widely accessible and can be done easily by anyone, we can be more critical and mindful of all the video content we encounter every day.
Tags: deepfake videos, Facebook, Google, misinformation