(FeaturedNews.com) – A deepfake video from Russia of former President Donald Trump promoting RuTube, a Russian-operated video upload site similar to YouTube, raises serious national security concerns. In it, the faux former president claims he recently signed up for the site in a bid to thumb his nose at his haters. While the White House has yet to thoroughly debunk the clip, experts warn about what might happen if the right person fell for a deepfake's tricks, especially since such videos often pass off fictional or damaging information as fact.
What is a Deepfake?
Deepfakes are videos that superimpose the likeness of a real person onto footage of another individual. Their goal is often to make it appear that the target said or did something controversial or questionable.
Deepfake videos aren’t new, and as the technology improves, they only grow more realistic. After January 6, 2021, a deepfake of Trump appeared genuine enough that social media platforms had to have it fact-checked. That so many people, many of them sensible and logical, can fall for such clips is exactly why they’re so concerning.
Definite Cause for Concern
According to the Congressional Research Service (CRS), a nonpartisan research agency that serves the U.S. Congress, deepfakes are a real threat. The video from Russia is fairly easy to spot, but perhaps not if you don’t speak English: the words sound as though Trump is saying them, while captions roll across the screen in Russian Cyrillic.
The number one threat from a video like that is simple: the person watching it could believe the message.
CRS says there are plenty of reasons to fear deepfakes. The technology improves over time, producing ever more realistic videos and raising the likelihood that viewers will believe them.
Deepfakes also pose serious threats to individual privacy and freedom. Beyond mere disrespect, malicious criminals can use them to shame someone or even superimpose a victim’s likeness into pornography and other compromising situations. For someone who lives a very public life, the mere threat of a fake video showing them doing horrendous, out-of-character things could lead to blackmail, the loss of a career, violence, or even suicide.
How long will it be before the shams are so convincing that video is no longer reliable for surveillance at all? Is there a defense against this problem in our future?
Not Just for Celebrities and Presidents
Former presidents falsely endorsing video platforms, faux adult videos, and even faked clips of celebrities getting drunk are far from the most extreme risks deepfakes present. The real danger lies with those in sensitive or influential positions.
Consider, for example, a compromising deepfake of a high-level Department of Defense (DoD) employee with top-secret security clearance. What if a foreign government made a deepfake to convince that person’s spouse they were cheating? How much top-secret information might that individual be willing to spill to keep the video quiet?
Anything that allows one person to gain leverage over another must be kept in check, especially when fraud, coercion, or duress is involved. AI systems already exist to detect tough-to-spot deepfakes, but they’re far from perfect. In the meantime, if you aren’t sure whether a clip is real, look for inconsistencies such as a speaker’s lips out of sync with the audio, a glitch in the corner of the screen, or strange visual artifacts. All are deepfake red flags.
More importantly, if you aren’t sure, try to confirm with the original source itself.
Copyright 2022, FeaturedNews.com