By Aaron Kesel
Videos utilizing Deep Fake technology are spreading online at a rampant rate, leaving social media users questioning whether what they see is real or has been edited by the tech.
In recent weeks, Facebook was embroiled in a scandal after knowingly allowing a doctored video of Nancy Pelosi, slowed down to make her appear to slur her words, to spread on the social network. That was followed by a scandal on Instagram, where Facebook founder Mark Zuckerberg himself suddenly seemed to proclaim the shocking truth about Facebook being used as a spy tool against its users. There was just one problem: that video was too good to be true. It was a Deep Fake.
In 2017, a startup called Lyrebird made headlines with extremely convincing AI-generated replications of celebrity voices, in what appears to be one of the first real Deep Fakes uploaded to the Internet.
Companies specializing in Deep Fake voices and videos are now popping up all over the Internet. Activist Post previously told readers about Lyrebird, one company that makes these fake voice recordings.
One such recording is a fabricated conversation between Donald Trump and Barack Obama, in which the synthetic Trump voice says, “They can make us say anything now.”
That’s exactly what another company did with Facebook CEO Mark Zuckerberg, Donald Trump (again), and Kim Kardashian, having them appear to advertise software called Spectre, Mashable reported.
The scarily lifelike video, created by artists Bill Posters and Daniel Howe in partnership with advertising company Canny, shows Zuckerberg sitting at a desk, giving a sinister speech about Facebook’s power over its users. The video is framed with a fake CBS broadcast banner, “We’re increasing transparency on ads,” to make it look like part of a news segment.
“Imagine this for a second. One man, with total control of millions of people’s stolen data, all their secrets, their lives, their futures. I owe it all to Spectre. Spectre showed me that whoever controls the data, controls the future,” the fake Zuckerberg appears to say in the video.
The Mind Unleashed previously reported that Joe Rogan was the first celebrity target for AI developers wanting to show off how far this technology has come in the two years since 2017. “A video released last month features Rogan talking about training a hockey team made up of intelligent chimps, among other equally ridiculous and amusing rants,” John Vibes wrote.
“I just listened to an AI generated audio recording of me talking about chimp hockey teams and it’s terrifyingly accurate. At this point, I’ve long ago left enough content out there that they could basically have me saying anything they want, so my position is to shrug my shoulders and shake my head in awe, and just accept it. The future is gonna be really f***ing weird, kids,” Rogan said on Facebook this week.
The prospects are worrying as Deep Fake tech advances, and LiveScience now reports that researchers are reconstructing faces from voices using a technology called Speech2Face.
Thankfully, AI cannot (yet) determine exactly what a specific individual looks like from their voice alone. The neural network recognized certain markers in speech that pointed to non-identifying attributes such as gender, age and ethnicity, features shared by many people, the study authors noted, describing the results of the experiments as a “mixed performance.”
“As such, the model will only produce average-looking faces,” the scientists wrote. “It will not produce images of specific individuals.”
Then there is Face2Face. A German team wrote that, in the wrong hands, the Face2Face program would allow anyone to change the mouth and words of a person speaking in a video, even during a live broadcast. In their clip Face2Face: Real-time Face Capture and Reenactment of RGB Videos, they demonstrate how the technology works on video recordings of world leaders Bush, Obama, and Putin. With technology like this, can we even trust what we see “live” anymore? As Benjamin Franklin supposedly said, “Believe none of what you hear and half of what you see.” Hell, you shouldn’t even believe that quote is his, but whoever said it was right about this technology; and if these tools are all combined, no one will be able to tell the difference between what’s real and what’s fake.
Watch the video below by Truthstream Media’s Melissa and Aaron Dykes on why you can’t trust anything you see on the news, far beyond just astroturfing.
In a SecureTeam video uploaded to YouTube in 2017, titled “These People Don’t Exist” (below), you can see a slew of faces fabricated with software called pix2pix. None of these people are real; they are computer generated. Look closely and some of the faces display choppiness, while others are strangely disproportionate, but the rest are eerily lifelike to a scary extent.
Pix2pix allows a user to sketch any object (a person, a place, a thing). The AI then takes that input and renders it into a colorful, lifelike version, complete with depth. In roughly half the examples in the video, the result is so real that it is highly doubtful anyone would be able to tell the difference; and with the few examples that still look fake, it is only a matter of time before the AI gets good enough to fool anyone.
The third advancement shown in the video is Diminished Reality software, which can erase objects from video footage in real time. It works by taking a frame, lowering the resolution, isolating the object, deleting it, filling in the gap using the surrounding pixels, then bringing the resolution back up, all without the viewer noticing. The user simply circles an object he or she wants removed from the video, and it’s gone, replaced by the same background that surrounds it.
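The steps just described can be sketched in a few lines of code. The sketch below is an illustrative stand-in, not the actual software’s algorithm (which is not public): it uses a naive diffusion fill, replacing each erased pixel with the average of its known neighbors, on a small fabricated grayscale frame. Real systems downscale first for speed and use far more sophisticated inpainting.

```python
import numpy as np

def fill_from_surroundings(frame, mask):
    """Crude inpainting: repeatedly replace masked pixels with the mean
    of their already-known 4-neighbours, diffusing the background in
    from the edge of the hole."""
    out = frame.astype(float).copy()
    known = ~mask.astype(bool)
    while not known.all():
        newly_known = []
        for y, x in zip(*np.where(~known)):
            vals = [out[ny, nx]
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                    if 0 <= ny < out.shape[0] and 0 <= nx < out.shape[1]
                    and known[ny, nx]]
            if vals:
                out[y, x] = np.mean(vals)
                newly_known.append((y, x))
        if not newly_known:   # hole has no known border at all; give up
            break
        for y, x in newly_known:
            known[y, x] = True
    return out

# Fabricated demo: a uniform grey frame with a bright square "object".
frame = np.full((20, 20), 100.0)
frame[7:13, 7:13] = 255.0          # the object to erase
mask = np.zeros((20, 20), bool)
mask[7:13, 7:13] = True            # the region the user circled

result = fill_from_surroundings(frame, mask)
print(result[10, 10])  # 100.0 -- the object is gone, background filled in
```

Because the fill works ring by ring from the hole’s border inward, each pass only needs its immediate neighborhood, which is why the real software can afford to run a (much faster) version of this per frame.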
So this technology is not only a danger for editing words and faces, it can also be used to edit objects … and it’s only a matter of time until it can be used to edit live news footage.
Activist Post reported in January 2019 that exactly this happened, when one of U.S. President Donald Trump’s speeches was doctored live and rebroadcast to millions of people, repainting his skin orange and showing him sticking out his tongue.
RT compared the since-deleted videos, noting: “Q13 Fox in Seattle appears to have edited its coverage of Trump’s address, turning the president’s skin color a ludicrous shade of orange. In between sentences, the station seems to have doctored the footage to show Trump sticking out his tongue and licking his lips.”
The employee responsible was later fired from the station, which also issued an apology letter.
“We’ve completed our investigation into this incident and determined that the actions were the result of an individual editor whose employment has been terminated,” said Q13’s news director to KTTH.
“This does not meet our editorial standards and we regret if it is seen as portraying the President in a negative light,” Q13 told KTTH and MyNorthwest.
All of this tech still has its hurdles and is far from perfect. But the days of being able to trust photo and video evidence are disappearing, and the implications for human knowledge are far-reaching.
However, as Activist Post previously reported, researchers at NYU Tandon School of Engineering are turning that same artificial intelligence against itself, using digital watermarks to verify the authenticity of photos and videos. It’s important to note that they address only images and video, not audio.
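The core idea of watermark-based verification can be illustrated with a simple cryptographic stand-in. To be clear, this is not the NYU team’s actual method (they embed watermarks with a neural network at capture time); the key and the byte strings below are hypothetical. It shows only the underlying principle: sign the frame when it is captured, verify it later, and any edit breaks the check.

```python
import hashlib
import hmac

# Hypothetical per-device secret; a real camera would keep this in
# tamper-resistant hardware rather than in plain code.
CAMERA_KEY = b"secret-key-burned-into-the-camera"

def sign_frame(frame_bytes: bytes) -> str:
    """Produce an authenticity tag for raw frame data at capture time."""
    return hmac.new(CAMERA_KEY, frame_bytes, hashlib.sha256).hexdigest()

def verify_frame(frame_bytes: bytes, tag: str) -> bool:
    """Check the tag later; any pixel-level edit changes the digest."""
    return hmac.compare_digest(sign_frame(frame_bytes), tag)

original = bytes(range(16))                    # stand-in for raw pixel data
tag = sign_frame(original)

print(verify_frame(original, tag))             # True: footage untouched
print(verify_frame(original + b"\xff", tag))   # False: doctoring detected
```

The hard part, and what the research actually tackles, is making such a check survive legitimate processing like compression and resizing while still failing on malicious edits, which is why the real systems use learned watermarks rather than raw hashes.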
For more on tech to identify Deep Fakes, watch this video by Full Measure with former CBS journalist Sharyl Attkisson describing how we can identify them … for now, at least.
Aaron Kesel writes for Activist Post. Support us at Patreon. Follow us on Minds, Steemit, SoMee, BitChute, Facebook and Twitter. Ready for solutions? Subscribe to our premium newsletter Counter Markets.