New deepfake videos of actor Tom Cruise have appeared on TikTok under the handle @deeptomcruise, and they look strikingly real. They are so realistic, in fact, that you might not even know they were computer generated if the account handle didn't tip you off. And they were made with little more than sample footage of Cruise and deepfake technology that is becoming easier for anyone to use.
Not even two years ago, it was easy to distinguish a real video of someone from an AI-generated one. But the technology is advancing so fast that the gap has all but closed, and it's clear that deepfakes are not going to be used only for innocent purposes, such as animating photos of your family members.
This video of Tom Cruise? It's a computer-generated fake.
Seeing is believing – In a series of tweets, Rachel Tobac, CEO of SocialProof Security, warns that deepfakes like @deeptomcruise threaten to further erode public trust in a world where media literacy is weak and people can no longer agree on what is true or false. Much like the infamous dress (blue and black, or white and gold?), one person may notice the giveaways that the Tom Cruise videos were synthesized, while another may miss the signs of a fake and swear up and down that it's really him.
“Just because you feel you can personally tell the difference between synthetic and authentic media doesn't mean we're good to go,” she says. “It matters what the general public believes.”
Deepfakes are especially dangerous because video is widely regarded as indisputable evidence. A prominent individual could be deepfaked into appearing to commit a heinous act, or someone who actually did something wrong could use deepfakes to manufacture an alibi.
AI versus AI – As deepfakes get better, companies including Microsoft have built tools that can detect them, generating confidence scores based on signals that a piece of content has been manipulated, such as subtle blurring or grayscale elements invisible to the naked eye. And in fact, a website called CounterSocial was able to flag the Tom Cruise videos as fake when they were run through its own AI detection algorithm.
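To make the idea of a "signal" concrete, here is a minimal sketch of one heuristic that a detector might combine with many others: estimating local blur from the variance of a Laplacian filter response, since deepfake blending can leave unnaturally smooth regions. This is purely illustrative and assumes nothing about how Microsoft's or CounterSocial's actual detectors work; the function names and threshold are hypothetical.

```python
def laplacian_variance(gray):
    """Variance of a 3x3 Laplacian response over a 2D grayscale image
    (list of lists of pixel intensities). Low variance suggests a blurry,
    possibly blended region; high variance suggests sharp detail."""
    h, w = len(gray), len(gray[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x] +
                   gray[y][x - 1] + gray[y][x + 1] - 4 * gray[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def blur_score(gray, threshold=100.0):
    """Map the variance onto a crude 0..1 suspicion score (hypothetical
    scale): 1.0 means maximally blurry, 0.0 means sharp."""
    v = laplacian_variance(gray)
    return max(0.0, min(1.0, 1.0 - v / threshold))
```

A real system would fuse dozens of such learned and hand-crafted signals into a single confidence figure rather than rely on any one cue.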
But TikTok is where the videos live, and there is no deepfake detector built into the app. The problem with after-the-fact detection is that by the time false content is flagged, it may be too late to undo the damage. Microsoft is trying to address this with a digital signature tool. The idea is that content creators sign a video's metadata as it is encoded, and a tool that users install in their browsers then alerts them if the metadata has been altered in any way. This is still an experimental concept.
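The sign-at-encode, verify-in-browser flow described above can be sketched in a few lines. Assumptions: real provenance systems use asymmetric signatures tied to a publisher's certificate; here an HMAC with a shared secret stands in so the example stays within Python's standard library, and the metadata fields are hypothetical.

```python
import hashlib
import hmac
import json

# Stand-in for the creator's private signing key; a real scheme would use
# an asymmetric key pair so verifiers never hold the signing secret.
SIGNING_KEY = b"publisher-signing-key"

def sign_metadata(metadata):
    """Creator side: sign a canonical encoding of the video's metadata
    at encode time and ship the signature alongside the video."""
    payload = json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def verify_metadata(metadata, signature):
    """Viewer side (e.g. a browser extension): recompute the signature
    over the received metadata; any tampering makes the check fail."""
    expected = sign_metadata(metadata)
    return hmac.compare_digest(expected, signature)
```

The point of the design is that verification happens before the viewer trusts the content, rather than after misinformation has already spread.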
Time for labels – Tobac says that platforms like TikTok should build in tools for detecting and labeling deepfakes. Until then, Cruise and others will have to verify themselves on these social networks just to get ahead of the fakes.
Even after such detection layers are added to platforms like TikTok, deepfake technology will keep improving, which probably means a constant cat-and-mouse game from here on out. Microsoft has acknowledged that even its detection technology will need to be continually updated. That is simply the nature of technology.