'Deepfakes': we have a problem

A simmering controversy, the prosecution of Donald Trump on one of the charges he faces, and a new version of an image-generating artificial intelligence have combined to ignite a debate that reveals how this technology is blurring the increasingly tenuous line between reality and manipulation.

Oliver Thansan
25 March 2023 Saturday 21:45

Little did Eliot Higgins, founder of the investigative journalism site Bellingcat, know that when he typed "Donald Trump falling as he is arrested" into the Midjourney platform he was opening Pandora's box. The artificial intelligence produced a series of photorealistic images that are flawed on close inspection but look fascinatingly real.

Higgins confessed in an interview that he thought "maybe five people" would retweet the fake images he uploaded to Twitter. He was wrong: the real number was 5.6 million. Emboldened, and by his own account "just being silly," he asked Midjourney for images of Trump in custody, in court, arriving at jail, living as a prisoner and even breaking out of prison.

The first realistic images of people created by artificial intelligence, known as deepfakes, appeared in 2014 with a technique called generative adversarial networks (GANs), which pit two models against each other, one generating images and the other judging them against real examples, to produce pictures of things that do not exist. Since then the technology has advanced enormously. Midjourney, the platform behind the controversial Trump images, eventually suspended Higgins' account, but that was not the heart of the problem.
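The adversarial idea can be sketched in a toy example. The Python below is purely illustrative, not how Midjourney or any real image system works: a one-parameter "generator" tries to imitate real data clustered around the value 4.0, while a tiny logistic-regression "discriminator" tries to tell real samples from generated ones. Trained in alternation, each trying to outwit the other, the generator is pushed toward the real distribution.

```python
import math
import random

random.seed(0)
REAL_MEAN = 4.0  # the "real data" cluster around this value

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Discriminator: D(x) = sigmoid(w*x + b), its output is the
# estimated probability that sample x is real.
w, b = 0.0, 0.0
# Generator: emits g + noise; g is its only learnable parameter.
g = 0.0
lr = 0.02

for _ in range(5000):
    real = REAL_MEAN + random.gauss(0.0, 0.1)
    fake = g + random.gauss(0.0, 0.1)

    # Discriminator step: ascend the gradient of
    # log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w += lr * ((1.0 - d_real) * real - d_fake * fake)
    b += lr * ((1.0 - d_real) - d_fake)

    # Generator step: ascend the gradient of log D(fake),
    # i.e. try to make its output look real to D.
    fake = g + random.gauss(0.0, 0.1)
    d_fake = sigmoid(w * fake + b)
    g += lr * (1.0 - d_fake) * w

print(g)  # the generator's parameter drifts toward REAL_MEAN
```

Real deepfake systems replace the single number with deep networks producing millions of pixels, but the tug-of-war is the same: the generator improves precisely because the discriminator keeps catching its flaws.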

It quickly became clear that what was failing were the AI's guardrails, because anyone could keep requesting fake images of real people. Jack Posobiec, a former military man turned far-right activist in the United States and a compulsive tweeter, did just that: he posted an image of Hillary Clinton being arrested by police, as well as a video of President Joe Biden announcing a military draft for the war in Ukraine. All fake, of course.

But disinformation and the polarization of society, which AI accelerates beyond what social media had already achieved, are only one of the great dangers of deepfakes. Pornography is another, and it targets women in particular. Blaire, a popular Twitch streamer known as QTCinderella, discovered this month that someone had used an AI to graft her features onto a porn actress. "For every person who says it's not that bad," she explained, "you don't know what it feels like to see your family sent a picture of you doing things you've never done."

The cases are endless. A student who was angry with her teacher used artificial intelligence to make her the protagonist of a porn movie; the teacher was fired because students' parents did not want her working with their children. The list of women attacked this way is heavy with politicians, such as Alexandria Ocasio-Cortez, Lauren Book, Sarah Palin, Katie Hill, Nancy Pelosi, Marjorie Taylor Greene, Hillary Clinton and Michelle Obama, among others.

One of the latest stunts has been to give ChatGPT the voice of the late Apple co-founder Steve Jobs. The result is a Telegram bot called Forever Voices that advertises itself as follows: "Experience the magic of engaging in two-way voice conversations with iconic stars like Steve Jobs and Taylor Swift. Get inspired, entertained and enlightened with our AI voice conversations with the legends you've always admired."

Until recently, creating these fake images and videos required some computer skills. Now the fakes are not only better but available to just about anyone who wants to use them. Regulation is one answer to the illicit uses of the technology: authorities can impose it, but it also requires companies to erect barriers of their own. Some already do, although they generally adopt a policy of solving problems only as they arise.