Seeing is believing (or not)

Oliver Thansan
Saturday, 16 March 2024, 10:26

The image, with its vaguely Gothic air, looks like a painting, but it is not. At its center, a dying young woman is watched over by two other women, while a man leans against the window with his back to the scene. Titled Fading Away, it is the work of the British photographer Henry Peach Robinson. Created in 1858, it is the first known photomontage: to achieve it, its author combined images from five different negatives. Trained as a painter, Robinson evidently had an artistic rather than a documentary aim, so it would make no sense to speak of manipulation.

A century ago, however, photomontages – whether in their most technically elaborate form or in the pedestrian version of cutting and pasting – began to be used systematically to falsify reality for political ends. Under Stalin's ruthless dictatorship, Soviet power was not content with purging opponents, dissidents and wayward comrades – sending them to the Gulag or before the firing squad – but set out to erase their very memory from history. Stalinism famously devoted itself to removing the image of Trotsky, one of the historic Bolshevik leaders, from every photograph of the revolutionary era, before ordering his assassination.

What happened with the botched photograph of Kate Middleton's return to public view illustrates power's permanent temptation to retouch or sweeten reality. After weeks without news of the Princess of Wales following surgery, the British royal household wanted to put an end to the rumors by releasing an idyllic family image of Catherine surrounded by her children. Except that the photo had been doctored, and the effect was the opposite of what was intended. That the montage was crude and easily detected does not lessen the seriousness of the matter. Next time they will do it right – the means exist – and we will not find out.

In the United States, in the midst of the campaign for November's presidential election, the flood of false images circulating on social networks is beginning to take on worrying overtones. A BBC investigation has found that supporters of Donald Trump – with no evidence that his campaign team is involved – are spreading dozens of AI-generated fake images on the networks in which the Republican candidate is seen smiling alongside African American groups, with captions suggesting that he enjoys growing support in the black community.

Some of the disseminators, identified and contacted by BBC journalists, have hundreds of thousands of followers on social networks. One of the images – in which Trump is seen sitting on a porch with a group of young black people – was originally created by a satirical website critical of the former president, but was later used for the opposite purpose.

The choice of subject is no accident. The battle for the black vote will be crucial in this second duel between Donald Trump and the Democrat Joe Biden. A recent poll by The New York Times and Siena College indicated that in six swing states, 71% of black voters would vote this time for the current president, whereas in 2020, 92% supported him.

The British organization Center for Countering Digital Hate ran a test in February to check how vulnerable AI systems are to being manipulated into generating political disinformation. The experiment, focused on the US elections and carried out with ChatGPT, Midjourney, Stability AI and Microsoft's Image Creator, showed that the safeguards could be circumvented in 41% of cases, producing false images such as Trump being detained by the police or Biden admitted to a hospital. Falsehoods of all kinds circulated four years ago, but now producing them will be much faster and much easier.

In a recent article in Foreign Affairs, cybersecurity officials from the US Department of Homeland Security warned that generative AI "is a threat to democracy," adulterating reality with astonishing ease. In the coming months, millions of people across the planet will go to the polls. It will be extremely easy to spread false images of politicians in invented situations, and even to create videos in which fictitious statements are attributed to them, in their own voices. Maneuvers of this kind, aimed at discrediting candidates, can later be turned to casting doubt on the integrity of the process itself and on the electoral result.

In Indonesia's most recent presidential election, held on February 14, the Functional Groups Party (Golkar) – which governed the country between 1971 and 1999 – released a fake AI-generated video in which the late dictator Suharto appeared resurrected, with his face and his voice, asking for people's votes. It is as if Franco were to reappear in a campaign ad in Spain. In that case the deception would surely be very hard to slip past anyone... Or would it?