ChatGPT: "AI can undermine democracy"

Robots have been obsessing – if not distressing – human beings for a century.

Thomas Osborne
Tuesday, 14 February 2023, 03:29

The first to use the word 'robot' to designate an artificial being endowed with almost human qualities was the Czech writer Karel Capek, who in 1921 premiered a play in Prague, R.U.R. (Rossum's Universal Robots), in which artificial humans, built to work, ended up rebelling against their creators. Robots as a danger to the very existence of humanity.

Since then, robots have populated literature and cinema. In Metropolis (1927), Fritz Lang put on celluloid a dystopian city-state in the then distant year 2026, segregated between an all-powerful ruling class and an enslaved working class, in which a humanoid robot was used to spark a violent rebellion. My generation's first encounter with another cinematic robot endowed with intelligence and free will, and also with duplicity and evil, was the HAL 9000 supercomputer that ran the Discovery spacecraft in Stanley Kubrick's 2001: A Space Odyssey, released in 1968. HAL, who even remembered a song from his earliest days in Urbana, Illinois ("Daisy, Daisy..."), engaged in a fight to the death with the astronauts he was in principle meant to serve and obey.

The rapid development of artificial intelligence (AI), with the recent and spectacular appearance of OpenAI's chatbot ChatGPT, behind which stands Microsoft, and the announcement of its forthcoming competitor, Google's Bard, is a true revolution that opens the door to far-reaching changes in our societies. And it puts back on the table the risks of an increasingly powerful technology. ChatGPT, which stores and processes billions of data points, can write texts on any topic and in all kinds of styles, solve problems, and converse with the user almost as if it were a person. Its power, still developing, raises many questions in every human domain. Including politics.

What role can AI play in the management of public affairs? How far should that role go? Who would oversee its performance? What is the risk that it interferes with and distorts political debate? Could it one day end up making decisions in place of a human being? It sounds like science fiction, but surveys carried out in different European countries indicate that between 25% and 40% of citizens could come to accept the replacement of politicians by an AI.

Some have already played with this possibility. In 2018, in the Japanese city of Tama (on the outskirts of Tokyo), two technology gurus entered a female-looking robot, named Michihito Matsuda, in the local elections, promising that an AI could run local affairs more reliably and fairly than the usual politicians. The robot ended up winning 9% of the votes, finishing in third place. And in the Danish legislative elections last November, a collective of artists tried, unsuccessfully, to field a new political force, the Synthetic Party, with a robot as its headliner: a chatbot named Leader Lars, fed with the platforms of Danish fringe parties from 1970 to today (which gave rise to implausible proposals such as establishing a universal minimum income of 13,440 euros per month).

Jokes aside, the matter is extremely serious and the warnings about the risks are numerous. The Council of Europe's commissioner for human rights, Dunja Mijatovic, summed up the threats in the political field in June 2021 in a single phrase: the "manipulation of public opinion". Mijatovic recalled that new digital technologies and social networks have spread disinformation and incited hatred and violence, "instilling fear in the population and fostering anti-democratic movements of the extreme right". And she called for regulation so that big technology companies act within the legal framework of human rights.

What do the robots themselves think of all this? Asked directly about the issue, ChatGPT believes that, on the one hand, artificial intelligence can be of great help in supporting political decisions, through the analysis of large amounts of data and information. But it also warns of the dangers: "AI also has the potential to undermine democracy if not properly regulated and used ethically. For example, AI can be used to manipulate public opinion, suppress dissenting voices, and undermine privacy and civil liberties."

ChatGPT seems, for now, attached to democratic principles. Nor does it believe that AI could ever replace politicians: "It lacks the capacity for empathy, creativity and moral judgment that are essential to make complex political and ethical decisions." "The idea that AI systems could make decisions on behalf of citizens, without any significant human oversight or accountability, would undermine the foundations of democratic governance," it adds.

But how far do the convictions go of a chatbot that admits to "not having personal beliefs or opinions"? Not very far. It all depends on who is behind its programming. "If it were programmed with another ideological orientation," it admits, "it would be able to generate responses consistent with that ideology." No further questions, Your Honor.