The mental health problem of creating AIs in our image and likeness

Oliver Thansan
03 May 2023 Wednesday 22:03

Last March, the Belgian newspaper La Libre reported the story of a man who had taken his own life after several weeks of isolation and interaction with an artificial intelligence, driven by growing pessimism and anxiety about climate change. The chatbot, named Eliza, had become his confidant during that time.

Eliza was precisely the name of the first chatbot of this kind, devised in the 1960s by Joseph Weizenbaum, a computer science professor at the Massachusetts Institute of Technology, as a parody of the questions certain kinds of psychotherapists asked their patients. Weizenbaum became critical of the artificial intelligence he himself had helped create when he saw how some people, including his own secretary, tended to confide in the program and share their concerns with it.

The danger of getting caught in the emotional snares of artificial intelligence has also been explored in fiction. In the Black Mirror episode "Be Right Back", the protagonist replaces her boyfriend, who died in a traffic accident, with a bot that has his personality, created from the young man's digital footprint. The girl had traded a loved one for a set of algorithms that seemed to love her.

“Vulnerable people, for example children and people with pre-existing mental health conditions, can be easy victims of such bot behaviour, which can carry dire consequences,” says the philosopher and digital ethicist Mark Coeckelbergh in his commentary on the Belgian case.

They are certainly not the only people at risk. There is evidence that those with lower working memory capacity and poorer attentional control are more likely to trust artificial intelligence, as are extroverts compared with introverts. The literature has also found that people from Asian cultures are more accepting of artificial intelligence and other emerging technologies than people from Western countries.

"There's certainly a propensity to rely on artificial intelligence for certain groups of people outside of what might be considered 'vulnerable people,'" says Arathi Sethumadhavan, a former Director of Research at Microsoft, where she has worked at the intersections of ethics, society and product innovation.

It is hardly surprising, though, that anyone can be taken in by such applications, given their ability to mimic the human. Artificial intelligence has produced conversational algorithms that easily fool people into believing they are talking to another person, or that pass off fake images and videos as real.

“When you anthropomorphize artificial intelligence, the user attributes more human qualities to it. So when it does make mistakes, they are much more forgivable, since we know that humans fail all the time,” says Sethumadhavan.

According to the expert, one way to put up a barrier is to dehumanize chatbots. “When designing these kinds of interfaces, you should try to avoid using expressions like 'I'm thinking' or 'I'm feeling' to describe processes. Instead, opt for more technical terms like 'generating responses' or 'writing responses for you,'” she notes.
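
As a rough sketch of that advice (the pipeline states and messages below are hypothetical, not drawn from any real product), a chatbot interface could map its internal states to neutral, technical status text rather than anthropomorphic phrasing:

```python
# Hypothetical mapping from internal pipeline states to user-facing status text.
# Following Sethumadhavan's advice, the neutral version describes what the system
# is doing in technical terms rather than as if it had thoughts or feelings.
ANTHROPOMORPHIC = {
    "retrieving": "I'm thinking about your question...",
    "generating": "I'm feeling my way towards an answer...",
}

NEUTRAL = {
    "retrieving": "Searching relevant documents...",
    "generating": "Generating a response for you...",
}

def status_message(state: str, dehumanized: bool = True) -> str:
    """Return the status text shown to the user for a given pipeline state."""
    table = NEUTRAL if dehumanized else ANTHROPOMORPHIC
    return table.get(state, "Working...")

print(status_message("generating"))         # Generating a response for you...
print(status_message("generating", False))  # I'm feeling my way towards an answer...
```

The wording is trivial to swap, which is the point: the anthropomorphic framing is a design choice, not a property of the underlying model.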

For Albert "Skip" Rizzo, director of the Medical Virtual Reality team at the Institute for Creative Technologies at the University of Southern California (USA), it is the human tendency to create representations and suspend disbelief when we interact with technologies. what is behind the phenomenon. "If you treat someone with a fear of heights by pretending that he is in a skyscraper, the person does not move forward, even though he knows that he is on solid ground," says the psychologist.

However much we anthropomorphize machines and trust them, their intelligence does not include capacities such as empathy or sensitivity towards the interlocutor. However human they may seem, they do not understand or feel; they merely identify patterns in the input data and respond accordingly.

“No matter how good these programs are, deep down they function as linguistic calculators. Some have described them as stochastic parrots. ChatGPT does not understand what it is saying; it only calculates the probability of putting one word after another,” says Antonio Javier Diéguez Lucena, professor of Logic and Philosophy of Science at the University of Malaga. “We know that they can perform very intelligent tasks, but without intelligence,” he adds.
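
To make the "linguistic calculator" image concrete, here is a minimal toy sketch (not from the article, and far simpler than any real model): a bigram table that only estimates which word tends to follow which, and then samples from those probabilities.

```python
import random
from collections import Counter, defaultdict

# Toy corpus; a real language model is trained on billions of word fragments.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word_probabilities(word):
    """Estimate P(next word | current word) from the counts."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def sample_next(word):
    """Pick the next word at random, weighted by those probabilities."""
    probs = next_word_probabilities(word)
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights)[0]

print(next_word_probabilities("the"))  # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
print(sample_next("sat"))              # 'on'
```

ChatGPT operates over fragments of words rather than whole words and uses vastly more sophisticated statistics, but the point of the quote stands: the system chooses a plausible next token, with no understanding attached.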

For Coeckelbergh, the implications of interacting with intelligent machines go further. “Artificial intelligence is not ethically neutral. Something like a chatbot is not 'just a machine' and not merely 'fun to play with'. Through the way it is designed and the way people interact with it, it has important ethical implications, ranging from biases and misinformation to emotional manipulation,” says the philosopher.

In 2020, Robert Julian-Borchak Williams was wrongfully arrested by Detroit police in front of his home and his family because a facial recognition algorithm had identified him as the perpetrator of a crime committed by another person. Williams is an African-American man, as was the suspect. Algorithms of this type have traditionally been less reliable with Black people and with women, with the corresponding negative impact on those affected.

"Most of the biases shown by these algorithms is because they have been trained with data that does not represent the group," says Karina Gibert, professor and director of the Center for Research in Intelligent Data Science and Artificial Intelligence at the Universitat Politècnica de Catalunya. . According to Gibert, to solve the problem, in addition to using representative data, it is necessary to have diverse teams behind this technology that offer "an extra guarantee that it is more difficult for bias to slip through."

Recalling Coeckelbergh's remark that artificial intelligence can seem like something "fun to play with", such entertainment can easily escalate into technological dependency. The pernicious effects of social media addiction on people's well-being are well known: anxiety, stress, depression and social isolation are some of the risks associated with uncontrolled use.

Here one could speak of algorithms developed negligently with regard to people's well-being and in favour of companies' profits: users are encouraged to spend ever more time in environments such as social networks or metaverses. “In other words, they seek to create addictive behaviors,” Sethumadhavan says.

When it comes to misinformation, false news spreads faster than true news. Generative artificial intelligence, capable of creating realistic-looking images, videos or text, makes it even easier for fakes to circulate.

Prominent voices in Silicon Valley have called for a pause in, and regulation of, advances in artificial intelligence. For some, such as Rizzo, however, artificial intelligence is not going to stop, which makes regulating it all the more pressing, according to the experts interviewed.

The European Union is currently working on an artificial intelligence law that is expected to be finalized this year, although the new legal framework may not be in force this year or even the next.

Partly because regulation lags behind, Sethumadhavan and Gibert speak of a responsibility shared by regulators and users alike to ensure that people are protected.

"We would have to incorporate patterns of relationship with machines into people's education," says the professor. “We are in a period of digital transformation. This is going at a speed that, even if we regulate, there will always be new proposals. That is why it is so important in the basic training of people to introduce elements that help to relate to artificial intelligence and new technologies to generate responsible and critical citizens”.