This is how artificial intelligence can help mental health care

The rise of artificial intelligence is bringing together multidisciplinary teams of professionals seeking to harness this technology in the field of mental health as well.

Oliver Thansan
07 June 2023 Wednesday 10:22

Psychologists, computer scientists, physicians, linguists and a long list of other experts are pooling their knowledge to advance this intersection of artificial intelligence and psychology.

Karina Gibert, professor and director of the Intelligent Data Science and Artificial Intelligence Research Center at the Polytechnic University of Catalonia (IDEAI-UPC), is one of them. “The first thing we developed was an expert system that, by analyzing a patient's data, was capable of offering a diagnosis,” she says. Gibert is referring to the Mental-IA project, launched in the 1990s in collaboration with the Hospital de Bellvitge and the Hospital Clínic de Barcelona.

An expert system is a program capable of emulating the reasoning and decision-making of a human expert. A flesh-and-blood specialist has a certain body of knowledge, shaped by their level of experience, which, together with their personality and biases, leads them to certain conclusions. Training the technology on large amounts of data can push past those human limits and, in turn, help achieve greater objectivity.
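
To make the idea concrete, here is a minimal sketch in Python of the rule-based reasoning behind classic expert systems. The findings, rules and diagnostic labels are invented for illustration; they are not the Mental-IA knowledge base.

```python
# Minimal sketch of a rule-based expert system: hand-written rules map
# observed findings to candidate diagnoses, mimicking how a clinician
# chains "if symptom then hypothesis" knowledge. Rules and labels are
# invented for illustration only.

RULES = [
    ({"low_mood", "anhedonia", "sleep_disturbance"}, "depressive episode"),
    ({"excessive_worry", "restlessness", "muscle_tension"}, "anxiety disorder"),
    ({"low_mood", "excessive_worry"}, "mixed anxiety-depression (review)"),
]

def infer(findings: set[str]) -> list[str]:
    """Return every diagnosis whose required findings are all present."""
    return [diagnosis for required, diagnosis in RULES
            if required <= findings]

print(infer({"low_mood", "anhedonia", "sleep_disturbance", "excessive_worry"}))
```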

One of a clinician's main tasks is precisely establishing a diagnosis. In the specific case of Mental-IA, clinical data were available from patients at both hospitals. The fact that the two centers used certain terms differently highlighted the importance of harmonizing terminology for this purpose.

Artificial intelligence has also advanced this task by drawing on what people say or write. This is where natural language processing comes into play, a field focused on understanding and processing human language. By analyzing speech, it is possible to identify patterns that point to a diagnostic category.
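
As an illustration of the idea, the sketch below trains a toy text classifier of the kind used in such analyses. It assumes scikit-learn is available, and the phrases and labels are placeholders, not clinical data.

```python
# Minimal sketch of NLP-based screening: a bag-of-words model maps short
# texts to a screening label. Requires scikit-learn; the training phrases
# and labels below are toy placeholders, not clinical data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I can't sleep and nothing interests me anymore",
    "everything feels hopeless lately",
    "had a great weekend hiking with friends",
    "excited about the new project at work",
]
labels = ["at_risk", "at_risk", "not_at_risk", "not_at_risk"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["lately I feel hopeless and can't sleep"]))
```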

The information to be analyzed can be extracted from any kind of writing, and social networks are a particularly valuable source. A study published in the Journal of Medical Internet Research in 2020 by several Catalan research centers evaluated the suicide risk of Twitter users. The researchers characterized users not only by their writing but also by their posting patterns, their relationships with other users and the images they published, outperforming models based exclusively on text.
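
The sketch below illustrates the multimodal approach: each user is described by a single vector that concatenates text features with behavioral, network and image signals. The feature names and values are invented for illustration, not the study's actual feature set.

```python
# Sketch of the multimodal idea: each user is described not only by text
# features but also by posting rhythm, social ties and image-derived
# signals, concatenated into one vector for a downstream risk model.
import numpy as np

def user_vector(text_feats, posts_per_day, night_post_ratio,
                followers, following, img_sadness_score):
    behaviour = [posts_per_day, night_post_ratio]
    network = [followers, following]
    image = [img_sadness_score]
    return np.concatenate([text_feats, behaviour, network, image])

x = user_vector(np.array([0.12, 0.80, 0.05]),   # e.g. text sentiment scores
                posts_per_day=14.0, night_post_ratio=0.6,
                followers=80, following=900, img_sadness_score=0.7)
print(x.shape)  # one fused vector per user, fed to a standard classifier
```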

Natural language processing remains a developing field, still in the early stages of clinical implementation, although it is regarded as a tool with great potential. Something similar occurs with biomarkers, such as blood tests or brain imaging, a resource for establishing diagnoses that is as yet rarely applied in the clinic.

Part of the problem lies in how little is known. “Mental disorders are disorders of the individual, and it is still not well understood physiologically what processes occur there. You see that the person does not reason or is distressed, but you do not know very well what is happening in that brain. From artificial intelligence, we can help analyze data from patients who suffer from this type of problem to try to shed light on its more physical genesis,” says Gibert.

An article recently published in Frontiers in Psychiatry explored a machine learning-based model for recognizing depression. The researchers analyzed brain images and the levels of certain stress-related hormones in people with depression and anxiety, along with their effects on mental state. The team concluded that depression was identified more accurately with the help of artificial intelligence than without it.
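
A rough sketch of that kind of setup: imaging-derived features and hormone levels are fused into one feature matrix and a classifier is scored by cross-validation. The data here is synthetic noise standing in for real measurements, and the feature choices are illustrative.

```python
# Sketch of a biomarker-fusion setup: imaging features and a stress-hormone
# level are combined and a classifier is evaluated by cross-validation.
# All data below is synthetic noise, not study data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 120
brain_feats = rng.normal(size=(n, 10))   # e.g. regional activation summaries
cortisol = rng.normal(size=(n, 1))       # stress-hormone level
X = np.hstack([brain_feats, cortisol])
y = rng.integers(0, 2, size=n)           # depressed vs. control (synthetic)

clf = RandomForestClassifier(random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated accuracy
```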

Treatment is a fairly advanced area. Some algorithms interact directly with patients as therapeutic chatbots, while others operate behind the scenes to predict health risks or recommend personalized treatment plans.

The work of Albert “Skip” Rizzo, director of the Medical Virtual Reality team at the University of Southern California (USA), focuses on the development of the former. He and his team are working on designing online characters to talk to about mental health issues.

“We are now developing a mobile app for military veterans at risk of suicide. It has a virtual human with whom they can communicate at any time. The app could even track the user's smartwatch and know if he's having a panic attack because his heart rate has increased,” he explains.
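
A minimal sketch of the heart-rate idea: flag a possible panic episode when readings climb well above a rolling baseline. The window and threshold below are illustrative, not clinically validated, and this is not the app's actual logic.

```python
# Sketch of smartwatch-based alerting: flag a possible panic episode when
# heart rate rises far above the user's recent rolling baseline.
# The window size and spike ratio are illustrative placeholders.
from collections import deque

class HeartRateMonitor:
    def __init__(self, window=60, spike_ratio=1.4):
        self.samples = deque(maxlen=window)  # recent beats-per-minute readings
        self.spike_ratio = spike_ratio

    def update(self, bpm: float) -> bool:
        """Return True if bpm is far above the rolling baseline."""
        baseline = sum(self.samples) / len(self.samples) if self.samples else bpm
        self.samples.append(bpm)
        return bpm > baseline * self.spike_ratio

monitor = HeartRateMonitor()
for bpm in [68, 70, 72, 69, 71, 118]:
    if monitor.update(bpm):
        print(f"possible panic episode at {bpm} bpm -> prompt a check-in")
```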

The advantages these artificial assistants offer over a human therapist, according to the psychologist, are that they never tire, do not judge, are always available and offer help to people who would otherwise have none. “There are a lot of people who, because of stigma, will never go see a therapist. Or it may be that they don't have the necessary money,” he says.

Despite their remarkable qualities, these artificial therapists are not designed to replace flesh-and-blood ones. They are a complement, a support that, ultimately, encourages a visit to a health service when someone needs it.

As for personalized plans, Gibert describes cognitive rehabilitation software capable of analyzing how a person is responding to therapy and reconfiguring it to fit the patient's needs. She herself worked on automating personalized sets of exercises in collaboration with the Institut Guttmann in Barcelona.
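
One simple way such software can adapt is a "staircase" rule that raises difficulty after good performance and lowers it after poor performance. The sketch below is a generic illustration of that idea, not the Institut Guttmann system.

```python
# Sketch of adaptive cognitive rehabilitation: a simple "staircase" rule
# raises exercise difficulty after successes and lowers it after failures,
# so the session tracks the patient's current level. Thresholds are
# illustrative placeholders.
def next_difficulty(level: int, accuracy: float,
                    lo: int = 1, hi: int = 10) -> int:
    if accuracy >= 0.8:       # doing well: make the next exercise harder
        return min(level + 1, hi)
    if accuracy <= 0.5:       # struggling: ease off
        return max(level - 1, lo)
    return level              # in the target zone: keep difficulty

level = 5
for session_accuracy in [0.9, 0.85, 0.4, 0.7]:
    level = next_difficulty(level, session_accuracy)
    print(level)  # 6, 7, 6, 6
```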

Other notable tools among the therapeutic options are companion robots. One example is the new EBO X, available this month. This robot from the company Enabot has been designed as a "protector, companion and playmate" for families. Besides interacting, EBO X can detect crying, warn of falls and issue medication reminders, all automatically. The robot is also set to get smarter with the inclusion of ChatGPT.

Treatment generally also involves medication, in this case psychoactive drugs. Artificial intelligence is likewise used to evaluate new drugs and even to model harmful interactions between them, helping clinical staff with prescribing.
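
At its simplest, interaction checking is a lookup over known harmful pairs before a new drug is added; production systems go further and predict interactions with learned models over molecular data. The sketch below uses two well-documented interactions for illustration.

```python
# Sketch of interaction checking: known harmful pairs are looked up before
# a new drug is added to a prescription. The two pairs are real, documented
# interactions (e.g. MAOI + SSRI is a classic contraindication), but the
# lookup-table design is a deliberate simplification.
INTERACTIONS = {
    frozenset({"phenelzine", "sertraline"}): "risk of serotonin syndrome",
    frozenset({"lithium", "ibuprofen"}): "NSAIDs can raise lithium levels",
}

def check_new_drug(current: list[str], new_drug: str) -> list[str]:
    """List every known harmful interaction the new drug would introduce."""
    return [f"{d} + {new_drug}: {INTERACTIONS[frozenset({d, new_drug})]}"
            for d in current if frozenset({d, new_drug}) in INTERACTIONS]

print(check_new_drug(["lithium", "sertraline"], "phenelzine"))
```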

Beyond person-focused efforts, such technology can also support society-level action. Artificial intelligence has helped the World Health Organization (WHO) characterize the state of mental health systems in low- and middle-income countries. The result was a set of seven profiles, which the WHO now uses to design intervention and development plans for those systems.
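
Structurally, this kind of profiling is a clustering task: countries described by health-system indicators are grouped into a fixed number of profiles. The sketch below mirrors only that structure, with synthetic data and invented indicators; it is not the WHO's actual analysis.

```python
# Sketch of profile-building by clustering: countries described by mental
# health system indicators are grouped into seven profiles. The data and
# indicator names are synthetic; only the "cluster into 7" structure
# mirrors the analysis described above.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# rows = countries; columns = e.g. psychiatrists per 100k, beds per 100k,
# mental health spending share, outpatient facilities per 100k (synthetic)
X = rng.random((80, 4))

profiles = KMeans(n_clusters=7, n_init=10, random_state=42).fit_predict(X)
print(np.bincount(profiles))  # how many countries fall into each profile
```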

One point on which expert voices agree regarding the use of artificial intelligence in mental health is that a human should always be supervising.

“The last word must belong to a human specialist. You cannot let a machine have it,” says Antonio Javier Diéguez Lucena, professor of Logic and Philosophy of Science at the University of Malaga.

For her part, Karina Gibert states: “The first axis of the European Union's ethical model for artificial intelligence says that artificial intelligence should never make decisions alone. Much less in the field of health, and much less in mental health. We must not put artificial intelligences in places where they make final decisions.”

That document, prepared by the European Commission in 2020 and titled Assessment List for Trustworthy Artificial Intelligence (ALTAI), sets out a series of questions for assessing whether an AI system meets the specified ethical requirements. The first, the one Gibert mentions, establishes that the technology must support human agency and decision-making: artificial intelligence should guide, inform or support people in decision-making processes, and do so under human supervision.