Artificial intelligence is also sexist: this is how machines incorporate biases and stereotypes

Oliver Thansan
10 March 2023 Friday 16:46

Artificial intelligence (AI) floods our lives: it is used to create personalized recommendations that reach our mobiles, improve medical diagnoses, write essays, find errors in texts, create games or even decide on the granting of a mortgage. Everything is done by a perfectly trained machine, capable of processing huge amounts of data, looking for a pattern and offering the optimal solution.

But these machines are not perfect; they can also turn sexist or racist in their conclusions. Now that ChatGPT is on everyone's lips for its impressive capabilities, many will remember Tay, the Microsoft bot built to hold conversations with Twitter users. It did not even last 48 hours: the machine went from tweeting things like “humanity is cool” to others like “Hitler was right and I hate the Jews”, along with an endless stream of sexist proclamations.

The responses from these machines are biased because the data they collect are biased too. “AI works through algorithms and mathematical techniques that allow patterns to be extracted from large amounts of data. That is what we call learning. But those data are often not representative of the phenomenon or of the population being studied, so the systems have a partial view of reality”, explains Josep Curto, director of the Master's in AI at the UOC.

“For example, if a bank wants to use AI to decide who gets a mortgage and who doesn't, the machine starts from previous data. Based on those data, men will have better odds than women, because historically we have been granted fewer mortgages and because the system assigns us a higher risk profile”, points out Gemma Galdón, PhD in technology policy, algorithm auditor and founder of Eticas Research and Consulting.
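
The mechanism is easy to reproduce. Below is a minimal, purely illustrative sketch (synthetic data, scikit-learn, invented figures; not the bank example or Eticas' work) of how a model trained on historically skewed approvals simply learns the skew back:

```python
# Purely illustrative: a classifier trained on historically skewed mortgage
# decisions learns to reproduce that skew. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000

# Synthetic "historical" records: incomes drawn from the same distribution
# for everyone, but past approvals favoured men (gender: 1 = man, 0 = woman).
gender = rng.integers(0, 2, n)
income = rng.normal(30, 8, n)  # income in thousands, same for both groups
approved = (rng.random(n) < np.where(gender == 1, 0.7, 0.4)).astype(int)

X = np.column_stack([gender, income])
model = LogisticRegression().fit(X, approved)

# Two applicants with identical income, differing only in gender.
man = np.array([[1, 30]])
woman = np.array([[0, 30]])
print("P(approval | man):  ", model.predict_proba(man)[0, 1])
print("P(approval | woman):", model.predict_proba(woman)[0, 1])
# The gap in predicted approval probability mirrors the bias in the data.
```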

The expert gives another example, this time from healthcare: “If we use an AI to detect what certain symptoms correspond to, it will almost certainly never suggest endometriosis, because traditionally female diseases are far less studied and diagnosed than male ones. It will also be less accurate in diagnosing a heart attack in a woman. Her symptoms differ from a man's, so the system is likely to work worse in her case, because it has been fed with other kinds of data.”

"If you use historical data and it's not balanced, you'll probably see negative conditioning related to black, gay and even female demographics, depending on when and where that data comes from," continues Juliana Castañeda Jiménez, a UOC student and lead researcher. of a recent Spanish investigation published in the Algorithms magazine.

To gauge the scope of the problem, the researchers analyzed previous studies that identified gender biases in data processes across four types of AI: those dealing with natural language processing and generation, decision management, facial recognition, and voice recognition.

In general, they found that all of the algorithms identify and classify white men more accurately. They also observed that the systems reproduced false beliefs about which physical attributes should define people according to their biological sex, ethnic or cultural origin, or sexual orientation, and that they stereotypically associated masculinity with the sciences and femininity with the arts. Image and voice recognition applications raised problems of their own: they struggled with higher-pitched voices, which mainly affects women.
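
The science/arts association is easy to observe in off-the-shelf word embeddings. The sketch below uses gensim's publicly downloadable GloVe vectors as a stand-in; it is not the method of the UOC study, just a quick way to see the same kind of skew:

```python
# Informal check of gendered associations in pre-trained GloVe vectors.
# This is an illustration, not the methodology of the UOC study.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # downloads the vectors on first run

science = ["science", "physics", "chemistry", "engineering"]
arts = ["poetry", "art", "dance", "literature"]

def mean_similarity(word, attribute_words):
    """Average cosine similarity between one word and a list of attribute words."""
    return sum(vectors.similarity(word, a) for a in attribute_words) / len(attribute_words)

for pronoun in ("he", "she"):
    print(pronoun,
          "-> science:", round(float(mean_similarity(pronoun, science)), 3),
          " arts:", round(float(mean_similarity(pronoun, arts)), 3))
```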

“AI is always limited by the past. It cannot generate new things; it can only draw patterns from the data it has been given for training. Anything new will always be under-represented, which makes it a terribly conservative force that pushes us to reproduce what already exists rather than create something new”, says Galdón.

The problem lies not only in the data these machines collect, riddled with biases and stereotypes, but also in the people in charge of them. “AI system designers introduce these biases throughout the project: in the preparation of the data, in the model, in the interpretation of the results and/or in their presentation”, adds Curto.

ChatGPT, the conversational chatbot dazzling the planet with its advanced capabilities, can also fall into these biases. One of the main advantages of this tool developed by OpenAI is the number of sources it uses: it draws not only on data circulating in forums or social networks, which is a priori of poorer quality, but also on press articles and even doctoral theses.

But it, too, can fall into this error. “This system (and others) is susceptible to veracity problems, includes biases and can even perpetuate stereotypes. In fact, the company behind it, OpenAI, knows this and has devoted efforts to alleviating these problems through human curation. However, there is still a lot of work to do, because problems can emerge throughout the entire cycle of creating an AI system (from the generation and capture of the data to the presentation of the answers) and, sometimes, they are only detected once these systems are opened to the public”, warns the professor.

These biases can (and should) be avoided. “The solution involves several tasks. The first is to improve the data sets used to train the system, in terms of quality, veracity and the identification of biases. The second is to include monitoring systems for these problems throughout the life cycle of the AI system. Lastly, interpretability and explainability mechanisms must be built into the results, to understand where they come from and why a particular response is proposed”, says Curto.
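
As a concrete illustration of the first of those tasks, a data set can be checked for imbalance before any model is trained. The sketch below (hypothetical column names, pandas) is one simple way to do it:

```python
# A minimal pre-training check: is every group represented, and are the
# recorded outcomes balanced across groups? Column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "gender":   ["M", "M", "F", "F", "M", "F", "M", "M"],
    "approved": [1,   1,   0,   1,   1,   0,   1,   0],
})

# Representation: share of examples per group.
print(df["gender"].value_counts(normalize=True))

# Label balance: how often the positive outcome appears for each group.
print(df.groupby("gender")["approved"].mean())
```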

Galdón points to algorithmic audits as essential to guaranteeing equality for all groups. “In a specific context, an audit lets you see what impact an algorithmic system is having and ensures that especially vulnerable or discriminated-against groups are protected, by systematically and constantly measuring those impacts and making sure they are fair”, she concludes.
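
One metric such an audit might track, offered here as a generic example rather than Eticas' actual methodology, is the ratio of positive-outcome rates between groups, often called disparate impact:

```python
# Toy disparate-impact check on a system's decisions. This is a generic
# fairness metric used for illustration; not Eticas' audit methodology.
def positive_rate(decisions, groups, group):
    """Share of positive decisions received by one group."""
    members = [d for d, g in zip(decisions, groups) if g == group]
    return sum(members) / len(members) if members else 0.0

def disparate_impact(decisions, groups, protected="F", reference="M"):
    """Ratio of positive-outcome rates: protected group over reference group."""
    return positive_rate(decisions, groups, protected) / positive_rate(decisions, groups, reference)

decisions = [1, 0, 1, 1, 1, 1, 0, 0]          # hypothetical system outputs
groups    = ["M", "F", "M", "M", "F", "M", "F", "F"]

print(f"disparate impact ratio: {disparate_impact(decisions, groups):.2f}")
# Ratios far below 1.0 flag a group that receives far fewer positive outcomes.
```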