The AI regulation has holes

Oliver Thansan
16 February 2024 Friday 09:30

If you can't be the first to create, try at least to be the first to regulate what others create. This is the reasoning of the EU, which last week, after a tortuous legislative path, approved the first European regulation on artificial intelligence (AI).

Its objective is to protect citizens from possible abuses of this technology, which Europe does not currently contribute directly to developing and which, for now, rests in American hands (in the future it may change hands again). Does it succeed? To a large extent, yes, but with nuances.

To begin with, there are things that, under the regulation, this technology will not be allowed to do, because they are likely to affect fundamental rights. We are talking about real-time facial recognition in public spaces (exceptions are planned for security reasons), or the processing of data concerning people's intimate sphere, such as emotion recognition.

The regulation addresses sensitive issues, such as cases in which AI, after collecting data, could provide information to evaluate a worker's performance, for instance their productivity. In extreme cases, this could supply arguments for a dismissal or a promotion. The same applies when AI serves as a tool to assess the suitability of a candidate for a given job.

The regulation says that a technology with these characteristics may be sold on the European market, as long as the results are subsequently validated by a human team, and as long as the AI company self-certifies that its system neither violates personal dignity nor accesses sensitive data.

And here the problems come into play. Adrián Todolí, professor of Labor Law at the University of Valencia and author of the recent book Productive and Extractive Algorithms (Ed. Aranzadi), offers a clarifying example: “It is as if a pharmaceutical company decided to put a medicine on the market and declared that it has no harmful effects, without going through the filter of an independent authority such as the drug agency.” This self-certification is also ambiguous, because it is not clear to whom it should be sent or where it should be stored.

Behind this option of soft control (in fact, control in the hands of the controlled party) lies the EU's desire not to deter technology investment on European territory, which leaves protection in limbo. “Under current conditions, citizens are not adequately protected,” Todolí maintains.

Quite apart from possible friction with other European regulations (such as data protection), there is a risk that the resolution of disputes will be left to the ordinary courts. Spain has been a pioneer in creating an Agency for AI, but it is neither independent (there are ministry officials on its committee) nor yet operational.