Does regulation protect from the shadow of AI?

* The author is part of the community of readers of La Vanguardia.

Oliver Thansan
28 December 2023 Thursday 09:35

On December 8, 2023, we witnessed the political agreement on the Regulation of the European Parliament and of the Council establishing harmonized rules on artificial intelligence (already known as the Artificial Intelligence Act).

Although we will have to wait for the final text, this is undoubtedly a notable advance in the regulation of artificial intelligence (AI).

As is known, the future standard, whose full application will not arrive before the end of 2026, establishes a kind of traffic-light system in three colors: prohibited systems, that is, those posing unacceptable risk (red); high-risk systems (amber); and AI systems that qualify as neither prohibited nor high-risk (green).

Likewise, the negotiation introduced specific requirements for so-called general-purpose AI systems and foundation models, such as ChatGPT, Bing, or Bard.

In Spain, AI had already begun to be regulated, although through what is known as "soft law." In particular, among other instruments, the Spanish Charter of Digital Rights, whose article 25 sets out rights regarding artificial intelligence.

More recently, we witnessed the approval of Royal Decree 817/2023, of November 8, which establishes a controlled testing environment for artificial intelligence systems (known by the English term "sandboxes"). There are also various strategies at both the national and regional levels.

All this reflects the great effort made to regulate a highly disruptive technology whose operation and training data remain, in part, opaque.

It is precisely in this scenario that some doubts arise that are, in my opinion, of real interest. Perhaps one stands out above the rest: is the regulation of AI the solution to the "penumbras" that this technology entails? It seems a legitimate doubt, since this technology ultimately has very vulnerable recipients, citizens, and its operation is not exactly transparent.

The answer, however redundant it may seem, is that regulation by itself can hardly guarantee people's basic rights in practice. And these basic rights are in fact fundamental rights and human rights, precisely the ones that the darkness of AI can violate.

Among others: the right to equality, non-discrimination, personal privacy, the protection of our data (by extension), or freedom of expression.

Both at the European and national levels there are, or will be, bodies charged with enforcing these standards (in Spain, the Spanish Agency for the Supervision of Artificial Intelligence; in the European Union, the European AI Office). However, the ramifications of AI are so numerous that it seems difficult for these agencies to exercise exhaustive control over the practical application of AI systems.

To all this we must add the problems generated by systems that rely on algorithms and source code that cannot be classified as AI models but that also make decisions.

It is true that the boundary between these two types of systems is very subtle. However, a "mere" algorithmic tool is capable of making decisions that can violate people's rights in the same way an AI application does.

The closest example is the case of the BOSCO algorithm (which did not involve AI), used to manage the electricity and heating social subsidy (bono social). The denial of access to its source code prevented the exercise of the right to effective judicial protection provided for in the Spanish Constitution.

Therefore, it seems clear that the legal regulation of AI has become a necessity, although it is now necessary to look for real control mechanisms.

In this sense, and although we should not rule out, for obvious reasons, that ordinary justice will increasingly deal with AI-related issues, such mechanisms must be sought within the entities that use AI tools.

I am referring to the existence of specialized figures who observe and advise on the AI tools used by their own organization. It is about creating a permanent figure that could suggestively be called the Algorithmic Protection Delegate.

Indeed, just as the General Data Protection Regulation introduced the Data Protection Officer, a figure that today sits unequivocally in the various entities, public and private, that handle personal data, so, in matters of AI and algorithmic systems, the Algorithmic Protection Delegate must monitor those within the entity who use algorithmic and AI tools, ensuring they comply with existing regulations.

Likewise, the Algorithmic Protection Delegate must conduct algorithmic audits, continuously analyze the risks of the systems in use, and implement appropriate security measures for them.

In the same vein, the Algorithmic Protection Delegate must cooperate with the recently created supervisory authorities in the field of AI.

In this way, a scaffolding of surveillance and intervention is created that helps algorithmic and AI systems deliver the central element that European regulation, and any national regulation, seeks to guarantee: a person-centered AI. Precisely, the good work of the Algorithmic Protection Delegate safeguards the rights of people who may be subject to procedures based on AI models.

Other measures should be added to this proposal, such as the creation of robust public algorithmic registries or, in the public sector, a requirement that administrative resolutions in which an AI system has contributed to the decision include an explanation of the role the algorithm played in making it.

In short, AI is welcome as long as it improves our existence and never becomes a technology that takes away our rights. The challenge is more than visible; now it is time to act.