"AI has entered without addressing its ethical implications"

Alena Buyx (Osnabrück, Germany, 1977) is president of the Ethics Council that has advised the German Government since the beginning of the pandemic. She spoke to La Vanguardia moments before taking part in the City and Science Biennale, held simultaneously in Madrid and Barcelona, to discuss what it means to "live on the planet" in a society whose changes have been accelerated by the pandemic and the spread of artificial intelligence (AI).

Oliver Thansan
24 March 2023 Friday 23:53

Moral dilemmas such as those raised by genetic engineering or stem cell research are still being debated decades after they first arose. Are we at risk of ethics lagging behind advances in AI?

My answer is yes and no. As for stem cells, when the technology was ready, all the ethical questions were raised. But biomedicine has learned a lot. The scientists themselves, Jennifer Doudna and Emmanuelle Charpentier, warned of its ethical and social implications. That was a huge change compared to 20 years ago, and we quickly tackled an ethical debate in which we were no longer starting from scratch.

But it's not always like that.

Indeed. Take AI, for example, which is everywhere. Algorithms are ever-present and already shape our lives through our mobile phones. Look at social networks: they are driven by algorithms that are changing public debate. We talk about disinformation, but the networks are also polarized because the algorithm is often programmed to highlight the most radical content.

Two sides of the same coin.

In a certain way, yes. In biomedicine we have been really good at the European level at recognizing that these technologies pose ethical challenges and that we must shape them. They have great potential, they can do a lot of good, but they can also do very problematic things. With algorithms, however, we have simply let them into our lives, and it has taken several years to recognize that they too have huge ethical and social implications. So yes, in that case we are lagging behind as we try to regulate them.

Should we, in the meantime, slow down the expansion of AI?

The ideal model today would be combined intelligence. But always focused on the person, on the patient. And not vice versa.

Would the best solution be to have several systems?

I won't answer with a resounding yes. But I insist that in Europe we must maintain our high ethical standards, which are based on human rights, the protection of ethical principles and the care of vulnerable groups. I don't think we should move away from these standards just because things are different elsewhere. We should have a kind of ethics made in Europe and shield the system we believe is the right one.

A good part of technological advances is in the hands of a handful of multinationals such as Google or Facebook. How can a universal ethics make headway against economic interests?

We cannot allow a few companies to own this infrastructure that shapes public debates, that influences elections and the way our democratic debates take place. Data protection (the GDPR) is not perfect, but I have no doubt that its role is very important, because it protects our right to know what happens to our data once these companies get hold of it. We must continue this tough battle to get these multinationals to abide by the rules we set.

This balance may seem easy in theory, but it is difficult in practice.

Let's look at Ukraine. It is worrying that a single person, as is the case with Elon Musk, is providing the internet in the middle of this war. It's a good thing that it's happening, because otherwise the country wouldn't have internet, but a single individual shouldn't have that kind of infrastructural power.

Is Europe at risk of becoming a digital colony?

We should make regulation in Europe simpler, because we have too many rules. But under no circumstances should we kill our own innovation in data analytics. We have built it on a set of ethical values that are important to us and that we will not part with. That's why we won't become a data colony.