Brussels urges 'big tech' to mitigate the risks of artificial intelligence "as soon as possible"

It is time to move from fine words to deeds.

Oliver Thansan
05 June 2023 Monday 11:04

It is time to move from fine words to deeds. That is the message the European Commission (EC) has conveyed, through various channels, to companies active in the promising yet unsettling field of generative artificial intelligence, in an effort to get them to commit to basic principles that ensure these technologies are deployed in line with European values and the defence of rights and freedoms.

“With technology as powerful as this, we cannot wait for things to unfold on their own; we cannot take that risk. Just because it is uncertain and we do not have all the answers, we should not stop doing what we think makes sense,” argued the vice-president of the EC, Margrethe Vestager, yesterday in a meeting with several European media outlets at the headquarters of the Community Executive. "We must move from discussions to commitments as soon as possible" in order to "mitigate the risks" of AI and "enjoy its potential benefits."

The European Union is working on several fronts, through initiatives that may appear to overlap, to erect a kind of common "guardrail" for the entire industry that will guide the deployment of the next generations of AI. Brussels, for example, has decided to include this type of technology among the factors to be taken into account when working with the large technology platforms in the fight against disinformation, and yesterday it asked companies to "identify and clearly label" all content that has been generated by machines.

“AI-based technologies can be a force for good for society,” but “their dark side should not be overlooked, because they pose new risks and have possible negative consequences for society, such as misinformation,” argued the vice-president of the European Commission and head of Values and Transparency, Věra Jourová, after a meeting with the 44 signatories of the code of good practice against disinformation created in 2018, which include all the leading companies in the sector (Meta, Google, Microsoft, TikTok...) and representatives of civil society.

“We want platforms to tag AI-generated content so that the ordinary user, who is often distracted by many different things, can see it clearly,” Jourová explained, calling on companies to act immediately. "In a matter of seconds, generative AI can produce complex content, images of things that never happened, voices of people based on a sample of a few seconds...", recalled the Czech commissioner, who is in charge of addressing these risks from the standpoint of the fight against disinformation, one of the areas where AI raises the most concern.

Meanwhile, the proposal for an artificial intelligence law put forward by the European Commission two years ago has entered the final stretch of its legislative process. The European Parliament will set its negotiating position next week and, if an internal agreement is reached, it can then negotiate the final version of the regulation with the member states (the Council) from September, a process that will be steered by the Spanish presidency of the EU. The European Commissioner for the Internal Market, Thierry Breton, who is responsible for the legislative initiative, has also proposed that companies join a "pact" to prepare for the entry into force of the new law, expected in 2026.

Precisely because it will take about three years for the European regulations to be transposed, Vestager is committed to working with companies so that they immediately take on commitments regarding what types of sources they may use, what kinds of tests are carried out, and what channels are set up to supervise these services or resolve problems. To that end, the G-7 summit in Hiroshima tasked the European Commission's representative and her counterparts in the United States with preparing, "before the end of the year," a voluntary code of conduct, with contributions from the industry itself, but only up to a point. While there may be "an alignment of interests" in the shared concern about how this technology could be used, it is important "not to let the process lead to lowest-common-denominator measures, because it won't work," Vestager said.

“I think there is a high level of awareness of how powerful this technology is and how it can make many things easier for us,” but “at the same time, we know that it can produce very bad results,” she said, before recalling the case of a lawyer who asked ChatGPT to find legal precedents to defend a client claiming compensation from an airline; when the other party reviewed the information, they discovered that none of the cases were real. “If you don't trust it, are you going to use it?” she asked.