Companies and the regulation of AI

Oliver Thansan
22 May 2023 Monday 11:30

"All I want to know is where I'm going to die so I never go there." This quote, attributed to the famous Charlie Munger, Warren Buffett's longtime partner and vice chairman of Berkshire Hathaway, illustrates that lasting success often rests, in part, on avoiding mistakes that are obvious or large yet surprisingly common and highly damaging.

It would be a notable business mistake to turn away from the opportunities of Artificial Intelligence (AI), but it is recklessly naive to embrace it without considering how to reduce, eliminate, transfer or accept the risks that its professional use entails. For years, technology consultancies have warned against using AI in business without such risk management. They point to financial risks (for example, AI systems managing certain treasury functions are not always able to react in time to unforeseeable events), reputational risks (decisions with an impact on people that are not always easy to justify) and risks to results (if the information used to "train" the AI is incomplete or incorrect, it can generate biased outcomes; in one well-known case of AI-driven hiring, the number of female programmers initially selected was zero). Fortunately, technology already offers mechanisms to deal with these risks, although the associated legal challenges do not have solely technical solutions.

In this context, the most serious threat of using AI without risk management is that staff use AI tools, without notice or permission, to perform their job functions more quickly and efficiently. The problem became clear a month ago, when The Economist reported that a well-known technology company had suffered a confidentiality breach: on up to three occasions, its staff fed company secrets (ranging from proprietary source code to technical protocols and minutes of meetings) into a well-known AI chatbot. The company reacted by prohibiting its staff from using that technology.

That chatbot (which itself suffered a significant security breach involving its users' personal data) was temporarily banned and blocked in Italy until it resolved data protection problems; meanwhile, the Spanish Data Protection Agency has officially opened an investigation, and the Catalan Data Protection Authority has advised against its use. Likewise, it cannot be ruled out that a company could be sanctioned by the authorities for its staff's use of such AI technologies with personal data, if the technology does not offer sufficient privacy guarantees. And if the scenarios discussed above do not sufficiently convince companies of the case for AI regulation, it should be noted that, in Spain, it is already mandatory by law.