We are all at risk of being excoded

Oliver Thansan
01 November 2023

For several months, everyone has been talking about artificial intelligence (AI) at every level, from the proverbial opinionated "brother-in-law" to the most powerful heads of state. The White House, for example, has just issued an executive order aimed at promoting safe and reliable AI systems. Its effectiveness will largely depend on how the order is applied, and in particular on which of its provisions are binding and which are merely voluntary.

For its part, the G7 has just agreed on a code of conduct, in this case purely voluntary, that AI companies are expected to respect. And a summit is being held on November 1 and 2 at the iconic Bletchley Park (where Alan Turing succeeded in deciphering German military messages encoded with the Enigma machine) with the aim of drawing up global rules for safe AI.

In my opinion, these initiatives pay excessive attention to the hypothetical danger that future artificial superintelligences could drive humanity to extinction. They reflect a long-termist narrative, driven from Silicon Valley, about the existential risk posed by AI, a narrative that is increasingly dominant in public discourse.

This is worrisome, because focusing efforts on avoiding hypothetical harms that may or may not arise in the very long term diverts attention from the real harms that AI is already causing today. One of these is the very real risk of being excoded (a neologism coined by AI researcher and activist Joy Buolamwini that combines the terms "excluded" and "encoded", the latter in the computing sense of the word).

We run the risk of being excoded when a hospital uses an algorithm that categorizes us as non-priority for an organ transplant. We are also excoded when we are denied a bank loan by an algorithmic decision-making system, or when our CV is automatically discarded by an algorithm. We can likewise be excoded when a tenant-screening algorithm denies us access to housing. These are real examples. No one is immune to being excoded, and those who are already marginalized are at greater risk.

It is no coincidence, then, that the long-termist narrative is driven by the leaders of the big Silicon Valley tech companies: what they intend is to hide the real dangers under the smokescreen of long-termism and thereby avoid regulation, with the (dubious) argument that regulating would mean falling behind in the innovation race.

What is needed is a radical rethinking of how AI systems are built, starting with very strict regulation of data-collection practices so that they are ethical and based on the consent of the data's owners. Tech companies should be subject to public scrutiny of their AI systems, to ensure both that their products are developed in ways that minimize potential harm and that, once deployed, these systems are assessed for how they might affect our lives, our society and our political systems.

So far, the few voices that have dared to point out the real problems AI systems are causing today have faced aggressive criticism on social media (as in the case of Joy Buolamwini, who had to defend herself against public attacks from Amazon) or rejection from their bosses, which can even cost them their jobs, as happened to Timnit Gebru and Margaret Mitchell, fired by Google.

It is a shame that the voices warning about the real and current dangers of AI are at risk of being silenced by the very companies that hypocritically claim to be deeply concerned about AI's risks. Those of us involved in AI research and development are morally obliged to inform the public that AI systems can excode us, and that to prevent this it is essential to confront Big Tech with strict regulation, even stricter than what the EU's forthcoming AI Act will establish: namely, prohibiting the deployment of applications that clearly pose a high risk to society.