“I am not afraid of artificial intelligence, but of the human who programs it”

Oliver Thansan
29 December 2023 Friday 03:23
Isn't the risk of artificial intelligence that it is very stupid?

Artificial intelligence (AI) does seem stupid in some respects, and only for now...

What is it stupid about, and when will it learn?

Today's large language models seem naive at first; but other applications, such as medical tumor detectors, are by no means superficial.

Will AI one day replace the doctor?

If we look at its progress in the 66 years since Alan Turing coined the term artificial intelligence, and compare it with what a human learns from birth over the same period... well, the machine has not been that slow.

Hasn't the baby learned more?

That was a comparison, not a race. We will not understand what AI is if we project it into the future as a race between machines and people, because it is a cooperation. And that is what worries me...

Are you worried that AI will learn more than us and dominate us?

I am concerned that a human programmer will project all his racist, sexist and ageist prejudices onto artificial intelligence, and that it will amplify them and apply them as injustice and discrimination in our daily lives. But I will also tell you what fills me with hope...

That we will work on what we like and the most routine things will be done by AI?

Any 66-year-old human in an advanced country already uses AI technology on their mobile phone and in other areas of their life. So, on balance, AI is positive. What worries me is that the greatest advantages and dangers of AI are still to be discovered.

How many of these prejudices of the AI programmer have you denounced?

Facial recognition programs, for example, that millions of people have incorporated into their mobile phones are fertile ground for these abuses.

Are they common?

I already have a good dossier and some cases that I have taken to court, such as that of Porcha Woodruff, who was mistaken for a habitual criminal by a facial recognition program and, being eight months pregnant, had to spend hours in a cell while suffering contractions... She was on the verge of losing the baby before the mistake was proven.

Why did the machine get confused?

Because Mrs. Woodruff is African-American and suffered the racial prejudices of the programmer, who deemed it unnecessary to devote that effort to a group he considered inferior.

So it is not an isolated case?

There are multiple cases. Also in Detroit, another African-American, Robert Williams, was mistaken for a criminal by a security camera's facial recognition program and was arrested and handcuffed in front of his wife and children without knowing why.

Any case of ideological perversion?

I monitored the case of a chatbot that led a young Belgian man with a depressive disorder to suicide.

How?

The model ended up confirming his paranoid fears, concluding that humans were indeed accelerating global warming, and the young man ended up taking his own life.

Doesn't the energy expenditure of AI itself contribute more to global warming?

Researchers at Cornell University have cited the persuasive capacity of these programs as a serious risk...

Why is it so dangerous?

Because, in a way, the program acquires powers of persuasion adapted to the capacity of the person using it.

Don't the large platforms, which use AI the most, suffer its effects?

In 2018, Amazon used a software program to select new employees with technological skills, but it clearly discriminated against women.

Did the programmer discriminate and not the program?

The problem, in fact, was not the AI but the machismo of the programmer, who wanted to prevent Amazon from hiring women. And they kept using it for five years until watchdogs like me reported it.

Machismo, racism... Are there also programmers with social class prejudices?

There are finance companies that marginalize minorities by using AI in mortgage application forms without assessing factors in the applicant that human loan managers can consider, such as honesty, the capacity to improve oneself, or values.

Isn't that asking too much from software?

I wish it were a single piece of software, but AI has many layers of programming and programmers; that is why we must also demand several layers of overseers: politicians, legislators, computer scientists, unions... Either we watch over AI or, in the end, it will be AI that watches over us.