"Artificial intelligence doesn't scare me, but the human who programs it does"

Isn't the risk of artificial intelligence that it is too stupid?

Oliver Thansan
31 December 2023 Sunday 22:14

Artificial intelligence (AI) seems stupid in some respects, but only for now…

What does it lack, and when will it learn?

Today's large language models are, for now, naive; but other applications, such as medical tumor detectors, are far from superficial.

Will AI one day replace the doctor?

If we look at its progress since Alan Turing coined the term artificial intelligence 66 years ago, and compare it to a human's learning period from birth until today... well, as we can see, the machine has not been so slow.

But hasn't the human baby learned more?

This was a comparison, not a race. We will not understand what AI is if we project it into the future as a race between machines and people; it is a cooperation. And that's what worries me.

Are you worried that AI will learn more than us and come to dominate us?

I worry that a human programmer will project all their racist, sexist, and ageist prejudices into artificial intelligence, and that it will amplify and apply them, with injustice and discrimination, in our daily lives. But I will also tell you what fills me with hope...

That we will work on what we like while AI handles the most routine tasks?

Any 66-year-old in an advanced country is already using AI technology on their mobile phone and in other areas of their life. So, on balance, AI is positive. What worries me is that the greatest benefits and dangers of AI are yet to be discovered.

How many of these AI programmer biases have you reported?

Facial recognition programs, for example, which millions of people carry built into their mobile phones, are fertile ground for these abuses.

Are they common?

I already have a good record of cases, some of which I have taken to court, such as that of Porcha Woodruff, who was mistaken for a habitual criminal by a facial recognition program and, being eight months pregnant, had to spend hours in a cell while having contractions... She nearly lost the baby before the mistake was acknowledged.

Why was the machine confused?

Because Mrs. Woodruff is African American and suffered from the racial bias of the programmer, who deemed it unnecessary to devote that effort to a group he considered inferior.

Is it an isolated case?

There are multiple cases. In Detroit as well, another African American, Robert Williams, was mistaken for a criminal by a security camera's facial recognition program and was arrested and handcuffed in front of his wife and children without knowing why.

Any case of ideological perversion?

I followed the case of a chatbot that drove a young Belgian man with a depressive disorder to suicide.

How?

The model ended up confirming his paranoia, assuring him that we humans were indeed accelerating global warming, and the young man took his own life.

Doesn't the energy expenditure of AI itself contribute more to global warming?

Researchers at Cornell University have cited the persuasive power of these programs as a serious risk...

Why is it so dangerous?

Because, in a way, the program acquires persuasive abilities tailored to the person using it.

Aren't the big platforms, the ones that use AI the most, suffering from its effects themselves?

In 2018, Amazon was using a software program to screen candidates for technical positions, but it clearly discriminated against women.

Was the programmer discriminating and not the program?

The problem wasn't really the AI, but the machismo of the programmer, who wanted to prevent Amazon from hiring women. And it continued to be used for five years until watchdogs like me reported it.

Machismo, racism... Are there also programmers with social class prejudices?

There are financiers who marginalize minorities when they use AI in mortgage applications without assessing factors in the applicant that human loan officers can weigh, such as honesty, resilience, or values.

Isn't that asking too much of a piece of software?

I wish it were a single piece of software. AI has many layers of programming and programmers, which is why we must also demand several layers of oversight: politicians, legislators, computer scientists, unions... Either we keep watch, or in the end it will be AI that watches over us.