"AI is very difficult to control and is taking over the world"

Gary Marcus is one of the voices challenging, with solid arguments, some of the claims of generative artificial intelligence companies. Although he is moderately optimistic about the possibility of controlling the technology's effects, he warns about its many risks.

Oliver Thansan
Monday, 9 October 2023, 11:33

A professor of psychology and neural science at New York University, Marcus delivered a lecture yesterday at the Parliament of Catalonia, opening the meeting of the European Parliamentary Technology Assessment (EPTA) network, which dedicated a day to AI.

Marcus contrasted reliable technologies that work, such as GPS, with language models, which he said are "not conventional software tools" and are capable of "hallucinating". "You put something in here, you have no idea where it will end up, and they are very difficult to control. And, despite this, they are taking over the world," he said.

The US expert, who testified in May before the US Senate Judiciary Committee on the risks of AI, called on European politicians to put "into perspective how we make laws around these things", because "companies will say that they need to be given all the power because they are the only people who know how it works, because it will change the world and because it will generate a lot of money."

According to Marcus, some of the marketing around artificial intelligence is designed to make people believe that certain results are close at hand when, in reality, they are not. He gave the example of self-driving cars, which, according to the neuroscientist, reflect "a very optimistic view of the fact that things still don't work."

For Marcus, "the field of AI needs some intellectual humility." "We have a human tendency to believe things that are said with confidence, which means we trust ChatGPT, and we shouldn't."

The neuroscientist favors keeping the human mind in view as a reference point in the development of artificial intelligence. "Not because we have to build an AI that is exactly like the human mind, which has many flaws. But even so, there are many things that humans can do that machines can't. One is abstraction," he observed.

For Marcus, though, regulating AI is the most urgent task. "We need to develop new policies to mitigate the many risks of generative AI," said Marcus, who in April, in an article in The Economist, called for the creation of a global AI agency, even though he thought "that would never happen". "Now," he commented, "it is becoming very popular." He believes that "we need national and global AI agencies. Every nation should have its own AI agency, because things are moving very fast."

On regulation, he argued for demanding full accountability for the data with which these systems are trained. As an example, he described how a generative AI produced images of white doctors treating black patients but could not be made to reverse the roles, because language models "are incredibly sensitive to the exact details they're trained on."

Marcus also called for AI companies to be forced to cooperate with the scientists examining them. He sketched two future scenarios. In the positive one, AI begins over the next few years to improve things like climate change, medicine and care for the elderly. In the negative one, AI companies become more powerful than states, cybercriminals wage war on companies, and the technology is used as a weapon to kill people. Chaos and anarchy.

His conclusion: "If we don't have scientists and ethics at the table, our prospects are not good. We cannot allow big business to dictate the rules. We have to do it right. We don't have much time to waste."