"Five years from now there will be a superintelligence"

Susan Schneider researches the mind and artificial intelligence.

Oliver Thansan
27 May 2023 Saturday 05:04

Susan Schneider researches the mind and artificial intelligence. She holds the NASA Baruch S. Blumberg Chair at the US Library of Congress, directs the Artificial Intelligence, Mind and Society group at the University of Connecticut, and is the founder and director of the Center for the Future Mind at Florida Atlantic University. She researched superintelligent AI at NASA for two years and is the author of four books. Her latest, Artificial Intelligence: A Philosophical Exploration of the Future of Mind and Consciousness (Ediciones Kōan), explores the possibility of conscious AI and the evolution of the human mind through artificial brain implants.

Your book on consciousness and AI was published in 2019. Would you change anything in it today?

Yes, a couple of things, because so much has been happening with artificial intelligence... One is that large language models have taken off; we knew nothing about ChatGPT then. But one thing that does come up in the book is how to test for consciousness. I point out that deep learning systems are hard to test because they are trained on human data, so they can say anything they picked up from their training data. It's complicated. These systems are black boxes.

Why is it so hard to figure out if an AI is conscious?

What makes AI consciousness so difficult is that while we can say that we, as human beings, are conscious, we don't know why we are. We don't have a complete philosophical understanding, and we don't know the detailed scientific answer to why we are conscious. We have theories, but they are not incontrovertible.

Is the possibility of replacing the human mind with one made of chips, as you explain in your book, any closer to reality?

I haven't seen any more success stories since the brain chip book came out. Now, there could be all sorts of things going on that aren't available to the public, like military projects.

What factors would it depend on to become a reality?

One thing that could happen, and I take this very seriously, is that these new large language models are very intelligent, and it may be that as they develop they begin to make groundbreaking scientific discoveries and advise medical researchers on how to successfully create brain chips. There is a project by Theodore Berger, which I discuss in the book, that is really exciting: an artificial hippocampus for people with severe memory disorders, and something similar for ALS patients. That kind of research paves the way for the projects Elon Musk is pursuing, but it is very slow.

The more advanced the technology, the more we talk about ethics. Isn't that a paradox?

It is strange, because we are used to treating consciousness and rights as matters of the biological realm. Anyone who registers on the OpenAI website or goes to Microsoft Bing can have a very interesting conversation with the chatbot, and it's easy to wonder whether it has feelings. Even I am careful when I interact with it, and I thank it.

What do you think about ex-Google engineer Blake Lemoine saying that LaMDA was conscious, and being fired for it?

I respect his opinion. He could be right. There is no professional, philosophical, or academic way to prove that possibility, but there is also no way to rule it out. It made me think that Google didn't want it to be known, and that is wrong. Google should not want to hide these issues; they will come out anyway. It's also a bad idea that Microsoft is coding AI systems to claim they are not conscious. It's a mistake, the last thing you should do. If an AI claims to be conscious, society has to contend with it.

What are the main ethical challenges of developing an AI with consciousness?

Those who have been most vocal in these debates are what we might call defenders of robot rights. They explain how terrible it would be to mistakenly assume the machines are not conscious when they are, because they could suffer and feel a range of emotions. If there were another group of highly intelligent, conscious individuals on Earth, we would be sharing the leadership of the planet. We would be giving them the kind of rights we give to humans; so one AI, one vote. AI can be smarter than us in many ways, and these are just nascent technologies. I think that in five years we will have a superintelligence. I don't see a strict limit to the development of these kinds of large language models. I think they scale: as you feed them more data, they become increasingly sophisticated.

Is it a real risk that an AI will want to kill humans to satisfy its goals?

I have no idea. At my center we have just organized a talk with Eliezer Yudkowsky. He is absolutely convinced that we are doomed, and the logic of his reasoning is impeccable. It's definitely a risk, and that's why we need to create safe AI now. You have to take all of this very seriously. The strange thing is that technology companies keep moving forward regardless. I imagine the US Government and the Department of Defense, China... are in an AI arms race right now.