Google CEO warns of the danger of deploying AI without supervision

Oliver Thansan
17 April 2023 Monday 00:52

The uses of artificial intelligence (AI) are causing intense debate around the world. Although its tools fascinate users, experts in the field warn of risks such as job losses and the misuse of personal data, among others. In fact, at the beginning of April, Italy announced that it was banning ChatGPT over the alleged illegal collection of personal data, and Spain is investigating it through its data protection agency, the AEPD.

This Sunday, in an interview for 60 Minutes, Sundar Pichai, CEO of Alphabet Inc. and Google, pointed out that AI should be well regulated before being deployed in order to avoid potentially harmful effects.

Google has been among the leaders in developing and deploying AI across its services. Google Lens and Google Photos, for instance, are built on image recognition systems. The company has also been working on natural language processing, which its Assistant leverages to improve searches.

In the interview, Pichai said that what keeps him up at night about this new technology, which is advancing at breakneck speed, is “the urgency to work and deploy it in a beneficial way, but at the same time it can be very damaging if implemented incorrectly.”

While Google has been gradually integrating artificial intelligence into its software, tools such as ChatGPT and OpenAI's DALL-E have set off a race to advance this technology at breakneck speed and with little oversight. “We don't have all the answers there yet, and the technology is advancing rapidly,” Pichai said. “So does that keep me up at night? Absolutely.”

Along the same lines, former Google CEO Eric Schmidt stressed in the same interview that it is important for technology companies to come together to develop the necessary safeguards. He cautioned, however, that anything that slows this progress “would simply benefit China.”

"One of the points that they've made is that you don't want to put out a technology like this -- which is very, very powerful -- without giving society time to adjust to it," Pichai said. “I think it's a reasonable prospect. I think there are responsible people trying to figure out how to approach this technology, and so are we."

So-called deepfake videos, in which people appear to make speeches or comments they never actually made, have recently been circulating. Pichai cited these practices as an example of why this new technology needs to be regulated.

“There have to be consequences for creating fake videos that cause harm to society,” he said. “Anyone who has worked with AI for a while, you know, realizes that this is something so different and so profound that we would need social regulations to think about how to adapt.”